https://pixel-druid.com/even-and-odd-functions-through-representation-theory.html | ## § Even and odd functions through representation theory
Consider the action of $\mathbb Z/ 2\mathbb Z$ on the space of functions $\mathbb R \to \mathbb R$, given by $\phi(0)(f) = f$ and $\phi(1)(f) = \lambda x.\, f(-x)$. How do we write this in terms of irreps?
• On the even functions, since $e(x) = e(-x)$ for $e$ even, we have that, $\phi(0)(e) = e$ and $\phi(1)(e) = e$ [since $e(-x) = e(x)$], or $\phi(x)(e) = id(e)$, hence the action of $\phi$ is that of the trivial representation on the subspace spanned by even functions.
• On the odd functions, since $o(-x) = -o(x)$, we have that $\phi(1)(o)(x) = o(-x) = -o(x) = sgn(o)(x)$, hence $\phi(1)(o) = -o$, hence $\phi(x)(o) = sgn(x)(o)$ where $sgn$ is the sign representation!
The even and odd functions together span the space of all functions, since we can write any function $f$ as the sum of an even part $e_f(x) \equiv [f(x) + f(-x)]/2$ and an odd part $o_f(x) \equiv [f(x) - f(-x)]/2$. So we have described the action of $\phi$ on subspaces which span the whole space, and hence we have found the irrep decomposition.
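A quick numerical sketch of this decomposition (my own illustration, not part of the original post): split an arbitrary function into its even and odd parts and check that $\phi(1)$ acts as $+1$ on one and $-1$ on the other.

```python
import numpy as np

def even_part(f):
    # e_f(x) = [f(x) + f(-x)] / 2, fixed by phi(1): the trivial representation
    return lambda x: (f(x) + f(-x)) / 2

def odd_part(f):
    # o_f(x) = [f(x) - f(-x)] / 2, negated by phi(1): the sign representation
    return lambda x: (f(x) - f(-x)) / 2

f = np.exp                        # any test function R -> R
x = np.linspace(-2.0, 2.0, 9)
e, o = even_part(f), odd_part(f)

assert np.allclose(e(-x), e(x))        # phi(1) acts as the identity on even parts
assert np.allclose(o(-x), -o(x))       # phi(1) acts as -1 on odd parts
assert np.allclose(f(x), e(x) + o(x))  # f = even part + odd part
```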
https://tex.stackexchange.com/questions/296275/insert-a-graphic-within-a-sentence-nicely | # Insert a graphic within a sentence nicely?
I am using string diagrams that I draw in Illustrator a lot, and I would like to draw them within a sentence.
When I just put \includegraphics{...} in the text, the image appears but its bottom is aligned with the text. I would like the center of the image aligned with the text, analogous to how in \displaystyle the center of the sum or integral is aligned with the text rather than its bottom.
• \begin{tabular}{@{}c@{}}\includegraphics{...}\end{tabular} – egreg Feb 27 '16 at 18:24
• or use adjustbox package which adds keys to \includegraphics for vertical alignment – David Carlisle Feb 27 '16 at 18:31
The simplest way is
\begin{tabular}{@{}c@{}}\includegraphics{...}\end{tabular}
More complex, but perhaps handier,
\usepackage[export]{adjustbox}
and then
\includegraphics[valign=M]{...}
Example:
\documentclass{article}
\usepackage{graphicx}
\usepackage[export]{adjustbox}% needed for the valign=M and valign=m keys used below
\begin{document}
Some text \includegraphics[height=3ex]{example-image-1x1} some text
\bigskip
Some text
\begin{tabular}{@{}c@{}}\includegraphics[width=3ex]{example-image-1x1}\end{tabular}
some text
\bigskip
Some text \includegraphics[height=3ex,valign=M]{example-image-1x1} some text
\bigskip
Some text \includegraphics[height=3ex,valign=m]{example-image-1x1} some text
\end{document}
If you want full control on the amount of raising or lowering, use \raisebox:
\raisebox{-.5\height}{\includegraphics{...}}
Instead of \height or a multiple thereof, you can use an explicit length.
http://cms.math.ca/cmb/msc/46L07?fromjnl=cmb&jnl=CMB | # Search results
Search: MSC category 46L07 ( Operator spaces and completely bounded maps [See also 47L25] )
Results 1 - 5 of 5
1. CMB Online first
Wang, Yuanyi
Condition $C'_{\wedge}$ of Operator Spaces. In this paper, we study condition $C'_{\wedge}$ which is a projective tensor product analogue of condition $C'$. We show that the finite-dimensional OLLP operator spaces have condition $C'_{\wedge}$ and $M_{n}$ $(n\gt 2)$ does not have that property. Keywords: operator space, local theory, tensor product. Category: 46L07
2. CMB 2012 (vol 57 pp. 166)
Öztop, Serap; Spronk, Nico
On Minimal and Maximal $p$-operator Space Structures. We show that for $p$-operator spaces, there are natural notions of minimal and maximal structures. These are useful for dealing with tensor products. Keywords: $p$-operator space, min space, max space. Categories: 46L07, 47L25, 46G10
3. CMB 2011 (vol 54 pp. 654)
Forrest, Brian E.; Runde, Volker
Norm One Idempotent $cb$-Multipliers with Applications to the Fourier Algebra in the $cb$-Multiplier Norm. For a locally compact group $G$, let $A(G)$ be its Fourier algebra, let $M_{cb}A(G)$ denote the completely bounded multipliers of $A(G)$, and let $A_{\mathit{Mcb}}(G)$ stand for the closure of $A(G)$ in $M_{cb}A(G)$. We characterize the norm one idempotents in $M_{cb}A(G)$: the indicator function of a set $E \subset G$ is a norm one idempotent in $M_{cb}A(G)$ if and only if $E$ is a coset of an open subgroup of $G$. As applications, we describe the closed ideals of $A_{\mathit{Mcb}}(G)$ with an approximate identity bounded by $1$, and we characterize those $G$ for which $A_{\mathit{Mcb}}(G)$ is $1$-amenable in the sense of B. E. Johnson. (We can even slightly relax the norm bounds.) Keywords: amenability, bounded approximate identity, $cb$-multiplier norm, Fourier algebra, norm one idempotent. Categories: 43A22, 20E05, 43A30, 46J10, 46J40, 46L07, 47L25
4. CMB 2009 (vol 53 pp. 239)
Dong, Z.
A Note on the Exactness of Operator Spaces. In this paper, we give two characterizations of the exactness of operator spaces. Keywords: operator space, exactness. Category: 46L07
5. CMB 2007 (vol 50 pp. 519)
Henson, C. Ward; Raynaud, Yves; Rizzo, Andrew
On Axiomatizability of Non-Commutative $L_p$-Spaces. It is shown that Schatten $p$-classes of operators between Hilbert spaces of different (infinite) dimensions have ultrapowers which are (completely) isometric to non-commutative $L_p$-spaces. On the other hand, these Schatten classes are not themselves isomorphic to non-commutative $L_p$ spaces. As a consequence, the class of non-commutative $L_p$-spaces is not axiomatizable in the first-order language developed by Henson and Iovino for normed space structures, neither in the signature of Banach spaces, nor in that of operator spaces. Other examples of the same phenomenon are presented that belong to the class of corners of non-commutative $L_p$-spaces. For $p=1$ this last class, which is the same as the class of preduals of ternary rings of operators, is itself axiomatizable in the signature of operator spaces. Categories: 46L52, 03C65, 46B20, 46L07, 46M07
http://mathhelpforum.com/differential-geometry/137791-series-functions-uniform-convergence.html | # Thread: Series of functions & uniform convergence
1. ## Series of functions & uniform convergence
OK, so the answer I got for part a is:
$S(x) = \dfrac{x^2}{1 - \frac{1}{1+x^2}} = 1 + x^2$ if $x \neq 0$
$S(x) = 0$ if $x = 0$
Can someone help me with the part b, please? In general, I am having a lot of headaches on problems about uniform convergence. I know the precise definition of it, and I re-read the definition many times, but I have no idea how to actually APPLY the definition to solve actual problems.
Should we use the Weierstrass M-test here? Is this the only way to prove that a SERIES of functions is uniformly convergent?
Thanks for any help!!
[also under discussion in math link forum]
2. Originally Posted by kingwinner
OK, so the answer I got for part a is:
$S(x) = \dfrac{x^2}{1 - \frac{1}{1+x^2}} = 1 + x^2$ if $x \neq 0$
$S(x) = 0$ if $x = 0$
Can someone help me with the part b, please? In general, I am having a lot of headaches on problems about uniform convergence. I know the precise definition of it, and I re-read the definition many times, but I have no idea how to actually APPLY the definition to solve actual problems.
Should we use the Weierstrass M-test here? Is this the only way to prove that a SERIES of functions is uniformly convergent?
Thanks for any help!!
[also under discussion in math link forum]
Hint:
Spoiler:
$f_n(x)\leqslant\frac{\left(1-\frac{1}{n}\right)^n}{n}$.
3. Sorry, I don't understand...
How did you get that inequality and how is that going to help?
4. Originally Posted by kingwinner
How did you get that inequality
Use calculus (max and mins)
and how is that going to help?
Weierstrass M-test.
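Recall the statement of the Weierstrass M-test (in its standard form, added here for reference):

$$\text{If } \sup_{x\in[a,b]}|f_n(x)| \le M_n \text{ for every } n \text{ and } \sum_{n=1}^{\infty} M_n < \infty, \text{ then } \sum_{n=1}^{\infty} f_n \text{ converges uniformly on } [a,b].$$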
5. Sorry, I still don't get it. That bound is not too obvious/natural to me.
For part b, what interval [a,b] should we work with? In what interval does it converge uniformly?
I hope someone can help me out! Thank you!
6. Originally Posted by kingwinner
Sorry, I still don't get it. That bound is not too obvious/natural to me.
I used simple calculus. In other words, I calculated $f'_n(x)$, set it equal to zero... blah blah blah.
For part b, what interval [a,b] should we work with? In what interval does it converge uniformly?
What are you thinking?
I hope someone can help me out! Thank you!
I'm saying this in passing, but you say stuff like this quite often in all your threads and it makes it sound as though you are disregarding those who have helped you. It could be offensive to some! just saying
https://en.wikipedia.org/wiki/Proportionality_(mathematics) | # Proportionality (mathematics)
Variable y is directly proportional to the variable x.
In mathematics, two variables are proportional if a change in one is always accompanied by a change in the other, and if the changes are always related by use of a constant multiplier. The constant is called the coefficient of proportionality or proportionality constant.
• If one variable is always the product of the other and a constant, the two are said to be directly proportional. x and y are directly proportional if the ratio y/x is constant.
• If the product of the two variables is always a constant, the two are said to be inversely proportional. x and y are inversely proportional if the product xy is constant.
To express the statement "y is (directly) proportional to x" mathematically, we write an equation y = cx, for some real constant c. Symbolically, this is written y ∝ x.
To express the statement "y is inversely proportional to x" mathematically, we write an equation y = c/x. We can equivalently write "y is proportional to 1/x", which y = c/x would represent.
If a linear function transforms 0, a and b into 0, c and d, and if the product a b c d is not zero, we say a and b are proportional to c and d. An equality of two ratios such as a/c = b/d, where no term is zero, is called a proportion.
## Direct proportionality
Given two variables x and y, y is directly proportional to x (x and y vary directly, or x and y are in direct variation)[1] if there is a non-zero constant k such that
${\displaystyle y=kx.\,}$
Unicode symbols for proportionality: U+221D ∝ PROPORTIONAL TO, U+007E ~ TILDE, U+223C ∼ TILDE OPERATOR, U+223A ∺ GEOMETRIC PROPORTION.
The relation is often denoted, using the ∝ or ~ symbol, as
${\displaystyle y\propto x}$
and the constant ratio
${\displaystyle k={\frac {y}{x}}\,}$
is called the proportionality constant, constant of variation or constant of proportionality.
### Examples
• If an object travels at a constant speed, then the distance traveled is directly proportional to the time spent traveling, with the speed being the constant of proportionality.
• The circumference of a circle is directly proportional to its diameter, with the constant of proportionality equal to π.
• On a map drawn to scale, the distance between any two points on the map is directly proportional to the distance between the two locations that the points represent, with the constant of proportionality being the scale of the map.
• The force acting on a certain object due to gravity is directly proportional to the object's mass; the constant of proportionality between the mass and the force is known as gravitational acceleration.
### Properties
Since
${\displaystyle y=kx}$
is equivalent to
${\displaystyle x=\left({\frac {1}{k}}\right)y,}$
it follows that if y is directly proportional to x, with (nonzero) proportionality constant k, then x is also directly proportional to y with proportionality constant 1/k.
If y is directly proportional to x, then the graph of y as a function of x is a straight line passing through the origin with the slope of the line equal to the constant of proportionality: it corresponds to linear growth.
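As a small illustrative sketch (not part of the original article), a set of (x, y) measurements can be tested for direct proportionality by checking that the ratio y/x is the same for every pair, and that common ratio is the constant of proportionality k:

```python
def proportionality_constant(pairs, tol=1e-9):
    """Return k if y = k*x holds for every (x, y) pair (x != 0), otherwise None."""
    ratios = [y / x for x, y in pairs]
    k = ratios[0]
    if all(abs(r - k) <= tol for r in ratios):
        return k
    return None

# distance travelled at a constant speed of 3 units per time step
print(proportionality_constant([(1, 3), (2, 6), (5, 15)]))   # 3.0
print(proportionality_constant([(1, 3), (2, 7)]))            # None (not proportional)
```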
## Inverse proportionality
Inverse proportionality with a function of y = 1/x.
The concept of inverse proportionality can be contrasted against direct proportionality. Consider two variables said to be "inversely proportional" to each other. If all other variables are held constant, the magnitude or absolute value of one inversely proportional variable decreases if the other variable increases, while their product (the constant of proportionality k) is always the same.
Formally, two variables are inversely proportional (also called varying inversely, in inverse variation, in inverse proportion, in reciprocal proportion) if each of the variables is directly proportional to the multiplicative inverse (reciprocal) of the other, or equivalently if their product is a constant.[2] It follows that the variable y is inversely proportional to the variable x if there exists a non-zero constant k such that
${\displaystyle y=\left({\frac {k}{x}}\right)}$
(Also sometimes written as: ${\displaystyle xy=k}$)
The constant can be found by multiplying the original x variable and the original y variable.
As an example, the time taken for a journey is inversely proportional to the speed of travel; the time needed to dig a hole is (approximately) inversely proportional to the number of people digging.
The graph of two variables varying inversely on the Cartesian coordinate plane is a rectangular hyperbola. The product of the x and y values of each point on the curve equals the constant of proportionality (k). Since neither x nor y can equal zero (if k is non-zero), the graph never crosses either axis.
## Hyperbolic coordinates
The concepts of direct and inverse proportion lead to the location of points in the Cartesian plane by hyperbolic coordinates; the two coordinates correspond to the constant of direct proportionality that locates a point on a ray and the constant of inverse proportionality that locates a point on a hyperbola.
## Exponential and logarithmic proportionality
A variable y is exponentially proportional to a variable x, if y is directly proportional to the exponential function of x, that is if there exist non-zero constants k and a such that
${\displaystyle y=ka^{x}.\,}$
Likewise, a variable y is logarithmically proportional to a variable x, if y is directly proportional to the logarithm of x, that is if there exist non-zero constants k and a such that
${\displaystyle y=k\log _{a}(x).\,}$
## Notes
1. ^ Weisstein, Eric W. "Directly Proportional." MathWorld -- A Wolfram Web Resource
2. ^ Weisstein, Eric W. "Inversely Proportional." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/InverselyProportional.html
https://www.computer.org/csdl/trans/tm/2008/01/ttm2008010127-abs.html | Issue No. 01 - January (2008 vol. 7)
ISSN: 1536-1233
pp: 127-139
ABSTRACT
This paper presents a geographic routing protocol, Boundary State Routing (BSR), which consists of two components. The first is an improved forwarding strategy, Greedy-BoundedCompass, which can forward packets around concave boundaries where the packet moves away from the destination without looping. The second component is a Boundary Mapping Protocol (BMP) which is used to maintain link state information for boundaries containing concave vertices. The proposed forwarding strategy Greedy-BoundedCompass is shown to produce a higher rate of path completion than Greedy forwarding and significantly improves the performance of GPSR in sparse networks when used in place of Greedy forwarding. The proposed geographic routing protocol BSR is shown to produce significant improvements in performance in comparison to GPSR in sparse networks due to informed decisions regarding direction of boundary traversal at local minima.
INDEX TERMS
Algorithm, packet switching, networks, network routing, wireless lan
CITATION
Colin J. Lemmon, Phillip Musumeci, "Boundary Mapping and Boundary-State Routing (BSR) in Ad Hoc Networks", IEEE Transactions on Mobile Computing, vol. 7, no. 1, pp. 127-139, January 2008, doi:10.1109/TMC.2007.70722
https://video.ias.edu/csdm/kopparty2012Apr10 | List-Decoding Multiplicity Codes
Swastik Kopparty
Rutgers University
April 10, 2012
We study the list-decodability of multiplicity codes.
These codes, which are based on evaluations of high-degree polynomials and their derivatives, have rate approaching 1 while simultaneously allowing for sublinear-time error-correction. In this paper, we show that multiplicity codes also admit powerful list-decoding and local list-decoding algorithms correcting a large fraction of errors.
Our first main result shows that univariate multiplicity codes over prime fields achieve "list decoding capacity". This provides a new (and perhaps more natural) example of such a code after the original Folded Reed-Solomon Codes of Guruswami and Rudra. The list decoding algorithm is based on constructing a differential equation of which the desired codeword is a solution; this differential equation is then solved using a power-series approach (a variation of Hensel lifting) along with other algebraic ideas.
Our second main result is a list-decoding algorithm for decoding multivariate multiplicity codes up to the Johnson radius.
In particular, this is the first algorithm to decode multiplicity codes up to half their minimum distance. The key ingredient of this algorithm is the construction of a special family of "algebraically-repelling" curves passing through the points of F_q^m; no moderate-degree multivariate polynomial over F_q^m can simultaneously vanish on all these curves.
As a corollary, we show that multivariate multiplicity codes of length n and rate nearly 1 can be locally list-decoded up to the Johnson radius in O(n^{epsilon}) time.
http://www.physicsforums.com/showthread.php?s=a1af66cd0fd614d74d015ccd7687e3c1&p=4501320 | # CMRR formula gives wrong result!
by simpComp
Tags: cmrr, formula, result
P: 44
Hello,
If you go to this link:
http://en.wikipedia.org/wiki/Common-...ejection_ratio
and scroll all the way down to the bottom where they show the:
"Example: operational amplifiers"
section.... we have:
So for example, an op-amp with 90dB CMRR operating with 10V of common-mode will have an output error of ±316uV.
I get +/-222mv ????
Am I doing the math wrong?
Mentor
P: 39,720
Quote by simpComp Hello, If you go to this link: http://en.wikipedia.org/wiki/Common-...ejection_ratio and scroll all the way down to the bottom where they show the: "Example: operational amplifiers" section.... we have: I get +/-222mv ???? Am I doing the math wrong?
I get the 316 µV number. Can you show how you are typing the numbers into your calculator?
P: 44 90/10 then the result is multiplied by 10. then 10/the result above hence: 10/((90/20)10) = 0.222??? and they get 316 micro volts?
Mentor
P: 39,720
Quote by simpComp 90/10 then the result is multiplied by 10. then 10/the result above hence: 10/((90/20)10) = 0.222??? and they get 316 micro volts?
I think there are several issues with the way you are trying to solve this. First remember that for voltage ratios, the equation is dB = 20log(ratio).
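Carrying that formula through reproduces the Wikipedia figure (a quick numerical check of my own, not part of the original thread):

```python
cmrr_db = 90.0       # CMRR in dB
v_common = 10.0      # common-mode voltage in volts

# For voltage ratios, dB = 20*log10(ratio), so invert it:
ratio = 10 ** (cmrr_db / 20)     # ≈ 31623
error = v_common / ratio         # ≈ 3.16e-4 V

print(f"rejection ratio ≈ {ratio:.0f}, output error ≈ {error * 1e6:.0f} µV")  # ≈ 316 µV
```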
https://www.oreilly.com/library/view/microfluidics-modeling-mechanics/9781455731510/B9781455731411500344_2.xhtml | #### 34.2.3.3 Instationary Flows
Momentum Equations. The second scenario we will consider involves instationary flows. For instationary flows we have to solve Eq. 34.1, Eq. 34.2, and Eq. 34.3 using a suitable numerical scheme for the time derivatives $\frac{\partial v_x}{\partial t}$, $\frac{\partial v_y}{\partial t}$, and $\frac{\partial v_z}{\partial t}$. Usually, we would choose a forward Euler method (see section 27.2.2) for the time step. We could then rewrite Eq. 34.1, Eq. 34.2, and Eq. 34.3 as
$$\begin{aligned}
\frac{\partial v_x}{\partial t} + v_x\frac{\partial v_x}{\partial x} + v_y\frac{\partial v_x}{\partial y} + v_z\frac{\partial v_x}{\partial z} &= -\frac{1}{\rho}\frac{\partial p}{\partial x} + \frac{\eta}{\rho}\left(\frac{\partial^2 v_x}{\partial x^2} + \frac{\partial^2 v_x}{\partial y^2} + \frac{\partial^2 v_x}{\partial z^2}\right) + \frac{k_x}{\rho}\\
\frac{\partial v_y}{\partial t} + v_x\frac{\partial v_y}{\partial x} + v_y\frac{\partial v_y}{\partial y} + v_z\frac{\partial v_y}{\partial z} &= -\frac{1}{\rho}\frac{\partial p}{\partial y} + \frac{\eta}{\rho}\left(\frac{\partial^2 v_y}{\partial x^2} + \frac{\partial^2 v_y}{\partial y^2} + \frac{\partial^2 v_y}{\partial z^2}\right) + \frac{k_y}{\rho}\\
\frac{\partial v_z}{\partial t} + v_x\frac{\partial v_z}{\partial x} + v_y\frac{\partial v_z}{\partial y} + v_z\frac{\partial v_z}{\partial z} &= -\frac{1}{\rho}\frac{\partial p}{\partial z} + \frac{\eta}{\rho}\left(\frac{\partial^2 v_z}{\partial x^2} + \frac{\partial^2 v_z}{\partial y^2} + \frac{\partial^2 v_z}{\partial z^2}\right) + \frac{k_z}{\rho}
\end{aligned}$$
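A minimal sketch of what such a forward Euler update might look like for the x-momentum equation (my own illustration, assuming a uniform grid with spacing dx, central differences, periodic boundaries, and no pressure-velocity coupling; it is not the book's implementation):

```python
import numpy as np

def euler_step_vx(vx, vy, vz, p, kx, rho, eta, dx, dt):
    """One explicit (forward) Euler step of the x-momentum equation above."""
    def ddx(f, axis):
        # central difference along the given axis, periodic boundaries
        return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2.0 * dx)

    def laplacian(f):
        return sum(np.roll(f, -1, a) - 2.0 * f + np.roll(f, 1, a) for a in range(3)) / dx**2

    convection = vx * ddx(vx, 0) + vy * ddx(vx, 1) + vz * ddx(vx, 2)
    rhs = -ddx(p, 0) / rho + (eta / rho) * laplacian(vx) + kx / rho - convection
    return vx + dt * rhs
```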
https://math.stackexchange.com/questions/2355519/is-completeness-equivalent-to-closure-in-complete-metric-spaces | # Is completeness equivalent to closure in complete metric spaces?
Let $(X, d)$ be a complete metric space and consider a subset $A \subset X$:
• $A$ closed $\implies$ $A$ complete (I know that)
• $A$ complete $\stackrel{(?)}{\implies}$ $A$ closed
I was wondering about the truth of the second implication. If it is true, is the following proof correct?
Proof (Proof by contrapositive, $\neg A$ closed $\implies \neg A$ complete ):
Take $a \in A' \cap A^c$ (it exists since $A$ is not closed, and closed sets contain their limit points) and consider a sequence $(x_n)_{n\in\mathbb{N}}$ in $A$ such that
$$x_n \in N_{1/n}(a) \quad \text{for every } n \in \mathbb{N}.$$
This is a Cauchy sequence: for every $\epsilon>0$ I can set $n_\epsilon := \lceil 2/\epsilon \rceil$, so that $d(x_n, x_m) \le d(x_n, a) + d(a, x_m) < 1/n + 1/m \le \epsilon$ for all $m, n \ge n_\epsilon$.
But $(x_n)$ does not converge in $A$, since its limit in $X$ is $a \in A^c. \blacksquare$
Notations: with $A'$ I mean the derived set; with $N_r(p)$ I mean the neighbourhood of $p$ with radius r.
• I think your proof is correct Jul 11, 2017 at 23:46
• you can just notice that a convergent sequence is a cauchy sequence, then you ar done Jul 11, 2017 at 23:49
• @JensRenders ok, it's clear, thanks! Jul 12, 2017 at 0:04
Assume $A$ is complete.
Let us prove that $\bar{A} \subset A$. Take $a \in \bar{A}$.
Then $a = \lim_{n\to+\infty} a_n$ with $a_n \in A$.
$(a_n)$ is Cauchy in $A$ since it converges.
But $A$ is complete, so $a \in A$.
https://iwaponline.com/hr/article-abstract/49/6/2002/41419/Hydrologic-assessment-of-the-TMPA-3B42-V7-product?redirectedFrom=PDF | ## Abstract
To evaluate the accuracy and applicability of the TMPA 3B42-V7 precipitation product for the Lancang River basin, we used different statistical indices to explore the performance of the product in comparison to gauge data. Then, we performed a hydrologic simulation using the Variable Infiltration Capacity (VIC) hydrological model with two scenarios (Scenario I: streamflow simulation using gauge-calibrated parameters; Scenario II: streamflow simulation using 3B42-V7-recalibrated parameters) to verify the applicability of the product. The results of the precipitation analysis show good accuracy of the V7 precipitation data. The accuracy increases with the increase of both space and time scales, while time scale increases cause a stronger effect. The satellite can accurately measure most of the precipitation but tends to misidentify non-precipitation events as light precipitation events (<1 mm/day). The results of the hydrologic simulation show that the VIC hydrological model has good applicability for the Lancang River basin. However, 3B42-V7 data did not perform as well under Scenario I with the lowest Nash–Sutcliffe coefficient of efficiency (NSCE) of 0.42; Scenario II suggests that the error drops significantly and the NSCE increases to 0.70 or beyond. In addition, the simulation accuracy increases with increased temporal scale.
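The Nash–Sutcliffe coefficient quoted above is, in its standard form (stated here for reference; the paper's exact variant may differ), NSE = 1 - Σ(Q_obs - Q_sim)² / Σ(Q_obs - mean(Q_obs))², which a short routine can compute:

```python
def nash_sutcliffe(observed, simulated):
    """Standard Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 is as good as the observed mean."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

print(nash_sutcliffe([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8]))  # 0.98, close to 1 for a good fit
```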
http://math.stackexchange.com/questions/480837/is-the-finite-dimension-required-in-this-proof | # Is the finite dimension required in this proof?
Let $V$ and $W$ be vector spaces over a field $K$. If a linear map $L:V \rightarrow W$ is surjective then its dual is injective. If $V$ and $W$ are finite dimensional then the converse holds, i.e. $L^*:W^* \rightarrow V^*$ injective implies $L$ surjective.
I have proved both statements but I don't see where I used the finite dimensional requirement for the second. Here is my proof:
Assume $L$ is not surjective, say the element $e_i$ of the basis of $W$ is not in the image of $L$. Take its corresponding dual $\alpha_i \in W^*$, then $L^*(\alpha_i)=\alpha_i \circ L =0$ so the kernel of $L^*$ is not 0 and therefore $L^*$ is not injective.
-
Your argument seems correct, but I might be missing something. Where did you see this statement? Do you know that every vector space has a basis (i.e. are you allowed to use the Axiom of Choice)? – Michael Albanese Aug 31 '13 at 20:35
You claimed that $e_i\not\in L(V)$ implies $\alpha_i\circ L=0$. I don't think this is true, even for finite-dimensional spaces. – Julian Rosen Aug 31 '13 at 20:40
I think you need to make sure to first choose a basis of the image of $L$, and then extend it to a basis of $W$. If you don't do that, there is no guarantee that your dual basis element acts properly on the image of $L$. – Carl Aug 31 '13 at 21:31
It is problem 10.5 of Tu's book "An introduction to Manifolds". Yes the axiom of choice is assumed so every vector space has a basis. @Pink Elephants the only elements which are not mapped to $0$ by $\alpha_i \circ L$ are those whose image by $L$ is a multiple of $e_i$. – inquisitor Sep 1 '13 at 0:36
@inquisitor Let $e_1,\ldots,e_n\in W$ be a basis, $\alpha_1,\ldots,\alpha_n\in W^*$ the dual basis. It is not true that the only elements of $W$ not mapped to 0 by $\alpha_i$ are scalar multiples of $e_i$. For example, $\alpha_1(e_1+e_2)=1$. So if $e_1$ is not in the image of $L$ but $e_1+e_2$ is, then $\alpha_1$ is not in the kernel of $L^*$. – Julian Rosen Sep 1 '13 at 0:55
My first argument has to be changed as the comments of @Julian Rosen show. Finite dimension is not needed:
Take a basis $B'$ of the image of $L$ in $W$ and complete it to a basis $B$ of $W$ (by assumption $B \setminus B'$ is not empty). Define the linear functional $\alpha$ by $\alpha(e)=1$ where $e\in B\setminus B'$ and $\alpha(v)=0$ for $v \in B'$ (and extend linearly). Then $L^*(\alpha)=\alpha \circ L = 0$, and $\alpha$ is not the $0$ linear functional, hence $L^*$ is not injective.
For the proof of $L^*$ surjective iff $L$ injective see the last answer of: Are injectivity and surjectivity dual?
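To see the statement concretely in finite dimensions (an illustrative sketch of my own, with an arbitrarily chosen matrix; it identifies $L$ with a matrix $A$ and $L^*$ with its transpose):

```python
import numpy as np

# L: V -> W represented by a (dim W) x (dim V) matrix A; its dual L* is represented by A.T.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0]])   # maps R^3 onto R^2

dim_W = A.shape[0]
surjective = np.linalg.matrix_rank(A) == dim_W           # L surjective: rank equals dim W
dual_injective = np.linalg.matrix_rank(A.T) == dim_W     # L* injective: rank equals dim W*

print(surjective, dual_injective)   # True True -- the two conditions agree
```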
-
I think Pete L. Clark's answer is intuitive and I don't mean to obfuscate the problem, but could I add some abstract nonsense (which obviously one could ignore)?
In the category of vector spaces, we can easily show every mono (injective) and epi (surjective) splits. The dual construction is faithfully functorial (another straightforward exercise), hence it both preserves and reflects split monos and split epis, i.e. a map is mono iff its dual is epi, and epi iff its dual is mono.
Thank you for the indulgence.
-
https://www.arxiv-vanity.com/papers/astro-ph/9509074/ | arXiv Vanity renders academic papers from arXiv as responsive web pages so you don’t have to squint at a PDF. Read this paper on arXiv.org.
# On the Density of PBH’s in the Galactic Halo
Edward L. Wright UCLA Dept. of Physics & Astronomy Division of Astronomy
P.O. Box 951562
Los Angeles CA 90095-1562
8 June 1995; 18 Aug 1995; 12 Sep 1995
###### Abstract
Calculations of the rate of local Primordial Black Hole explosions often assume that the PBH’s can be highly concentrated into galaxies, thereby weakening the Page-Hawking limit on the cosmological density of PBH’s. But if the PBH’s are concentrated by a factor exceeding , where kpc is the scale of the Milky Way, then the steady emission from the PBH’s in the halo will produce an anisotropic high latitude diffuse gamma ray intensity larger than the observed anisotropy. This provides a limit on the rate-density of evaporating PBH’s of pcyr which is more than 6 orders of magnitude lower than recent experimental limits. However, the weak observed anisotropic high latitude diffuse gamma ray intensity is consistent with the idea that the dark matter that closes the Universe is Planck mass remnants of evaporated black holes.
gamma rays: observations – cosmology: dark matter – Galaxy: halo
slugcomment: astro-ph/9509074 - UCLA-ASTRO-ELW-95-03
## 1 Introduction
The average density of primordial black hole’s (PBH’s) in the Universe is constrained by the Page-Hawking (1976) limit on the diffuse gamma ray intensity, since the Hawking (1974) radiation from PBH’s produces copious gamma rays. Halzen et al. (1991) computed the photon spectrum from a uniform density of PBH’s with the initial mass function , and found that with km/sec/Mpc, or an average mass density of PBH’s of and an average number density of PBH’s of pc.
Now I wish to calculate the maximum allowed density of PBH’s in the halo of the Milky Way. The halo mass density is given by
$$\rho_h = \frac{v_c^2}{4\pi G R^2} = 8.4\times 10^{-25}\ \mathrm{gm\ cm^{-3}} \qquad (1)$$
at the position of the solar circle, kpc, for a circular velocity of km/sec. Since the halo density is times higher than the average density of the Universe, one could hope that the PBH’s would have a higher density in the halo of the Milky Way which would make it easier to detect the explosions caused by their final evaporation in high energy gamma ray experiments. In fact, Halzen et al. (1991) considered concentration factors up to by assuming that the PBH’s were as highly concentrated as the luminous matter. Cline & Hong (1992) considered local densities as high as pc by assuming that some of the gamma-ray bursts observed by BATSE were PBH explosions.
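As a quick numerical cross-check of Eq. (1) (my own sketch; the specific input values were lost in extraction, so the conventional solar-circle radius of about 8.5 kpc and circular velocity of about 220 km/s are assumed here):

```python
import math

G = 6.674e-8        # gravitational constant, cm^3 g^-1 s^-2
kpc = 3.086e21      # cm per kiloparsec
v_c = 220.0e5       # assumed circular velocity, cm/s
R_sun = 8.5 * kpc   # assumed galactocentric radius of the solar circle, cm

rho_h = v_c**2 / (4.0 * math.pi * G * R_sun**2)
print(f"{rho_h:.2e} g/cm^3")   # ~ 8.4e-25, matching Eq. (1)
```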
## 2 Calculation of Anisotropy
However, the PBH’s in the halo will contribute an anisotropic diffuse gamma ray intensity that will be much easier to measure for instrumental reasons than the isotropic intensity. A rough order of magnitude estimate for this anisotropic signal is a fraction times the isotropic background. In order to improve this calculation, I need to compute the local average emissivity from the Page-Hawking limit which is integrated over time. For simplicity, I will do this calculation for the bolometric gamma ray intensity, though a frequency-dependent calculation would give a better limit. The mass spectrum of PBH’s produced by a scale invariant perturbation spectrum in the early Universe is , and the initial mass of a PBH that is evaporating at time is . The total comoving density of PBH’s scales like
$$\rho/(1+z)^3 \propto \int_{t^{1/3}} m\, m^{-2.5}\, dm \propto t^{-1/6} \qquad (2)$$
and thus the comoving luminosity density scales like for . The bolometric intensity is related to the comoving emissivity by
$$I = \int \frac{j_{CM}(t)}{1+z}\, c\, dt = \int \frac{j_\circ (1+z)^{7/4}}{(1+z)^{7/2}}\, \frac{c}{H_\circ}\, dz \qquad (3)$$
This gives the relationship between the current average emissivity and the isotropic integrated intensity,
$$I_{iso} = \frac{4 j_\circ c}{3 H_\circ}. \qquad (4)$$
Now I will calculate the emission from the halo of the Milky Way. If the emissivity of the halo is , then its contribution to the anisotropic intensity at angle with respect to the Galactic center is
$$I_{aniso} = \int j_h(R)\, ds = \frac{\pi - \theta}{\sin\theta}\, j_h(R_\circ)\, R_\circ \qquad (5)$$
for a spherical halo with density following the singular isothermal sphere model: . This is a special case of an “isothermal” halo with core radius , flattening , and density varying like:
$$\rho = \frac{\rho_\circ\, (R_\circ^2 + r_c^2)}{R^2 + z^2/q^2 + r_c^2} \qquad (6)$$
with $R$ and $z$ being cylindrical coordinates. This gives an anisotropic intensity of
$$I_{aniso} = \int j_h\, ds = j_h(R_\circ)\, R_\circ\, \eta(l, b, r_c/R_\circ, q) \qquad (7)$$
with
$$\eta(l, b, r_c/R_\circ, q) = \frac{\left(1 + (r_c/R_\circ)^2\right)\left[\pi/2 + \tan^{-1}(\cos\theta'/\sin\theta'')\right]}{\sin\theta''\, \sqrt{1 + (q^{-2} - 1)\sin^2 b}} \qquad (8)$$
where and
Most of the uncertainty in the endpoint of primordial black hole evaporation cancels out in the ratio $I_{aniso}/I_{iso}$, which is given by
$$\frac{I_{aniso}}{I_{iso}} = \eta(l, b, r_c/R_\circ, q)\, \frac{j_h(R_\circ)}{j_\circ}\, \frac{3 H_\circ R_\circ}{4c} \qquad (9)$$
## 3 Comparison with Data
Using the combined Phase I EGRET sky maps in the directory /compton_data/egret/combined_data on legacy.gsfc.nasa.gov, I have constructed the rate map in Figure 1. This shows the rate of gamma rays with MeV. Figure 1 has been smoothed with a FWHM quasi-Gaussian test function (Wright et al. 1994), but the fits are based on unsmoothed maps.
PBH’s are not the only, or even the primary, source of galactic gamma ray emission. The process
$$p_{CR} + p_{ISM} \rightarrow p + p + \pi^\circ \rightarrow p + p + 2\gamma \qquad (10)$$
where a cosmic ray proton hits an interstellar medium proton is the dominant source of galactic gamma rays. Digel, Hunter & Mukherjee (1995) find an average emissivity of photons/sec/sr/proton for MeV in the Orion region which is similar to interstellar medium emission rates elsewhere in the solar neighborhood. While the cosmic ray density appears to be higher in the inner galaxy, leading to a higher emissivity per ISM proton, I can avoid this complication by not using regions close to the galactic plane in my fits. I have used the 100 µm intensity as a proxy for the column density of ISM protons. This will automatically include the H₂ and H II components of the ISM that are not measured by the 21 cm neutral hydrogen line. The 100 µm emissivity per proton will depend on the local dust properties and radiation field, but avoidance of the galactic plane minimizes these complications as well. The 100 µm intensity at high is approximately MJy/sr/proton/cm, so one expects about photons/m/sr with MeV for each MJy/sr of 100 µm intensity.
When I fit the gamma ray map to the form
$$I_\gamma(l,b) = j_h(R_\circ)\, R_\circ\, \eta(l, b, r_c/R_\circ, q) + \frac{dI_\gamma}{dI_{100}}\, I_{100}(l,b) + \frac{dI_\gamma}{d\csc|b|}\, \csc|b| + I_\circ + \epsilon(l,b) \qquad (11)$$
with $I_{100}(l,b)$ being the 100 µm intensity from the DIRBE instrument on COBE with a model of the zodiacal light removed, and $\epsilon(l,b)$ being the residual, I get the results shown in Table 1. Missing parameter values are fixed at their default values, which are zero except for and (which approximate the simple singular isothermal sphere model.) The isotropic intensity is required to be non-negative. These fits were done using the least sum of absolute errors instead of least squares fitting to avoid the effect of bright sources, and an elliptical region surrounding the Galactic Center with and was excluded along with . The excluded region is marked on Figure 1. The pixels used for this fit were COBE DMR pixels with 4328 pixels outside the exclusion region, so differences in greater than 0.1% are statistically significant. Simulating fits to 100 skies based on the best fit model in Table 1 along with the observed residuals gave values of with a standard deviation of 0.00165, which is negligible when compared to the range of obtained using different galactic tracers and halo models.
The coefficient found for the term is consistent with the result expected from studies of the interstellar medium in the solar neighborhood (Digel et al. 1995), but when or flattened haloes are allowed to absorb some of the galactic flux the coefficient is slightly lower but not unreasonable. Fits that do not include as a tracer tend to favor flattened haloes with large core radii, which makes the halo model approximately proportional to . Models that do include a separate term favor spherical, singular isothermal sphere haloes. Because a flattened halo has a smaller thickness at high , the fitted halo density is higher for the flattened models. The large core radius in the flattened models causes the halo to produce a large monopole contribution to the intensity, so the isotropic term goes to zero in the flattened models.
Models similar to the large core radius spherical haloes proposed for gamma ray burst (GRB) sources do not provide an anisotropic intensity, for the simple reason that the GRB’s are observed to be isotropic. In these models the halo density is limited by the isotropic intensity, and scales like . Thus large core radius models are not significant for placing an upper limit on the local halo density.
Different methods of fitting for the galaxy give very different estimates for the isotropic background, but the estimate for the local halo density is slightly more stable. Thus it is more appropriate to normalize to the Halzen et al. (1991) model upon which their calculation of the Page-Hawking limit is based instead of the uncertain isotropic background. Integrating Figure 3 of Halzen et al. (1991) for MeV gives a flux of 0.06 photons/m/sec/sr which is in the range of the isotropic intensities from the fits in Table 1, and is also consistent with the darkest sky intensity of photons/m/sec/sr given by Bertsch et al. (1995).
Comparing these results with Equation 9 implies that
$$\frac{j_h(R_\circ)\, R_\circ}{(4/3)\, j_\circ\, (c/H_\circ)} = \frac{0.023\text{--}0.15}{0.06} = 0.4\text{--}2.5 \qquad (12)$$
or
$$\zeta = \frac{4c}{3 H_\circ R_\circ} \times (0.4\text{--}2.5) = (2\text{--}12)/h \times 10^5. \qquad (13)$$
Using this I get a local density of PBH’s of .
An alternative fit is shown in Figure 2. The data with was binned into twenty bins equally spaced in . Within each bin, the mid-average, or average of the two middle quartiles of the data sorted by intensity, was taken. The filled points in Figure 2 are these mid-averaged intensities. Since large only occurs for small , the mid-average value of was also computed in each bin. The mid-averaged intensities were then fit to the form
$$I_\gamma = I_\circ + \frac{dI_\gamma}{d\csc|b|}\, \csc|b| + j_h(R_\circ)\, R_\circ\, \eta(l, b, 0, 0) \qquad (14)$$
giving coefficients photons/m/sec/sr, a slope of photons/m/sec/sr, and a local halo emissivity of photons/m/sec/sr. This fit is the curve in Figure 2, and the contribution of the term is shown as the open circles.
## 4 Discussion
If PBH’s were strongly concentrated in the halo of the Milky Way, they would produce a large anisotropic gamma ray flux which could easily be observed. A weak anisotropic signal of the predicted form is present. With the resulting value for the concentration factor , I can use the Halzen et al. (1991) calculation of the average PBH explosion rate to estimate that the explosion rate of halo PBH’s is
$$\frac{dn}{dt} \leq 0.07\text{--}0.42\ \mathrm{pc^{-3}\, yr^{-1}}. \qquad (15)$$
Since there are several possible emission mechanisms other than PBH’s which could be located in a galactic halo, I have taken the rate from the fits as an upper limit. The dominant uncertainty in this limit is the physical thickness of the galactic density enhancement, and this is reflected in the range of models considered in Table 1. The highest densities correspond to flattened haloes, and for a collisionless species like PBH’s the highest likely flattening corresponds to an E7 galaxy shape with . The best fit values obtained here when not including a separate term, or 7.5, are both equivalent to an E6 galaxy shape. The observed flattening of the dark matter in polar ring galaxies is in this range (Sackett et al. 1994).
The uncertainties in modeling the last few seconds of PBH lifetime have very little effect on the limit derived here, because the diffuse gamma ray flux comes primarily from PBH’s with masses gm and temperatures MeV, and radiation under these conditions is well understood. However, the behavior of PBH’s at higher temperatures is not so well known, and different models of the final burst can give vastly different detection limits. For example, the recent EGRET limit of pcyr (Fichtel et al. 1994) assumed that the last grams of PBH rest mass evaporate producing ergs of 100 MeV gamma rays in less than a microsecond based on the Hagedorn (1970) model for high energy particles, while in the standard model of high energy particle physics it takes years to evaporate this mass (Halzen et al. 1991). The particular technique used in Fichtel et al. (1994) would not be sensitive to standard model bursts. Porter & Weekes (1978) derived a limit of pcyr using the Hagedorn model, but this limit is weakened to pcyr in the standard model (Alexandreas et al. 1993). Cline & Hong (1992) proposed that an expanding fireball could convert much of the ergs from a Hagedorn model burst into MeV gamma rays detectable by the BATSE experiment. Given the limit on above, BATSE would have to be able to detect PBH explosions out to distances cm if even a few percent of the BATSE bursts were due to PBH’s. Other sensitive limits on the PBH explosion rate density (Phinney & Taylor 1979) depend on conversion of the last grams of PBH rest mass into an expanding fireball that produces a GHz radio pulse by displacing the interstellar magnetic field (Rees 1977). However, in the standard model it takes a few days for the last grams to evaporate (Halzen et al. 1991), so no radio pulse is generated.
Similarly, variations in the initial mass function of PBH’s do not affect the ratio of the local emissivity to the rate density of evaporating bursts, because both the gm PBH’s radiating the diffuse gamma rays and the gm evaporating PBH’s were initially formed with very similar masses near gm. Thus changing the slope of the initial mass function has very little effect on the ratio of their abundances, even though such a slope change has a large effect on the Page-Hawking limit on .
The limit derived in the paper on the local rate-density of evaporating PBH’s provides a very difficult target for all techniques to directly detect standard model PBH explosions. For example, the CYGNUS experiment presented a limit pcyr (Alexandreas et al. 1993), and my estimate is 6–7 orders of magnitude smaller.
On the other hand, the ratio of the halo density of PBH’s to the Page-Hawking limit is quite close the ratio of the total halo density to the critical density in the Universe. This suggests that PBH’s could be tracers of the Cold Dark Matter (CDM). This would naturally occur if the PBH’s were the CDM, but this requires either a modified mass spectrum with an enhanced abundance of PBH’s with gm, or else that evaporating PBH’s leave behind a stable Planck mass remnant (MacGibbon 1987; Carr, Gilbert & Lidsey 1994). In general, the fits in this paper give concentration ratios that are slightly too high for this hypothesis if and . The lowest ratio of to in Table 1 gives a concentration of , while if PBH’s trace the dark matter the expected ratio is . These are equal only if . For a more typical low value of (corresponding to pcyr), the required value of is , which is consistent with theories of large-scale structure formation in CDM (Peacock & Dodds 1994) or CDM+ (Efstathiou, Sutherland & Maddox 1990) models. If this admittedly weak correspondence is correct, then 100 MeV gamma rays are providing the first non-gravitational evidence for CDM. Any model for dark matter that gives a 100 MeV gamma ray emissivity proportional to the density, such as particles with a very slow radiative decay, would also be supported by this correspondence, while models with emissivity proportional to , such as annihilating particles, would not. More data and better galactic modeling are needed to test this exciting possibility.
One possible test is to look for gamma rays from large concentrations of dark matter. The flux from the Galaxy, , in Figure 1 is while the flux from the halo term of the best fitting model in Table 1 is . Scaling the latter flux to the mass and distance of M87 (Stewart et al. 1984), assuming a constant gamma ray luminosity to mass ratio, predicts a flux of for MeV which is only 10 times lower than the limit of reported by Sreekumar et al. (1994). Detection of a gamma ray flux from clusters of galaxies that correlates with dark matter column density instead of gas column density would support of association of dark matter and PBH’s.
This research has made use of data obtained through the Compton Observatory Science Support Center GOF account, provided by the NASA-Goddard Space Flight Center.
## References
• Alexandreas, D. E. et al. 1993, Phys. Rev. Lett., 71, 2524.
• Bertsch, D. et al. 1995, BAAS, 27, 820.
• Carr, B. J., Gilbert, J. H. & Lidsey, J. E. 1994, Phys. Rev. D, 50, 4853.
• Cline, D. & Hong, W. 1992, ApJ, 401, L57.
• Digel, S. W., Hunter, S. D. & Mukherjee, R. 1995, ApJ, 441, 270.
• Efstathiou, G., Sutherland, W. J. & Maddox, S. J. 1990, Nature, 348, 705.
• Fichtel, C. E. et al. 1994, ApJ, 434, 557.
• Hagedorn, R. 1970, A&A, 5, 184.
• Halzen, F., Zas, E., MacGibbon, J. H. & Weekes, T. C. 1991, Nature, 353, 807.
• Hawking, S. W. 1974, Nature, 248, 30.
• MacGibbon, J. 1987, Nature, 329, 308.
• Page, D. N. & Hawking, S. W. 1976, ApJ, 206, 1.
• Peacock, J. A. & Dodds, S. J. 1994, MNRAS, 267, 1020.
• Phinney, S. & Taylor, J. H. 1979, Nature, 277, 117.
• Porter, N. A. & Weekes, T. C. 1978, MNRAS, 183, 205.
• Rees, M. J. 1977, Nature, 266, 333.
• Sackett, P., Rix, H.-W., Jarvis, B. J. & Freeman, K. C. 1994, ApJ, 436, 629.
• Sreekumar, P., Bertsch, D. L., Dingus, B. L., Esposito, J. A., Fichtel, C. E., Hartman, R. C., Hunter, S. D., Kanbach, G., Kniffen, D. A. & Lin, Y. C. 1994, ApJ, 426, 105.
• Stewart, G., Canizares, C., Fabian, A. & Nulsen, P. 1984, ApJ, 278, 536.
• Wright, E. L., Smoot, G. F., Kogut, A., Hinshaw, G., Tenorio, L., Lineweaver, C., Bennett, C. L. & Lubin, P. M. 1994, ApJ, 420, 1.
http://wiki.engageeducation.org.au/further-maths/number-patterns/fibonacci-sequences-as-second-order-difference-equations/

The Fibonacci numbers are a unique sequence of numbers where each new term is found by adding the two previous terms.
The Fibonacci Numbers are as follows:
$1, 1, 2, 3, 5, 8, 13, 21, 34, 55, ...$
## Fibonacci Sequences
A Fibonacci sequence refers to any sequence where each new term is found by adding the previous two terms, given any two starting values.
### Example 1
Which of the following are Fibonacci sequences?
a) $1, 4, 5, 9, 14, ...$
Calculate whether each new term is found by adding the previous two terms.
$t_3=t_1+t_2$
$t_3=1+4$
$t_3=5$
Check that $t_3$ is indeed equal to 5 by checking with the question.
This is correct.
$t_4=t_2+t_3$
$t_4=4+5$
$t_4=9$
This is correct.
$t_5=t_3+t_4$
$t_5=5+9$
$t_5=14$
This is correct.
This is a Fibonacci sequence as each new term is found by adding the two previous terms.
b) $3, 7, 10, 13, 23, ...$
Calculate whether each new term is found by adding the previous two terms.
$t_3=t_1+t_2$
$t_3=3+7$
$t_3=10$
Check that $t_3$ is indeed equal to 10 by checking with the question.
This is correct.
$t_4=t_2+t_3$
$t_4=7+10$
$t_4=17$
$t_4$ is supposed to be 13.
This is wrong.
This is not a Fibonacci sequence as each new term is not found by adding the previous two terms.
## Second Order Difference Equations for a Fibonacci Sequence
Second order difference equations for Fibonacci sequences follow the equation:
$f_{n+2}=f_n+f_{n+1}$ given $f_1$ and $f_2$
### Example 2
Find the first five terms of the following Fibonacci sequence given by the second order difference equation:
$f_{n+2}=f_n+f_{n+1}$ $f_1=2$ $f_2=1$
The question defines the first two terms so use these in the second order difference equation to calculate the remaining terms.
$f_1=2$ and $f_2=1$
$f_{n+2}=f_n+f_{n+1}$
$f_{3}=f_1+f_2$
$f_3=2+1$
$f_3=3$
$f_4=f_2+f_3$
$f_4=1+3$
$f_4=4$
$f_5=f_3+f_4$
$f_5=3+4$
$f_5=7$
The first five terms of the sequence are 2, 1, 3, 4 and 7.
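As a rough illustration (not part of the original notes), the same rule can be coded directly from the second order difference equation; the starting values 2 and 1 below are the ones given in Example 2, and the printed terms match the hand calculation above.

```cpp
#include <iostream>
#include <vector>

// First n terms of a Fibonacci-type sequence from the second order
// difference equation f(n+2) = f(n) + f(n+1), given starting terms f1, f2.
std::vector<long long> fibonacciSequence(long long f1, long long f2, int n) {
    std::vector<long long> terms{f1, f2};
    while (static_cast<int>(terms.size()) < n) {
        terms.push_back(terms[terms.size() - 2] + terms[terms.size() - 1]);
    }
    return terms;
}

int main() {
    for (long long t : fibonacciSequence(2, 1, 5))   // f1 = 2, f2 = 1 from Example 2
        std::cout << t << ' ';                        // prints: 2 1 3 4 7
    std::cout << '\n';
}
```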
### Example 3
Find the value of $t_2$ given $t_1=5, t_4=7$ and $t_5=13$.
We are given two consecutive values so all we need to do is to work backwards to find $t_3$.
$t_3=t_5-t_4$
$t_3=13-7$
$t_3=6$
Now we know $t_3$ we can work backwards to find $t_2$.
$t_2=t_4-t_3$
$t_2=7-6$
$t_2=1$
The value of $t_2$ is 1.
https://www.thehinduportal.com/2016/11/quantum-theory-and-om-aum.html
Quantum, derived from the word quantity, means the smallest identifiable unit in the universe of any physical property like energy or matter. Quantum theory deals with the infrastructure in the sub-atomic field. It reveals the nature and behavior of matter and energy in that range. This exposure is referred to as a quantum theory which is the theoretical basis of modern physics.
With the introduction of Superstring Theory of Quantum Reality, the quantum theory has discovered that at the sub-atomic levels matter exists in small strings. In simple words, everything at its ultimate microscopic grade is made up of extremely small vibrating strands or strings like in a musical instrument of the violin.
These strings have repeated oscillatory pattern of vibration. Each pattern presents the string its mass and force, and that confers it the appearance of a particle. Together all these particles have the same physical feature of producing resonant patterns of vibration. The undulation of strings creating up and down loops is the manifestation of resonance in the sub-atomic environs.
And when we exit from the microscopic environment the same phenomenon of transmission of resonance is being played within everything in this universe. The sonority of particles composing the vibrating strings with their mass and energy is also responsible to produce the atoms.
The latter is made up of energy and not physical matter. As a result, the entire universe is made out of energy. But the energy appears as a matter or object like the particles of the vibrating string in the sub-atomic field. This is the fundamental feature upon which the universe has been constructed and unified.
String theory is considered the theory of everything. And this corresponds well with the metaphysical concept of Om being the primordial sound originating from the strings advanced in the quantum theory of modern physics.
As Om resonates in the stings of atoms then according to the science of quantum physics atoms themselves are made up of whirling mass of radiating energy without manifested structure. Likewise, Om is not merely a sound but a mass of energy itself in invisible formation.
Om is energy constituting the universe.
The universe begins with Om. There is a sound of Om in every matter. It resonates there till eternity. Its resilience lies both in the matter and the sound itself.
The creation of Om is, in fact, the creation of the universe. And its cosmic vibrations keep the constituents of the universe connected.
In Hindu theology, Om is referred to as God in the form of sound. And the open design of its symbol represents the incomprehensible all powerful Absolute.
In its phenomenal role as constituting and preceding matter, and as vortices of energy, Om is considered a sacred sound of genesis in the Hindu spiritual philosophy.
The unique symbol of Om occupies the foremost spot in the Hindu iconography. It is a spiritual icon. It is not merely a “tool” for meditations or for contemplating on mantras, but the syllable invokes cosmic presence in them.
“Hari Om,” is a two-word mantra in itself, along with “Hari Om Tat Sat” or simply “Om Tat Sat”. The word ‘tat’ means ‘that’ or ‘all that is’. And ‘sat’ refers to ‘truth’. The latter is not evanescent or ephemeral rather everlasting. The mantra “Om Tat Sat” means: ‘that’ energy is the truth.
Om inaugurates spiritual prayers, rituals and yoga practices, and sanctifies these events. The expression ‘Hari Om and Hari Hara Om’ is a popular form of greetings or salutation among Hindus.
The word ‘Hari Hara’ is a representation of God and Om implies energy.
1. Do the Vedas really mention some serious physics?
Partha Shakkottai, former Retired (1969-2015)
Yes.
“The Surya Siddhanta also estimates the diameters of the planets. The estimate for the diameter of Mercury is 3,008 miles, an error of less than 1% from the currently accepted diameter of 3,032 miles. It also estimates the diameter of Saturn as 73,882 miles, which again has an error of less than 1% from the currently accepted diameter of 74,580. Its estimate for the diameter of Mars is 3,772 miles, which has an error within 11% of the currently accepted diameter of 4,218 miles. It also estimated the diameter of Venus as 4,011 miles and Jupiter as 41,624 miles, which are roughly half the currently accepted values, 7,523 miles and 88,748 miles, respectively.” the wiki at https://en.wikipedia.org/wiki/Surya_Siddhanta
[The Surya Siddhanta is the name of a Sanskrit treatise in Indian astronomy from late ... It calculates the earth's diameter to be 8,000 miles (modern: 7,928 miles), diameter of moon as 2,400 miles (actual ~2,160) and the distance between moon ...
How were the planetary sizes determined? What are the possible scaling rules? Let us define
D = Planetary diameter
M = planetary mass ~ D^3
T = Orbit time = 2 Pi/ Omega
I = angular momentum = ∫ R^2 dm · Omega ~ R^2 D^3 / T
Possible relations: (This law of gravitation is most unlikely to be known so early in History)
If the force is G M_sun M_planet / R^2 = M_planet Omega^2 R, then
R^3 Omega^2 = constant
R^3 ~ T^2
Snow plow theory:
D^3 ~ 2 Pi R × (no. of particles) ~ 2 Pi R n × volume ~ 2 Pi R × R × thickness
~ R^2 if the thickness is fixed, n being the number density
Or
D^3 ~ R^2
Thin disk:
In this case the planet grows to be bigger than the thickness and
D^3 ~ R^3 or D ~ R
It appears that ancients assumed the last possibility. The planetary diameter scales with the orbital size.
| Body | D (miles) | Relative size (D/D_Earth) | Orbit size (modern, Earth = 1) |
|---|---|---|---|
| Mercury | 3,008 | 0.38 | 0.387 |
| Venus | 4,011 | 0.50 | 0.720 |
| Earth | 8,000 | 1.00 | 1.00 |
| Mars | 3,770 | 0.47 | 1.52 |
| Jupiter | 41,600 | 5.20 | 5.19 |
| Saturn | 73,900 | 9.24 | 9.53 |

Relative size is from the Surya Siddhanta and is compared to modern measurements of orbit sizes relative to Earth’s orbit. Venus and Mars have the most disagreement; they are the rocky planets. The agreement is very good in general. Especially good for gas giants!
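As a quick numerical check of the D ~ R scaling suggested above (an illustrative sketch of my own, not part of the original answer): the diameters are the Surya Siddhanta values from the table, normalised to Earth's 8,000 miles, and the orbit sizes are the modern values relative to Earth's orbit.

```cpp
#include <cstdio>

// Compare relative planetary diameters (Surya Siddhanta, Earth = 8000 miles)
// with modern orbital radii (Earth's orbit = 1) to test the D ~ R scaling.
int main() {
    struct Row { const char* body; double diameterMiles; double orbit; };
    const Row rows[] = {
        {"Mercury", 3008.0,  0.387},
        {"Venus",   4011.0,  0.720},
        {"Earth",   8000.0,  1.000},
        {"Mars",    3770.0,  1.52},
        {"Jupiter", 41600.0, 5.19},
        {"Saturn",  73900.0, 9.53},
    };
    for (const Row& r : rows) {
        double relD = r.diameterMiles / 8000.0;
        std::printf("%-8s  D/D_Earth = %5.2f  R/R_Earth = %5.2f  ratio = %4.2f\n",
                    r.body, relD, r.orbit, relD / r.orbit);
    }
}
```

The ratio column stays close to 1 for Mercury, Jupiter and Saturn and drifts for Venus and Mars, which is exactly the pattern described above.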
Conclusion: The Vedas predicted planetary sizes using the accretion model from the initial solar nebula. So it automatically means they were the first to hit upon the idea of a sun-centered planetary system!
This also means Indian Astronomy was developed by Indians with no input from the Greeks. No others have planetary diameters! The kalachakra was a giant astronomical clock and was used to calculate orbits of planets visible to the naked eye from which planet sizes were determined, a far cry from the flat earth theory of Christianity! See
https://www.quora.com/How-was-kalachakra-used-in-Indian-Asronomy
This uses four dials, one of which is the Zodiac, the same as the Greek one, most likely copied by the Greeks, an exact translation from Sanskrit! For fixed stars, the kalachakra uses bright stars with Sanskrit names. The Indians have been observing stars for long enough to know the period of nutation of Earth is 25,000 years.
Rg Veda is variously dated from 1500 BC to 8000 BC (from internal evidence on the order of time when River Saraswati was still flowing).
https://www.britannica.com/science/superconductivity

# Superconductivity
Alternative Titles: cryogenic conductor, superconductor
Superconductivity, complete disappearance of electrical resistance in various solids when they are cooled below a characteristic temperature. This temperature, called the transition temperature, varies for different materials but generally is below 20 K (−253 °C).
The use of superconductors in magnets is limited by the fact that strong magnetic fields above a certain critical value, depending upon the material, cause a superconductor to revert to its normal, or nonsuperconducting, state, even though the material is kept well below the transition temperature.
Suggested uses for superconducting materials include medical magnetic-imaging devices, magnetic energy-storage systems, motors, generators, transformers, computer parts, and very sensitive devices for measuring magnetic fields, voltages, or currents. The main advantages of devices made from superconductors are low power dissipation, high-speed operation, and high sensitivity.
## Discovery
Superconductivity was discovered in 1911 by the Dutch physicist Heike Kamerlingh Onnes; he was awarded the Nobel Prize for Physics in 1913 for his low-temperature research. Kamerlingh Onnes found that the electrical resistivity of a mercury wire disappears suddenly when it is cooled below a temperature of about 4 K (−269 °C); absolute zero is 0 K, the temperature at which all matter loses its disorder. He soon discovered that a superconducting material can be returned to the normal (i.e., nonsuperconducting) state either by passing a sufficiently large current through it or by applying a sufficiently strong magnetic field to it.
For many years it was believed that, except for the fact that they had no electrical resistance (i.e., that they had infinite electrical conductivity), superconductors had the same properties as normal materials. This belief was shattered in 1933 by the discovery that a superconductor is highly diamagnetic; that is, it is strongly repelled by and tends to expel a magnetic field. This phenomenon, which is very strong in superconductors, is called the Meissner effect for one of the two men who discovered it. Its discovery made it possible to formulate, in 1934, a theory of the electromagnetic properties of superconductors that predicted the existence of an electromagnetic penetration depth, which was first confirmed experimentally in 1939. In 1950 it was clearly shown for the first time that a theory of superconductivity must take into account the fact that free electrons in a crystal are influenced by the vibrations of atoms that define the crystal structure, called the lattice vibrations. In 1953, in an analysis of the thermal conductivity of superconductors, it was recognized that the distribution of energies of the free electrons in a superconductor is not uniform but has a separation called the energy gap.
The theories referred to thus far served to show some of the interrelationships between observed phenomena but did not explain them as consequences of the fundamental laws of physics. For almost 50 years after Kamerlingh Onnes’s discovery, theorists were unable to develop a fundamental theory of superconductivity. Finally, in 1957 such a theory was presented by the physicists John Bardeen, Leon N. Cooper, and John Robert Schrieffer of the United States; it won for them the Nobel Prize for Physics in 1972. It is now called the BCS theory in their honour, and most later theoretical work is based on it. The BCS theory also provided a foundation for an earlier model that had been introduced by the Russian physicists Lev Davidovich Landau and Vitaly Lazarevich Ginzburg (1950). This model has been useful in understanding electromagnetic properties, including the fact that any internal magnetic flux in superconductors exists only in discrete amounts (instead of in a continuous spectrum of values), an effect called the quantization of magnetic flux. This flux quantization, which had been predicted from quantum mechanical principles, was first observed experimentally in 1961.
In 1962 the British physicist Brian D. Josephson predicted that two superconducting objects placed in electric contact would display certain remarkable electromagnetic properties. These properties have since been observed in a wide variety of experiments, demonstrating quantum mechanical effects on a macroscopic scale.
The theory of superconductivity has been tested in a wide range of experiments, involving, for example, ultrasonic absorption studies, nuclear-spin phenomena, low-frequency infrared absorption, and electron-tunneling experiments. The results of these measurements have brought understanding to many of the detailed properties of various superconductors.
## Thermal properties of superconductors
Superconductivity is a startling departure from the properties of normal (i.e., nonsuperconducting) conductors of electricity. In materials that are electric conductors, some of the electrons are not bound to individual atoms but are free to move through the material; their motion constitutes an electric current. In normal conductors these so-called conduction electrons are scattered by impurities, dislocations, grain boundaries, and lattice vibrations (phonons). In a superconductor, however, there is an ordering among the conduction electrons that prevents this scattering. Consequently, electric current can flow with no resistance at all. The ordering of the electrons, called Cooper pairing, involves the momenta of the electrons rather than their positions. The energy per electron that is associated with this ordering is extremely small, typically about one thousandth of the amount by which the energy per electron changes when a chemical reaction takes place. One reason that superconductivity remained unexplained for so long is the smallness of the energy changes that accompany the transition between normal and superconducting states. In fact, many incorrect theories of superconductivity were advanced before the BCS theory was proposed. For additional details on electric conduction in metals and the effects of temperature and other influences, see the article electricity.
Hundreds of materials are known to become superconducting at low temperatures. Twenty-seven of the chemical elements, all of them metals, are superconductors in their usual crystallographic forms at low temperatures and low (atmospheric) pressure. Among these are commonly known metals such as aluminum, tin, lead, and mercury and less common ones such as rhenium, lanthanum, and protactinium. In addition, 11 chemical elements that are metals, semimetals, or semiconductors are superconductors at low temperatures and high pressures. Among these are uranium, cerium, silicon, and selenium. Bismuth and five other elements, though not superconducting in their usual crystallographic form, can be made superconducting by preparing them in a highly disordered form, which is stable at extremely low temperatures. Superconductivity is not exhibited by any of the magnetic elements chromium, manganese, iron, cobalt, or nickel.
Most of the known superconductors are alloys or compounds. It is possible for a compound to be superconducting even if the chemical elements constituting it are not; examples are disilver fluoride (Ag2F) and a compound of carbon and potassium (C8K). Some semiconducting compounds, such as tin telluride (SnTe), become superconducting if they are properly doped with impurities.
Since 1986 some compounds containing copper and oxygen (called cuprates) have been found to have extraordinarily high transition temperatures, denoted Tc. This is the temperature below which a substance is superconducting. The properties of these high-Tc compounds are different in some respects from those of the types of superconductors known prior to 1986, which will be referred to as classic superconductors in this discussion. For the most part, the high-Tc superconductors are treated explicitly toward the end of this section. In the discussion that immediately follows, the properties possessed by both kinds of superconductors will be described, with attention paid to specific differences for the high-Tc materials. A further classification problem is presented by the superconducting compounds of carbon (sometimes doped with other atoms) in which the carbon atoms are on the surface of a cluster with a spherical or spheroidal crystallographic structure. These compounds, discovered in the 1980s, are called fullerenes (if only carbon is present) or fullerides (if doped). They have superconducting transition temperatures higher than those of the classic superconductors. It is not yet known whether these compounds are fundamentally similar to the cuprate high-temperature superconductors.
## Transition temperatures
The vast majority of the known superconductors have transition temperatures that lie between 1 K and 10 K. Of the chemical elements, tungsten has the lowest transition temperature, 0.015 K, and niobium the highest, 9.2 K. The transition temperature is usually very sensitive to the presence of magnetic impurities. A few parts per million of manganese in zinc, for example, lowers the transition temperature considerably.
## Specific heat and thermal conductivity
The thermal properties of a superconductor can be compared with those of the same material at the same temperature in the normal state. (The material can be forced into the normal state at low temperature by a large enough magnetic field.)
When a small amount of heat is put into a system, some of the energy is used to increase the lattice vibrations (an amount that is the same for a system in the normal and in the superconducting state), and the remainder is used to increase the energy of the conduction electrons. The electronic specific heat (Ce) of the electrons is defined as the ratio of that portion of the heat used by the electrons to the rise in temperature of the system. The specific heat of the electrons in a superconductor varies with the absolute temperature (T ) in the normal and in the superconducting state (as shown in Figure 1). The electronic specific heat in the superconducting state (designated Ces) is smaller than in the normal state (designated Cen) at low enough temperatures, but Ces becomes larger than Cen as the transition temperature Tc is approached, at which point it drops abruptly to Cen for the classic superconductors, although the curve has a cusp shape near Tc for the high-Tc superconductors. Precise measurements have indicated that, at temperatures considerably below the transition temperature, the logarithm of the electronic specific heat is inversely proportional to the temperature. This temperature dependence, together with the principles of statistical mechanics, strongly suggests that there is a gap in the distribution of energy levels available to the electrons in a superconductor, so that a minimum energy is required for the excitation of each electron from a state below the gap to a state above the gap. Some of the high-Tc superconductors provide an additional contribution to the specific heat, which is proportional to the temperature. This behaviour indicates that there are electronic states lying at low energy; additional evidence of such states is obtained from optical properties and tunneling measurements.
The heat flow per unit area of a sample equals the product of the thermal conductivity (K) and the temperature gradient ∇T: J_Q = −K∇T, the minus sign indicating that heat always flows from a warmer to a colder region of a substance.
The thermal conductivity in the normal state (Kn) approaches the thermal conductivity in the superconducting state (Ks) as the temperature (T ) approaches the transition temperature (Tc) for all materials, whether they are pure or impure. This suggests that the energy gap (Δ) for each electron approaches zero as the temperature (T ) approaches the transition temperature (Tc). This would also account for the fact that the electronic specific heat in the superconducting state (Ces) is higher than in the normal state (Cen) near the transition temperature: as the temperature is raised toward the transition temperature (Tc), the energy gap in the superconducting state decreases, the number of thermally excited electrons increases, and this requires the absorption of heat.
## Energy gaps
As stated above, the thermal properties of superconductors indicate that there is a gap in the distribution of energy levels available to the electrons, and so a finite amount of energy, designated as delta (Δ), must be supplied to an electron to excite it. This energy is maximum (designated Δ0) at absolute zero and changes little with increase of temperature until the transition temperature is approached, where Δ decreases to zero, its value in the normal state. The BCS theory predicts an energy gap with just this type of temperature dependence.
According to the BCS theory, there is a type of electron pairing (electrons of opposite spin acting in unison) in the superconductor that is important in interpreting many superconducting phenomena. The electron pairs, called Cooper pairs, are broken up as the superconductor is heated. Each time a pair is broken, an amount of energy that is at least as much as the energy gap (Δ) must be supplied to each of the two electrons in the pair, so an energy at least twice as great (2Δ) must be supplied to the superconductor. The value of twice the energy gap at 0 K (which is 2Δ0) might be assumed to be higher when the transition temperature of the superconductor is higher. In fact, the BCS theory predicts a relation of this type—namely, that the energy supplied to the superconductor at absolute zero would be 2Δ0 = 3.53 kTc, where k is Boltzmann’s constant (1.38 × 10−23 joule per kelvin). In the high-Tc cuprate compounds, values of 2Δ0 range from approximately three to eight multiplied by kTc.
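For a concrete sense of scale, here is a small sketch (my own illustration, not from the article) that evaluates the weak-coupling estimate 2Δ0 = 3.53 kTc for niobium, whose transition temperature of 9.2 K is quoted above.

```cpp
#include <cstdio>

// BCS weak-coupling estimate of the superconducting energy gap at T = 0:
// 2 * Delta_0 = 3.53 * k * Tc, with k Boltzmann's constant.
int main() {
    const double k  = 1.38e-23;   // joule per kelvin
    const double e  = 1.60e-19;   // coulomb, used only to convert to electron volts
    const double Tc = 9.2;        // kelvin, transition temperature of niobium
    const double twoDelta0 = 3.53 * k * Tc;              // joules
    std::printf("2*Delta_0 = %.3g J = %.2f meV\n",
                twoDelta0, twoDelta0 / e * 1e3);         // roughly 2.8 meV
}
```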
The energy gap (Δ) can be measured most precisely in a tunneling experiment (a process in quantum mechanics that allows an electron to escape from a metal without acquiring the energy required along the way according to the laws of classical physics). In this experiment, a thin insulating junction is prepared between a superconductor and another metal, assumed here to be in the normal state. In this situation, electrons can quantum mechanically tunnel from the normal metal to the superconductor if they have sufficient energy. This energy can be supplied by applying a negative voltage (V) to the normal metal, with respect to the voltage of the superconductor.
Tunneling will occur if eV—the product of the electron charge, e (−1.60 × 10−19 coulomb), and the voltage—is at least as large as the energy gap Δ. The current flowing between the two sides of the junction is small up to a voltage equal to V = Δ/e, but then it rises sharply. This provides an experimental determination of the energy gap (Δ). In describing this experiment it is assumed here that the tunneling electrons must get their energy from the applied voltage rather than from thermal excitation.
## Critical field
One of the ways in which a superconductor can be forced into the normal state is by applying a magnetic field. The weakest magnetic field that will cause this transition is called the critical field (Hc) if the sample is in the form of a long, thin cylinder or ellipsoid and the field is oriented parallel to the long axis of the sample. (In other configurations the sample goes from the superconducting state into an intermediate state, in which some regions are normal and others are superconducting, and finally into the normal state.) The critical field increases with decreasing temperature. For the superconducting elements, its values (H0) at absolute zero range from 1.1 oersted for tungsten to 830 oersteds for tantalum.
These remarks about the critical field apply to ordinary (so-called type I) superconductors. In the following section the behaviour of other (type II) superconductors is examined.
## The Meissner effect
As was stated above, a type I superconductor in the form of a long, thin cylinder or ellipsoid remains superconducting at a fixed temperature as an axially oriented magnetic field is applied, provided the applied field does not exceed a critical value (Hc). Under these conditions, superconductors exclude the magnetic field from their interior, as could be predicted from the laws of electromagnetism and the fact that the superconductor has no electric resistance. A more astonishing effect occurs if the magnetic field is applied in the same way to the same type of sample at a temperature above the transition temperature and is then held at a fixed value while the sample is cooled. It is found that the sample expels the magnetic flux as it becomes superconducting. This is called the Meissner effect. Complete expulsion of the magnetic flux (a complete Meissner effect) occurs in this way for certain superconductors, called type I superconductors, but only for samples that have the described geometry. For samples of other shapes, including hollow structures, some of the magnetic flux can be trapped, producing an incomplete or partial Meissner effect.
Type II superconductors have a different magnetic behaviour. Examples of materials of this type are niobium and vanadium (the only type II superconductors among the chemical elements) and some alloys and compounds, including the high-Tc compounds. As a sample of this type, in the form of a long, thin cylinder or ellipsoid, is exposed to a decreasing magnetic field that is axially oriented with the sample, the increase of magnetization, instead of occurring suddenly at the critical field (Hc), sets in gradually. Beginning at the upper critical field (Hc2), it is completed at a lower critical field (Hc1; see Figure 2). If the sample is of some other shape, is hollow, or is inhomogeneous or strained, some magnetic flux remains trapped, and some magnetization of the sample remains after the applied field is completely removed. Known values of the upper critical field extend up to 6 × 105 oersteds, the value for the compound of lead, molybdenum, and sulfur with formula PbMo6S8.
The expulsion of magnetic flux by type I superconductors in fields below the critical field (Hc) or by type II superconductors in fields below Hc1 is never quite as complete as has been stated in this simplified presentation, because the field always penetrates into a sample for a small distance, known as the electromagnetic penetration depth. Values of the penetration depth for the superconducting elements at low temperature lie in the range from about 390 to 1,300 angstroms. As the temperature approaches the critical temperature, the penetration depth becomes extremely large.
## High-frequency electromagnetic properties
The foregoing descriptions have pertained to the behaviour of superconductors in the absence of electromagnetic fields or in the presence of steady or slowly varying fields; the properties of superconductors in the presence of high-frequency electromagnetic fields, however, have also been studied.
The energy gap in a superconductor has a direct effect on the absorption of electromagnetic radiation. At low temperatures, at which a negligible fraction of the electrons are thermally excited to states above the gap, the superconductor can absorb energy only in a quantized amount that is at least twice the gap energy (at absolute zero, 2Δ0). In the absorption process, a photon (a quantum of electromagnetic energy) is absorbed, and a Cooper pair is broken; both electrons in the pair become excited. The photon’s energy (E) is related to its frequency (ν) by the Planck relation, E = hν, in which h is Planck’s constant (6.63 × 10−34 joule second). Hence the superconductor can absorb electromagnetic energy only for frequencies at least as large as 2Δ0/h.
## Magnetic-flux quantization
The laws of quantum mechanics dictate that electrons have wave properties and that the properties of an electron can be summed up in what is called a wave function. If several wave functions are in phase (i.e., act in unison), they are said to be coherent. The theory of superconductivity indicates that there is a single, coherent, quantum mechanical wave function that determines the behaviour of all the superconducting electrons. As a consequence, a direct relationship can be shown to exist between the velocity of these electrons and the magnetic flux (Φ) enclosed within any closed path inside the superconductor. Indeed, inasmuch as the magnetic flux arises because of the motion of the electrons, the magnetic flux can be shown to be quantized; i.e., the intensity of this trapped flux can change only by units of Planck’s constant divided by twice the electron charge.
When a magnetic field enters a type II superconductor (in an applied field between the lower and upper critical fields, Hc1 and Hc2), it does so in the form of quantized fluxoids, each carrying one quantum of flux. These fluxoids tend to arrange themselves in regular patterns that have been detected by electron microscopy and by neutron diffraction. If a large enough current is passed through the superconductor, the fluxoids move. This motion leads to energy dissipation that can heat the superconductor and drive it into the normal state. The maximum current per unit area that a superconductor can carry without being forced into the normal state is called the critical current density (Jc). In making wire for superconducting high-field magnets, manufacturers try to fix the positions of the fluxoids by making the wire inhomogeneous in composition.
## Josephson currents
If two superconductors are separated by an insulating film that forms a low-resistance junction between them, it is found that Cooper pairs can tunnel from one side of the junction to the other. (This process occurs in addition to the single-particle tunneling already described.) Thus, a flow of electrons, called the Josephson current, is generated and is intimately related to the phases of the coherent quantum mechanical wave function for all the superconducting electrons on the two sides of the junction. It was predicted that several novel phenomena should be observable, and experiments have demonstrated them. These are collectively called the Josephson effect or effects.
The first of these phenomena is the passage of current through the junction in the absence of a voltage across the junction. The maximum current that can flow at zero voltage depends on the magnetic flux (Φ) passing through the junction as a result of the magnetic field generated by currents in the junction and elsewhere. The dependence of the maximum zero-voltage current on the magnetic field applied to a junction between two superconductors is shown in Figure 3.
A second type of Josephson effect is an oscillating current resulting from a relation between the voltage across the junction and the frequency (ν) of the currents associated with Cooper pairs passing through the junction. The frequency (ν) of this Josephson current is given by ν = 2eV/h, where e is the charge of the electron. Thus, the frequency increases by 4.84 × 1014 hertz (cycles per second) for each additional volt applied to the junction. This effect can be demonstrated in various ways. The voltage can be established with a source of direct-current (DC) power, for instance, and the oscillating current can be detected by the electromagnetic radiation of frequency (ν) that it generates. Another method is to expose the junction to radiation of another frequency (ν′) generated externally. It is found that a graph of the DC current versus voltage has current steps at values of the voltage corresponding to Josephson frequencies that are integral multiples (n) of the external frequency (ν = nν′); that is, V = nhν′/2e. The observation of current steps of this type has made it possible to measure h/e with far greater precision than by any other method and has therefore contributed to a knowledge of the fundamental constants of nature.
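The quoted conversion factor is easy to reproduce (a sketch using the rounded constants given earlier in the article):

```cpp
#include <cstdio>

// Josephson frequency-voltage relation: nu = 2 e V / h.
// For V = 1 volt this gives roughly the 4.84e14 hertz per volt quoted above.
int main() {
    const double e = 1.60e-19;    // electron charge, coulomb
    const double h = 6.63e-34;    // Planck's constant, joule second
    std::printf("nu/V = %.3g Hz per volt\n", 2.0 * e / h);
}
```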
The Josephson effect has been used in the invention of novel devices for extremely high-sensitivity measurements of currents, voltages, and magnetic fields.
https://home.cern/about/updates/2017/08/atlas-observes-direct-evidence-light-light-scattering

# ATLAS observes direct evidence of light-by-light scattering
A light-by-light scattering event measured in the ATLAS detector (Image: ATLAS/CERN)
Physicists from the ATLAS experiment at CERN have found the first direct evidence of high energy light-by-light scattering, a very rare process in which two photons – particles of light – interact and change direction. The result, published today in Nature Physics, confirms one of the oldest predictions of quantum electrodynamics (QED).
"This is a milestone result: the first direct evidence of light interacting with itself at high energy,” says Dan Tovey (University of Sheffield), ATLAS Physics Coordinator. “This phenomenon is impossible in classical theories of electromagnetism hence this result provides a sensitive test of our understanding of QED, the quantum theory of electromagnetism."
Direct evidence for light-by-light scattering at high energy had proven elusive for decades – until the Large Hadron Collider’s second run began in 2015. As the accelerator collided lead ions at unprecedented collision rates, obtaining evidence for light-by-light scattering became a real possibility.

“This measurement has been of great interest to the heavy-ion and high-energy physics communities for several years, as calculations from several groups showed that we might achieve a significant signal by studying lead-ion collisions in Run 2,” says Peter Steinberg (Brookhaven National Laboratory), ATLAS Heavy Ion Physics Group Convener.
Heavy-ion collisions provide a uniquely clean environment to study light-by-light scattering. As bunches of lead ions are accelerated, an enormous flux of surrounding photons is generated. When ions meet at the centre of the ATLAS detector, very few collide, yet their surrounding photons can interact and scatter off one another. These interactions are known as ‘ultra-peripheral collisions’.
Studying more than 4 billion events taken in 2015, the ATLAS collaboration found 13 candidates for light-by-light scattering. This result has a significance of 4.4 standard deviations, allowing the ATLAS collaboration to report the first direct evidence of this phenomenon at high energy.
“Finding evidence of this rare signature required the development of a sensitive new ‘trigger’ for the ATLAS detector,” says Steinberg. “The resulting signature — two photons in an otherwise empty detector — is almost the diametric opposite of the tremendously complicated events typically expected from lead nuclei collisions. The new trigger’s success in selecting these events demonstrates the power and flexibility of the system, as well as the skill and expertise of the analysis and trigger groups who designed and developed it.”
ATLAS physicists will continue to study light-by-light scattering during the upcoming LHC heavy-ion run, scheduled for 2018. More data will further improve the precision of the result and may open a new window to studies of new physics. In addition, the study of ultra-peripheral collisions should play a greater role in the LHC heavy-ion programme, as collision rates further increase in Run 3 and beyond.
https://www.physicsforums.com/threads/maximum-value-of-xy.224119/

Homework Help: Maximum value of xy
1. Mar 25, 2008
rohanprabhu
[SOLVED] Maximum value of xy
1. The problem statement, all variables and given/known data
Q] Given that $x \in [1, 2]$ and $y \in [-1, 1]$ and $x + y = 0$, find the maximum value of $xy$
3. The attempt at a solution
I have no idea at all. Does this have something to do with the maxima/minima. In that case, i can get that:
$$\frac{dx}{dy} = xdy + ydx$$
also,
$$dx = -dy$$
hence, for the condition of $f'(x) = 0$,
$$xdy + ydx = 0$$
$$xdy = - ydx$$
$$\frac{dy}{dx} = \frac{-y}{x}$$
i don't even know what i'm doing till now.
2. Mar 25, 2008
nicksauce
If f = xy, and x + y = 0, then f = -x^2. I think it should be fairly straight forward to find the maximum value of this function. Of course if this is not in the region for which the function is defined, then you just need to check at the boundaries.
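Following that suggestion, here is a brute-force sketch (my own check, with an arbitrary step size) of f = xy on the feasible set of the original question, x in [1, 2], y in [-1, 1] and y = -x:

```cpp
#include <cstdio>

// Brute-force scan of f = x*y subject to x in [1,2], y in [-1,1], x + y = 0.
int main() {
    double bestX = 0.0, bestF = 0.0;
    bool found = false;
    for (double x = 1.0; x <= 2.0; x += 1e-4) {
        double y = -x;                        // constraint x + y = 0
        if (y < -1.0 || y > 1.0) continue;    // y must stay in [-1, 1]
        double f = x * y;                     // equals -x*x on the constraint
        if (!found || f > bestF) { found = true; bestF = f; bestX = x; }
    }
    if (found)
        std::printf("max of xy on the feasible set: %.4f at x = %.4f\n", bestF, bestX);
}
```

With the stated intervals only x = 1, y = -1 survives the constraints, so the scan reports -1 at x = 1 rather than the unconstrained stationary point at x = 0.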
3. Mar 25, 2008
gamesguru
This looks like a problem too easy for Lagrange multipliers, so I'll keep it simple. In case you don't know, $u$ is defined to be $xy$, so that's what we want to maximize.
$$x+y=0 \Rightarrow y=-x$$
$$u=xy=-x\times x=-x^2$$
Take the derivative and set to zero,
$$\frac{du}{dx}=0=-2x\Rightarrow x=0 \Rightarrow y=0$$
This makes sense because it's going to be the product of a negative number and its absolute value. So the largest is going to be at zero.
4. Mar 25, 2008
HallsofIvy
In fact you don't need to differentiate at all. Once you realize that u(x) = -x^2, it is clear that u is negative for all x except x = 0.
5. Mar 25, 2008
mhill
Perhaps it may be solved by Lagrange multiplier, you should obtain the minimum of
$$xy-{\lambda}(x+y)$$
differentiating with respect to x, y and lambda we get the equations
$$y-{\lambda}=0$$
$$x-{\lambda}=0$$
$$x+y=0$$
it seems that only a minimum at x=y=0 exists , no maximum.
6. Mar 25, 2008
rohanprabhu
thanks to everybody.. i got it now. I really feel stupid about this problem. I have no idea about Lagrange multiplier, but calculating f(x) is something i should've done... thanks to everyone again.
https://fractalforums.org/programming/11/round-off-errors-in-polynomial-evaluation/3460/new
### Author Topic: Round-off errors in polynomial evaluation
#### marcm200
• 3d
• Posts: 953
##### Re: Round-off errors in polynomial evaluation
« Reply #15 on: May 07, 2020, 09:37:29 PM »
It seems the outside of the M-set is recursively enumerable but not recursive, which means you can verify algorithmically that a point that is outside is in fact outside, but you can't do that for any point inside.
I don't understand - the period-2 bulb is known in area and position - and can be considered algorithmically verified. And all the true shape Msets laserblaster and I are computing right now via interval arithmetics and cell-mapping are verified as well as the seed intervals enter a cycle for the interior. Given enough time and computer power, in principle every small square can be verified whether it lies fully in the interior of a hyperbolic component or fully outside the Mset (not allowed to touch the boundary itself though).
Or is there a definition of computability that collides with what I naively assume computability means? Or maybe there is more "interior" than just the hyperbolic components, wherever that then lies.
#### gerrit
• 3f
• Posts: 2402
##### Re: Round-off errors in polynomial evaluation
« Reply #16 on: May 07, 2020, 11:04:38 PM »
I don't understand - the period-2 bulb is known in area and position - and can be considered algorithmically verified. And all the true shape Msets laserblaster and I are computing right now via interval arithmetics and cell-mapping are verified as well as the seed intervals enter a cycle for the interior. Given enough time and computer power, in principle every small square can be verified whether it lies fully in the interior of a hyperbolic component or fully outside the Mset (not allowed to touch the boundary itself though).
Or is there a definition of computability that collides with what I naively assume computability means? Or maybe there is more "interior" than just the hyperbolic components, wherever that then lies.
What about a square that's both in and out?
#### xenodreambuie
• Fractal Friar
• Posts: 117
##### Re: Round-off errors in polynomial evaluation
« Reply #17 on: May 07, 2020, 11:29:21 PM »
It would make sense if Penrose meant points in the set, rather than interior points.
#### gerrit
• 3f
• Posts: 2402
##### Re: Round-off errors in polynomial evaluation
« Reply #18 on: May 08, 2020, 12:04:44 AM »
It would make sense if Penrose meant points in the set, rather than interior points.
Yes.
#### marcm200
• 3d
• Posts: 953
##### Re: Round-off errors in polynomial evaluation
« Reply #19 on: May 08, 2020, 08:31:27 AM »
@gerrit: What about the following: A whole square that straddles the Mset (so not just touching). Run a thread with the CM/IA algorithm for 5 sec for that square. Then divide the initial square in 4 identical small ones, and let now 5 threads run for another 5 seconds. Then divide every square again and go on. As the inital square straddles truely, at one finite point a (very small) square is fully inside the Mset and one is fully outside, so the algorithm knows the initial square had both interior and exterior and can stop. However that would leave the case where the initial square touches the Mset.
Some points on the x-axis are verified and so are all the boundary points of the period 1-4 as explicit formulas exist and can be symbolically solved.
But maybe the notion is "there's (not yet) an algorithm to judge a general point" - and that reminds me of the halting problem in computer science: No general algorithm exists, but there might very well be one that could prove e.g. that any C++ compiler with no more than 1000 lines of code halts on every input.
#### gerrit
• 3f
• Posts: 2402
##### Re: Round-off errors in polynomial evaluation
« Reply #20 on: May 08, 2020, 06:01:23 PM »
But maybe the notion is "there's (not yet) an algorithm to judge a general point" - and that reminds me of the halting problem in computer science: No general agorithm exists, but there might very well be one that could prove e.g. that any C++ compiler with no more than 1000 lines of code halts on every input.
Yes, the conjecture is "there is no halting algorithm that you input a single c and it tells you in M-set or not". c is assumed "computable" meaning there is an algorithm that computes digit after digit of c. For if c itself is uncomputable (like Chaitin's number) it's trivial. The discussion in Penrose's book is in context of Goedel theorem and Turing halting problem. So the idea is there is no better way than to iterate and hope it either converges or diverges but if neither it may stay bounded forever or not, no way to know.
Of course there is a better way, using human creativity. As trivial example if c is in main cardioid there is a better way than to iterate forever and give up: just prove it's inside the cardioid which has a simple computable shape. For pretty much any non-escaping orbit you compute you can figure out in which cardioid or pseudo-circle blob it is and then prove it's inside by solving a polynomial equation which is computable. So the non-computable points must be on the boundary. Penrose can't give an example of a non computable point. At least that's my summary of the book chapter, I hope I got it all right; the chapter is written as "computability for dummies" so I hope I'm not worse than dumb.
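For reference, the "simple computable shape" checks mentioned here are standard closed-form formulas (well known, not specific to this thread); a minimal sketch:

```cpp
#include <cstdio>

// Closed-form membership tests for the two largest hyperbolic components of
// the Mandelbrot set: the main cardioid and the period-2 disk.  A point that
// passes either test is provably inside M without any iteration.
bool inMainCardioid(double x, double y) {
    double q = (x - 0.25) * (x - 0.25) + y * y;
    return q * (q + (x - 0.25)) <= 0.25 * y * y;
}

bool inPeriod2Disk(double x, double y) {
    return (x + 1.0) * (x + 1.0) + y * y <= 1.0 / 16.0;
}

int main() {
    double cx = -0.1, cy = 0.2;   // arbitrary sample point c = cx + i*cy
    std::printf("main cardioid: %d, period-2 disk: %d\n",
                inMainCardioid(cx, cy), inPeriod2Disk(cx, cy));
}
```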
The paper Claude pointed at defines "computability" in a more productive way AFAICU and seems to show M-set is computable in the sense you can make more and more accurate pictures of it and you can actually give guarantees on accuracy. But you already know that and are doing it.
Would be nice to come up with an algorithm (like an infinite series) that calculates a c such that no-one can figure out if it's in M or not. Proving uncomputability is an open problem so that would be too much to ask. Something like the first transcendental numbers that were proven; they were defined with an infinite series constructed explicitly so you can prove transcendentality.
Maybe pauldelbrot can come up with such a thing, at least I've never been able to ask a math question here that he could not answer
#### hobold
• Fractal Frogurt
• Posts: 419
##### Re: Round-off errors in polynomial evaluation
« Reply #21 on: May 08, 2020, 09:39:45 PM »
Random tidbit of useless information: the definition of "computability" is older than data processing machines. It used to mean a somewhat abstract thing. Then came computer science and overloaded the term with a more practical definition. It is possible to find a whole range of scientific papers with definitions inbetween these two ends (depending on what kind of abstract or concrete machine a paper is working with).
The meaning will shift again if and when quantum computers become relevant.
#### gerrit
• 3f
• Posts: 2402
##### Re: Round-off errors in polynomial evaluation
« Reply #22 on: May 08, 2020, 11:59:20 PM »
Random tidbit of useless information: the definition of "computability" is older than data processing machines. It used to mean a somewhat abstract thing. Then came computer science and overloaded the term with a more practical definition. It is possible to find a whole range of scientific papers with definitions inbetween these two ends (depending on what kind of abstract or concrete machine a paper is working with).
The meaning will shift again if and when quantum computers become relevant.
Good point, I thought computable always means Turing computable which includes quantum computers. Some more random reading in the Penrose book: if x is a computable real, x==1? is uncomputable (undecidable) in general. For if your algorithm has computed 0.99..99 with a trillion 9's, the trillion-and-first digit could be 8. So in this sense the unit circle is also not computable. Also in a footnote he writes someone told him non-computability of M-set was actually proven but no reference given.
I guess we're digressing far from rounding errors in polynomial evaluation...
##### Re: Round-off errors in polynomial evaluation
« Reply #23 on: July 04, 2020, 07:39:49 AM »
Would be nice to come up with an algorithm (like an infinite series) that calculates a c such that no-one can figure out if it's in M or not. Proving uncomputability is an open problem, so that would be too much to ask. Something like the first transcendental numbers that were proven: they were defined with an explicitly constructed infinite series so you can prove transcendence.
https://en.wikibooks.org/wiki/Fractals/Mathematics/Numerical#Test
If one has a year to check one point, it is practically uncomputable
#### gerrit
##### Re: Round-off errors in polynomial evaluation
« Reply #24 on: July 04, 2020, 10:06:33 PM »
If one has a year to check one point, it is practically uncomputable
Get a faster machine and do it in 1 hr. There is no such thing as "practically uncomputable".
#### 3DickUlus
##### Re: Round-off errors in polynomial evaluation
« Reply #25 on: July 04, 2020, 11:22:42 PM »
Impractical to compute, but not un-computable.
#### marcm200
##### Re: Round-off errors in polynomial evaluation
« Reply #26 on: April 12, 2021, 11:06:17 AM »
Computing some Newton maps for polynomials for the period-5 (and divisor) hyperbolic centers of the quadratic M-set (degree 16, evaluated in iterative form), it's interesting how qualitatively different the outcome is, given a specific number type.
Left is double precision; there are hardly any (green) points Newton-converging to the hyperbolic center (red square in the image's center; the period-5 cardioid at -1.6254137262130523567 is overlaid onto the Newton map in blue). Those splattered points look like numerical artefacts.
Middle is long double and it now looks like there might be circular structures having a more or less complicated boundary (spikes). The splattered green gets more dense, so might not be an artefact after all.
And __float128 suggests that almost the entire image will finally be inside an attraction basin for the hc (and probably is part of the immediate basin). The only remaining structure is the one on the far left. But maybe that will vanish too later.
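For readers who want to reproduce the precision effect without the full Newton-map setup, here is a minimal sketch of my own: it evaluates the period-5 polynomial p(c) = f_c^5(0), with f_c(z) = z^2 + c, in iterative form at the center value quoted above, in three precisions. The exact digits printed are not the point; the point is how far each number type drifts from the (mathematically) tiny true value.

```python
import numpy as np
from mpmath import mp, mpf

def p5(c):
    z = c * 0              # zero in the same precision/type as c
    for _ in range(5):
        z = z * z + c      # iterative evaluation of the degree-16 polynomial
    return z

c = -1.6254137262130523567   # period-5 center value quoted in the post above

print("float32  :", p5(np.float32(c)))
print("float64  :", p5(np.float64(c)))
mp.dps = 50
print("50 digits:", p5(mpf("-1.6254137262130523567")))
```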
https://brilliant.org/problems/how-good-does-omega-need-to-be/ | # How good does Omega need to be?
Logic Level 1
If the payoff for Newcomb's problem is as listed above, then what is the probability $$p$$ of Omega guessing correctly at which the expected payoff for choosing only box $$A$$ is equal to the expected payoff for choosing both boxes?
https://homework.cpm.org/category/CC/textbook/ccg/chapter/4/lesson/4.2.2/problem/4-73 | ### Home > CCG > Chapter 4 > Lesson 4.2.2 > Problem4-73
4-73.
Write the first four terms of each of the following sequences.
1. $a_n=3·5^{n-1}$
Substitute the number of the term for $n$. For example, replace the $n$ with $1$ to find the first term.
1. $a_1=10$, $a_{n+1}=-5a_n$
The first term is given. Use $(n=1)$ in the second part of the formula to find the second term.
$a_{(1+1)}=-5(a_1)\\\quad\ \ a_2 = -5(10)\\\quad \ \ a_2 = -50$
Use the answer for $a_2$ to find the third and fourth terms.
$10$, $-50$, $250$, $-1250$
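Both answers are easy to double-check programmatically; a small sketch (my own, not part of the CPM materials):

```python
# a_n = 3 * 5^(n-1): generate terms directly from the explicit formula.
explicit = [3 * 5 ** (n - 1) for n in range(1, 5)]

# a_1 = 10, a_(n+1) = -5 * a_n: generate terms from the recursive formula.
recursive = [10]
for _ in range(3):
    recursive.append(-5 * recursive[-1])

print(explicit)   # [3, 15, 75, 375]
print(recursive)  # [10, -50, 250, -1250]
```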
http://physics.stackexchange.com/questions/55863/is-the-total-pressure-coefficient-always-1-in-incompressible-flow | # Is the total pressure coefficient always 1 in incompressible flow?
I have to do some calculations to get the drag from an experiment with a wake rake. In the equation I have to enter the total pressure coefficient $C_{pt}$, but in my calculations it seems to always be equal to +1. Is this correct?
## 1 Answer
To get the drag from a wake survey, use the following formula from Fundamentals of Aerodynamics by John D. Anderson:
$D' = \int_{-h}^h\rho u\left(U_\infty - u\right)dy$
I'm assuming you know $U_\infty$ from a freestream Pitot tube. You can find $u$ from the wake rake using Bernoulli's equation. The equation above can be derived using a control volume of the entire test section.
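Since wake-rake data is discrete, the integral is usually evaluated numerically; here is a rough sketch of that step (the density, freestream speed, and velocity profile below are placeholders, not values from the question):

```python
import numpy as np

rho = 1.225                                  # air density, kg/m^3 (incompressible assumption)
U_inf = 30.0                                 # freestream speed from the Pitot tube, m/s
y = np.linspace(-0.05, 0.05, 101)            # rake positions across the wake, m
u = U_inf - 5.0 * np.exp(-(y / 0.01) ** 2)   # wake velocity profile (made-up data)

# D' = integral of rho * u * (U_inf - u) dy  (drag per unit span)
D_prime = np.trapz(rho * u * (U_inf - u), y)
print(f"drag per unit span: {D_prime:.3f} N/m")
```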
Thanks. I already found the solution. This was exactly what I did. – Aaron de Windt May 9 '13 at 10:58
https://huijzer.xyz/posts/comparing-means-and-sds/ | HUIJZER.XYZ
# Comparing means and SDs
2020-06-27
When comparing different papers, it might be that the papers have numbers about the same thing, but that the numbers are on different scales. For example, many different questionnaires exist that measure the same constructs; the NEO-PI and the BFI, for instance, both measure the Big Five personality traits. Say we want to compare reported means and standard deviations (SDs) for these questionnaires, which both use a Likert scale.
In this post, the equations to rescale reported means and standard deviations (SDs) to another scale are derived. Before that, an example is worked through to get an intuition for the problem.
## Preliminaries
In this post, I stick to the set theory convention of denoting sets by uppercase letters. So, $$|A|$$ denotes the number of items in the set $$A$$ and $$|a|$$ denotes the absolute value of the number $$a$$. To say that predicate $$P_x$$ holds for all elements in $$X$$, I use the notation $$\forall_t[t \in T : P_x]$$, for example: if $$X$$ contains all integers above 3, then we can write $$\forall_x[x \in X : 3 < x]$$.
For some study, let the set of participants and questions be respectively denoted by $$P$$ and $$Q$$ with $$|P| = n$$ and $$|Q| = v$$. Let the set of responses be denoted by $$R$$ with $$|R| = n \cdot v$$ and let $$T$$ denote the set of the summed scores per participant, that is, $$T = \{ t_1, t_2, \ldots, t_n \}$$, see the table below.
| $$P$$ | $$q_1$$ | $$q_2$$ | ... | $$q_v$$ | Total |
| --- | --- | --- | --- | --- | --- |
| $$p_1$$ | $$r_{11}$$ | $$r_{12}$$ | ... | $$r_{1v}$$ | $$t_1 = \sum_{q \in Q} \: r_{1q}$$ |
| $$p_2$$ | $$r_{21}$$ | $$r_{22}$$ | ... | $$r_{2v}$$ | $$t_2 = \sum_{q \in Q} \: r_{2q}$$ |
| ... | ... | ... | ... | ... | ... |
| $$p_n$$ | $$r_{n1}$$ | $$r_{n2}$$ | ... | $$r_{nv}$$ | $$t_n = \sum_{q \in Q} \: r_{nq}$$ |
Let $$m$$ and $$s$$ denote respectively the reported mean and sample SD. We assume that the papers calculated the mean and SD with
$m = mean(T) = \frac{\sum T}{|P|}$
and
$s = sd(T) = \sqrt{Var(T)} = \sqrt{\frac{1}{n - 1} \sum_{p \in P} (t_p - m)^2}.$
Note here that Bessel's correction is applied, because the factor is $$\frac{1}{n - 1}$$ instead of $$\frac{1}{n}$$. This seems to be the default way to calculate the standard deviation.
## An example with numbers
Let's consider one study consisting of only one question and three participants. Each response $$u \in U$$ is an integer ($$\mathbb{Z}$$) in the range [1, 3], that is, $$\forall_u[u \in U : u \in \mathbb{Z} \land 1 \leq u \leq 3]$$. So, the lower and upper bound of $$u$$ are respectively $$u_l = 1$$ and $$u_u = 3$$.
| $$P$$ | $$U$$ | Total |
| --- | --- | --- |
| $$p_1$$ | 3 | 3 |
| $$p_2$$ | 1 | 1 |
| $$p_3$$ | 2 | 2 |
We can rescale these numbers to a normalized response $$v \in V$$ in the range [0, 1] by applying min-max normalization,
$v = \frac{u - u_l}{u_u - u_l} = \frac{u - 1}{3 - 1} = \frac{u - 1}{2}.$
The rescaled responses become
| $$P$$ | $$V$$ | Total |
| --- | --- | --- |
| $$p_1$$ | 1 | 1 |
| $$p_2$$ | 0 | 0 |
| $$p_3$$ | $$\frac{1}{2}$$ | $$\frac{1}{2}$$ |
Now, suppose that the study would have used a scale in the range [0, 5]. Let these responses be denoted by $$w \in W$$. We can rescale the normalized responses $$v \in V$$ in the range [0, 1] up to $$w \in W$$ in the range [0, 5] with
$w = v \cdot (w_u - w_l) + w_l = v \cdot (5 - 0) + 0 = 5v.$
This results in
| $$P$$ | $$W$$ | Total |
| --- | --- | --- |
| $$p_1$$ | 5 | 5 |
| $$p_2$$ | 0 | 0 |
| $$p_3$$ | $$2 \frac{1}{2}$$ | $$2 \frac{1}{2}$$ |
Since we know all the responses, we can calculate the means and standard deviations:
| responses | mean | sd |
| --- | --- | --- |
| $$U$$ | $$2$$ | $$1$$ |
| $$V$$ | $$\frac{1}{2}$$ | $$\frac{1}{2}$$ |
| $$W$$ | $$2 \frac{1}{2}$$ | $$2 \frac{1}{2}$$ |
Now, suppose that $$U$$ was part of a study reported in a paper and the scale of $$W$$ was the scale we have for our own study. Of course, a typical study doesn't give us all responses $$u \in U$$. Instead, we only have $$mean(U)$$ and $$sd(U)$$ and want to know $$mean(W)$$ and $$sd(W)$$. This can be done by using the equations derived below. We could first normalize the result, by Eq. (14),
$mean(V) = \frac{mean(U) - u_l}{u_u - u_l} = \frac{mean(U) - 1}{3 - 1} = \frac{2 - 1}{2} = \frac{1}{2}$
and, by Eq. (15),
$sd(V) = \frac{sd(U)}{u_u - u_l} = \frac{sd(U)}{3 - 1} = \frac{1}{2}.$
Next, we can rescale this to the range of $$W$$. By Eq. (16),
$mean(W) = (w_u - w_l) \cdot mean(V) + w_l = (5 - 0) \cdot mean(V) + 0 = 5 \cdot \frac{1}{2} = 2 \frac{1}{2}$
and, by Eq. (17),
$sd(W) = (w_u - w_l) \cdot sd(V) = (5 - 0) \cdot \frac{1}{2} = 2 \frac{1}{2}.$
We could also go from $$U$$ to $$W$$ in one step. By Eq. (18),
$mean(W) = (w_u - w_l) \cdot \frac{mean(U) - u_l}{u_u - u_l} + w_l = (5 - 0) \cdot \frac{2 - 1}{3 - 1} + 0 = 2 \frac{1}{2}.$
and, by Eq. (19),
$sd(W) = (w_u - w_l) \cdot \frac{sd(U)}{u_u - u_l} = (5 - 0) \cdot \frac{1}{3 - 1} = 2 \frac{1}{2}.$
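The conversion used above is easy to wrap into a pair of helper functions; this is a sketch of mine (the function names are not from any package) that reproduces the numbers in this example:

```python
def rescale_mean(m, old_lo, old_hi, new_lo, new_hi):
    """Rescale a reported mean from the range [old_lo, old_hi] to [new_lo, new_hi]."""
    return (new_hi - new_lo) * (m - old_lo) / (old_hi - old_lo) + new_lo

def rescale_sd(s, old_lo, old_hi, new_lo, new_hi):
    """Rescale a reported SD; only the widths of the two ranges matter."""
    return (new_hi - new_lo) * s / (old_hi - old_lo)

print(rescale_mean(2, 1, 3, 0, 5))  # 2.5, i.e. mean(W)
print(rescale_sd(1, 1, 3, 0, 5))    # 2.5, i.e. sd(W)
```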
## Linear transformations
Consider a random variable $$X$$ with a finite mean and variance, and some constants $$a$$ and $$b$$. Before we can derive the transformations, we need some equations to be able to move $$a$$ and $$b$$ out of $$mean(aX + b)$$ and $$sd(aX + b)$$.
For the mean, the transformation is quite straightforward,
\begin{aligned} mean(aX + b) &= \frac{\sum_{i=1}^{|X|} (ax_i + b)}{|X|} \\ &= \frac{\sum_{i=1}^{|X|} (ax_i) + |X|b}{|X|} \\ &= \frac{\sum_{i=1}^{|X|} (ax_i)}{|X|} + b \\ &= \frac{a \sum_{i=1}^{|X|}(x_i)}{|X|} + b \\ &= a \cdot \frac{\sum_{i=1}^{|X|}(x_i)}{|X|} + b \\ &= a \cdot mean(x) + b. \end{aligned}
Note that the position of constant $$b$$ makes intuitive sense: for example, if you add a constant $$b$$ to all the elements of a sample, then the mean will move by $$b$$. To scale the standard deviation, we can use the equation for a linear transformation of the variance,
$Var(aX + b) = a^2 \cdot Var(X).$
We can use this to derive that
$sd(aX + b) = \sqrt{Var(aX + b)} = \sqrt{a^2 \cdot Var(X)} = |a| \cdot sd(X).$
## Transformations
Next, we derive the equations for the transformations. Let $$l$$ and $$u$$ be respectively the lower and upper bound for the Likert scale over all the answers; specifically, $$\forall_t [t \in T : l \leq t \leq u]$$. Let $$k_l$$ and $$k_u$$ be respectively the lower and upper bound for the Likert scale per answer; specifically, $$\forall_r [ r \in R : k_l \leq r \leq k_u]$$. Now, for the normalized mean $$mean(T')$$,
$mean(T') = mean \left( \frac{T - k_l}{k_u - k_l} \right) = \frac{mean(T - k_l)}{k_u - k_l} = \frac{mean(T) - k_l}{k_u - k_l}$
and for the normalized SD $$sd(T')$$,
$sd(T') = sd \left( \frac{T - k_l}{k_u - k_l} \right) = \frac{sd(T - k_l)}{|k_u - k_l|} = \frac{sd(T)}{k_u - k_l}$
where $$|k_u - k_l| = k_u - k_l$$ since we know that both are positive and $$k_l < k_u$$.
To change these normalized scores back to another scale in the range $$[g_l, g_u]$$, we can use
$mean(T'') = mean((g_u - g_l) \cdot T' + g_l) = (g_u - g_l) \cdot mean(T') + g_l$
and
$sd(T'') = sd((g_u - g_l) \cdot T' + g_l) = (g_u - g_l) \cdot sd(T')$
We can also transform the mean and SD into one step from the range $$[k_l, k_u]$$ to $$[g_l, g_u]$$ with
\begin{aligned} mean(T'') &= (g_u - g_l) \cdot mean(T') + g_l \\ &= (g_u - g_l) \cdot \frac{mean(T) - k_l}{k_u - k_l} + g_l \end{aligned}
and
\begin{aligned} sd(T'') &= (g_u - g_l) \cdot sd(T') \\ &= (g_u - g_l) \cdot \frac{sd(T)}{k_u - k_l} \end{aligned}
## References
Hogg, R. V., McKean, J., & Craig, A. T. (2018). Introduction to mathematical statistics. Pearson Education.
http://crypto.stackexchange.com/users/4031/alt?tab=activity | alt
- May30 comment on "How to test if a number is a primitive root?": You will see in that reference that often a choice of $p$ is made so that the factorization of $p-1$ is already known. Primality testing is a separate issue, but it is well-studied. In practice, the Miller-Rabin primality test performs well.
- May30 comment on "How to test if a number is a primitive root?": I am sure there is some software to do this already. If $p$ is prime, $p-1$ cannot be prime since $2 | p-1$, but $(p-1)/2$ may be prime (although this is not that likely; 'safe' primes do not have small prime factors, each should be roughly the same size). Another source: cacr.uwaterloo.ca/~dstinson/papers/cs877s10.ps
- May30 comment on "How to test if a number is a primitive root?": I could easily write a program for this, the question is whether it makes sense for the bit-length of $p$ that you are considering. If it is 32 bits, for example, then no problem. Any larger than that and I cannot guarantee anything: computing the prime factorization of $p-1$ is expensive (for 64 bit $p$, it costs 2^32 work, doable but slow if you want many generators). edit: doing this now.
- May30 answered "How to test if a number is a primitive root?"
- May28 answered "University for Crypto grad study"
- May27 awarded Supporter
- May27 answered "Recommended skills for a job in cryptology"
- May13 awarded Editor
- May13 revised "Now that quantum computers have been out for a while, has RSA been cracked?" (facts check on recommended key sizes)
- May13 suggested approved edit on "Now that quantum computers have been out for a while, has RSA been cracked?"
- May10 answered "Now that quantum computers have been out for a while, has RSA been cracked?"
- Oct12 awarded Teacher
- Oct10 answered "Cryptanalysis of S-DES - Equations"
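The primitive-root test referred to in those comments can be sketched as follows; this is my own illustration, not code from the linked answers. It uses the standard criterion that $g$ is a primitive root modulo a prime $p$ exactly when $g^{(p-1)/q} \not\equiv 1 \pmod p$ for every prime factor $q$ of $p-1$, which is why knowing the factorization of $p-1$ is the expensive part:

```python
def prime_factors(n):
    """Trial-division factorization; fine for small n, far too slow for 64-bit p - 1."""
    factors, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

def is_primitive_root(g, p):
    """Assumes p is prime: check g^((p-1)/q) != 1 (mod p) for every prime q dividing p-1."""
    return all(pow(g, (p - 1) // q, p) != 1 for q in prime_factors(p - 1))

print([g for g in range(1, 11) if is_primitive_root(g, 11)])   # [2, 6, 7, 8]
```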
http://math.stackexchange.com/questions/195170/question-about-gamma-distribution | For $X$ a gamma random variable with parameters $\alpha >2$ and $\beta> 0$
a) Prove that the mean of $\dfrac1X$ is $\dfrac{\beta}{\alpha-1}$.
b) Prove that the variance of $\dfrac1X$ is $\dfrac{\beta^2}{(\alpha-1)^2 (\alpha - 2)}$.
What did you try? Can you write a density? Which integral do have to compute? – Davide Giraudo Sep 13 '12 at 12:57
As a digression, if $X$ follows a gamma distribution, its reciprocal $X^{-1}$ follows an inverse gamma distribution.
By the definition of the expectation, since $X$ is non-negative: $$\mathbb{E}\left(\frac{1}{X}\right) = \int_0^\infty \frac{1}{x} f_X(x) \mathrm{d}x$$ Recall that $f_X(x) = C_{\alpha, \beta} x^{\alpha-1} \mathrm{e}^{-x \beta}$, for some normalization constant $C_{\alpha, \beta}$. Use it in the integral above: $$\mathbb{E}\left(\frac{1}{X}\right) = C_{\alpha, \beta} \int_0^\infty \frac{1}{x} \cdot x^{\alpha-1} \mathrm{e}^{-x \beta} \mathrm{d}x = C_{\alpha, \beta} \int_0^\infty x^{\alpha-2} \mathrm{e}^{-x \beta} \mathrm{d}x$$ The latter integral is the Euler integral of the second kind. With simplification you should arrive at the requested expression for item a).
For the item b), note that $$\mathbb{Var}\left(\frac{1}{X}\right) = \mathbb{E}\left(\left(\frac{1}{X}\right)^2\right) - \mathbb{E}\left(\frac{1}{X}\right)^2$$ and repeat calculations above: $$\mathbb{E}\left(\left(\frac{1}{X}\right)^2\right) = C_{\alpha, \beta} \int_0^\infty x^{\alpha-3} \exp(-x \beta) \mathrm{d}x$$
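A quick Monte Carlo sanity check of both claims (my own sketch; note that with the density written as $C_{\alpha,\beta}\, x^{\alpha-1} e^{-\beta x}$, $\beta$ is a rate, so the scale parameter passed to the sampler is $1/\beta$):

```python
import numpy as np

alpha, beta = 5.0, 2.0
rng = np.random.default_rng(0)
x = rng.gamma(shape=alpha, scale=1.0 / beta, size=2_000_000)

print(np.mean(1 / x), beta / (alpha - 1))                          # both ~ 0.5
print(np.var(1 / x), beta**2 / ((alpha - 1) ** 2 * (alpha - 2)))   # both ~ 0.0833
```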
http://www.ck12.org/arithmetic/Fraction-and-Mixed-Number-Comparison/lesson/Fraction-and-Mixed-Number-Comparison-MSM6/ | <img src="https://d5nxst8fruw4z.cloudfront.net/atrk.gif?account=iA1Pi1a8Dy00ym" style="display:none" height="1" width="1" alt="" />
# Fraction and Mixed Number Comparison
## Use <, > and/or = to compare fractions and mixed numbers.
Keith and his sister were assigned the task of cleaning up after a party. Keith took all of the leftover tuna sandwiches and his sister took all of the left over ham sandwiches.
Keith has of tuna sandwiches.
His sister has of ham sandwiches. Who has more sandwiches?
In this concept, you will learn how to compare and order improper fractions and mixed numbers.
### Comparing Improper Fractions and Mixed Numbers
An improper fraction is a fraction where the numerator is larger than the denominator.
A mixed number is composed of a whole number and a fraction.
To compare a mixed number and an improper fraction, first make sure that they are in the same form. Convert the improper fraction to a mixed number or the mixed number to an improper fraction, then compare.
Convert into a mixed number. Divide 15 by 4 and write the quotient as a whole number and a fraction.
Compare the numbers.
is greater than .
If the whole number is the same, compare the fractions. You may have to convert the fractions using the lowest common denominator.
You can order improper fractions and mixed numbers in the same way. Convert them all to the same form and then write them in order.
Order these fractions from least to greatest.
First, change the fractions so that they are all in the same form. Let’s change them all to mixed numbers. Simplify if you can.
Now you can write them in order from least to greatest.
### Examples
#### Example 1
Earlier, you were given a problem about Keith and the sandwiches.
Keith has tuna sandwiches and his sister has ham sandwiches. Compare the fractions to see who has more sandwiches.
First, convert the improper fraction to a mixed number.
Then, compare the two quantities.
Keith has more sandwiches.
For the following examples, compare the fractions.
#### Example 2
First, convert the improper fraction to a mixed number.
Compare the numbers.
is greater than .
#### Example 3
First, change the improper fraction to a mixed number
Then, compare the numbers.
is greater than .
#### Example 4
Both fractions are improper. Let’s try comparing the fractions using the lowest common denominator of 3 and 5. The LCD is 15.
First, find the equivalent fraction for each with the denominator of 15.
Then, compare the fractions.
is greater than .
#### Example 5
First, convert the mixed fraction to an improper fraction. Multiply the whole number by the denominator and add the numerator. Write it as a fraction over 4.
Then, compare the fractions.
is equal to .
### Review
Compare each set of values using <, > or =.
1.
2.
To see the Review answers, open this PDF file and look for section 5.16.
### Vocabulary Language: English
Equivalent
Equivalent means equal in value or meaning.
improper fraction
An improper fraction is a fraction in which the absolute value of the numerator is greater than the absolute value of the denominator.
http://www.irohabook.com/logarithm-properties | # Logarithm Properties – Product & Quotient Rule
The most important logarithm properties are product and quotient rules. Many logarithm formulas are derived from these rules.
## Formula
(Product Rule) $\log_a xy = \log_a x + \log_a y$
(Quotient Rule) $\log_a \dfrac{ x }{ y } = \log_a x - \log_a y$
In particular
(Power rule) $\log_a x^n = n \log_a x$
The logarithm of the multiplication of real values is the total of the logarithms of the values.
And the logarithm of the division of real values is the difference of the logarithms of the values.
## Proof
Reviewing exponent expressions helps us to prove the above.
$a^m \times a^n = a^{ m + n }$
Let $x$ be $a^m$ and $y$ be $a^n$.
$x \times y = a^{ m + n }$
And then
$\log_a ( x \times y ) = m + n$
$\log_a xy = \log_a x + \log_a y$
So the rules of logarithms are essentially those of exponentials.
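A quick numerical spot-check of the three rules (my own sketch; any base and positive arguments work):

```python
import math

a, x, y, n = 2.0, 8.0, 5.0, 3
log_a = lambda v: math.log(v, a)   # logarithm to base a

print(log_a(x * y), log_a(x) + log_a(y))   # product rule
print(log_a(x / y), log_a(x) - log_a(y))   # quotient rule
print(log_a(x ** n), n * log_a(x))         # power rule
```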
http://mathoverflow.net/questions/40326/what-is-the-right-notion-of-equivariant-cech-cohomology | # What is the right notion of equivariant Cech cohomology?
What is the right definition of equivariant Cech cohomology is so that given a $G$-space $X$, $H^1_G(X;H)$ classifies $G$-equivariant principal $H$-bundles on $X$?
H is abelian, right? If H is not abelian, you will fail to construct the long exact sequence past the first cohomology group (cf. Giraud's Cohomologie non-abelienne). – Harry Gindi Sep 28 '10 at 15:08
At this point, it really makes most sense to define the cohomology of the orbifold X/G. That orbifold is best encoded by the groupoid whose objects are X and whose arrows are XxG. In that way, you avoid the possibility of doing various non-sensical constructions (such as keeping track of different G-space structures on H - no offence). – André Henriques Sep 28 '10 at 16:56
What is a $G$-equivariant principal $H$-bundle? – Martin Brandenburg Sep 28 '10 at 17:22
Andre is right to point out that the way I first went at it doesn't make sense. There's some extra data that I need to encode in the definition of cochains in order to capture that a G-equivariant principal H-bundle E has isomorphisms $g:E_x\rightarrow E_{gx}$ for any $g\in G$ and $x\in X$, and I'd confused this data with a notion of $G$-action on $H$. Thanks for the replies. – Jesse Wolfson Sep 28 '10 at 21:05
You say "right definition", but usually that requires a meaning for 'right'. For example, working with $G$-spaces the whole time, look at what a group object in that category looks like; here you may need to replace $G$-spaces by spaces over $BG$. Now look at what torsor / principal bundle objects over a G-space in that category would be, and so on. That is very general and may be way too general for what you want, but is one interpretation. André's orbifold approach (and orbifold cohomology à la Moerdijk and Pronk?) may answer your query by interpreting things differently. – Tim Porter Jun 21 '11 at 16:34
http://dailygre.blogspot.com/2011/08/math-gre-23.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+blogspot%2FFcIdx+%28Daily+GRE%29 | ## Pages
News: Currently the LaTeX and hidden solutions on this blog do not work on Google Reader.
Email me if you have suggestions on how to improve this blog!
## Wednesday, 24 August 2011
### Math GRE - #23
Suppose B is a basis for a real vector space V of dimension greater than 1. Which of the following statements could be true?
1. The zero vector of V is an element of B.
2. B has a proper subset that spans V.
3. B is a proper subset of a linearly independent subset of V.
4. There is a basis for V that is disjoint from B.
5. One of the vectors in B is a linear combination of the other vectors in B.
Solution :
Let us first recall the definition of a basis:
"A basis for a vector space is a linearly independent set of vectors that spans the vector space".
B is linearly independent because it is a basis. This removes choice 5.
Because a basis is linearly independent, the zero vector cannot be part of it.
This removes choice 1.
For B to have a proper subset that spans V, there must exist vectors in B which are not linearly independent (since we can remove vectors and still span V). However, this is impossible since B is a basis (i.e. we cannot remove any vectors and still hope to span V). This removes choice 2.
Similarly, we cannot add any more linearly independent vectors to B because it already spans all of V. Thus B cannot be a proper subset of a linearly independent subset of V. This removes choice 3.
The only choice left is choice 4. Thus choice 4 is the answer (there are in fact an infinite number of bases that are disjoint from B).
https://www.physicsforums.com/threads/why-is-opw-complete.753673/ | # Why is OPW complete?
1. May 14, 2014
### AndrewShen
Orthogonal plane waves can be used to expand Bloch waves. They are better than plane waves because the expansion converges more quickly. However, I've got a problem. The completeness of plane waves is guaranteed by Fourier analysis. Why are OPWs complete? They are orthogonal to core levels, but does that mean the OPW set is complete?
2. May 18, 2014
http://mathhelpforum.com/calculus/144003-volume.html | 1. ## Volume
Consider the given curves to do the following: 8y = x^3, y = 0, x = 4. Use the method of cylindrical shells to find the volume V generated by rotating the region bounded by the given curves about y = 8.
2. Originally Posted by leebatt
Consider the given curves to do the following. 8 y = x^3, y = 0 , x = 4 Use the method of cylindrical shells to find the volume V generated by rotating the region bounded by the given curves about y = 8.
did you sketch a graph ?
$\displaystyle V = 2\pi \int_0^8 (8-y)(4 - 2\sqrt[3]{y}) \, dy$
check the result by using washers ...
$\displaystyle V = \pi \int_0^4 8^2 - \left(8 - \frac{x^3}{8}\right)^2 \, dx$
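(For anyone who wants to verify both set-ups numerically, a quick check of my own, not from this thread; with these bounds, both integrals come out to about 287.25, i.e. $\frac{640\pi}{7}$:)

```python
from math import pi
from scipy.integrate import quad

shells,  _ = quad(lambda y: 2 * pi * (8 - y) * (4 - 2 * y ** (1 / 3)), 0, 8)
washers, _ = quad(lambda x: pi * (8 ** 2 - (8 - x ** 3 / 8) ** 2), 0, 4)

print(shells, washers, 640 * pi / 7)   # all three agree
```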
3. I keep getting (1152 * Pi)/7. I am confused about why you have 4 - 2y^(1/3) when using the method of cylindrical shells. For the washer method
it makes sense. The outer radius is 8 and the inner is 8 - x^3/8.
I tried checking it with Maple and it still spits out the answer the answer from above. Maple just integrates Pi*(8-x^3/8)^2 over the interval x = 0 to x = 4.
Thank you for your time
4. ## I see
I see where my error is. It is bounded by x = 4.
Thank you very much.
http://math.stackexchange.com/questions/102755/greatest-prime-factor-of-n-is-less-than-square-root-of-n-proof | # Greatest prime factor of $n$ is less than square root of $n$, proof
I remember reading this somewhere but I cannot locate the proof.
Greatest prime factor? $\sqrt6 = 2.44948974..$ but the greatest prime factor of $6$ is $3$, so it's not true. Or are you asking something else? – Mikko Korhonen Jan 26 '12 at 20:59
What does greatest prime factorization mean? And there certainly can be a prime factor $p|n$ with $p>\sqrt{n}$; take $3|6,3>\sqrt{6}$ for example. Perhaps you mean there is always a nonunit $d|n$ with $d\le n$? – anon Jan 26 '12 at 21:01
Perhaps you are looking for a proof of: if $n$ is composite, then there is at least one prime $p \le \sqrt{n}$ which divides $n$. – Aryabhata Jan 26 '12 at 21:04
Short proof of @Aryabhata's statement: If $n=ab$ and $a \leq b$ then $a^2 \leq ab=n$. – Fredrik Meyer Jan 26 '12 at 21:52
It is the smallest prime factor that is less than or equal to $\sqrt{n}$, unless $n$ is prime. One proof is as follows: Suppose $n=ab$ and $a$ is the smallest prime factor of $n$, and $n$ is not prime. Since $n$ is not prime, we have $b\ne1$. Since $a$ is the smallest prime factor of $n$, we have $a\le b$. If $a$ were bigger than $\sqrt{n}$, then $b$ would also be bigger than $\sqrt{n}$, so $ab$ would be bigger than $\sqrt{n}\cdot\sqrt{n}$. But $ab=n$.
As stated, what you wrote is false: for example, $5$ is a prime factor of $15$, but the square root of $15$ is less than $4$. Not to mention the fact that if $n$ is prime, then its only prime factor is $n$ itself, certainly larger than $\sqrt{n}$.
What is true is that if $n$ is not prime and not equal to $1$, then it must have a prime factor less than or equal to $\sqrt{n}$.
We can prove this by strong induction: assume the result holds for all $k\lt n$, if $k\gt 1$, then either $k$ is a prime, or it has a prime factor that is no more than $\sqrt{k}$. We wish to prove the same is true for $n$.
If $n$ is prime, we are done. If $n$ is not prime, then there exist $a$ and $b$, such that $1\lt a,b\lt n$ and $n=ab$. We cannot have both $a$ and $b$ greater than $\sqrt{n}$, because then $n = ab \gt \sqrt{n}\sqrt{n} = n$, which is impossible. So either $a\leq\sqrt{n}$, or $b\leq \sqrt{n}$. If $a\leq\sqrt{n}$, then either $a$ is prime, and so $n$ has a prime factor less than or equal to $\sqrt{n}$; or else $a$ has a prime factor $p$ with $p\leq\sqrt{a}$; but a prime factor of $a$ is also a prime factor of $n$, and $a\lt n$ implies $\sqrt{a}\lt\sqrt{n}$, so $p$ is a prime factor of $n$, $p\leq \sqrt{n}$. Either way, $n$ has a prime factor less than or equal to $\sqrt{n}$.
If $b\leq\sqrt{n}$, then repeat the argument with $b$ instead of $a$.
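This fact is exactly what makes trial division up to $\sqrt{n}$ a valid primality/factoring test; a small sketch of my own:

```python
def smallest_prime_factor(n):
    """For n >= 2: the smallest prime factor of n.
    If no divisor is found up to sqrt(n), then n is prime by the result above."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

print(smallest_prime_factor(91))   # 7, and indeed 7 <= sqrt(91) ~ 9.54
print(smallest_prime_factor(97))   # 97 itself: no factor up to sqrt(97), so it is prime
```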
Arturo: As written, the proof is correct; but I don't think we really need to do an induction on $a$ (or $b$). Specifically, in the case $a \leq \sqrt{n}$, any prime factor $p$ of $a$ satisfies: $p \leq a \leq \sqrt{n}$, and we are done. The $b \leq \sqrt{n}$ case is similar. – Srivatsan Jan 26 '12 at 22:02
@Srivatsan: The amusing thing about the argument above is that I do not need to assume that a positive integer greater than $1$ necessarily has a prime factor; we're actually proving this fact "along the way". – Arturo Magidin Jan 27 '12 at 1:36
Ah yes, that is indeed true. Thanks for pointing it out explicitly to me. – Srivatsan Jan 27 '12 at 1:38
Greene and Knuth (1990) called numbers for which the greatest prime factor was greater than $\sqrt{n}$ "unusual numbers" - perhaps that's what you're thinking of.
However, it turns out they aren't so unusual after all. As Schroeppel (1972) pointed out, the probability that a random integer will be "unusual" is $\ln2$.
As the other answers have more precisely stated, the statement does hold for the smallest prime factor. (IE: $spf(n) <= \sqrt{n}$)
As said by others here, your claim is false; maybe you've seen that the smallest prime factor of $n$ is less than or equal to the square root of $n$.

Proof for the first claim: assume that the claim "the smallest prime factor of $n$ is greater than the square root of $n$" is true. Then let $x$ be the smallest prime factor of $n$, so there exists an integer $y$ such that $n=xy$. But $x > \sqrt{n}$ and $y\ge x> \sqrt{n}$, so we get $n<xy$, and that's the desired contradiction.

Try to think how to change that proof to prove the second claim.
It's not true in general that the biggest prime factor of $n$ is greater than or equal to $\sqrt{n}$. For example, $5\cdot7\cdot11=385$ and $11 < \sqrt{385}$. – Michael Hardy Aug 5 '12 at 4:27
youre right. edited it to the correct claim. – Ofek Ron Aug 5 '12 at 10:29
Given that n is composite so $n=p\cdot q\cdot r\cdot s\cdot t\cdots$ where $p,q\cdots$ are primes by fundamental theorem of arithmetic.
Now assume without loss of generality that $p<q<r<\cdots$. So $p<q\cdot r\cdot t\cdots$.
Now take square root on both sides then $\sqrt{p}<\sqrt{q\cdot r\cdot s\cdot t\cdots}$. Now multiply both sides by $\sqrt{p}$. We get $p<\sqrt{p\cdot q\cdot r \cdot s \cdot t\cdots}$ and $\sqrt{p\cdot q\cdot r\cdot s\cdot t\cdots}$ is equal to $\sqrt{n}$.
Hence it is established that $p \leq \sqrt{n}$.
https://indico.cern.ch/event/632403/?print=1 | LHC Seminar
# Probing the Quark Gluon Plasma with Heavy Flavours: recent results from ALICE
## by Elena Bruna (Universita e INFN Torino (IT))
Europe/Zurich
222/R-001 (CERN)
Description
The study of open heavy-flavour physics allows us to investigate the key properties of the Quark-Gluon Plasma (QGP) and the microscopic processes ongoing in the medium produced in heavy-ion collisions at relativistic energies. Heavy quarks are produced in the early stages of heavy-ion collisions and their further production and annihilation rates in the medium are expected to be very small throughout the evolution of the system. Therefore, they serve as penetrating probes that traverse the hot and dense medium, interact with the partonic constituents of the plasma and lose energy.
Understanding the interactions of heavy quarks with the medium requires precise measurements over a wide momentum range in heavy-ion collisions, but also in smaller systems like pp collisions, which also test next-to-leading order perturbative QCD calculations, and proton-nucleus collisions, which are sensitive to Cold Nuclear Matter effects (CNM), such as the modification of the parton distribution functions of nuclei, and parton energy loss in cold nuclear matter.
This talk presents recent heavy-flavour results from ALICE in pp, p-Pb and Pb-Pb collisions and discusses the current state of and next steps towards a characterization of the QGP properties with heavy-flavour probes. In particular, new results from Pb-Pb collisions at sqrt(s_NN)=5.02 TeV show a significant modulation of charm production as a function of the azimuthal angle, so-called elliptic flow, which indicates that low-momentum charm quarks take part in the collective motion of the QGP. In addition, a strong modification of the transverse momentum spectra of heavy-flavour particles is observed relative to pp collisions, which is interpreted as an effect of the in-medium energy loss. The impact of these measurements on our understanding of heavy-quark production in heavy-ion collisions will be discussed by comparing results from different collision energies and to expectations from energy-loss models. Results from smaller systems include new measurements on charmed baryon ($\Lambda_{c}^{+}$ and $\Xi_{c}^{0}$) production in pp collisions at sqrt(s)=7 TeV and p-Pb collisions at sqrt(s_NN)=5.02 TeV, as well as multiplicity-dependent studies, which provide information on possible collective effects in high-multiplicity p-Pb events.
Organized by
M. Mangano, C. Lourenco, G. Unal. Refreshments will be served at 10h30.
Webcast
There is a live webcast for this event
https://www.physicsforums.com/threads/find-an-isomorphism-between-the-group-of-orientation.138039/ | # Find an isomorphism between the group of orientation
1. Oct 12, 2006
### Dragonfall
I need to find an isomorphism between the group of orientation preserving rigid motions of the plane (translations, rotations) and complex valued matrices of the form
a b
0 1
where |a|=1.
I defined an isomorphism where the rotation part goes to e^it with angle t and the translation by l=ax+by to b=a+bi. But the multiplication doesn't work out.
2. Oct 12, 2006
### AKG
The rotation part goes to e^it? It's supposed to go to a matrix, e^it is not a matrix. The translation goes to b=a+bi? I don't even know what this means. Is the b on the left side the same as the one on the right side? And again, a+bi is a number, not a matrix, so how can the translation go to a+bi? Moreover, what exactly do you mean by "the rotation part" and "the translation"? I assume you mean that any rigid motion can be decomposed somehow into a translation part and a rotation part. But have you proved that this is possible? And do you realize that if f is an arbitrary orientation preserving rigid motion, then it can be decomposed into a rotation and translation like so: f = rt for some rotation r and some translation t, and can also be decomposed f = r't', for some rotation r' and some translation t', but prima facie, r' need not equal r and t' need not equal t, so when you speak of "the rotation part" it's ambiguous until you say whether you're decomposing rotation-first or translation-first.
Once you write out something that's clear, unambiguous, and makes sense, we can suggest ways to get past wherever you're getting stuck, but right now I don't know how to help you. Actually, you haven't even asked a question.
3. Oct 13, 2006
### matt grime
(a priori, not prima facie)
4. Oct 13, 2006
### Dragonfall
Every orientation preserving rigid motion can be written as $$\rho_{\theta}t_a$$ where $$0\leq\theta <2\pi$$ and $$a=a_1x_1+a_2x_2$$. Define a map $$f(\rho_{\theta}t_a)=\left(\begin{array}{cc}{e^{i\theta}}&{a_1+a_2i}\\0&1\end{array}\right)$$. Clear enough now?
Last edited: Oct 13, 2006
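A numeric aside (my own sketch, not part of the thread): matrices of the given form multiply as [[a1, b1], [0, 1]] * [[a2, b2], [0, 1]] = [[a1*a2, a1*b2 + b1], [0, 1]], and under the convention that a motion acts as z -> e^{i*theta} z + w this matches composition of motions exactly. The check below assumes that convention; the variable names and test values are arbitrary.

```python
import numpy as np

def motion_matrix(theta, w):
    """Matrix attached to the motion z -> e^{i*theta} * z + w."""
    return np.array([[np.exp(1j * theta), w], [0, 1]], dtype=complex)

def apply_motion(theta, w, z):
    return np.exp(1j * theta) * z + w

t1, w1 = 0.7, 1 + 2j
t2, w2 = -1.1, 3 - 1j
z = 0.5 + 0.25j

composed = apply_motion(t1, w1, apply_motion(t2, w2, z))   # m1 after m2
M = motion_matrix(t1, w1) @ motion_matrix(t2, w2)
via_matrix = M[0, 0] * z + M[0, 1]

print(np.isclose(composed, via_matrix))   # True
```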
5. Oct 14, 2006
### AKG
Okay, so you've tried one thing that doesn't work. What exactly do you want now? I ask because I'm having trouble thinking of a hint, so it would help me if I had a more specific question to answer. Do you know anything about fractional linear transformations, a.k.a. Mobius transformations?
6. Oct 14, 2006
https://mathhelpforum.com/tags/airplane/ | # airplane
1. ### Pilot and wind velocity vector help
A pilot needs to fly 35 degrees east of north. He's flying at a speed of 105 miles per hour at fixed altitude with no wind. When he reaches a point, he encounters wind with a velocity of 15 miles per hour in the direction 10 degrees east of north. Express the velocity of the wind W as a vector...
2. ### Trigonometric bearing airplane problem
A plane's air speed is 520 mi/hr. Find the bearing (theta) the plane should fly to get to Seattle and the new speed (r) when the wind is blowing from a bearing of 120degrees with a speed of 120 mi/hr. The bearing from Hawaii to Seattle is 50 degrees and the distance is 2677 mi. Find the time it...
3. ### Finding a missing airplane (Conditional probability? Confusing wording.)
Conditional probability question (with confusing wording) LaTeX is giving me some serious parsing issues today, so I'll try to use it sparsely. --- Problem statement: An airplane is reported as missing. Investigators assume that the airplane has disappeared in one of three given zones...
4. ### Airplane vectors
12. A pilot wishes to fly from Toronto to Montreal, a distance of 508 km on a bearing of 075^o. The cruising speed of the plane is 550 km/h. An 80 km/h wind is blowing on a bearing of 125^o a) What heading should the pilot take to reach his destination? b) What will be the speed of the plane...
5. ### DVT word problem help needed
An airplane that travels from one city to another against the wind takes 4 hours to get there; the voyage back only takes 2 hours. If the distance between the two cities is 600 km, what is the speed of the plane? So far I have: D = V x T Go...
6. ### airplane
HELLO EVERYONE An airplane moves on the runway of the airport at a constant acceleration, then it takes off after it travels 1.5 km on the runway at a velocity of 210 km/h. What is the acceleration of the airplane? (please by integral)
7. ### Airplane departures and arrival stats
Given that a flight departs (on-time) = P(d)=.83, and arrives (on-time) = P(a)=.82...and prob that it departs & arrives on time is .78: 1. what is the prob. that an aircraft will arrive on time or depart on time? - ??? 2. Are these events (a,d) independent? - I said they are dependent upon...
8. ### Airplane Problem
I'm looking for help on a logarithm graphing problem. When I graph the problem on my calculator, I get a nonsensical answer....the resulting graph looks like a tangent. Here's the problem statement: "Climb Rate. The time t (in minutes) for a small plane to climb to an altitude of h feet...
9. ### Urgent! Airplane problem!
An airplane of mass 8000 kg has a wing area of 400 m2. The speed of airflow is 100 m/s over the top of the wing and 70 m/s below the bottom of the wing. Assume the density of air varies with height as Rho = Rho0*exp(-z/8000m) with Rho0 = 1.3 kg/m^3. (a) What is the lift force on the airplane at...
10. ### Airplane Engines
"An airplane needs at least half of its engines working to stay airborne. If an engine independently functions with probability p, for what values of p is an airplane with three engines safer than one with five engines?" Ok, so I'm going to try to work through this. I'm pretty much thinking...
11. ### North, East, South, West airplane thingy
Hi, thank you (again?) for taking the time to look at my problem. A helicopter makes a forced landing at sea. The last radio signal received at station C gives the bearing of the helicopter from C as N 57.1degrees E at an altitude of 423 feet. An observer at C sights the helicopter and gives...
12. ### Application - Airplane and Wind
The course and ground speed of a plane are 70 degrees and 400 miles per hour respectively. There is a 60 mph wind blowing from the south. Find the approximate direction and air speed of the plane. This is how I learned how to solve it with vectors: v_1 = 400 \langle \cos {70}, \sin {70} \rangle...
13. ### vectors Airplane question
Could someone help me? I'm no math genius, but this question is driving me crazy. Any help is appreciated. Questions attached
14. ### Airplane Engine - Probability Question!
If someone could check my answer that would be AWESOME Q. Assume that airplane engines operate independently of each other and that at least half of the engines on a plane must operate for the plane to continue flying. A particular airplane engine fails with a probability of 1/7. Which is...
15. ### Vector airplane question
An airplane is flying at 500km/hr in a wind blowing 60 km/hr toward the SE. In what direction should the plane head to end up going due east? What is the plane's speed relative to the ground? I assume that the plane is heading due east to start and the wind would add to the i component of...
16. ### AIRPLANE Problem
Two airplanes (at the same height) are flying away from an airport at a right angle to each other. The distance between them is 470 miles. Plane X is 270 miles from the airport. Plane Y is traveling at a speed of 520 mi/hr. The distance between them is increasing by 660 mi/hr. How fast is plane...
17. ### Airplane and radar station functions
An airplane is flying at a speed of 200 mi/h at an altitude of one mile and passes directly over a radar station at time t = 0. (a) Express the horizontal distance d (in miles) that the plane has flown as a function of t. d = 200t ok, well that was easy (b) Express the distance s between the...
18. ### Airplane Application
A plane crosses the Atlantic Ocean (3000 miles) with an airspeed of 500 miles per hour. The cost C (in dollars) for each passenger is given by the following function: C(x) = 100 + (x/10) + (36,000/x), where x is the ground speed (airspeed, plus, minus wind). (a) What is the cost per...
19. ### Airplane and Wind speed problem
An airplane averaged 160 miles per hour with the wind and 112 miles per hour against the wind. Determine the speed of the plane and the speed of the wind. Is this possible to calculate with the info presented?
20. ### Crazy Guy On The Airplane
Problem :- A line of 100 airline passengers is waiting to board a plane. They each hold a ticket to one of the 100 seats on that flight. (For convenience, let's say that the nth passenger in line has a ticket for the seat number n.) Unfortunately, the first person in line is crazy, and will...
http://math.stackexchange.com/questions/47321/proof-of-textvar-left-sum-i-1ngx-i-right-n-left-textvar-gx | # Proof of $\text{Var}\,\left(\sum_{i=1}^{n}g(X_i)\right)=n\left(\text{Var}\,g(X_1)\right).$
I have a question about part of a proof of a Lemma in a book (Casella's Statistical Inference) I'm reading. This is how it goes.
Let $X_1, \cdots ,X_n$ be a random sample from a population and let $g(x)$ be a function such that $\mathbb{E}g(X_1)$ and $\text{Var}\,g(X_1)$ exist. Then $$\text{Var}\,\left(\sum_{i=1}^{n}g(X_i)\right)=n\left(\text{Var}\,g(X_1)\right).$$
So this is how I proceeded to prove it.
Since the $X_i's$ are independent, we have that
\begin {align*} \text{Var}\,\left(\sum_{i=1}^{n}g(X_i)\right)&= \text{Var}\,g(X_1)+\cdots +\text{Var}\,g(X_n)\\ &= n\text{Var}\, g(X_1). \end {align*} where the last equality holds because the $X_i's$ are identically distributed. Can I do this? I'm asking this because the proof in the book started by using the definition of the variance and somewhere along the lines involved the covariance matrix.
Thanks.
Yes, you can do this. Let $Y_i = g(X_i)$. Then $Y_1,\ldots,Y_n$ are i.i.d. random variables with ${\rm Var}(Y_1) < \infty$ (hence also the mean exists). It is an elementary (and very useful) result that, in this case, ${\rm Var}(Y_1 + \cdots + Y_n) = {\rm Var}(Y_1) + \cdots + {\rm Var}(Y_n)$. – Shai Covo Jun 24 '11 at 2:49
Thanks. That's refreshing to know...) – Nana Jun 24 '11 at 2:56
Shai has answered Nana's question, but in the interest of this question being "officially" answered let's prove the elementary result Shai cites; namely, that if $Y_1, \ldots, Y_n$ are independent random variables with finite mean and variance then $\newcommand{\Var}{\mathrm{Var}}\Var(Y_1 + \cdots + Y_n) = \Var(Y_1) + \cdots + \Var(Y_n)$.
First, let's prove it in the $n=2$ case. If $Y_1$ and $Y_2$ are independent then we know that $E[Y_1 Y_2] = E[Y_1] E[Y_2]$. By a basic property of the variance,
$$\Var(Y_1 + Y_2) = E[(Y_1 + Y_2)^2] - (E[Y_1 + Y_2])^2 = E[Y_1^2 + 2Y_1Y_2 + Y_2^2] - (E[Y_1] + E[Y_2])^2$$ $$= E[Y_1^2] + 2E[Y_1Y_2] + E[Y_2^2] - E[Y_1]^2 -2E[Y_1]E[Y_2] - E[Y_2]^2$$ $$= E[Y_1^2] - E[Y_1]^2 + E[Y_2^2] - E[Y_2]^2 = \Var(Y_1) + \Var(Y_2).$$
Then, applying the result for the $n=2$ case successively to the general case we have $$\Var(Y_1 + \cdots + Y_n) = \Var(Y_1) + \Var(Y_2 + \cdots + Y_n) = \Var(Y_1) + \Var(Y_2) + \Var(Y_3 + \cdots + Y_n)$$ $$= \cdots = \Var(Y_1) + \cdots + \Var(Y_n).$$
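As a quick numerical illustration of the lemma (my own sketch, not part of the original question or answers; the choice of g(x) = x² and standard normal X_i is arbitrary):

```python
# Monte Carlo sanity check: Var(sum_i g(X_i)) should be close to n * Var(g(X_1)) for i.i.d. X_i.
import numpy as np

rng = np.random.default_rng(0)
n, reps = 5, 200_000
X = rng.normal(size=(reps, n))    # X_1, ..., X_n i.i.d. standard normal
gX = X**2                         # g(x) = x^2, so Var(g(X_1)) = 2
print(np.var(gX.sum(axis=1)))     # ~ 10
print(n * np.var(gX[:, 0]))       # ~ 10
```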
https://de.maplesoft.com/support/help/errors/view.aspx?path=evalc | Overview - Maple Help
evalc
symbolic evaluator over the complex field
Calling Sequence evalc(expr)
Parameters
expr - any expression
Description
• This evalc(expr) calling sequence is used to manipulate complex-valued expressions, such as $\mathrm{sin}\left(a+Ib\right)$, by attempting to split such expressions into their real and imaginary parts. Whenever possible, the output from evalc is put into the canonical form $\mathrm{expr1}+I\mathrm{expr2}$.
• The fundamental assumption that evalc makes is that unknown variables represent real-valued quantities. Thus, for example, evalc(Re(a+I*b)) = a and evalc(Im(a+b)) = 0. Furthermore, evalc also assumes that an unknown function of a real variable is real valued.
• The assume command can be used to override these default assumptions. For example, assume(u::complex) tells evalc that u is not necessarily real. Note also that some usages of the assume command implicitly imply real and others do not. For example assume(u<1) implies u is real but assume(v^2<1) and assume(abs(v)<1) do not imply that v is real.
• The evalc command maps onto sets, lists, equations and relations. The evalc command applied to a complex series will be a series with each coefficient in the above canonical form.
• When evalc encounters a function whose decomposition into real and imaginary parts is unknown to it (such as f(1+I) where f is not defined), it attempts to put the arguments in the above canonical form.
• The standard functions Re, Im, abs, and conjugate are recognized by evalc, and when such functions are invoked from within a call to evalc they apply the assumptions outlined above. For example, evalc(abs(a+I*b)) = sqrt(a^2+b^2).
• A complex-valued expression may be represented to evalc as polar(r,theta) where r is the modulus and theta is the argument of the expression.
• For a complete list of the functions initially known to evalc, see evalc/functions.
Examples
> $\mathrm{evalc}\left(\mathrm{sqrt}\left(1+I\right)\right)$
$\frac{\sqrt{2+2\sqrt{2}}}{2}+\frac{I\sqrt{-2+2\sqrt{2}}}{2}$ (1)
> $\mathrm{evalc}\left(\mathrm{sin}\left(3+5I\right)\right)$
$\sin(3)\cosh(5)+I\cos(3)\sinh(5)$ (2)
> $\mathrm{evalc}\left({2}^{1+I}\right)$
$2\cos(\ln 2)+2I\sin(\ln 2)$ (3)
> $\mathrm{evalc}\left(\mathrm{conjugate}\left(\mathrm{exp}\left(I\right)\right)\right)$
$\cos(1)-I\sin(1)$ (4)
> $\mathrm{evalc}\left(f\left(\mathrm{exp}\left(a+bI\right)\right)\right)$
$f\left(e^{a}\cos(b)+I\,e^{a}\sin(b)\right)$ (5)
> $\mathrm{evalc}\left(\mathrm{polar}\left(r,\mathrm{\theta }\right)\right)$
$r\cos(\theta)+I\,r\sin(\theta)$ (6)
> $\mathrm{evalc}\left(\left[{\left(a+Ib\right)}^{2},\mathrm{ln}\left(a+Ib\right)\right]\right)$
$\left[-b^{2}+2Iab+a^{2},\;\frac{\ln(a^{2}+b^{2})}{2}+I\arctan(b,a)\right]$ (7)
> $\mathrm{evalc}\left(\mathrm{abs}\left(x+Iy\right)=\mathrm{cos}\left(u\left(x\right)+Iv\left(y\right)\right)\right)$
$\sqrt{x^{2}+y^{2}}=\cos(u(x))\cosh(v(y))-I\sin(u(x))\sinh(v(y))$ (8)
> $\mathrm{evalc}\left(\mathrm{sqrt}\left(1-{u}^{2}\right)\right)$
$\frac{\sqrt{|u^{2}-1|}\,\bigl(1-\operatorname{signum}(u^{2}-1)\bigr)}{2}+\frac{I\sqrt{|u^{2}-1|}\,\bigl(1+\operatorname{signum}(u^{2}-1)\bigr)}{2}$ (9)
Set an assumption on $v$. An alternative way to set this assumption is with assume(-1<v,v<1), which implicitly assumes $v$ is real.
> $\mathrm{assume}\left(v::\mathrm{real},{v}^{2}<1\right)$
> $\mathrm{evalc}\left(\mathrm{sqrt}\left(1-{v}^{2}\right)\right)$
$\sqrt{-{v\!\sim}^{2}+1}$ (10)
> $\mathrm{series}\left(\mathrm{exp}\left(\mathrm{Ei}\left(1,4I\right)x\right),x,3\right)$
$1+\operatorname{Ei}_{1}(4I)\,x+\tfrac{1}{2}\operatorname{Ei}_{1}(4I)^{2}x^{2}+O(x^{3})$ (11)
> $\mathrm{evalc}\left(\left(11\right)\right)$
$1+\left(-\operatorname{Ci}(4)+I\left(\operatorname{Si}(4)-\tfrac{\pi}{2}\right)\right)x+\left(\tfrac{\operatorname{Ci}(4)^{2}}{2}-\tfrac{\operatorname{Si}(4)^{2}}{2}+\tfrac{\operatorname{Si}(4)\,\pi}{2}-\tfrac{\pi^{2}}{8}+I\left(-\operatorname{Ci}(4)\operatorname{Si}(4)+\tfrac{\operatorname{Ci}(4)\,\pi}{2}\right)\right)x^{2}+O(x^{3})$ (12)
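For readers without Maple, a rough SymPy sketch of the same kind of real/imaginary splitting (my own analogue, not part of the Maple documentation; symbols are declared real to mimic evalc's default assumption):

```python
import sympy as sp

a, b = sp.symbols('a b', real=True)           # evalc's default: unknowns are real
print(sp.expand_complex(sp.sin(3 + 5*sp.I)))  # sin(3)*cosh(5) + I*cos(3)*sinh(5), cf. (2)
print(sp.sin(a + sp.I*b).as_real_imag())      # (sin(a)*cosh(b), cos(a)*sinh(b))
print(sp.expand_complex(sp.conjugate(sp.exp(sp.I))))  # cos(1) - I*sin(1), cf. (4)
```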
https://web2.0calc.com/questions/help_26259
# help
ABC is an isosceles triangle with sides AB=AC=5. D is a point in between B and C such that BD=2 and DC=4.5. Find the length of AD.
Nov 11, 2019
#1
This being an isosceles triangle, the base angles B and C are equal in measure. Let's call that measure $$\alpha$$. Also let's call the length of AD, x. By the law of cosines, we have
$$x^2=5^2+2^2-2(2)(5)cos(\alpha)$$, and $$x^2=5^2+4.5^2-2(5)(4.5)cos(\alpha)$$. By setting the two equations equal to each other (i.e. eliminating x squared) we get $$29-20cos(\alpha)=45.25-45cos(\alpha)$$. Solving for $$cos(\alpha)$$ we get $$cos(\alpha)=0.65$$. So
$$x^2=29-20(0.65)=16$$, and as a result x=4. Good luck and let me know for future reference if I am overdoing the 'help'! Perhaps a couple of hints would have been more than adequate.
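A quick coordinate check of this answer (my addition, not part of the original reply; it just places B and C on the x-axis):

```python
import math

# B = (0, 0), C = (6.5, 0) since BC = BD + DC = 2 + 4.5; the apex A sits above the midpoint of BC.
Ax = 6.5 / 2
Ay = math.sqrt(5**2 - Ax**2)          # from AB = 5
D = (2.0, 0.0)                        # BD = 2
print(math.hypot(Ax - D[0], Ay))      # 4.0
```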
Nov 11, 2019
https://www.clutchprep.com/chemistry/practice-problems/84073/osmium-is-one-of-the-densest-elements-known-what-is-its-density-if-2-72-g-has-a- | # Problem: Osmium is one of the densest elements known. What is its density if 2.72 g has a volume of 0.121 cm3 ?
###### FREE Expert Solution
Density represents the mass of an object or compound within a given volume. When calculating density, we use the following equation: $\text{density}=\dfrac{\text{mass}}{\text{volume}}$
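Plugging in the given values (this worked step is added here, since the original solution is cut off):

$\text{density}=\dfrac{2.72\ \text{g}}{0.121\ \text{cm}^3}\approx 22.5\ \text{g/cm}^3$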
###### Problem Details
Osmium is one of the densest elements known. What is its density if 2.72 g has a volume of 0.121 cm3 ?
https://www.arxiv-vanity.com/papers/0804.2923/ | On the giant magnon and spike solutions for strings on AdS S
Bum-Hoon Lee, Rashmi R. Nayak, Kamal L. Panigrahi
and Chanyong Park
Center for Quantum Space-Time (CQUeST), Sogang University,
Seoul 121-742, Korea
Department of Physics, Indian Institute of Technology Guwahati,
Guwahati-781 039, India
ABSTRACT
We study solutions for the rotating strings on the sphere with a background NS-NS field and on the Anti-de-Sitter spacetime. We show the existence of magnon and single spike solutions on R$\times$S$^2$ in the presence of a constant magnetic field as two limiting cases. We also study the solution for strings on AdS S with Melvin deformation. The dispersion relations among various conserved charges are shown to receive finite corrections due to the deformation parameter. We further study the rotating string on AdS S geometry with two conserved angular momenta on S and one spin along the AdS. We show that there exist two kinds of solutions: a circular string solution and a helical string. We find out the dispersion relation among various charges and give a physical interpretation of these solutions.
## 1 Introduction
A remarkable development in the field of string theory is the celebrated string theory-gauge theory duality, which relates the spectrum of free strings on AdS$_5\times$S$^5$ with that of operator dimensions of $\mathcal{N}=4$ super Yang-Mills (SYM) in the planar limit. Determining this spectrum is an interesting and challenging problem. Recently it has been realized that this problem of counting the operators in gauge theory has an elegant formulation in terms of integrable spin chains [2, 3, 4, 5, 6, 7, 8]. In the dual formulation, the string theory also has an integrable structure in the semiclassical limit. Recently Hofman and Maldacena (HM) considered a special limit where the problem of determining the spectrum on both sides simplifies considerably [9]. In this limit the 't Hooft coupling is held fixed, allowing for a direct interpolation between the gauge theory and string theory, while the energy (or conformal dimension $\Delta$) and an R-charge $J$ both become infinite with the difference $E-J$ held fixed. The spectrum consists of elementary excitations known as magnons that propagate with a conserved momentum along the long spin chain [9] (for more work on related topics see, for example, [10, 11, 12, 13, 14, 15, 16, 17, 18, 19]). These magnon excitations satisfy a dispersion relation of the type (in the large 't Hooft coupling $\lambda$ limit)
$$E-J=\frac{\sqrt{\lambda}}{\pi}\left|\sin\frac{p}{2}\right|\,. \qquad (1)$$
A more general type of solution is given by strings rotating in AdS, one of which is the spiky string [20], which describes the higher twist operators from the field theory point of view. Giant magnon solutions can be seen as a special limit of such spiky strings with shorter wavelength. Several papers [21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35] have been devoted to studying the gauge theory and string theory sides of these interesting rotating solutions in AdS space and on the sphere. Hence it is very important to find out more general classes of rotating and pulsating strings in the AdS S background and look for possible dual operators in the gauge theory. Because the complete understanding of the gauge theory operators corresponding to the semiclassical string states is still lacking, it seems reasonable to find out the string states first and then look for possible operators in the dual side.
In this paper we study a few examples of spike solutions for strings in the AdS S background, in an attempt to study the string spectrum and other elementary string states further. The AdS S spacetime has been studied by using the Wess-Zumino-Witten model. In the study of D-branes on this group manifold, one needs the mechanism of 'flux stabilisation', which ensures the stability of these branes against the collapse of the sphere. This has opened up a new window for the understanding of strings and branes in AdS space in the past. Our interest is whether one could find a rotating string solution that looks like a spike and/or magnon in this background. As we will show in what follows, there exists such a solution, which modifies the relation between various conserved charges of the spike solution in a very natural way. Our next example is the spike solution in a Melvin deformed AdS S background, and we will show that, for a small deformation parameter, the energy versus height-of-spike relationship gets corrections already at the lowest order. Finally we present an interesting example of an elementary string solution with one spin along AdS and two angular momenta along S. We find a parameter space of configurations which admit two interesting classes of solution. One of them is a classical circular string on AdS with infinite spin and, at the same time, a giant magnon on S with finite angular momentum. The other is a helical string which has the same configuration as the circular string on the sphere but becomes an array of spikes on AdS. We will show that these two solutions satisfy a similar dispersion relation with two parameters, the velocity of the string and the winding number. In the absence of an exact expression for the energy, we will write a perturbative expansion form of the dispersion relation and give the physical meaning of this solution. For the helical string case, we will find the dispersion relation for a single spike, which is one segment of the helical string.
The rest of the paper is organized as follows. In section 2, we calculate the energy and momentum of a spike on a two-sphere with a constant background NS-NS B field. We show that for the rigidly rotating string on the two-sphere, in the background of a constant NS-NS B-field, there exist two limiting cases of interest. The first one is the known magnon solution studied in [36, 37]. The second one is the single spike solution that generalizes the single spike solution on RS found in [21]. We compute its energy E and angular momentum J as functions of the height of the spike and the constant background field. Section 3 is devoted to the study of spike-like solutions in the magnetic Melvin deformed AdS S, where we constrain the motion of the string along RS only. For a small deformation parameter, we derive the relationship between the angular momenta and the height of the spike, which is a generalization of the result obtained in [21]. In section 4 we calculate the example of a multi-spin spike solution in the AdS S background with two angular momenta along S and a spin along AdS. We find two classes of solution of particular interest and the dispersion relation for each. These multi-spin solutions can be reinterpreted as a generalization of the giant magnon on S with other spins, and they have different shapes on the AdS space. Finally in section 5, we conclude with some remarks.
## 2 Spike on R×S2 with a background NS-NS B field
As a first example we will show the existence of a single spike solution of the string around RS with a background NS-NS field. We will show that for the string rotating around the rigid sphere in the background of a B field, there exist two interesting solutions. The first one is a magnon solution found in [36], and the second one is a single spike solution which generalizes the results of [21]; both can be obtained from two different limits of the same solution. As explained in [38], the NS-NS background field has been used for the purpose of stabilising the size of the sphere against its shrinking to zero size. The metric and the background NS-NS flux field are given by
$$ds^2=-dt^2+d\theta^2+\sin^2\theta\,d\phi^2\,,\qquad B_{\theta\phi}=B\sin\theta \qquad (2)$$
We are interested in finding out the classical rotating string solution around this geometry. To do so, as usual the starting point is to write down the Polyakov form of the action
$$S=-\frac{\sqrt{\lambda}}{4\pi}\int_{-\pi}^{\pi}d\sigma\,d\tau\left[\sqrt{-\gamma}\,\gamma^{\alpha\beta}g_{MN}\,\partial_\alpha x^M\partial_\beta x^N-e^{\alpha\beta}\partial_\alpha x^M\partial_\beta x^N B_{MN}\right] \qquad (3)$$
where $\gamma_{\alpha\beta}$ is the world-sheet metric and $e^{\alpha\beta}$ is the antisymmetric tensor. Finally, the modes $x^M$ parameterize the embedding of the string in the background. The equations of motion derived from the above action have to be supplemented by the following Virasoro constraints
$$g_{MN}\left(\partial_\sigma x^M\partial_\sigma x^N+\partial_\tau x^M\partial_\tau x^N\right)=0\,, \qquad (4)$$
$$g_{MN}\,\partial_\sigma x^M\partial_\tau x^N=0 \qquad (6)$$
We consider a spike string in the following worldsheet parametrization
$$t=\kappa\tau\,,\quad \theta=\theta(y)\,,\quad \phi=\omega\tau+\tilde{\phi}(y)\,. \qquad (7)$$
where . With this the Virasoro constraints take the form
$$\dot{\theta}\theta'+\sin^2\theta\,\dot{\phi}\phi'=0\,, \qquad (8)$$
$$-(\dot{t}^2+t'^2)+\dot{\theta}^2+\theta'^2+\sin^2\theta\,(\dot{\phi}^2+\phi'^2)=0 \qquad (10)$$
The next step is to use the above ansatz in the equations of motion (one can check that with the choice of the B-field in eqn. (2), its contribution to the equations of motion cancels), and using the Virasoro constraints one can obtain
$$\tilde{\phi}'=\frac{1}{(\alpha^2-\beta^2)}\left(\beta\omega-\frac{\beta\kappa^2}{\omega\sin^2\theta}\right) \qquad (11)$$
$$\theta'^2=\frac{\sin^2\theta}{(\alpha^2-\beta^2)^2}\left(\alpha^2-\frac{\beta^2\kappa^2}{\omega^2\sin^2\theta}\right)\left(\frac{\kappa^2}{\sin^2\theta}-\omega^2\right). \qquad (12)$$
In order to find spike like solutions, let us define
$$\sin\theta_0=\frac{\beta\kappa}{\alpha\omega}\,,\qquad \sin\theta_1=\frac{\kappa}{\omega}\,. \qquad (13)$$
So now using these definitions one can rewrite the above equations as follows
$$\tilde{\phi}'=\frac{\beta\omega}{(\alpha^2-\beta^2)\sin^2\theta}\left(\sin^2\theta-\sin^2\theta_1\right) \qquad (14)$$
and
$$\theta'=\frac{\omega\alpha}{(\alpha^2-\beta^2)\sin\theta}\sqrt{(\sin^2\theta_0-\sin^2\theta)(\sin^2\theta-\sin^2\theta_1)}\,. \qquad (15)$$
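As a quick consistency sketch (added here, not from the paper): the square of Eq. (15) reproduces Eq. (12) once the definitions (13) are substituted, which can be confirmed symbolically, e.g. with SymPy (treating $\sin\theta$ as the positive symbol `s`):

```python
import sympy as sp

s, al, be, om, ka = sp.symbols('s alpha beta omega kappa', positive=True)  # s stands for sin(theta)
s0, s1 = be*ka/(al*om), ka/om                                              # Eq. (13)

rhs12 = s**2/(al**2 - be**2)**2 * (al**2 - be**2*ka**2/(om**2*s**2)) * (ka**2/s**2 - om**2)
rhs15_sq = (om*al/((al**2 - be**2)*s))**2 * (s0**2 - s**2) * (s**2 - s1**2)
print(sp.simplify(rhs12 - rhs15_sq))   # 0
```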
The two conserved quantities, namely the total energy and angular momentum are defined as
$$E=\frac{2T\kappa}{\alpha}\int_{\theta_0}^{\theta_1}\frac{d\theta}{\theta'} \qquad (16)$$
$$J=\frac{2T}{\alpha}\int_{\theta_0}^{\theta_1}\frac{d\theta}{\theta'}\left(\sin^2\theta\,\dot{\phi}+B_{\theta\phi}\,\theta'\right). \qquad (17)$$
Now we will consider two limits that define the giant magnon and the single spike around this RS.
1. For the giant magnon we put $\sin\theta_1=1$ (that is, $\kappa=\omega$), which implies that
$$E-J=2T(1+B)\cos\theta_0\,, \qquad (19)$$
$$\Delta\phi=\int_{\theta_0}^{\pi/2}d\theta\,\frac{\tilde{\phi}'}{\theta'}=2\arcsin(\cos\theta_0)=\pi-2\theta_0\,, \qquad (20)$$
so the giant magnon dispersion relation as mentioned in [36] can be evaluated as
$$E-J=2T(1+B)\cos\theta_0=\frac{\sqrt{\lambda}}{\pi}(1+B)\sin\frac{\Delta\phi}{2}\,. \qquad (21)$$
Note that the dispersion relation gets modified due to the presence of the B field [36].
2. For the single spike solution, we consider the opposite limit $\sin\theta_0=1$. This implies that
$$J=\frac{2T}{\alpha}\int_{\pi/2}^{\theta_1}\frac{d\theta}{\theta'}\left(\sin^2\theta\,\dot{\phi}+B_{\theta\phi}\,\theta'\right). \qquad (22)$$
One can evaluate the above integral and obtain
$$J=2T(1+B)\cos\theta_1=\frac{\sqrt{\lambda}}{\pi}(1+B)\cos\theta_1\,. \qquad (23)$$
Hence one can show that
$$E-T\Delta\phi=\frac{\sqrt{\lambda}}{\pi}\left(\frac{\pi}{2}-\theta_1\right). \qquad (24)$$
Now the height of the spike can be defined as
$$\bar{\theta}=\left(\frac{\pi}{2}-\theta_1\right) \qquad (25)$$
As usual the energy of the spike can be defined as
$$\Delta=(E-T\Delta\phi)-J=\frac{\sqrt{\lambda}}{\pi}\left(\bar{\theta}-(1+B)\sin\bar{\theta}\right). \qquad (26)$$
Notice that this relationship also gets a correction due to the presence of the background field. Putting B = 0, we get the result derived in [21]. The generalization of this solution by adding one more angular momentum to get a solution on RS is straightforward. We however leave this as an exercise for the readers.
## 3 Rotating string on the Melvin deformed AdS3× S3
Recently, the rotating string with spin along various directions of S was investigated by many authors in the AdS S background [21, 37, 41, 26]. As mentioned earlier, the rotating string appears as a magnon solution which is a smooth configuration or a spike solution with cusp. Here, we will consider a string rotating on S of the Melvin field deformed AdS S background (see [21] for the rigidly rotating string on S with no deformation).
The relevant metric on RS with such a deformation is given by [37]
$$ds^2=\sqrt{1+B^2\cos^2\theta}\left(-dt^2+d\theta^2+\sin^2\theta\,d\phi^2+\frac{\cos^2\theta}{1+B^2\cos^2\theta}\,d\chi^2\right) \qquad (27)$$
On this background, the string rotating in the two directions $\phi$ and $\chi$ is described by the Nambu-Goto action
$$S=T\int d^2\sigma\,\mathcal{L}=T\int d^2\sigma\,\sqrt{(\partial_\sigma X\cdot\partial_\tau X)^2-(\partial_\sigma X)^2(\partial_\tau X)^2}\,. \qquad (28)$$
The equations of motion of this system are
$$\partial_\sigma\frac{\partial\mathcal{L}}{\partial t'}+\partial_\tau\frac{\partial\mathcal{L}}{\partial\dot{t}}=\frac{\partial\mathcal{L}}{\partial t}\,,\qquad
\partial_\sigma\frac{\partial\mathcal{L}}{\partial\theta'}+\partial_\tau\frac{\partial\mathcal{L}}{\partial\dot{\theta}}=\frac{\partial\mathcal{L}}{\partial\theta}\,,\qquad
\partial_\sigma\frac{\partial\mathcal{L}}{\partial\phi'}+\partial_\tau\frac{\partial\mathcal{L}}{\partial\dot{\phi}}=\frac{\partial\mathcal{L}}{\partial\phi}\,,\qquad
\partial_\sigma\frac{\partial\mathcal{L}}{\partial\chi'}+\partial_\tau\frac{\partial\mathcal{L}}{\partial\dot{\chi}}=\frac{\partial\mathcal{L}}{\partial\chi}\,, \qquad (29)$$
where $'$ and $\dot{\ }$ denote the derivative with respect to $\sigma$ and $\tau$, respectively. We choose the following parametrization,
$$t=\kappa\tau\,,\quad \theta=\theta(\sigma)\,,\quad \phi=\omega_1\tau+\sigma\,,\quad \chi=\chi(\sigma)+\omega_2\tau \qquad (30)$$
the first and the third equations of motion reduce to the following forms
$$\frac{\partial\mathcal{L}}{\partial t'}=c_1\,,\qquad \frac{\partial\mathcal{L}}{\partial\phi'}=c_2\,, \qquad (31)$$
where $c_1$ and $c_2$ are arbitrary constants.
From these equations with two integration constants, we can obtain
$$\chi'(\sigma)=\frac{\left\{\kappa(c_1\kappa-c_2\omega_1)+\left(B^2\kappa(c_1\kappa-c_2\omega_1)-c_1\omega_2^2\right)\cos^2\theta\right\}\tan^2\theta}{\omega_1^2\sin^2\theta-\kappa^2}\,. \qquad (32)$$
When the denominator $\omega_1^2\sin^2\theta-\kappa^2$ vanishes, $\chi'$ becomes singular, so we choose the integration constants to cancel this singularity. If the two constants satisfy an appropriate relation, then $\chi'$ is not singular any more. From now on, we fix the integration constants accordingly for simplicity. Using these fixed integration constants, the equations for $\chi$ and $\theta$ reduce to (the second order differential equations for $\theta$ and $\chi$ are very complicated indeed; hence we first solve the first and third equation in (29) and then use that to write the first order equations for $\chi$ and $\theta$. We have checked that they all are consistent with each other. A similar analysis was presented in [21].)
$$\chi'(\sigma)=\frac{\omega_1\omega_2\sin^2\theta}{\omega_1^2\sin^2\theta-\kappa^2}\,,\qquad
\theta'(\sigma)=\frac{\kappa\sin\theta\cos\theta\,\sqrt{(\omega_1^2-\omega_2^2)\sin^2\theta-\kappa^2-B^2\sin^2\theta\,(\omega_1^2\sin^2\theta-\kappa^2)}}{\omega_1^2\sin^2\theta-\kappa^2}\,. \qquad (33)$$
At the fixed time, the string configuration is determined from the above equations.
From now on, we consider the string configuration in the $(\theta,\phi)$ space in the small $B$ limit. Note that the second equation in Eq. (33) is meaningful only when the expression inside the square root is non-negative, which gives a constraint on the range of $\theta$.
The exact positive values of $\sin\theta$ making the square root zero are
$$\sin\theta=\sqrt{\frac{\omega_1^2-\omega_2^2+B^2\kappa^2\pm\sqrt{(\omega_1^2-\omega_2^2+B^2\kappa^2)^2-4\kappa^2B^2\omega_1^2}}{2B^2\omega_1^2}}\,. \qquad (34)$$
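A small symbolic sketch (my addition): Eq. (34) is just the solution of the quadratic in $\sin^2\theta$ obtained by setting the expression under the square root in Eq. (33) to zero, for instance:

```python
import sympy as sp

s, k, w1, w2, B = sp.symbols('s kappa omega1 omega2 B', positive=True)  # s stands for sin^2(theta)
expr = (w1**2 - w2**2)*s - k**2 - B**2*s*(w1**2*s - k**2)
print(sp.solve(sp.Eq(expr, 0), s))
# Two roots: (omega1^2 - omega2^2 + B^2*kappa^2 -/+ sqrt((...)^2 - 4*kappa^2*B^2*omega1^2)) / (2*B^2*omega1^2)
```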
Assuming that and , then the range of making the inside of the square root a non-negative value is given by where
$$\sin\theta_{\min}=\frac{\kappa}{\sqrt{\omega_1^2-\omega_2^2}}\left(1+\frac{\kappa^2\omega_2^2B^2}{2(\omega_1^2-\omega_2^2)^2}\right)+O(B^4)\equiv\sin\theta_0+\frac{\kappa^3\omega_2^2B^2}{2(\omega_1^2-\omega_2^2)^{5/2}}+O(B^4)\,,$$
$$\sin\theta_{\max}=\frac{\sqrt{\omega_1^2-\omega_2^2}}{B\omega_1}\left(1-\frac{\kappa^2\omega_2^2B^2}{2(\omega_1^2-\omega_2^2)^2}+O(B^4)\right)\,, \qquad (35)$$
where we set $\sin\theta_0\equiv\kappa/\sqrt{\omega_1^2-\omega_2^2}$, which is the minimum value in the case $B=0$. Due to the above assumption, $\theta_{\max}$ is always greater than $\theta_{\min}$, so the relevant range of $\theta$ becomes $\theta_{\min}\le\theta\le\theta_{\max}$. In Eq. (33), $\chi'$ has a singularity at $\sin\theta=\kappa/\omega_1$, which corresponds to the peak of the spike solution. Since $\kappa/\omega_1<\sin\theta_{\min}$, this singular point is always located outside of the relevant range of $\theta$, which implies that there is no spike solution in the assumed parameter region (see a comment on the similarity between the giant magnon and the spike solution with two angular momenta in [21]).
Note that since $\phi=\omega_1\tau+\sigma$ from Eq. (30), $\sigma$ is equivalent to $\phi$ at fixed time. At the two boundary values of $\theta$, $\theta_{\min}$ and $\theta_{\max}$, $\theta'$ is zero, so we can identify these two points with the top and the bottom of the giant magnon. In addition, in the $(\theta,\chi)$ space we can also find a similar string configuration using the equation
$$\frac{\partial\theta}{\partial\chi}=\frac{\kappa\cos\theta\,\sqrt{(\omega_1^2-\omega_2^2)\sin^2\theta-\kappa^2-B^2\sin^2\theta\,(\omega_1^2\sin^2\theta-\kappa^2)}}{\omega_1\omega_2\sin\theta}\,. \qquad (36)$$
As a result, the macroscopic string found here is a giant magnon in both the $\phi$ and $\chi$ directions.
The energy of this giant magnon is given by
$$E=2T\int_{\theta_0}^{\theta_1}\frac{d\theta}{\theta'}\frac{\partial\mathcal{L}}{\partial\dot{t}}=2T\int_{\theta_0}^{\theta_1}d\theta\,\frac{(\omega_1^2-\kappa^2-B^2\kappa^2\cos^2\theta)\tan\theta}{\kappa\sqrt{(\omega_1^2-\omega_2^2)\sin^2\theta-\kappa^2-B^2\sin^2\theta\,(\omega_1^2\sin^2\theta-\kappa^2)}}\,, \qquad (37)$$
where $\theta_0=\theta_{\min}$ and $\theta_1=\theta_{\max}$, and the first angular momentum, in the $\phi$ direction, is
$$J_1=2T\int_{\theta_0}^{\theta_1}\frac{d\theta}{\theta'}\frac{\partial\mathcal{L}}{\partial\dot{\phi}}=2T\int_{\theta_0}^{\theta_1}d\theta\,\frac{\omega_1\cos\theta\sin\theta\,(1-B^2\sin^2\theta)}{\sqrt{(\omega_1^2-\omega_2^2)\sin^2\theta-\kappa^2-B^2\sin^2\theta\,(\omega_1^2\sin^2\theta-\kappa^2)}}\,. \qquad (38)$$
The last conserved quantity is the second angular momentum, in the $\chi$ direction, given by
$$J_2=2T\int_{\theta_0}^{\theta_1}\frac{d\theta}{\theta'}\frac{\partial\mathcal{L}}{\partial\dot{\chi}}=2T\int_{\theta_0}^{\theta_1}d\theta\,\frac{\omega_2\cos\theta\sin\theta}{\sqrt{(\omega_1^2-\omega_2^2)\sin^2\theta-\kappa^2-B^2\sin^2\theta\,(\omega_1^2\sin^2\theta-\kappa^2)}}\,. \qquad (39)$$
The difference in angle between two bottoms (or top) of the giant magnon, corresponding to the size of the giant magnon, becomes
$$\Delta\Theta=2\int_{\theta_0}^{\theta_1}\frac{d\theta}{\theta'}=2\int_{\theta_0}^{\theta_1}d\theta\,\frac{\omega_1^2\sin^2\theta-\kappa^2}{\kappa\sin\theta\cos\theta\,\sqrt{(\omega_1^2-\omega_2^2)\sin^2\theta-\kappa^2-B^2\sin^2\theta\,(\omega_1^2\sin^2\theta-\kappa^2)}}\,. \qquad (40)$$
Using this, the combination $E-T\Delta\Theta$ in this small $B$ limit becomes
$$E-T\Delta\Theta\approx 2T\left\{\bar{\theta}-\frac{\bar{\kappa}\sin\gamma\,B^2}{\cos^2\gamma}\right\}\,, \qquad (41)$$
where , and . Two angular momentums, and are given by
$$J_1\approx 2T\left\{\frac{\sin\bar{\theta}}{\cos\gamma}-\frac{\bar{\kappa}^2\sin\gamma\,B^2}{\cos^4\gamma}\right\}\,,\qquad
J_2\approx 2T\left\{\frac{\sin\gamma}{\cos\gamma}\sin\bar{\theta}-\frac{\bar{\kappa}^2\sin^2\gamma\,B^2}{\cos^4\gamma}\right\}\,. \qquad (42)$$
When $B=0$, all conserved quantities reduce to those on the undeformed background [21].
$$J_1^2\approx J_2^2+\frac{\lambda}{\pi^2}\sin^2\bar{\theta}-\frac{\lambda}{\pi^2}\frac{\bar{\kappa}^2\sin\gamma\sin\bar{\theta}}{\cos^3\gamma}\,B \qquad (43)$$
Again in B = 0 limit it reduces to the result obtained in [21] for the three sphere case.
## 4 Three-spin spiky string on AdS3×S3
Here, we consider a three-spin string solution in AdS S which has one spin $S$ in AdS and two spins $J_1$ and $J_2$ in S. In [41], the three-spin giant magnon in a special parameter region was investigated in the same background. In this section, we will consider a different solution in a different parameter region which is not smoothly connected with the case in Ref. [41].
Now, we consider the relevant metric of AdS S as a subspace of AdS S
$$ds^2=-\cosh^2\rho\,dt^2+d\rho^2+\sinh^2\rho\,d\phi^2+d\theta^2+\cos^2\theta\,d\phi_1^2+\sin^2\theta\,d\phi_2^2\,. \qquad (44)$$
In the conformal gauge, the Polyakov string action is given by
$$I=-\frac{\sqrt{\lambda}}{4\pi}\int d^2\sigma\Bigl[-\cosh^2\rho\,(t'^2-\dot{t}^2)+\rho'^2-\dot{\rho}^2+\sinh^2\rho\,(\phi'^2-\dot{\phi}^2)+(\theta'^2-\dot{\theta}^2)+\cos^2\theta\,(\phi_1'^2-\dot{\phi}_1^2)+\sin^2\theta\,(\phi_2'^2-\dot{\phi}_2^2)\Bigr]\,, \qquad (45)$$
where dot and prime denote the derivatives with respect to $\tau$ and $\sigma$, respectively. Now, we choose the following parametrization for a rotating string in the above background
$$t=\tau+h_1(y)\,,\quad \rho=\rho(y)\,,\quad \phi=w\,(\tau+h_2(y))\,,\quad \phi_1=\tau+g_1(y)\,,\quad \theta=\theta(y)\,,\quad \phi_2=\tilde{w}\,(\tau+g_2(y))\,, \qquad (46)$$
where .
After introducing the appropriate integration constants, the equations of motion for the sphere part are reduced to
$$\partial_y g_1=\frac{v}{1-v^2}\tan^2\theta\,,\qquad \partial_y g_2=-\frac{v}{1-v^2}\,,\qquad \partial_y\theta=\frac{\sin\theta}{(1-v^2)\cos\theta}\sqrt{(1-\tilde{w}^2)\cos^2\theta-v^2}\,. \qquad (47)$$
For the consistency, should run from to , when . The solution of the last equation is given by
$$\sin\theta=\frac{\alpha}{\cosh\beta y}\,, \qquad (48)$$
where and [41]. Note that since at , corresponds to and is described by , so the range of is given by .
The string configuration in the $(\theta,\phi_1)$ space is described by
$$\frac{\partial\theta}{\partial\phi_1}=\frac{\cos\theta}{v\sin\theta}\sqrt{(1-\tilde{w}^2)\cos^2\theta-v^2}\,. \qquad (49)$$
Note that the equator of is located at and at this equator the above equation is singular. When or , becomes or respectively, which implies that the string shape of this solution described by and looks like that of giant magnon on . The angle difference of this magnon-like solution in the direction reads
$$\Delta\phi_1=2\int_0^{\theta_{\max}}d\theta\,\frac{v\sin\theta}{\cos\theta\,\sqrt{(1-\tilde{w}^2)\cos^2\theta-v^2}}=2\theta_{\max}\,. \qquad (50)$$
Now, we consider the open string configuration in the AdS part. After some calculation, the equations for $h_1$ and $h_2$ become
$$\partial_y h_1=\frac{1}{1-v^2}\left(-v+\frac{d_1}{\cosh^2\rho}\right)\,,\qquad \partial_y h_2=\frac{1}{1-v^2}\left(-v+\frac{d_2}{\sinh^2\rho}\right)\,, \qquad (51)$$
where $d_1$ and $d_2$ are integration constants. A particular choice of these constants has been studied in [41]. Here, we consider a different parameter region, which gives a different type of string solution. Using the relation between $d_1$ and $d_2$, the Virasoro constraints are reduced to a single equation
$$(\partial_y\rho)^2=\frac{w^2}{v}\,(1-v\,\partial_y h_2)\,\partial_y h_2\,\sinh^2\rho-\frac{1}{v}\,(1-v\,\partial_y h_1)\,\partial_y h_1\,\cosh^2\rho\,. \qquad (52)$$
Notice that the variation of this Virasoro constraint equation with respect to $\rho$ gives the equation of motion for $\rho$
(53)
Hence, to obtain $\rho$ it is sufficient to solve the above Virasoro constraint instead of the equation of motion for $\rho$. The general form of $\partial_y\rho$ is given by
$$\partial_y\rho=\pm\frac{A}{(1-v^2)\cosh\rho\,\sinh\rho}\,, \qquad (54)$$
where
$$A=\sqrt{(1-w^2)\sinh^6\rho+(1-v^2-w^2)\sinh^4\rho+d_2w^2\bigl(2v-d_2(1-w^2)\bigr)\sinh^2\rho-d_2^2w^2}\,. \qquad (55)$$
Actually, it is difficult to calculate physical quantities like the energy and the angular momentum using the above $A$, so we choose a special set of parameters which removes the second and the third term in $A$. Then, Eq. (55) reduces to the simple form
$$A=\sqrt{(1-w^2)\sinh^6\rho-d_2^2w^2}\,, \qquad (56)$$
which gives the minimum value of for . Since goes to zero (infinity) as respectively, we will consider the range of as .
Using this reduced function and Eq. (4), the following differential equation
$$\frac{\partial\rho}{\partial\phi}=\frac{A\,\sinh\rho}{\cosh\rho\,(d_2-v\sinh^2\rho)}\,, \qquad (57)$$
describes the shape of the string on the AdS part. As will be shown in the next sections, this gives two kinds of string solution: one is the circular string rotating at $\rho=\rho_{\min}$ and the other is the helical string extended in the radial direction, with infinite winding number and infinite angular momentum in the $\phi$-direction.
### 4.1 Circular string on AdS
The simple solution of Eq. (57) is given by the string located at $\rho=\rho_{\min}$, where $A$ is zero. At a fixed time, the string configuration in the $\phi$-direction is then given by
$$\phi=\frac{1}{1-v^2}\left(\left(\frac{2v}{1-v^2}\right)^{1/3}-v\right)\sigma\,. \qquad (58)$$
Note that the coefficient in this relation is not zero except for special values of $v$. Since the range of $\sigma$ is infinite, $\phi$ also has to cover an infinite range. This implies that this solution describes a circular string having infinite windings. The conserved charges of this string are given by
$$E=\frac{\sqrt{\lambda}}{2\pi}\int dy\,\frac{\cosh^2\rho_{\min}-d_1v}{1-v^2}=\frac{\sqrt{\lambda}}{\pi}\int_0^{\theta_{\max}}d\theta\,\frac{\cos\theta\,(\cosh^2\rho_{\min}-2+v^2)}{\sin\theta\,\sqrt{(1-\tilde{w}^2)\cos^2\theta-v^2}}\,,$$
$$S=\frac{w\sqrt{\lambda}}{2\pi}\int dy\,\frac{\sinh^2\rho_{\min}-d_2v}{1-v^2}=\frac{w\sqrt{\lambda}}{\pi}\int_0^{\theta_{\max}}d\theta\,\frac{\cos\theta\,(\sinh^2\rho_{\min}-2)}{\sin\theta\,\sqrt{(1-\tilde{w}^2)\cos^2\theta-v^2}}\,,$$
$$J_1=\frac{\sqrt{\lambda}}{2\pi}\int dy\,\frac{\cos^2\theta-v^2}{1-v^2}=\frac{\sqrt{\lambda}}{\pi}\int_0^{\theta_{\max}}d\theta\,\frac{\cos\theta\,(\cos^2\theta-v^2)}{\sin\theta\,\sqrt{(1-\tilde{w}^2)\cos^2\theta-v^2}}\,,$$
$$J_2=\frac{\tilde{w}\sqrt{\lambda}}{2\pi}\int dy\,\frac{\sin^2\theta}{1-v^2}=\frac{\tilde{w}\sqrt{\lambda}}{\pi}\int_0^{\theta_{\max}}d\theta\,\frac{\cos\theta\,\sin\theta}{\sqrt{(1-\tilde{w}^2)\cos^2\theta-v^2}}\,. \qquad (59)$$
In the above integral equations, the $\sin\theta$ in the denominator gives rise to a logarithmic divergence at $\theta=0$, so the three charges $E$, $S$ and $J_1$ have a logarithmic divergence whereas $J_2$ is finite. Interestingly, these quantities satisfy the following relation
$$E-\frac{S}{w}=\frac{1+v^2}{1-v^2}\left(J_1+\frac{J_2}{\tilde{w}}\right)\,, \qquad (60)$$
which is the exact dispersion relation of the string on AdS S with two parameters, $v$ and $w$, and the finite charge $J_2$ is given by
$$J_2=\frac{\sqrt{\lambda}}{\pi}\frac{\tilde{w}}{\sqrt{1-\tilde{w}^2}}\sin\theta_{\max}\,. \qquad (61)$$
To investigate this solution more clearly, we consider the special parameter limit $v\to 0$ and $\tilde{w}\to 0$. Here, $\tilde{w}\to 0$ implies that the string solution becomes point-like in the $\phi_2$-direction because the angular momentum $J_2$ and the corresponding angle difference vanish. Hence, in this parameter region, the string solution reduces to one on AdS$_3\times$S$^2$. The shape of this solution on the sphere is described by the relation between $\theta$ and $\sigma$,
$$\tan\frac{\theta}{2}=e^{\sigma}\,, \qquad (62)$$
which is obtained by calculating the integral of the last equation in Eq. (47) at $v=0$. Since $\theta_{\max}=\pi/2$ in this limit, the angle difference becomes $\Delta\phi_1=\pi$ from Eq. (50), which gives the shape of a giant magnon on the sphere with the maximal $\phi_1$-angle difference. As a result, this solution describes a circular string rotating at $\rho=\rho_{\min}$ with infinite angular momentum and having the shape of the magnon on the sphere, whose dispersion relation becomes
$$E-S-J_1=\frac{\sqrt{\lambda}}{\pi}\,. \qquad (63)$$
For the giant magnon on S$^2$ [20, 9, 42] with the following dispersion relation
$$E-J_1=\frac{\sqrt{\lambda}}{\pi}\,, \qquad (64)$$
this string has no angular momentum in the $\phi$-direction and is located at $\rho=0$. So the circular string in the limit $v\to 0$ and $\tilde{w}\to 0$ can be considered as an extension of the giant magnon on the sphere, extended in the $\phi$-direction with infinite winding number and infinite angular momentum.
To describe the string solution on AdS S, we should turn on the angular momentum $J_2$, which corresponds to considering the parameter region with $\tilde{w}\neq 0$. In the case of $v=0$ and $\tilde{w}\neq 0$, the dispersion relation becomes
$$E-S-J_1=\left.\frac{J_2}{\tilde{w}}\right|_{v=0}=\frac{\sqrt{\lambda}}{\pi}\frac{1}{\sqrt{1-\tilde{w}^2}}\,. \qquad (65)$$
For nonzero $v$, the above dispersion relation can have some corrections
$$E-S-J_1=\frac{J_2}{\tilde{w}}+\Delta E\,, \qquad (66)$$
where
$$\Delta E=\frac{1-\sqrt{1-v^2}}{\sqrt{1-v^2}}\,S+\frac{2v^2}{1-v^2}\,J\,, \qquad (67)$$
and we set $J=J_1+J_2/\tilde{w}$.
Note that all conserved charges defined in Eq. (4.1) are functions of , and . Since the dependence of can be removed by a simple rescaling, we can consider these charges as functions of and effectively. This implies that in principle two parameters, and can be rewritten as functions of and . In the limit where , and can be approximately rewritten as
$$S\approx\left(\frac{2^{2/3}}{v^{4/3}}+O(v^{2/3})\right)\Delta\,,\qquad J\approx\left(1+O(v^2)\right)\Delta\,, \qquad (68)$$
where
$$\Delta=\frac{\sqrt{\lambda}}{\pi}\int_0^{\theta_{\max}}d\theta\,\frac{\cos\theta}{\sin\theta\,\sqrt{(1-\tilde{w}^2)\cos^2\theta-v^2}}\,. \qquad (69)$$
Then, we can approximately rewrite $v$ in terms of $S$ and $J$
$$v^2\approx 2\left(\frac{J}{S}\right)^{3/2}\,. \qquad (70)$$
So the dispersion relation becomes in this approximation
$$E=\frac{\sqrt{\lambda}}{\pi}\frac{1}{\sqrt{1-\tilde{w}^2}}\sin\theta_{\max}+S\left[1+\left(\frac{J}{S}\right)^{3/2}\right]+J\left[1+4\left(\frac{J}{S}\right)^{3/2}\right]+O\left(\frac{J}{S}\right)^{3}\,, \qquad (71)$$
which describes the circular string rotating at $\rho=\rho_{\min}$ on the AdS part and the magnon on the sphere with the finite angular momentum $J_2$.
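As a quick numerical cross-check of the sphere-side integrals (my own sketch, with arbitrary sample values of $v$ and $\tilde{w}$): the integral in Eq. (50) reproduces $\theta_{\max}$, and the $J_2$ integral in Eq. (59) reproduces the closed form (61) up to the common $\tilde{w}\sqrt{\lambda}/\pi$ factor.

```python
import numpy as np
from scipy.integrate import quad

v, wt = 0.3, 0.4                          # assumed sample values of v and tilde-w
a2 = 1.0 - wt**2
theta_max = np.arccos(v / np.sqrt(a2))    # where (1 - wt^2) cos^2(theta) - v^2 vanishes
root = lambda th: np.sqrt(a2 * np.cos(th)**2 - v**2)

# Both integrands have an integrable 1/sqrt endpoint singularity at theta_max.
half_dphi1, _ = quad(lambda th: v*np.sin(th)/(np.cos(th)*root(th)), 0.0, theta_max)
j2_int, _ = quad(lambda th: np.cos(th)*np.sin(th)/root(th), 0.0, theta_max)

print(half_dphi1, theta_max)                     # equal, cf. Eq. (50)
print(j2_int, np.sin(theta_max)/np.sqrt(a2))     # equal, cf. Eqs. (59) and (61)
```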
### 4.2 Helical string on AdS
When the string is extended in the radial direction of the AdS space, becomes a function of and . As shown in Eq. (48), covers the full range of , but unlike the range of ,
http://mathhelpforum.com/calculus/42115-help-integration.html | # Math Help - Help with Integration
1. ## Help with Integration
I am not getting integrals that are improper! Here are 2 questions that I'm not sure about: The integral from 0 to infinity of (x) / (x^2+33)^2 dx. Also, the integral from negative infinity to infinity of (30-x^4) dx. I think the last integral is convergent since it's an upside down parabola.
2. $\int_{0}^{L}\frac{x}{(x^{2}+33)^{2}}dx$
Let $u=x^{2}+33, \;\ \frac{du}{2}=xdx$
You get:
$=\frac{1}{66}-\frac{1}{2(L^{2}+33)}$
$\lim_{L\to {\infty}}\left[\frac{1}{66}-\frac{1}{2(L^{2}+33)}\right]$
$=\frac{1}{66}$
It converges.
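For what it's worth, a one-line SymPy confirmation of the value above (my addition, not part of the thread):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.integrate(x / (x**2 + 33)**2, (x, 0, sp.oo)))   # 1/66
```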
3. Okay, I just don't get how you got 1/66, I know that 2 times (1/33) is 1/66 but what about the x^2+33 squared?
4. Originally Posted by Holly3
Okay, I just don't get how you got 1/66, I know that 2 times (1/33) is 1/66 but what about the x^2+33 squared?
You don't know that $\lim _{L \to \infty } \frac{1}{{2\left( {L^2 + 33} \right)}} = 0$??
5. No, I was confused before why (1) over (x^2+33)^2 got 1/66, but I understand now! And I think for my second question, I think the answer is negative infinity, but wouldn't that be divergent? The second question was the integral from neg. infin. to pos infin. of (30-x^2). I'm not sure if I did something wrong, but it seems like the answer is negative infinity. Does that make sense?
https://fr.mathworks.com/help/stats/birnbaum-saunders-distribution.html | ## Birnbaum-Saunders Distribution
### Definition
The Birnbaum-Saunders distribution has the density function
$\frac{1}{\sqrt{2\pi}}\exp\left\{-\frac{\left(\sqrt{x/\beta}-\sqrt{\beta/x}\right)^{2}}{2\gamma^{2}}\right\}\left(\frac{\sqrt{x/\beta}+\sqrt{\beta/x}}{2\gamma x}\right)$
with scale parameter β > 0 and shape parameter γ > 0, for x > 0.
If x has a Birnbaum-Saunders distribution with parameters β and γ, then
$\frac{\sqrt{x/\beta}-\sqrt{\beta/x}}{\gamma}$
has a standard normal distribution.
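A minimal numerical sketch (mine, in Python rather than MATLAB) that implements the density above directly and checks that it integrates to 1 for sample parameter values:

```python
import numpy as np
from scipy.integrate import quad

def bs_pdf(x, beta, gamma):
    u = np.sqrt(x / beta) - np.sqrt(beta / x)
    s = np.sqrt(x / beta) + np.sqrt(beta / x)
    return np.exp(-u**2 / (2 * gamma**2)) / np.sqrt(2 * np.pi) * s / (2 * gamma * x)

total, _ = quad(bs_pdf, 0, np.inf, args=(2.0, 0.5))   # beta = 2, gamma = 0.5 (arbitrary choices)
print(total)                                          # ~ 1.0
```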
### Background
The Birnbaum-Saunders distribution was originally proposed as a lifetime model for materials subject to cyclic patterns of stress and strain, where the ultimate failure of the material comes from the growth of a prominent flaw. In materials science, Miner's Rule suggests that the damage occurring after n cycles, at a stress level with an expected lifetime of N cycles, is proportional to n / N. Whenever Miner's Rule applies, the Birnbaum-Saunders model is a reasonable choice for a lifetime distribution model.
### Parameters
To estimate distribution parameters, use `mle` or the Distribution Fitter app.
https://www.aimsciences.org/article/doi/10.3934/dcds.2019264 | # American Institute of Mathematical Sciences
October 2019, 39(10): 6039-6067. doi: 10.3934/dcds.2019264
## Weighted elliptic estimates for a mixed boundary system related to the Dirichlet-Neumann operator on a corner domain
School of Mathematics, Sun Yat-sen University, No.135 Xingangxi Road, Haizhu District, Guangzhou 510275, China
* Corresponding author: Mei Ming
Received January 2019 Revised March 2019 Published July 2019
Fund Project: The author is supported by NSFC grant 11401598.
Based on the $H^2$ existence of the solution, we investigate weighted estimates for a mixed boundary elliptic system in a two-dimensional corner domain, when the contact angle $\omega\in(0,\pi/2)$. This system is closely related to the Dirichlet-Neumann operator in the water-waves problem, and the weight we choose is decided by singularities of the mixed boundary system. Meanwhile, we also prove similar weighted estimates with a different weight for the Dirichlet boundary problem as well as the Neumann boundary problem when $\omega\in(0,\pi)$.
Citation: Mei Ming. Weighted elliptic estimates for a mixed boundary system related to the Dirichlet-Neumann operator on a corner domain. Discrete & Continuous Dynamical Systems, 2019, 39 (10) : 6039-6067. doi: 10.3934/dcds.2019264
2019 Impact Factor: 1.338 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8220515847206116, "perplexity": 2624.573599347477}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487626465.55/warc/CC-MAIN-20210617011001-20210617041001-00121.warc.gz"} |
https://ijnaa.semnan.ac.ir/article_2321.html | # Existence of three solutions for a class of fractional boundary value systems
Document Type: Research Paper
Authors
1 Department of Mathematics, Faculty of Mathematical Sciences, University of Mazandaran, Babolsar, Iran
2 Department of Mathematics, Faculty of Basic Sciences, University of Bojnord, P.O. Box 1339, Bojnord 94531, Iran
Abstract
In this paper, under appropriate oscillating behaviours of the nonlinear term, we prove some multiplicity results for a class of nonlinear fractional equations. These problems have a variational structure and we find three solutions for them by exploiting an abstract result for smooth functionals defined on a reflexive Banach space. To make the nonlinear methods work, some careful analysis of the fractional spaces involved is necessary. We also give an example to illustrate the obtained result.
Keywords | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8216887712478638, "perplexity": 372.1000544906643}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107876500.43/warc/CC-MAIN-20201021122208-20201021152208-00073.warc.gz"} |
https://www.physicsforums.com/threads/rotational-ke-problem.300343/ | # Rotational KE problem
1. Mar 17, 2009
### makeAwish
The problem statement, all variables and given/known data
Energy is to be stored in a flywheel in the shape of a uniform solid disk with a radius of R = 1.30 m and a mass of 66.0 kg. To prevent structural failure of the flywheel, the maximum allowed radial acceleration of a point on its rim is 3600 m/s².
What is the maximum kinetic energy that can be stored in the flywheel?
The attempt at a solution
I took I = 3/2MR² (at rim)
where R = 2.6m
Then K = 1/2 * I * ω², where ω² = a/R
so K = (1/2)(3/2)(66kg)(2.6m)²(3600/2.6) = 463320J
which is wrong..
The ans is 7.72x10^4 J
can someone pls tell me where i gone wrong?
Thanks!!
2. Mar 17, 2009
### rl.bhat
Moment of inertia of flywheel is 1/2*M*R^2 and radius is 1.3 m.
3. Mar 17, 2009
### sArGe99
Radial acceleration = $$m r^2 \omega$$
Find out $$\omega$$
Calc. K.E using the equation $$\frac{1}{2} I \omega^2$$..
4. Mar 17, 2009
### LowlyPion
Not quite. The equation is correct, but ω = v/r = 3600/1.3
As noted by rl.bhat a uniform disk has a moment of inertia of 1/2mr².
5. Mar 17, 2009
### sArGe99
The Moment of Inertia isn't always 1/2 mr^2, is it?
It does depend upon the axis chosen, with M.I. of the new axis found out using parallel and perpendicular axis theorems.
6. Mar 17, 2009
### LowlyPion
Rotating about its central axis it is.
http://hyperphysics.phy-astr.gsu.edu/hbase/icyl.html#icyl2
The || and ⊥ axis theorem are useful of course for other axes of rotation.
7. Mar 20, 2009
### makeAwish
i got it. thanks!
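For reference, here is a compact numerical check of the approach rl.bhat and LowlyPion describe (an added sketch, not part of the original thread): treat the flywheel as a uniform disk with I = ½MR² about its central axis, and use a_rad = ω²R at the rim.

```python
# Illustrative check of the accepted solution (values from the problem statement).
M = 66.0        # kg, mass of the disk
R = 1.30        # m, radius of the disk
a_rad = 3600.0  # m/s^2, maximum allowed radial acceleration at the rim

I = 0.5 * M * R**2       # moment of inertia of a uniform disk about its central axis
omega_sq = a_rad / R     # from a_rad = omega^2 * R
K = 0.5 * I * omega_sq   # rotational kinetic energy, K = (1/2) I omega^2

print(f"I = {I:.2f} kg·m²")   # about 55.8 kg·m²
print(f"K = {K:.3e} J")       # about 7.72e+04 J, matching the quoted answer
```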
Similar Discussions: Rotational KE problem | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8072466850280762, "perplexity": 4297.520762487783}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886104681.22/warc/CC-MAIN-20170818140908-20170818160908-00338.warc.gz"} |
https://web2.0calc.com/questions/binomials_2 | +0
# binomials
On Tuesday, I worked $$t+1$$ hours and earned $$3t-3$$ dollars per hour. My friend Andrew worked 3t-5 hours but only earned t+2 dollars an hour. At the end of the day, I had earned two dollars more than he had. What is the value of t?
Dec 29, 2017
#1
I am not 100% sure if this answer is correct but is the answer $$\frac{7}{6}$$
Dec 29, 2017
#2
If it is then I will explain how I got it.
Rauhan Dec 29, 2017
#3
i don't think so
Dec 29, 2017
#4
nevermind I got it.
$$(t+1)(3t-3)=(3t-5)(t+2)$$
Expanding gives
$$3t^2-3t+3t-3= 3t^2+6t-5t-10$$
Get the like terms together
$$3t^2-3t^2 -3t+3t +5t - 6t=-10+3$$
3t2 cuts out and the remaining equals to -t but u are looking for a positive t so so multiply both sides by (-).
therefore this gives
$$t=10-3$$
$$t=7$$
So on Tuesday, u worked 8 hours and got 18\$ and Andrew worked 16 hours and got 9 dollars. And also the question should be
'At the end of the day, I had earned two times more than he had. What is the value of t?'
Dec 29, 2017
#5
nope, t=5
Dec 29, 2017
#6
substituting 5 in the question doesn't give 2 times the dollars
Rauhan Dec 29, 2017
#7
Hours * Rate per hour = Total Amt
So
(t + 1) (3t - 3) = (3t - 5)(t + 2) + 2 simplify
3t^2 + 3t - 3t - 3 = 3t^2 - 5t + 6t - 10 + 2
-3 = 1t - 8 add 8 to both sides
5 = 1t
5 = t
Check
(5 + 1)(3*5 - 3) = (3(5) - 5 ) ( 5 + 2) + 2 ???
(6)(12) = (10)(7) + 2
72 = 70 + 2
So.....t = 5 is correct
Dec 29, 2017 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8204376101493835, "perplexity": 4350.457203509505}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578558125.45/warc/CC-MAIN-20190422155337-20190422180319-00030.warc.gz"} |
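For completeness (an added note, not part of the original thread): the equation CPhill sets up can be checked symbolically. A minimal sketch, assuming sympy is available:

```python
from sympy import symbols, Eq, solve, expand

t = symbols('t')
# "I" earned (t+1)(3t-3) dollars in total; Andrew earned (3t-5)(t+2); mine is 2 dollars more.
eq = Eq(expand((t + 1) * (3 * t - 3)), expand((3 * t - 5) * (t + 2)) + 2)
print(solve(eq, t))   # [5]
```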
http://mathwiki.cs.ut.ee/modular/residue_classes?rev=1426788027&do=diff | # Differences
— modular:residue_classes [2015/03/19 20:00] (current)aleksei imported from previous wiki 2015/03/19 20:00 aleksei imported from previous wiki 2015/03/19 20:00 aleksei imported from previous wiki Line 1: Line 1: + ===== Residue classes ===== + Modular arithmetics as a way of calculating modulo some number $n$ is a powerful tool. However, it can be made simpler, and thus even more effective. + + First, recall that numbers having the same remainder when divided by $n$ behave in exactly the same way in such a calculation and yield exactly the same results. + + A quick example: + $8 * 4 + 11 \equiv 3*(-1) + 6 \equiv 3 (\mod 5)$. + + One might say that + there are only + $n$ //essentially different objects// for this calculation - the numbers that give distinct remainders modulo $n$. The numbers having the same remainder can be considered to be //essentially the same object//. + + The next step would be to + //define// **addition** and **multiplication** between such objects in a suitable way compatible with the original addition and multiplication. + + This approach leads us to the notion of **residue classes**. + + A residue class modulo $n$ is a set of all integers that give the same remainder when divided by $n$. + + Formally, the residue class of $a \in \mathbb Z$ modulo $n \in \mathbb N$ is + $\overline a = \{b \in \mathbb Z \mid a \equiv b \ (\mod n)\} = \{a + kn \mid k \in \mathbb Z\}$. + In this notation, the modulus $n$ is implicit. + + As the following exercises show, all residue + classes modulo $n$ partition the set $\mathbb Z$ into $n$ parts. + + Show that for all $a, b \in \mathbb Z$, the following three claims are equivalent: + * $\overline a = \overline b$, + * $\overline a \cap \overline b = \emptyset$, + * $a \equiv b \ (\mod n)$, + * exists $k \in \mathbb Z$, such that $a = b + kn$. + + + Show that $\overline 0, \overline 1, \dots , \overline{n − 1}$ are different residue classes. + + The first of those two exercises shows that there are at most $n$ different residue + classes modulo $n$, because there are $n$ different remainders when dividing by $n$. The + second exercise shows that the number of residue classes is at least $n$. + + + Denote $\mathbb Z_n = \{\overline 0, \overline 1, \dots , \overline{n − 1}\}$. + + + We can define addition and multiplication on $\mathbb Z_n$ as + follows: + *$\overline a+\overline b=\overline{a+b}$, + *$\overline a \cdot \overline b=\overline{a\cdot b}$. + Thanks to the properties of $\equiv$, shown in the previous section, these operations are well-defined. + Namely, when we write $\overline{a + b}$, the integer $a$ is only defined as an arbitrary element of the + residue class $\overline a$. When we take an arbitrary element $a$ of $\overline a$, and an arbitrary element $b$ of + $\overline b$, is it the case that the residue class $\overline{a + b}$ is always the same? + Yes, because $a_1 + b_1 \equiv a_2 + b_2 \ (\mod n)$ whenever + $a_1 \equiv a_2 \ (\mod n)$ and $b_1 \equiv b_2 \ (\mod n)$. Similarly, the multiplication of residue classes is well-defined. + + Having defined $\mathbb Z_n$, we can rewrite our quick example from the beginning of the lesson as + $\overline{8} \cdot \overline 4 + \overline{11}= \overline 3 \cdot \overline{-1} +\overline 6 = \overline{3}$ in $\mathbb Z_5$, where the first equality holds trivially simply because the corresponding objects are equal. + + + Write down the addition and multiplication tables for $\mathbb Z_7$ and $\mathbb Z_8$. 
+ + + ==Similarities to integers== + + It turns out that + both operations of addition and multiplication on $\mathbb Z_n$ have similar properties + as the same operations on integers: + + *They are both **associative**, meaning $(\overline a + \overline b) + \overline c = \overline a + (\overline b + \overline c)$ and $(\overline a \cdot \overline b) \cdot \overline c = \overline a \cdot (\overline b \cdot \overline c)$.\\ This property allows the formula $\overline a + \overline b + \overline c$ to be correctly understood in a single way. The same is true for $a \cdot b \cdot c$. + + *They are both **commutative**, meaning $\overline a + \overline b = \overline b + \overline a$ and $\overline a \cdot \overline b = \overline b \cdot \overline a$. + + *They both have **units**. For addition, $\overline a + \overline 0 = \overline 0 + \overline a = \overline a$. For multiplication, $\overline a \cdot \overline 1 = \overline 1 \cdot \overline a = \overline a$. + + *There is always an **opposite element** for addition, meaning that for any $\overline a$ there exists $-\overline a = \overline{-a}$ such that $\overline a + -\overline{a} = \overline{0}$. + + *Multiplication is **distributive** over addition:\\ \begin{align*} + (\overline a+\overline b)\cdot \overline c &= \overline a\cdot \overline c + \overline b\cdot \overline c\enspace,\\ + \overline a\cdot (\overline b + \overline c) &= \overline a\cdot \overline c + \overline a\cdot \overline c\enspace. + \end{align*} + + These properties actually mean that both $\mathbb Z$ + and $\mathbb Z_n$ + are **commutative rings** with respect to addition and multiplication. For an introduction to rings, see ?. + + Because of the latter fact $\mathbb Z_n$ + is also called the **residue class ring modulo $n$**. + + + + Show that $\mathbb Z_n$, together with the addition and multiplication operations, is a commutative ring. + + ==Differences== + + Not everything in $\mathbb Z_n$ is the same way as in $\mathbb Z$. + + Take the equation $4a = 4b$ with integers $a$ and $b$. We know that it yields $a = b$. + More generally, for any integer $c \neq 0$ + the equality $ca = c b$ + implies $a = b$ because we can divide both sides by $c$. + On the other hand, for example, in $\mathbb Z_{12}$ + the equality + $\overline{4}\cdot \overline{a} = \overline{4}\cdot \overline{b}$ does not necessarily mean that + $\overline{a} = \overline{b}$, simply because + $\overline{4}\cdot \overline{1} = \overline{4}\cdot \overline{4}$. + + The numbers as the number $c$ above are called **cancellative**. + + Thus we made an important observation that + every nonzero integer is cancellative while for some numbers $n$ there are nonzero nonconcellative elements of $\mathbb Z_n$. + + + Find noncancellative elements of $\mathbb Z_{12}$, + $\mathbb Z_7$, and $\mathbb Z_4$. + + ++Answer|For $\mathbb Z_{12}: \overline 0, \overline 2, \overline 3, \overline 4, \overline 6, \overline 8, \overline 9, \overline{10}$, + ++ + + + Similarly, consider the equation $ab = 0$ with integers $a$ and $b$. It means that at least one of the numbers $a$ and $b$ is equal to $0$. Again, + this is not the case in $\mathbb Z_{12}$ because we can find nonzero $\overline{a}$ and $\overline b$ + such that $\overline{a} \cdot \overline{b} = \overline{0}$, e.g., $\overline{4} \cdot \overline{3} = \overline{0}$. + + Nonzero $a$ such that $ab = 0$ for some nonzero $b$ is called a **zero divisor**. 
+ + Thus we observed that there are no zero divisors in $\mathbb Z$ but for some numbers $n$ there are zero divisors in $\mathbb Z_n$. + + It is relatively easy to see that a zero divisor is always noncancellative. + + + Show that a zero divisor is never cancellative. + ++Solution|Assume that a zero divisor $a$ is cancellative. Then for some $b \neq 0$, we know that $ab = 0$. On the other hand, $a0 = 0$. Then $ab = a0$ but $b \neq 0$. So our assumption was wrong and $a$ is not cancellative.++ + + It turns out that in $\mathbb Z_n$ the notions of zero divisors and noncancellative nonzero elements actually coincide. In fact, a nonzero element $\overline a$ in $\mathbb Z_n$ is a zero divisor and noncancellative if and only if + $\gcd(a,n) > 1$. + ++++Proof| + Let us check this fact. To see that $\gcd(a,n) > 1$ implies that $\overline a$ is a zero divisor, let us recall the equality $\gcd(a,n) \cdot \operatorname{lcm}(a,n) = an$. From this we have + $a \cdot \frac{n}{\gcd(a,n)} = \operatorname{lcm}(a,n)$. Let us denote $c :=\frac{n}{\gcd(a,n)}$ and notice that $c$ is an integer greater than $1$ but less than $n$, so that $c \not \equiv 0 (\mod n)$. In other words, + $\overline c \neq 0$. But $\overline a \cdot \overline c = \overline{\operatorname{lcm}(a,n)} = \overline 0$. So $\overline a$ is a zero divisor. + + On the other hand, if $\gcd(a,n) = 1$, then Bezout's identity says that + there are integers $x$ and $y$ such that $ax + ny = 1$. + Then $ax \equiv 1 (\mod n)$ and therefore $\overline a \cdot \overline x = \overline 1$. It means that $\overline x$ is the **multiplicative inverse** of $\overline a$ in $\mathbb Z_n$. + It follows easily that $\overline a$ is cancellative because + $\overline a \cdot \overline b = \overline a \cdot \overline c$ + would imply $\overline x \cdot \overline a \cdot \overline b = \overline x \cdot \overline a \cdot \overline c$, then + $\overline 1 \cdot \overline b = \overline 1 \cdot \overline c$, + and $\overline b = \overline c$ + ++++ + + As we already observed in the proof, the condition $\gcd(a,n) = 1$ + actually describes those elements in $\mathbb Z_n$, which have **multiplicative inverse**. That is, those elements $\overline a$ for which there is $x \in \mathbb Z_n$ such that $\overline a \cdot \overline x = \overline x \cdot \overline a = \overline 1$. For example, in $\mathbb Z_7$ + the multiplicative inverse of $\overline 3$ is $\overline 5$, because + $\overline 3 \cdot \overline 5 = \overline 1$ in $\mathbb Z_7$. + Then $\overline x$ is denoted by $\overline a^{-1}$. If $\overline a$ + has a multiplicative inverse, then $\overline a$ is called a **unit** in $\mathbb Z_n$. + In our example, $\overline 3$ is a unit and $\overline 3^{-1} = \overline 5$ in $\mathbb Z_7$. + + + Find multiplicative inverses of units in $\mathbb Z_{12}$, $\mathbb Z_7$, $\mathbb Z_{20}$. Remember that units in $\mathbb Z_n$ are exactly those elements $a$ for which $\gcd(a,n) = 1$. + + + As you may have noticed... 
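As a computational companion to the section above (an added sketch, not part of the original wiki page; the function name is made up): units modulo $n$ are exactly the residues $a$ with $\gcd(a,n)=1$, and their inverses come from Bezout's identity via the extended Euclidean algorithm.

```python
from math import gcd

def modular_inverse(a, n):
    """Return x with (a*x) % n == 1, or None if a is not a unit modulo n."""
    if gcd(a, n) != 1:
        return None                      # a is a zero divisor (or zero), not a unit
    # extended Euclidean algorithm: find x with x*a + y*n = gcd(a, n) = 1
    old_r, r = a % n, n
    old_x, x = 1, 0
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
    return old_x % n                     # (equivalently, pow(a, -1, n) in Python 3.8+)

for n in (7, 12, 20):
    print(n, {a: modular_inverse(a, n) for a in range(1, n) if gcd(a, n) == 1})
# e.g. modulo 7 the inverse of 3 is 5, since 3*5 = 15 ≡ 1 (mod 7), matching the text.
```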
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9643626809120178, "perplexity": 270.107777769975}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370496227.25/warc/CC-MAIN-20200329201741-20200329231741-00011.warc.gz"} |
https://www.allaboutcircuits.com/textbook/direct-current/chpt-5/power-calculations/ | # How to Calculate Power in a Series and Parallel Circuit
## Chapter 5 - Series And Parallel Circuits
### What is Electrical Power and How Can You Calculate it in Series and Parallel Circuits?
Electrical power measures the rate of work represented in electrical circuits by the symbol “P” and the units of Watts (W). The total circuit power is additive for series, parallel, or any combination of series and parallel components.
When calculating the power dissipation of resistive components, we can use any one of the three Ohm’s law power equations if given any two of the voltage (V), current (I), and resistance (R):
$$P = IV = I^2R = \frac{V^2}{R}$$
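As a quick illustration of these formulas and of the additivity discussed below, here is a small worked example (the two-resistor values are made up for illustration):

```python
V_supply = 12.0          # volts
R1, R2 = 100.0, 220.0    # ohms

# Series: the same current flows through both resistors.
I_series = V_supply / (R1 + R2)
P1, P2 = I_series**2 * R1, I_series**2 * R2
print(P1 + P2, V_supply * I_series)      # both ≈ 0.45 W: component powers sum to source power

# Parallel: the same voltage appears across both resistors.
P1, P2 = V_supply**2 / R1, V_supply**2 / R2
I_total = V_supply / R1 + V_supply / R2
print(P1 + P2, V_supply * I_total)       # both ≈ 2.09 W: power is additive here too
```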
### Calculating Power Using the Table Method
These electric power calculations can be easily managed using the table method by adding another row below the voltages, currents, and resistances, as shown in Table 1.
##### Table 1. Table method with power included.
Power for any particular table column can be found using the appropriate Ohm’s power law equation.
### Power in Series and Parallel Circuits
Power is a measure of the rate of work. Per the physics law of conservation of energy, the power dissipated in the circuit must equal the total power applied by the source(s). Therefore, an interesting rule for total circuit power versus individual component power is that it is additive for any circuit configuration: series (Table 2), parallel (Table 3), or any combination of series and parallel.
##### Table 3. Table method for parallel circuits—power is additive.
If you need a refresher or skipped the pages on series circuits and parallel circuits, you can find them here:
### Review of Power for Series and Parallel Circuits:
• Electrical power is the measure of work
• Power is represented by the symbol “P”
• The unit of power is the Watt (W)
• Power is additive in any configuration of a resistive circuit—series, parallel, or series-parallel
• Ptotal = P1 + P2 + . . . Pn
Published under the terms and conditions of the Design Science License
0 Comments | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8665017485618591, "perplexity": 950.9561936352723}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710933.89/warc/CC-MAIN-20221203143925-20221203173925-00655.warc.gz"} |
https://www.physicsforums.com/threads/space-time-continuum.682065/ | # Space Time Continuum
1. Mar 31, 2013
### Naveen3456
It is said that the space time continuum gives rise to space and time.
So, is space time continuum a mixture of space and time or something totally different from space and time? Plz elaborate as per current scientific understanding only. No out and out philosophy please?
2. Mar 31, 2013
### Staff: Mentor
"space time continuum" is a name for "space and time". I don't think it makes sense to say "it gives rise to space and time". Does 6 give rise to 2 and 3, as 2*3=6?
3. Mar 31, 2013
### Staff: Mentor
Mathematically spacetime is a pseudo-Riemannian manifold. I am not sure what you are after for the rest of your question.
4. Apr 1, 2013
### Passionflower
In GR reality is completely described using a curved four dimensional surface. Time evolution is just a gauge choice.
5. Apr 1, 2013
### pervect
Staff Emeritus
Take two points that are separated only in space, i.e. two points that occur at the same time, in some specific reference frame.
For simplicity, assume there's no gravity, so that we can use only SR and don't need to use GR.
Then in some other reference frame, the two points that occurred at the same time occur at different times, due to the relativity of simultaneity. So they are separated in both space and time.
Thus we are led to the conclusion that space and time must intermix, somehow. One observer's purely spatial separation appears to be a separation in space and time to another observer. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8840038180351257, "perplexity": 802.7686235028002}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125937114.2/warc/CC-MAIN-20180420042340-20180420062340-00367.warc.gz"} |
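Added note (not part of the original thread): pervect's argument is just the standard Lorentz boost written out. For a boost with speed v along x,

$$t' = \gamma\left(t - \frac{v x}{c^2}\right), \qquad x' = \gamma\,(x - v t), \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}},$$

so two events with $\Delta t = 0$ but $\Delta x \neq 0$ in one frame have $\Delta t' = -\gamma v \,\Delta x / c^2 \neq 0$ in the boosted frame: a purely spatial separation in one frame becomes a separation in both space and time in another.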
https://www.talkstats.com/search/2085535/ | Search results
1. Probability of Sporting Event Occurring
Hello everyone I am trying to establish whether it is possible (or appropriate) to calculate the probability of a sporting event occurring given the historic ratio of such event happening. Let me explain a little more in detail - for the purpose of this exercise, I am discounting the fact... | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9040826559066772, "perplexity": 450.96990104182584}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487621699.22/warc/CC-MAIN-20210616001810-20210616031810-00266.warc.gz"} |
http://stats.stackexchange.com/questions/34446/stationary-arma-model-as-infinite-ar-or-ma-process | # Stationary ARMA model as infinite AR or MA process
How can a stationary, invertible ARMA(1,1) process be represented as either an infinite order AR or infinite order MA process?
I think I just answered the question here: stats.stackexchange.com/questions/197803/… – Jeremias K Feb 22 at 8:57 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9928951859474182, "perplexity": 2040.6409430702176}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049276131.38/warc/CC-MAIN-20160524002116-00107-ip-10-185-217-139.ec2.internal.warc.gz"} |
http://yargb.blogspot.com/2006/11/going-for-blast-into-real-past.html | ### Going for a blast into the real past
Saturday, November 18, 2006
Going for a blast into the real past: "If his experiment with splitting photons actually works, says University of Washington physicist John Cramer, the next step will be to test for quantum 'retrocausality.'
That's science talk for saying he hopes to find evidence of a photon going backward in time."
Luther McLeod said...
Great stuff. I have never thought 'entanglement' could/would be explained in my lifetime. 'spooky' indeed.
Syl said...
This comment has been removed by a blog administrator.
Syl said...
The experiment seems so simple (to envision, anyway) that I'm surprised it hasn't been done before. Yeah, well, I know the technology to carry out the experiment has to exist first.
The original slit experiment was only a thought experiment for how many decades until it was actually tried?
So, what are the actual assumptions we must question because of entanglement?
The particles are discrete.
There is interaction.
At the quantum level time does exist.
If this experiment succeeds, those assumptions are still in place if we accept the result as meaning signalling can go backwards in time.
terrye said...
Amazing. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8061710000038147, "perplexity": 2024.20035887979}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115863825.66/warc/CC-MAIN-20150124161103-00244-ip-10-180-212-252.ec2.internal.warc.gz"} |
https://math.eretrandre.org/tetrationforum/showthread.php?tid=757 | • 0 Vote(s) - 0 Average
Solve this limit Nasser Junior Fellow Posts: 9 Threads: 3 Joined: Nov 2012 11/27/2012, 05:13 PM Hi solve this limit I tried to sovle it but no way until now and I searched for solving formula but no results found. this limit $lim_{x\rightarrow 0}(T(a,x))^\frac{1}{x}$ a>0 the result must be f(a) if there is no soving formula, then try to draw this function, I don't know if there is a software support tetration!! $f(x)=(T(a,x))^\frac{1}{x}$ with various values of "a" and check the curve at x =0 thank you sheldonison Long Time Fellow Posts: 640 Threads: 22 Joined: Oct 2008 11/27/2012, 06:28 PM (This post was last modified: 11/27/2012, 06:30 PM by sheldonison.) (11/27/2012, 05:13 PM)Nasser Wrote: solve this limit .... $\lim_{x\rightarrow 0}(T(a,x))^{\frac{1}{x}}$ a>0 ...What is T(a,x)? Perhaps super exponentiation base a of x? We usually say sexp(0)=1, which works for all bases. Also, in my quote, I modified your comment to use the tex tag. - Sheldon Nasser Junior Fellow Posts: 9 Threads: 3 Joined: Nov 2012 11/28/2012, 07:14 AM (11/27/2012, 06:28 PM)sheldonison Wrote: What is T(a,x)? Perhaps super exponentiation base a of x?You are right thank you sheldonison Long Time Fellow Posts: 640 Threads: 22 Joined: Oct 2008 11/28/2012, 02:22 PM (This post was last modified: 11/28/2012, 05:05 PM by sheldonison.) (11/27/2012, 05:13 PM)Nasser Wrote: solve this limit ....I don't know if there is a software support tetration!!I posted a pari-gip routine that generates sexp(z) for real bases greater than $\eta=\exp(1/e)$ here, http://math.eretrandre.org/tetrationforu...hp?tid=486. By definition, T(a,0) = 1, since sexp(0) is defined to be 1. If T is analytic, then for each value of a, T has a Taylor series expansion around 0, corresponding to the Taylor series for sexp(z) around 0. Define $k_a=T'(a,0)$ as the first derivitive of that Taylor series. $\lim_{x \to 0} T(a,x)^{1/x} \approx (1 + k_a x) ^ {1/x}$ $\log(\lim_{x \to 0} T(a,x)^{1/x}) \approx \frac{1}{x} \log (1 + k_a x) \approx \frac{k_a x}{x} = k_a$ $\lim_{x \to 0} T(a,x)^{1/x} = \exp(k_a)$ There is an unproven conjecture that $\text{sexp}_a(z)$ is analytic in the base=a for complex values of a, with a singularity at base $\eta=\exp(1/e)$. For real values of a, if $a>\eta$, then sexp(z) goes to infinity at the real axis as z increases. If $a<=\eta$, then iterating $\exp^{[on]}_a(0)$ converges towards the attracting fixed point as n goes to infinity, but this is a different function than tetration. Then for base>$\eta$, we can have a taylor series for the any of the derivatives of $\text{sexp}_a(z)$, with the radius of convergence = $a-\eta$. I posted such a the taylor series for the first derivative of the base. For base=e, the first derivative ~= 1.0917673512583209918013845500272. The post includes pari-gp code to calculate sexp(z) for complex bases; the code for complex bases isn't as stable as the code for real bases, and doesn't always converge. If you're interested in a Taylor series for $k_a$ for your limit, search for "the Taylor series of the first derivative of sexp_b(z), developed around b=2" in this post: http://math.eretrandre.org/tetrationforu...e=threaded. - Sheldon Nasser Junior Fellow Posts: 9 Threads: 3 Joined: Nov 2012 12/03/2012, 07:46 AM You found an approximated solution. It is ok, but this will not help me, because I tried to find the first derive of b^^x and x^^x and other related functions like for example b^^(x^2) by using differentiation fundamentals concepts, and I am just facing this problem to finish my work. 
I may post my work here for discussion. thank you Sheldonison.
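Added note (not part of the original thread): the step sheldonison uses is the general fact that if $f(0)=1$ and $f$ is differentiable at $0$, then $\lim_{x\to 0} f(x)^{1/x} = \exp(f'(0))$. A quick numerical sanity check with a stand-in smooth function (not tetration), taking $k$ to be the quoted value of the derivative for base $e$:

```python
import math

k = 1.0917673512583210                    # sheldonison's value of sexp_e'(0)
f = lambda x: 1 + k * x + 0.3 * x**2      # any smooth f with f(0) = 1 and f'(0) = k

for x in (1e-1, 1e-3, 1e-5):
    print(x, f(x) ** (1 / x))             # approaches the limit as x -> 0
print("exp(k) =", math.exp(k))            # about 2.9795, the limiting value exp(f'(0))
```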
Users browsing this thread: 1 Guest(s) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 15, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8929380774497986, "perplexity": 4128.9937115355}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371876625.96/warc/CC-MAIN-20200409185507-20200409220007-00114.warc.gz"} |
https://sureshemre.wordpress.com/2018/10/27/new-upper-limit-for-electron-edm/ | ## New upper limit for electron EDM
ACME II experiment at Harvard reports a new upper limit for the electron EDM (electric dipole moment).
Electron EDM $\mathbf{\le 1.1 \times 10^{-29} \quad e \quad cm}$
The previous upper limit was $\mathbf{8.7 \times 10^{-29} \quad e \quad cm}$.
According to SM (Standard Model) of particle physics electron is a point particle with no spatial extension. Therefore, the electron EDM should be very close to zero. But, quantum effects (sea of virtual particles flickering around the electron core) can cause an asymmetric electric charge distribution therefore nonzero electric dipole moment. The SM estimate for EDM is less than $\mathbf{10^{-38} \quad e \quad cm}$. BSM (beyond SM) models such as the models involving supersymmetry predict higher EDM. The latest EDM measurements indicate that there is no hope of discovering supersymmetry with the current CERN LHC experiments.
The ACME collaboration is financially supported by NSF (National Science Foundation). The NSF news release about the latest EDM measurements is the best place to start reading about this subject.
For the curious
As an aside, it is said that EDM breaks the P-symmetry as well as the T-symmetry. I wanted to understand this. Unfortunately, I could not find a satisfactory tutorial on the web. Here’s my amateurish attempt for explanation.
Parity inversion (P) switches the sign of all spatial coordinates (i.e. $x \rightarrow -x$, $y \rightarrow -y$, $z \rightarrow -z$); time reversal (T) inverts the sign of all quantities associated with time ($t \rightarrow -t$) or motion, such as momentum $\overrightarrow{p} \rightarrow - \overrightarrow{p}$; charge conjugation (C) reverses electric charge ($q \rightarrow - q$).
All the laws of classical physics are identical under these mathematical operations. Quantum processes involving the "weak-nuclear force", however, can violate P, T or CP minimally. The CPT invariance is believed to be exact in both classical and quantum processes. This also means that T-symmetry is equivalent to CP-symmetry. Another direct consequence, of course, is that when the T-symmetry is broken the CP-symmetry would be broken as well; and vice versa.
Since T-symmetry is equivalent to CP-symmetry, applying the T transformation to an electron turns it into a positron (anti-electron). The C transformation changes charge, P transformation changes helicity. This is how electron becomes a positron under the CP transformation. What is the “symmetry” here? Electrons and positrons follow the same laws of physics. That’s the symmetry.
image credit: Benjamin N. Spaun
Let’s assume that initially the spin vector (MDM) and the dipole vector (EDM) are pointing in the same direction.
P transformation: P changes dipole direction but remember P also changes helicity. This means that MDM and EDM vectors will be pointing in opposite directions after the P transformation.
T transformation = C transformation followed by P transformation: if there is a non-zero EDM it would flip after C but P would flip it again. P also changes helicity therefore MDM and EDM vectors would be pointing in opposite directions.
In both scenarios the EDM and MDM vectors would be pointing in opposite directions after P or T transformations. Such electrons/positrons would not behave the same. This means that a non-zero EDM would break P-symmetry as well as the T-symmetry.
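For comparison, here is the transformation bookkeeping as it is usually tabulated in textbooks (added as a reference point, not part of the original post): the EDM $\vec{d}$ is a polar vector tied to the charge distribution, while the spin and the MDM $\vec{\mu}$ are axial. Under P, $\vec{d} \rightarrow -\vec{d}$ while the spin and $\vec{\mu}$ are unchanged; under T, the spin and $\vec{\mu}$ flip while $\vec{d}$ is unchanged. Either way, a nonzero EDM that starts out aligned with the spin ends up anti-aligned, which is why a permanent electron EDM would violate both P and T (and hence CP).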
We need a better tutorial than this. 🙂 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 9, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9351276159286499, "perplexity": 1438.0996633439306}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027312025.20/warc/CC-MAIN-20190817203056-20190817225056-00246.warc.gz"} |
http://www.emathematics.net/g8_linear_function.php?def=slope_perpendicular_line | User:
Find the slope of perpendicular lines
Perpendicular lines have slopes that are opposite reciprocals, like $\frac{a}{b}$ and $-\frac{b}{a}$. The slopes also have a product of -1.
Line t has a slope of $-\frac{5}{6}$. Line u is perpendicular to line t. What is the slope of line u?
Simplify your answer and write it as a proper fraction, improper fraction, or integer.
Line u is perpendicular to line t, so its slope is the opposite reciprocal. Find the opposite reciprocal.
$-\frac{5}{6}$   Take the slope of line t
$-\frac{6}{5}$   Find the reciprocal
$\frac{6}{5}$   Find the opposite
The slope of line u is $\frac{6}{5}$ .
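A quick way to check answers like this with exact fractions (an illustrative sketch, not part of the original page):

```python
from fractions import Fraction

slope_t = Fraction(-5, 6)
slope_u = -1 / slope_t      # opposite reciprocal
print(slope_u)              # 6/5
print(slope_t * slope_u)    # -1, as expected for perpendicular lines
```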
Line f has a slope of $\frac{-3}{4}$. Line g is perpendicular to line f. What is the slope of line g?
Simplify your answer and write it as a proper fraction or as a whole or mixed number.
Solution: | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 8, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9656082987785339, "perplexity": 1662.9524950580035}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256314.25/warc/CC-MAIN-20190521082340-20190521104340-00207.warc.gz"} |
http://mathhelpforum.com/differential-geometry/82045-twist-triangle-inequality-proof.html | # Thread: twist on the triangle inequality proof
1. ## twist on the triangle inequality proof
i'm stuck on this one:
Prove that ||z1| - |z2|| ≤ |z1 - z2|
Thanks!
2. The usual triangle inequality tells you that $|z_1| = |z_2 + (z_1-z_2)|\leqslant|z_2| + |z_1-z_2|$, and hence $|z_1|-|z_2|\leqslant|z_1-z_2|$. The same inequality with $z_1$ and $z_2$ interchanged gives $|z_2|-|z_1|\leqslant|z_2-z_1| = |z_1-z_2|$. The two inequalities together give $\bigl||z_1|-|z_2|\bigr|\leqslant|z_1-z_2|$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9931555986404419, "perplexity": 531.647694482303}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00098-ip-10-171-10-70.ec2.internal.warc.gz"} |
https://wikimili.com/en/Thompson_sampling | # Thompson sampling
Thompson sampling, [1] [2] named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists in choosing the action that maximizes the expected reward with respect to a randomly drawn belief.
## Description
Consider a set of contexts ${\displaystyle {\mathcal {X}}}$, a set of actions ${\displaystyle {\mathcal {A}}}$, and rewards in ${\displaystyle \mathbb {R} }$. In each round, the player obtains a context ${\displaystyle x\in {\mathcal {X}}}$, plays an action ${\displaystyle a\in {\mathcal {A}}}$ and receives a reward ${\displaystyle r\in \mathbb {R} }$ following a distribution that depends on the context and the issued action. The aim of the player is to play actions such as to maximize the cumulative rewards.
The elements of Thompson sampling are as follows:
1. a likelihood function ${\displaystyle P(r|\theta ,a,x)}$;
2. a set ${\displaystyle \Theta }$ of parameters ${\displaystyle \theta }$ of the distribution of ${\displaystyle r}$;
3. a prior distribution ${\displaystyle P(\theta )}$ on these parameters;
4. past observations triplets ${\displaystyle {\mathcal {D}}=\{(x;a;r)\}}$;
5. a posterior distribution ${\displaystyle P(\theta |{\mathcal {D}})\propto P({\mathcal {D}}|\theta )P(\theta )}$, where ${\displaystyle P({\mathcal {D}}|\theta )}$ is the likelihood function.
Thompson sampling consists in playing the action ${\displaystyle a^{\ast }\in {\mathcal {A}}}$ according to the probability that it maximizes the expected reward, i.e.
${\displaystyle \int \mathbb {I} \left[\mathbb {E} (r|a^{\ast },x,\theta )=\max _{a'}\mathbb {E} (r|a',x,\theta )\right]P(\theta |{\mathcal {D}})d\theta ,}$
where ${\displaystyle \mathbb {I} }$ is the indicator function.
In practice, the rule is implemented by sampling, in each round, parameters ${\displaystyle \theta ^{\ast }}$ from the posterior ${\displaystyle P(\theta |{\mathcal {D}})}$, and choosing the action ${\displaystyle a^{\ast }}$ that maximizes ${\displaystyle \mathbb {E} [r|\theta ^{\ast },a^{\ast },x]}$, i.e. the expected reward given the sampled parameters, the action and the current context. Conceptually, this means that the player instantiates their beliefs randomly in each round, and then acts optimally according to them. In most practical applications, it is computationally onerous to maintain and sample from a posterior distribution over models. As such, Thompson sampling is often used in conjunction with approximate sampling techniques. [2]
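To make the description above concrete, here is a minimal context-free sketch for the classic Bernoulli bandit with Beta priors (an illustrative toy example: the arm probabilities and horizon are made up, and the contextual part of the formulation is omitted):

```python
import random

true_probs = [0.45, 0.55, 0.60]      # unknown to the player
alpha = [1] * len(true_probs)        # Beta(1, 1) prior on each arm's success probability
beta = [1] * len(true_probs)

total_reward = 0
for t in range(10_000):
    # draw one belief per arm from its posterior, then act greedily on the draws
    samples = [random.betavariate(alpha[a], beta[a]) for a in range(len(true_probs))]
    a = samples.index(max(samples))
    reward = 1 if random.random() < true_probs[a] else 0
    total_reward += reward
    # conjugate Beta-Bernoulli posterior update
    alpha[a] += reward
    beta[a] += 1 - reward

print("average reward:", total_reward / 10_000)
print("posterior means:", [x / (x + y) for x, y in zip(alpha, beta)])
```

With these made-up probabilities the sampler concentrates its pulls on the 0.60 arm, which is exactly the "instantiate beliefs randomly, then act optimally" behaviour described above.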
## History
Thompson sampling was originally described by Thompson in 1933 [1] . It was subsequently rediscovered numerous times independently in the context of multi-armed bandit problems. [3] [4] [5] [6] [7] [8] A first proof of convergence for the bandit case has been shown in 1997. [3] The first application to Markov decision processes was in 2000. [5] A related approach (see Bayesian control rule) was published in 2010. [4] In 2010 it was also shown that Thompson sampling is instantaneously self-correcting. [8] Asymptotic convergence results for contextual bandits were published in 2011. [6] Nowadays, Thompson Sampling has been widely used in many online learning problems: Thompson sampling has also been applied to A/B testing in website design and online advertising; [9] Thompson sampling has formed the basis for accelerated learning in decentralized decision making; [10] a Double Thompson Sampling (D-TS) [11] algorithm has been proposed for dueling bandits, a variant of traditional MAB, where feedbacks come in the format of pairwise comparison.
## Relationship to other approaches
### Probability matching
Probability matching is a decision strategy in which predictions of class membership are proportional to the class base rates. Thus, if in the training set positive examples are observed 60% of the time, and negative examples are observed 40% of the time, the observer using a probability-matching strategy will predict (for unlabeled examples) a class label of "positive" on 60% of instances, and a class label of "negative" on 40% of instances.
### Bayesian control rule
A generalization of Thompson sampling to arbitrary dynamical environments and causal structures, known as Bayesian control rule, has been shown to be the optimal solution to the adaptive coding problem with actions and observations. [4] In this formulation, an agent is conceptualized as a mixture over a set of behaviours. As the agent interacts with its environment, it learns the causal properties and adopts the behaviour that minimizes the relative entropy to the behaviour with the best prediction of the environment's behaviour. If these behaviours have been chosen according to the maximum expected utility principle, then the asymptotic behaviour of the Bayesian control rule matches the asymptotic behaviour of the perfectly rational agent.
The setup is as follows. Let ${\displaystyle a_{1},a_{2},\ldots ,a_{T}}$ be the actions issued by an agent up to time ${\displaystyle T}$, and let ${\displaystyle o_{1},o_{2},\ldots ,o_{T}}$ be the observations gathered by the agent up to time ${\displaystyle T}$. Then, the agent issues the action ${\displaystyle a_{T+1}}$ with probability: [4]
${\displaystyle P(a_{T+1}|{\hat {a}}_{1:T},o_{1:T}),}$
where the "hat"-notation ${\displaystyle {\hat {a}}_{t}}$ denotes the fact that ${\displaystyle a_{t}}$ is a causal intervention (see Causality), and not an ordinary observation. If the agent holds beliefs ${\displaystyle \theta \in \Theta }$ over its behaviors, then the Bayesian control rule becomes
${\displaystyle P(a_{T+1}|{\hat {a}}_{1:T},o_{1:T})=\int _{\Theta }P(a_{T+1}|\theta ,{\hat {a}}_{1:T},o_{1:T})P(\theta |{\hat {a}}_{1:T},o_{1:T})\,d\theta }$,
where ${\displaystyle P(\theta |{\hat {a}}_{1:T},o_{1:T})}$ is the posterior distribution over the parameter ${\displaystyle \theta }$ given actions ${\displaystyle a_{1:T}}$ and observations ${\displaystyle o_{1:T}}$.
In practice, the Bayesian control amounts to sampling, in each time step, a parameter ${\displaystyle \theta ^{\ast }}$ from the posterior distribution ${\displaystyle P(\theta |{\hat {a}}_{1:T},o_{1:T})}$, where the posterior distribution is computed using Bayes' rule by only considering the (causal) likelihoods of the observations ${\displaystyle o_{1},o_{2},\ldots ,o_{T}}$ and ignoring the (causal) likelihoods of the actions ${\displaystyle a_{1},a_{2},\ldots ,a_{T}}$, and then by sampling the action ${\displaystyle a_{T+1}^{\ast }}$ from the action distribution ${\displaystyle P(a_{T+1}|\theta ^{\ast },{\hat {a}}_{1:T},o_{1:T})}$.
### Upper-Confidence-Bound (UCB) algorithms
Thompson sampling and upper-confidence bound algorithms share a fundamental property that underlies many of their theoretical guarantees. Roughly speaking, both algorithms allocate exploratory effort to actions that might be optimal and are in this sense "optimistic." Leveraging this property, one can translate regret bounds established for UCB algorithms to Bayesian regret bounds for Thompson sampling [12] or unify regret analysis across both these algorithms and many classes of problems. [13]
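For concreteness, here is a minimal side-by-side sketch of how the two families choose an arm for Bernoulli rewards. It is an editorial illustration: the per-arm counts are invented, and UCB1 with Beta(1, 1) posteriors is just one common concrete instantiation, not the specific algorithms analyzed in [12] [13].

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-arm statistics after some plays of a Bernoulli bandit.
pulls   = np.array([120, 40, 15])
rewards = np.array([60, 24, 10])
t = pulls.sum()

# UCB1: empirical mean plus an exploration bonus that shrinks as an arm is pulled more.
ucb_index = rewards / pulls + np.sqrt(2 * np.log(t) / pulls)
arm_ucb = int(np.argmax(ucb_index))

# Thompson sampling: draw a plausible mean from each Beta posterior, act greedily on the draw.
samples = rng.beta(1 + rewards, 1 + pulls - rewards)
arm_ts = int(np.argmax(samples))

print("UCB1 picks arm", arm_ucb, "; Thompson sampling picks arm", arm_ts)
```

Both rules favour arms whose plausible (rather than merely average) payoff is high, which is the shared "optimism" the regret analyses exploit.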
## Related Research Articles
In statistics, the likelihood function is formed from the joint probability of a sample of data given a set of model parameter values; it is viewed and used as a function of the parameters given the data sample.
Bayesian inference is a method of statistical inference in which Bayes' theorem is used to update the probability for a hypothesis as more evidence or information becomes available. Bayesian inference is an important technique in statistics, and especially in mathematical statistics. Bayesian updating is particularly important in the dynamic analysis of a sequence of data. Bayesian inference has found application in a wide range of activities, including science, engineering, philosophy, medicine, sport, and law. In the philosophy of decision theory, Bayesian inference is closely related to subjective probability, often called "Bayesian probability".
In statistics, maximum likelihood estimation (MLE) is a method of estimating the parameters of a probability distribution by maximizing a likelihood function, so that under the assumed statistical model the observed data is most probable. The point in the parameter space that maximizes the likelihood function is called the maximum likelihood estimate. The logic of maximum likelihood is both intuitive and flexible, and as such the method has become a dominant means of statistical inference.
A Bayesian network, Bayes network, belief network, decision network, Bayes(ian) model or probabilistic directed acyclic graphical model is a probabilistic graphical model that represents a set of variables and their conditional dependencies via a directed acyclic graph (DAG). Bayesian networks are ideal for taking an event that occurred and predicting the likelihood that any one of several possible known causes was the contributing factor. For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases.
In probability theory and statistics, a Gaussian process is a stochastic process, such that every finite collection of those random variables has a multivariate normal distribution, i.e. every finite linear combination of them is normally distributed. The distribution of a Gaussian process is the joint distribution of all those random variables, and as such, it is a distribution over functions with a continuous domain, e.g. time or space.
In Bayesian statistics, the posterior probability of a random event or an uncertain proposition is the conditional probability that is assigned after the relevant evidence or background is taken into account. Similarly, the posterior probability distribution is the probability distribution of an unknown quantity, treated as a random variable, conditional on the evidence obtained from an experiment or survey. "Posterior", in this context, means after taking into account the relevant evidence related to the particular case being examined. For instance, there is a ("non-posterior") probability of a person finding buried treasure if they dig in a random spot, and a posterior probability of finding buried treasure if they dig in a spot where their metal detector rings.
In statistics, the score is the gradient of the log-likelihood function with respect to the parameter vector. Evaluated at a particular point of the parameter vector, the score indicates the steepness of the log-likelihood function and thereby the sensitivity to infinitesimal changes to the parameter values. If the log-likelihood function is continuous over the parameter space, the score will vanish at a local maximum or minimum; this fact is used in maximum likelihood estimation to find the parameter values that maximize the likelihood function.
In mathematical statistics, the Fisher information is a way of measuring the amount of information that an observable random variable X carries about an unknown parameter θ of a distribution that models X. Formally, it is the variance of the score, or the expected value of the observed information. In Bayesian statistics, the asymptotic distribution of the posterior mode depends on the Fisher information and not on the prior. The role of the Fisher information in the asymptotic theory of maximum-likelihood estimation was emphasized by the statistician Ronald Fisher. The Fisher information is also used in the calculation of the Jeffreys prior, which is used in Bayesian statistics.
In Bayesian probability theory, if the posterior distributions p(θ | x) are in the same probability distribution family as the prior probability distribution p(θ), the prior and posterior are then called conjugate distributions, and the prior is called a conjugate prior for the likelihood function. For example, the Gaussian family is conjugate to itself with respect to a Gaussian likelihood function: if the likelihood function is Gaussian, choosing a Gaussian prior over the mean will ensure that the posterior distribution is also Gaussian. This means that the Gaussian distribution is a conjugate prior for the likelihood that is also Gaussian. The concept, as well as the term "conjugate prior", were introduced by Howard Raiffa and Robert Schlaifer in their work on Bayesian decision theory. A similar concept had been discovered independently by George Alfred Barnard.
In statistics, a marginal likelihood function, or integrated likelihood, is a likelihood function in which some parameter variables have been marginalized. In the context of Bayesian statistics, it may also be referred to as the evidence or model evidence.
In statistics, the Wald test assesses constraints on statistical parameters based on the weighted distance between the unrestricted estimate and its hypothesized value under the null hypothesis, where the weight is the precision of the estimate. Intuitively, the larger this weighted distance, the less likely it is that the constraint is true. While the finite sample distributions of Wald tests are generally unknown, it has an asymptotic χ2-distribution under the null hypothesis, a fact that can be used to determine statistical significance.
In statistical decision theory, an admissible decision rule is a rule for making a decision such that there is no other rule that is always "better" than it, in the precise sense of "better" defined below. This concept is analogous to Pareto efficiency.
In Bayesian statistics, a maximum a posteriori probability (MAP) estimate is an estimate of an unknown quantity, that equals the mode of the posterior distribution. The MAP can be used to obtain a point estimate of an unobserved quantity on the basis of empirical data. It is closely related to the method of maximum likelihood (ML) estimation, but employs an augmented optimization objective which incorporates a prior distribution over the quantity one wants to estimate. MAP estimation can therefore be seen as a regularization of ML estimation.
In statistics, the Bayesian information criterion (BIC) or Schwarz information criterion is a criterion for model selection among a finite set of models; the model with the lowest BIC is preferred. It is based, in part, on the likelihood function and it is closely related to the Akaike information criterion (AIC).
In statistics, M-estimators are a broad class of extremum estimators for which the objective function is a sample average. Both non-linear least squares and maximum likelihood estimation are special cases of M-estimators. The definition of M-estimators was motivated by robust statistics, which contributed new types of M-estimators. The statistical procedure of evaluating an M-estimator on a data set is called M-estimation.
Approximate Bayesian computation (ABC) constitutes a class of computational methods rooted in Bayesian statistics that can be used to estimate the posterior distributions of model parameters.
In statistical decision theory, a randomised decision rule or mixed decision rule is a decision rule that associates probabilities with deterministic decision rules. In finite decision problems, randomised decision rules define a risk set which is the convex hull of the risk points of the nonrandomised decision rules.
In statistics, suppose that we have been given some data, and we are constructing a statistical model of that data. The relative likelihood compares the relative plausibilities of different candidate models or of different values of a parameter of a single model.
In computational statistics, the pseudo-marginal Metropolis–Hastings algorithm is a Monte Carlo method to sample from a probability distribution. It is an instance of the popular Metropolis–Hastings algorithm that extends its use to cases where the target density is not available analytically. It relies on the fact that the Metropolis–Hastings algorithm can still sample from the correct target distribution if the target density in the acceptance ratio is replaced by an estimate. It is especially popular in Bayesian statistics, where it is applied if the likelihood function is not tractable.
Stochastic gradient Langevin dynamics (SGLD) is an optimization technique composed of characteristics from stochastic gradient descent, a Robbins–Monro optimization algorithm, and Langevin dynamics, a mathematical extension of molecular dynamics models. Like stochastic gradient descent, SGLD is an iterative optimization algorithm which introduces additional noise to the stochastic gradient estimator used in SGD to optimize a differentiable objective function. Unlike traditional SGD, SGLD can be used for Bayesian learning, since the method produces samples from a posterior distribution of parameters based on available data. First described by Welling and Teh in 2011, the method has applications in many contexts which require optimization, and is most notably applied in machine learning problems.
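A hedged sketch of the SGLD update just described, applied to a toy problem (inferring the mean of a unit-variance Gaussian under a standard normal prior); the model, step size and batch size are illustrative assumptions, not anything prescribed above:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 1000 draws from a Gaussian with unknown mean and known unit variance.
data = rng.normal(loc=2.0, scale=1.0, size=1000)
N = len(data)

def grad_log_prior(theta):
    return -theta                        # standard normal prior

def grad_log_likelihood(theta, batch):
    return np.sum(batch - theta)         # unit-variance Gaussian likelihood

theta, eps, batch_size = 0.0, 1e-4, 50
samples = []
for step in range(5000):
    batch = rng.choice(data, size=batch_size, replace=False)
    # Stochastic gradient of the log posterior, rescaled from the mini-batch to the full data set...
    grad = grad_log_prior(theta) + (N / batch_size) * grad_log_likelihood(theta, batch)
    # ...plus injected Langevin noise whose variance matches the step size.
    theta += 0.5 * eps * grad + rng.normal(scale=np.sqrt(eps))
    samples.append(theta)

print("posterior mean estimate:", np.mean(samples[1000:]))
```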
## References
1. Thompson, William R. "On the likelihood that one unknown probability exceeds another in view of the evidence of two samples". Biometrika , 25(3–4):285–294, 1933.
2. Daniel J. Russo, Benjamin Van Roy, Abbas Kazerouni, Ian Osband and Zheng Wen (2018), "A Tutorial on Thompson Sampling", Foundations and Trends in Machine Learning: Vol. 11: No. 1, pp 1-96. https://web.stanford.edu/~bvr/pubs/TS_Tutorial.pdf
3. J. Wyatt. Exploration and Inference in Learning from Reinforcement. Ph.D. thesis, Department of Artificial Intelligence, University of Edinburgh. March 1997.
4. P. A. Ortega and D. A. Braun. "A Minimum Relative Entropy Principle for Learning and Acting", Journal of Artificial Intelligence Research, 38, pages 475–511, 2010.
5. M. J. A. Strens. "A Bayesian Framework for Reinforcement Learning", Proceedings of the Seventeenth International Conference on Machine Learning, Stanford University, California, June 29–July 2, 2000, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.140.1701
6. B. C. May, B. C., N. Korda, A. Lee, and D. S. Leslie. "Optimistic Bayesian sampling in contextual-bandit problems". Technical report, Statistics Group, Department of Mathematics, University of Bristol, 2011.
7. Chapelle, Olivier, and Lihong Li. "An empirical evaluation of thompson sampling." Advances in neural information processing systems. 2011. http://papers.nips.cc/paper/4321-an-empirical-evaluation-of-thompson-sampling
8. O.-C. Granmo. "Solving Two-Armed Bernoulli Bandit Problems Using a Bayesian Learning Automaton", International Journal of Intelligent Computing and Cybernetics, 3 (2), 2010, 207-234.
9. Ian Clarke. "Proportionate A/B testing", September 22nd, 2011, http://blog.locut.us/2011/09/22/proportionate-ab-testing/
10. Granmo, O. C.; Glimsdal, S. (2012). "Accelerated Bayesian learning for decentralized two-armed bandit based decision making with applications to the Goore Game". Applied Intelligence. doi:10.1007/s10489-012-0346-z.
11. Wu, Huasen; Liu, Xin; Srikant, R (2016), Double Thompson Sampling for Dueling Bandits, arXiv:1604.07101, Bibcode:2016arXiv160407101W
12. Daniel J. Russo and Benjamin Van Roy (2014), "Learning to Optimize Via Posterior Sampling", Mathematics of Operations Research, Vol. 39, No. 4, pp. 1221-1243, 2014. https://pubsonline.informs.org/doi/abs/10.1287/moor.2014.0650
13. Daniel J. Russo and Benjamin Van Roy (2013), "Eluder Dimension and the Sample Complexity of Optimistic Exploration", Advances in Neural Information Processing Systems 26, pp. 2256-2264. http://papers.nips.cc/paper/4909-eluder-dimension-and-the-sample-complexity-of-optimistic-exploration.pdf
https://en.wikipedia.org/wiki/Tight_binding | # Tight binding
In solid-state physics, the tight-binding model (or TB model) is an approach to the calculation of electronic band structure using an approximate set of wave functions based upon superposition of wave functions for isolated atoms located at each atomic site. The method is closely related to the LCAO method (linear combination of atomic orbitals method) used in chemistry. Tight-binding models are applied to a wide variety of solids. The model gives good qualitative results in many cases and can be combined with other models that give better results where the tight-binding model fails. Though the tight-binding model is a one-electron model, the model also provides a basis for more advanced calculations like the calculation of surface states and application to various kinds of many-body problem and quasiparticle calculations.
## Introduction
The name "tight binding" of this electronic band structure model suggests that this quantum mechanical model describes the properties of tightly bound electrons in solids. The electrons in this model should be tightly bound to the atom to which they belong and they should have limited interaction with states and potentials on surrounding atoms of the solid. As a result, the wave function of the electron will be rather similar to the atomic orbital of the free atom to which it belongs. The energy of the electron will also be rather close to the ionization energy of the electron in the free atom or ion because the interaction with potentials and states on neighboring atoms is limited.
Though the mathematical formulation[1] of the one-particle tight-binding Hamiltonian may look complicated at first glance, the model is not complicated at all and can be understood intuitively quite easily. There are only three kinds of matrix elements that play a significant role in the theory. Two of those three kinds of elements should be close to zero and can often be neglected. The most important elements in the model are the interatomic matrix elements, which would simply be called the bond energies by a chemist.
In general there are a number of atomic energy levels and atomic orbitals involved in the model. This can lead to complicated band structures because the orbitals belong to different point-group representations. The reciprocal lattice and the Brillouin zone often belong to a different space group than the crystal of the solid. High-symmetry points in the Brillouin zone belong to different point-group representations. When simple systems like the lattices of elements or simple compounds are studied it is often not very difficult to calculate eigenstates in high-symmetry points analytically. So the tight-binding model can provide nice examples for those who want to learn more about group theory.
The tight-binding model has a long history and has been applied in many ways and with many different purposes and different outcomes. The model doesn't stand on its own. Parts of the model can be filled in or extended by other kinds of calculations and models like the nearly-free electron model. The model itself, or parts of it, can serve as the basis for other calculations.[2] In the study of conductive polymers, organic semiconductors and molecular electronics, for example, tight-binding-like models are applied in which the role of the atoms in the original concept is replaced by the molecular orbitals of conjugated systems and where the interatomic matrix elements are replaced by inter- or intramolecular hopping and tunneling parameters. These conductors nearly all have very anisotropic properties and sometimes are almost perfectly one-dimensional.
## Historical background
By 1928, the idea of a molecular orbital had been advanced by Robert Mulliken, who was influenced considerably by the work of Friedrich Hund. The LCAO method for approximating molecular orbitals was introduced in 1928 by B. N. Finklestein and G. E. Horowitz, while the LCAO method for solids was developed by Felix Bloch, as part of his doctoral dissertation in 1928, concurrently with and independent of the LCAO-MO approach. A much simpler interpolation scheme for approximating the electronic band structure, especially for the d-bands of transition metals, is the parameterized tight-binding method conceived in 1954 by John Clarke Slater and George Fred Koster,[1] sometimes referred to as the SK tight-binding method. With the SK tight-binding method, electronic band structure calculations on a solid need not be carried out with full rigor as in the original Bloch's theorem but, rather, first-principles calculations are carried out only at high-symmetry points and the band structure is interpolated over the remainder of the Brillouin zone between these points.
In this approach, interactions between different atomic sites are considered as perturbations. There exist several kinds of interactions we must consider. The crystal Hamiltonian is only approximately a sum of atomic Hamiltonians located at different sites and atomic wave functions overlap adjacent atomic sites in the crystal, and so are not accurate representations of the exact wave function. There are further explanations in the next section with some mathematical expressions.
In recent research on strongly correlated materials the tight binding approach is a basic approximation, because highly localized electrons like 3-d transition metal electrons sometimes display strongly correlated behavior. In this case, the role of electron-electron interaction must be considered using a many-body physics description.
The tight-binding model is typically used for calculations of electronic band structure and band gaps in the static regime. However, in combination with other methods such as the random phase approximation (RPA) model, the dynamic response of systems may also be studied.
## Mathematical formulation
We introduce the atomic orbitals ${\displaystyle \varphi _{m}(\mathbf {r} )}$, which are eigenfunctions of the Hamiltonian ${\displaystyle H_{\rm {at}}}$ of a single isolated atom. When the atom is placed in a crystal, this atomic wave function overlaps adjacent atomic sites, and so is no longer a true eigenfunction of the crystal Hamiltonian. The overlap is less when electrons are tightly bound, which is the source of the descriptor "tight-binding". Any corrections to the atomic potential ${\displaystyle \Delta U}$ required to obtain the true Hamiltonian ${\displaystyle H}$ of the system are assumed small:
${\displaystyle H(\mathbf {r} )=H_{\mathrm {at} }(\mathbf {r} )+\sum _{\mathbf {R_{n}} \neq \mathbf {0} }V(\mathbf {r} -\mathbf {R_{n}} )=H_{\mathrm {at} }(\mathbf {r} )+\Delta U(\mathbf {r} )\ ,}$
where ${\displaystyle V(\mathbf {r} -\mathbf {R_{n}} )}$ denotes the atomic potential of one atom located at site ${\displaystyle \mathbf {R} _{n}}$ in the crystal lattice. A solution ${\displaystyle \psi _{m}}$ to the time-independent single electron Schrödinger equation is then approximated as a linear combination of atomic orbitals ${\displaystyle \varphi _{m}(\mathbf {r-R_{n}} )}$:
${\displaystyle \psi _{m}(\mathbf {r} )=\sum _{\mathbf {R_{n}} }b_{m}(\mathbf {R_{n}} )\ \varphi _{m}(\mathbf {r-R_{n}} )}$,
where ${\displaystyle m}$ refers to the m-th atomic energy level.
### Translational symmetry and normalization
The Bloch theorem states that the wave function in a crystal can change under translation only by a phase factor:
${\displaystyle \psi (\mathbf {r+R_{\ell }} )=e^{i\mathbf {k\cdot R_{\ell }} }\psi (\mathbf {r} )\ ,}$
where ${\displaystyle \mathbf {k} }$ is the wave vector of the wave function. Consequently, the coefficients satisfy
${\displaystyle \sum _{\mathbf {R_{n}} }b_{m}(\mathbf {R_{n}} )\ \varphi _{m}(\mathbf {r-R_{n}+R_{\ell }} )=e^{i\mathbf {k\cdot R_{\ell }} }\sum _{\mathbf {R_{n}} }b_{m}(\mathbf {R_{n}} )\ \varphi _{m}(\mathbf {r-R_{n}} )\ .}$
By substituting ${\displaystyle \mathbf {R_{p}} =\mathbf {R_{n}} -\mathbf {R_{\ell }} }$, we find
${\displaystyle b_{m}(\mathbf {R_{p}+R_{\ell }} )=e^{i\mathbf {k\cdot R_{\ell }} }b_{m}(\mathbf {R_{p}} )\ ,}$ (where in RHS we have replaced the dummy index ${\displaystyle \mathbf {R_{n}} }$ with ${\displaystyle \mathbf {R_{p}} }$)
or
${\displaystyle b_{m}(\mathbf {R_{l}} )=e^{i\mathbf {k\cdot R_{l}} }b_{m}(\mathbf {0} )\ .}$
Normalizing the wave function to unity:
${\displaystyle \int d^{3}r\ \psi _{m}^{*}(\mathbf {r} )\psi _{m}(\mathbf {r} )=1}$
${\displaystyle =\sum _{\mathbf {R_{n}} }b_{m}^{*}(\mathbf {R_{n}} )\sum _{\mathbf {R_{\ell }} }b_{m}(\mathbf {R_{\ell }} )\int d^{3}r\ \varphi _{m}^{*}(\mathbf {r-R_{n}} )\varphi _{m}(\mathbf {r-R_{\ell }} )}$
${\displaystyle =b_{m}^{*}(0)b_{m}(0)\sum _{\mathbf {R_{n}} }e^{-i\mathbf {k\cdot R_{n}} }\sum _{\mathbf {R_{\ell }} }e^{i\mathbf {k\cdot R_{\ell }} }\ \int d^{3}r\ \varphi _{m}^{*}(\mathbf {r-R_{n}} )\varphi _{m}(\mathbf {r-R_{\ell }} )}$
${\displaystyle =Nb_{m}^{*}(0)b_{m}(0)\sum _{\mathbf {R_{p}} }e^{-i\mathbf {k\cdot R_{p}} }\ \int d^{3}r\ \varphi _{m}^{*}(\mathbf {r-R_{p}} )\varphi _{m}(\mathbf {r} )\ }$
${\displaystyle =Nb_{m}^{*}(0)b_{m}(0)\sum _{\mathbf {R_{p}} }e^{i\mathbf {k\cdot R_{p}} }\ \int d^{3}r\ \varphi _{m}^{*}(\mathbf {r} )\varphi _{m}(\mathbf {r-R_{p}} )\ ,}$
so the normalization sets ${\displaystyle b_{m}(0)}$ as
${\displaystyle b_{m}^{*}(0)b_{m}(0)={\frac {1}{N}}\ \cdot \ {\frac {1}{1+\sum _{\mathbf {R_{p}\neq 0} }e^{i\mathbf {k\cdot R_{p}} }\alpha _{m}(\mathbf {R_{p}} )}}\ ,}$
where αm (Rp ) are the atomic overlap integrals, which are frequently neglected, resulting in [3]
${\displaystyle b_{m}(0)\approx {\frac {1}{\sqrt {N}}}\ ,}$
and
${\displaystyle \psi _{m}(\mathbf {r} )\approx {\frac {1}{\sqrt {N}}}\sum _{\mathbf {R_{n}} }e^{i\mathbf {k\cdot R_{n}} }\ \varphi _{m}(\mathbf {r-R_{n}} )\ .}$
### The tight binding Hamiltonian
Using the tight binding form for the wave function, and assuming only the m-th atomic energy level is important for the m-th energy band, the Bloch energies ${\displaystyle \varepsilon _{m}}$ are of the form
${\displaystyle \varepsilon _{m}=\int d^{3}r\ \psi _{m}^{*}(\mathbf {r} )H(\mathbf {r} )\psi (\mathbf {r} )}$
${\displaystyle =\sum _{\mathbf {R_{n}} }b^{*}(\mathbf {R_{n}} )\ \int d^{3}r\ \varphi ^{*}(\mathbf {r-R_{n}} )H(\mathbf {r} )\psi (\mathbf {r} )\ }$
${\displaystyle =\sum _{\mathbf {R_{\ell }} }\ \sum _{\mathbf {R_{n}} }b^{*}(\mathbf {R_{n}} )\ \int d^{3}r\ \varphi ^{*}(\mathbf {r-R_{n}} )H_{\mathrm {at} }(\mathbf {r-R_{\ell }} )\psi (\mathbf {r} )\ +\sum _{\mathbf {R_{n}} }b^{*}(\mathbf {R_{n}} )\ \int d^{3}r\ \varphi ^{*}(\mathbf {r-R_{n}} )\Delta U(\mathbf {r} )\psi (\mathbf {r} )\ .}$
${\displaystyle \approx E_{m}+b^{*}(0)\sum _{\mathbf {R_{n}} }e^{-i\mathbf {k\cdot R_{n}} }\ \int d^{3}r\ \varphi ^{*}(\mathbf {r-R_{n}} )\Delta U(\mathbf {r} )\psi (\mathbf {r} )\ .}$
Here terms involving the atomic Hamiltonian at sites other than where it is centered are neglected. The energy then becomes
${\displaystyle \varepsilon _{m}(\mathbf {k} )=E_{m}-N\ |b(0)|^{2}\left(\beta _{m}+\sum _{\mathbf {R_{n}} \neq 0}\sum _{l}\gamma _{m,l}(\mathbf {R_{n}} )e^{i\mathbf {k} \cdot \mathbf {R_{n}} }\right)\ ,}$
${\displaystyle =E_{m}-\ {\frac {\beta _{m}+\sum _{\mathbf {R_{n}} \neq 0}\sum _{l}e^{i\mathbf {k} \cdot \mathbf {R_{n}} }\gamma _{m,l}(\mathbf {R_{n}} )}{\ \ 1+\sum _{\mathbf {R_{n}\neq 0} }\sum _{l}e^{i\mathbf {k\cdot R_{n}} }\alpha _{m,l}(\mathbf {R_{n}} )}}\ ,}$
where Em is the energy of the m-th atomic level, and ${\displaystyle \alpha _{m,l}}$, ${\displaystyle \beta _{m}}$ and ${\displaystyle \gamma _{m,l}}$ are the tight binding matrix elements discussed below.
### The tight binding matrix elements
The elements
${\displaystyle \beta _{m}=-\int {\varphi _{m}^{*}(\mathbf {r} )\Delta U(\mathbf {r} )\varphi _{m}(\mathbf {r} )\,d^{3}r}{\text{,}}}$
are the atomic energy shift due to the potential on neighboring atoms. This term is relatively small in most cases. If it is large it means that potentials on neighboring atoms have a large influence on the energy of the central atom.
The next class of terms
${\displaystyle \gamma _{m,l}(\mathbf {R_{n}} )=-\int {\varphi _{m}^{*}(\mathbf {r} )\Delta U(\mathbf {r} )\varphi _{l}(\mathbf {r} -\mathbf {R_{n}} )\,d^{3}r}{\text{,}}}$
is the interatomic matrix element between the atomic orbitals m and l on adjacent atoms. It is also called the bond energy or two center integral and it is the dominant term in the tight binding model.
The last class of terms
${\displaystyle \alpha _{m,l}(\mathbf {R_{n}} )=\int {\varphi _{m}^{*}(\mathbf {r} )\varphi _{l}(\mathbf {r-R_{n}} )\,d^{3}r}{\text{,}}}$
denote the overlap integrals between the atomic orbitals m and l on adjacent atoms. These, too, are typically small; if not, then Pauli repulsion has a non-negligible influence on the energy of the central atom.
## Evaluation of the matrix elements
As mentioned before, the values of the ${\displaystyle \beta _{m}}$-matrix elements are not so large in comparison with the ionization energy because the potentials of neighboring atoms on the central atom are limited. If ${\displaystyle \beta _{m}}$ is not relatively small, it means that the potential of the neighboring atom on the central atom is not small either. In that case it is an indication that the tight binding model is, for some reason, not a very good model for the description of the band structure. The interatomic distances can be too small, or the charges on the atoms or ions in the lattice can be wrong, for example.
The interatomic matrix elements ${\displaystyle \gamma _{m,l}}$ can be calculated directly if the atomic wave functions and the potentials are known in detail. Most often this is not the case. There are numerous ways to get parameters for these matrix elements. Parameters can be obtained from chemical bond energy data. Energies and eigenstates at some high-symmetry points in the Brillouin zone can be evaluated, and the values of the integrals in the matrix elements can be matched with band structure data from other sources.
The interatomic overlap matrix elements ${\displaystyle \alpha _{m,l}}$ should be rather small or negligible. If they are large it is again an indication that the tight binding model is of limited value for some purposes. Large overlap is, for example, an indication of too short an interatomic distance. In metals and transition metals the broad s-band or sp-band can be fitted better to an existing band structure calculation by the introduction of next-nearest-neighbor matrix elements and overlap integrals, but fits like that don't yield a very useful model for the electronic wave function of a metal. Broad bands in dense materials are better described by a nearly free electron model.
The tight binding model works particularly well in cases where the band width is small and the electrons are strongly localized, like in the case of d-bands and f-bands. The model also gives good results in the case of open crystal structures, like diamond or silicon, where the number of neighbors is small. The model can easily be combined with a nearly free electron model in a hybrid NFE-TB model.[2]
## Connection to Wannier functions
Bloch functions describe the electronic states in a periodic crystal lattice. Bloch functions can be represented as a Fourier series[4]
${\displaystyle \psi _{m}\mathbf {(k,r)} ={\frac {1}{\sqrt {N}}}\sum _{n}{a_{m}\mathbf {(R_{n},r)} }e^{\mathbf {ik\cdot R_{n}} }\ ,}$
where Rn denotes an atomic site in a periodic crystal lattice, k is the wave vector of the Bloch's function, r is the electron position, m is the band index, and the sum is over all N atomic sites. The Bloch's function is an exact eigensolution for the wave function of an electron in a periodic crystal potential corresponding to an energy Em (k), and is spread over the entire crystal volume.
Using the Fourier transform analysis, a spatially localized wave function for the m-th energy band can be constructed from multiple Bloch's functions:
${\displaystyle a_{m}\mathbf {(R_{n},r)} ={\frac {1}{\sqrt {N}}}\sum _{\mathbf {k} }{e^{\mathbf {-ik\cdot R_{n}} }\psi _{m}\mathbf {(k,r)} }={\frac {1}{\sqrt {N}}}\sum _{\mathbf {k} }{e^{\mathbf {ik\cdot (r-R_{n})} }u_{m}\mathbf {(k,r)} }.}$
These real space wave functions ${\displaystyle {a_{m}\mathbf {(R_{n},r)} }}$ are called Wannier functions, and are fairly closely localized to the atomic site Rn. Of course, if we have exact Wannier functions, the exact Bloch functions can be derived using the inverse Fourier transform.
However it is not easy to calculate directly either Bloch functions or Wannier functions. An approximate approach is necessary in the calculation of electronic structures of solids. If we consider the extreme case of isolated atoms, the Wannier function would become an isolated atomic orbital. That limit suggests the choice of an atomic wave function as an approximate form for the Wannier function, the so-called tight binding approximation.
## Second quantization
Modern explanations of electronic structure, like the t-J model and the Hubbard model, are based on the tight binding model.[5] Tight binding can be understood by working under a second quantization formalism.
Using the atomic orbital as a basis state, the second quantization Hamiltonian operator in the tight binding framework can be written as:
${\displaystyle H=-t\sum _{\langle i,j\rangle ,\sigma }(c_{i,\sigma }^{\dagger }c_{j,\sigma }^{}+h.c.)}$,
${\displaystyle c_{i\sigma }^{\dagger },c_{j\sigma }}$ - creation and annihilation operators
${\displaystyle \displaystyle \sigma }$ - spin polarization
${\displaystyle \displaystyle t}$ - hopping integral
${\displaystyle \displaystyle \langle i,j\rangle }$ - nearest neighbor index
${\displaystyle \displaystyle h.c.}$ - the hermitian conjugate of the other term(s)
Here, the hopping integral ${\displaystyle \displaystyle t}$ corresponds to the transfer integral ${\displaystyle \displaystyle \gamma }$ in the tight binding model. In the extreme case ${\displaystyle t\rightarrow 0}$, it is impossible for an electron to hop into neighboring sites; this is the limit of an isolated atomic system. If the hopping term is turned on (${\displaystyle \displaystyle t>0}$), electrons can occupy both sites, lowering their kinetic energy.
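To make the hopping term concrete, the sketch below (an illustration with an arbitrary value of t) builds the single-particle matrix of the nearest-neighbour hopping Hamiltonian for a small ring of sites and checks that its eigenvalues reproduce the familiar -2t cos(ka) band:

```python
import numpy as np

# Single-particle matrix of H = -t * sum over nearest neighbours of (c_i^dagger c_j + h.c.)
# for a ring of L sites with periodic boundary conditions.
L, t = 8, 1.0
H = np.zeros((L, L))
for i in range(L):
    j = (i + 1) % L
    H[i, j] = H[j, i] = -t

eigenvalues = np.sort(np.linalg.eigvalsh(H))

# For a ring the exact single-particle energies are -2 t cos(k a) with k a = 2 pi n / L.
k = 2 * np.pi * np.arange(L) / L
exact = np.sort(-2 * t * np.cos(k))

print(np.allclose(eigenvalues, exact))   # True
```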
In the strongly correlated electron system, it is necessary to consider the electron-electron interaction. This term can be written in
${\displaystyle \displaystyle H_{ee}={\frac {1}{2}}\sum _{n,m,\sigma }\langle n_{1}m_{1},n_{2}m_{2}|{\frac {e^{2}}{|r_{1}-r_{2}|}}|n_{3}m_{3},n_{4}m_{4}\rangle c_{n_{1}m_{1}\sigma _{1}}^{\dagger }c_{n_{2}m_{2}\sigma _{2}}^{\dagger }c_{n_{4}m_{4}\sigma _{2}}c_{n_{3}m_{3}\sigma _{1}}}$
This interaction Hamiltonian includes the direct Coulomb interaction energy and the exchange interaction energy between electrons. Several novel physical phenomena are induced by this electron-electron interaction energy, such as metal-insulator transitions (MIT), high-temperature superconductivity, and several quantum phase transitions.
## Example: one-dimensional s-band
Here the tight binding model is illustrated with an s-band model for a string of atoms with a single s-orbital in a straight line, with spacing a and σ bonds between atomic sites.
To find approximate eigenstates of the Hamiltonian, we can use a linear combination of the atomic orbitals
${\displaystyle |k\rangle ={\frac {1}{\sqrt {N}}}\sum _{n=1}^{N}e^{inka}|n\rangle }$
where N = total number of sites and ${\displaystyle k}$ is a real parameter with ${\displaystyle -{\frac {\pi }{a}}\leqq k\leqq {\frac {\pi }{a}}}$. (This wave function is normalized to unity by the leading factor 1/√N provided overlap of atomic wave functions is ignored.) Assuming only nearest neighbor overlap, the only non-zero matrix elements of the Hamiltonian can be expressed as
${\displaystyle \langle n|H|n\rangle =E_{0}=E_{i}-U\ .}$
${\displaystyle \langle n\pm 1|H|n\rangle =-\Delta \ }$
${\displaystyle \langle n|n\rangle =1\ ;}$${\displaystyle \langle n\pm 1|n\rangle =S\ .}$
The energy Ei is the ionization energy corresponding to the chosen atomic orbital and U is the energy shift of the orbital as a result of the potential of neighboring atoms. The ${\displaystyle \langle n\pm 1|H|n\rangle =-\Delta }$ elements, which are the Slater and Koster interatomic matrix elements, are the bond energies ${\displaystyle E_{i,j}}$. In this one dimensional s-band model we only have ${\displaystyle \sigma }$-bonds between the s-orbitals with bond energy ${\displaystyle E_{s,s}=V_{ss\sigma }}$. The overlap between states on neighboring atoms is S. We can derive the energy of the state ${\displaystyle |k\rangle }$ using the above equation:
${\displaystyle H|k\rangle ={\frac {1}{\sqrt {N}}}\sum _{n}e^{inka}H|n\rangle }$
${\displaystyle \langle k|H|k\rangle ={\frac {1}{N}}\sum _{n,\ m}e^{i(n-m)ka}\langle m|H|n\rangle }$${\displaystyle ={\frac {1}{N}}\sum _{n}\langle n|H|n\rangle +{\frac {1}{N}}\sum _{n}\langle n-1|H|n\rangle e^{+ika}+{\frac {1}{N}}\sum _{n}\langle n+1|H|n\rangle e^{-ika}}$${\displaystyle =E_{0}-2\Delta \,\cos(ka)\ ,}$
where, for example,
${\displaystyle {\frac {1}{N}}\sum _{n}\langle n|H|n\rangle =E_{0}{\frac {1}{N}}\sum _{n}1=E_{0}\ ,}$
and
${\displaystyle {\frac {1}{N}}\sum _{n}\langle n-1|H|n\rangle e^{+ika}=-\Delta e^{ika}{\frac {1}{N}}\sum _{n}1=-\Delta e^{ika}\ .}$
${\displaystyle {\frac {1}{N}}\sum _{n}\langle n-1|n\rangle e^{+ika}=Se^{ika}{\frac {1}{N}}\sum _{n}1=Se^{ika}\ .}$
Thus the energy of this state ${\displaystyle |k\rangle }$ can be represented in the familiar form of the energy dispersion:
${\displaystyle E(k)={\frac {E_{0}-2\Delta \,\cos(ka)}{1+2S\,\cos(ka)}}}$.
• For ${\displaystyle k=0}$ the energy is ${\displaystyle E=(E_{0}-2\Delta )/(1+2S)}$ and the state consists of a sum of all atomic orbitals. This state can be viewed as a chain of bonding orbitals.
• For ${\displaystyle k=\pi /(2a)}$ the energy is ${\displaystyle E=E_{0}}$ and the state consists of a sum of atomic orbitals which are a factor ${\displaystyle e^{i\pi /2}}$ out of phase. This state can be viewed as a chain of non-bonding orbitals.
• Finally for ${\displaystyle k=\pi /a}$ the energy is ${\displaystyle E=(E_{0}+2\Delta )/(1-2S)}$ and the state consists of an alternating sum of atomic orbitals. This state can be viewed as a chain of anti-bonding orbitals.
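A short numerical check of this dispersion at the three special points listed above; the parameter values E0, Delta, S and a are made up purely for illustration:

```python
import numpy as np

# Illustrative (made-up) parameters: on-site energy E0, bond energy Delta, overlap S, spacing a.
E0, Delta, S, a = -5.0, 1.0, 0.1, 1.0

def E(k):
    return (E0 - 2 * Delta * np.cos(k * a)) / (1 + 2 * S * np.cos(k * a))

for label, k in [("k = 0       (bonding)", 0.0),
                 ("k = pi/(2a) (non-bonding)", np.pi / (2 * a)),
                 ("k = pi/a    (anti-bonding)", np.pi / a)]:
    print(f"{label:28s} E = {E(k):+.3f}")
```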
This example is readily extended to three dimensions, for example, to a body-centered cubic or face-centered cubic lattice by introducing the nearest neighbor vector locations in place of simply n a.[6] Likewise, the method can be extended to multiple bands using multiple different atomic orbitals at each site. The general formulation above shows how these extensions can be accomplished.
## Table of interatomic matrix elements
In 1954 J.C. Slater and G.F. Koster published, mainly for the calculation of transition metal d-bands, a table of interatomic matrix elements[1]
${\displaystyle E_{i,j}({\vec {\mathbf {r} }}_{n,n'})=\langle n,i|H|n',j\rangle }$
which can also be derived from the cubic harmonic orbitals straightforwardly. The table expresses the matrix elements as functions of LCAO two-centre bond integrals between two cubic harmonic orbitals, i and j, on adjacent atoms. The bond integrals are for example the ${\displaystyle V_{ss\sigma }}$, ${\displaystyle V_{pp\pi }}$ and ${\displaystyle V_{dd\delta }}$ for sigma, pi and delta bonds (Notice that these integrals should also depend on the distance between the atoms, i.e. are a function of ${\displaystyle (l,m,n)}$, even though it is not explicitly stated every time.).
The interatomic vector is expressed as
${\displaystyle {\vec {\mathbf {r} }}_{n,n'}=(r_{x},r_{y},r_{z})=d(l,m,n)}$
where d is the distance between the atoms and l, m and n are the direction cosines to the neighboring atom.
${\displaystyle E_{s,s}=V_{ss\sigma }}$
${\displaystyle E_{s,x}=lV_{sp\sigma }}$
${\displaystyle E_{x,x}=l^{2}V_{pp\sigma }+(1-l^{2})V_{pp\pi }}$
${\displaystyle E_{x,y}=lmV_{pp\sigma }-lmV_{pp\pi }}$
${\displaystyle E_{x,z}=lnV_{pp\sigma }-lnV_{pp\pi }}$
${\displaystyle E_{s,xy}={\sqrt {3}}lmV_{sd\sigma }}$
${\displaystyle E_{s,x^{2}-y^{2}}={\frac {\sqrt {3}}{2}}(l^{2}-m^{2})V_{sd\sigma }}$
${\displaystyle E_{s,3z^{2}-r^{2}}=[n^{2}-(l^{2}+m^{2})/2]V_{sd\sigma }}$
${\displaystyle E_{x,xy}={\sqrt {3}}l^{2}mV_{pd\sigma }+m(1-2l^{2})V_{pd\pi }}$
${\displaystyle E_{x,yz}={\sqrt {3}}lmnV_{pd\sigma }-2lmnV_{pd\pi }}$
${\displaystyle E_{x,zx}={\sqrt {3}}l^{2}nV_{pd\sigma }+n(1-2l^{2})V_{pd\pi }}$
${\displaystyle E_{x,x^{2}-y^{2}}={\frac {\sqrt {3}}{2}}l(l^{2}-m^{2})V_{pd\sigma }+l(1-l^{2}+m^{2})V_{pd\pi }}$
${\displaystyle E_{y,x^{2}-y^{2}}={\frac {\sqrt {3}}{2}}m(l^{2}-m^{2})V_{pd\sigma }-m(1+l^{2}-m^{2})V_{pd\pi }}$
${\displaystyle E_{z,x^{2}-y^{2}}={\frac {\sqrt {3}}{2}}n(l^{2}-m^{2})V_{pd\sigma }-n(l^{2}-m^{2})V_{pd\pi }}$
${\displaystyle E_{x,3z^{2}-r^{2}}=l[n^{2}-(l^{2}+m^{2})/2]V_{pd\sigma }-{\sqrt {3}}ln^{2}V_{pd\pi }}$
${\displaystyle E_{y,3z^{2}-r^{2}}=m[n^{2}-(l^{2}+m^{2})/2]V_{pd\sigma }-{\sqrt {3}}mn^{2}V_{pd\pi }}$
${\displaystyle E_{z,3z^{2}-r^{2}}=n[n^{2}-(l^{2}+m^{2})/2]V_{pd\sigma }+{\sqrt {3}}n(l^{2}+m^{2})V_{pd\pi }}$
${\displaystyle E_{xy,xy}=3l^{2}m^{2}V_{dd\sigma }+(l^{2}+m^{2}-4l^{2}m^{2})V_{dd\pi }+(n^{2}+l^{2}m^{2})V_{dd\delta }}$
${\displaystyle E_{xy,yz}=3lm^{2}nV_{dd\sigma }+ln(1-4m^{2})V_{dd\pi }+ln(m^{2}-1)V_{dd\delta }}$
${\displaystyle E_{xy,zx}=3l^{2}mnV_{dd\sigma }+mn(1-4l^{2})V_{dd\pi }+mn(l^{2}-1)V_{dd\delta }}$
${\displaystyle E_{xy,x^{2}-y^{2}}={\frac {3}{2}}lm(l^{2}-m^{2})V_{dd\sigma }+2lm(m^{2}-l^{2})V_{dd\pi }+[lm(l^{2}-m^{2})/2]V_{dd\delta }}$
${\displaystyle E_{yz,x^{2}-y^{2}}={\frac {3}{2}}mn(l^{2}-m^{2})V_{dd\sigma }-mn[1+2(l^{2}-m^{2})]V_{dd\pi }+mn[1+(l^{2}-m^{2})/2]V_{dd\delta }}$
${\displaystyle E_{zx,x^{2}-y^{2}}={\frac {3}{2}}nl(l^{2}-m^{2})V_{dd\sigma }+nl[1-2(l^{2}-m^{2})]V_{dd\pi }-nl[1-(l^{2}-m^{2})/2]V_{dd\delta }}$
${\displaystyle E_{xy,3z^{2}-r^{2}}={\sqrt {3}}\left[lm(n^{2}-(l^{2}+m^{2})/2)V_{dd\sigma }-2lmn^{2}V_{dd\pi }+[lm(1+n^{2})/2]V_{dd\delta }\right]}$
${\displaystyle E_{yz,3z^{2}-r^{2}}={\sqrt {3}}\left[mn(n^{2}-(l^{2}+m^{2})/2)V_{dd\sigma }+mn(l^{2}+m^{2}-n^{2})V_{dd\pi }-[mn(l^{2}+m^{2})/2]V_{dd\delta }\right]}$
${\displaystyle E_{zx,3z^{2}-r^{2}}={\sqrt {3}}\left[ln(n^{2}-(l^{2}+m^{2})/2)V_{dd\sigma }+ln(l^{2}+m^{2}-n^{2})V_{dd\pi }-[ln(l^{2}+m^{2})/2]V_{dd\delta }\right]}$
${\displaystyle E_{x^{2}-y^{2},x^{2}-y^{2}}={\frac {3}{4}}(l^{2}-m^{2})^{2}V_{dd\sigma }+[l^{2}+m^{2}-(l^{2}-m^{2})^{2}]V_{dd\pi }+[n^{2}+(l^{2}-m^{2})^{2}/4]V_{dd\delta }}$
${\displaystyle E_{x^{2}-y^{2},3z^{2}-r^{2}}={\sqrt {3}}\left[(l^{2}-m^{2})[n^{2}-(l^{2}+m^{2})/2]V_{dd\sigma }/2+n^{2}(m^{2}-l^{2})V_{dd\pi }+[(1+n^{2})(l^{2}-m^{2})/4]V_{dd\delta }\right]}$
${\displaystyle E_{3z^{2}-r^{2},3z^{2}-r^{2}}=[n^{2}-(l^{2}+m^{2})/2]^{2}V_{dd\sigma }+3n^{2}(l^{2}+m^{2})V_{dd\pi }+{\frac {3}{4}}(l^{2}+m^{2})^{2}V_{dd\delta }}$
Not all interatomic matrix elements are listed explicitly. Matrix elements that are not listed in this table can be constructed by permutation of indices and cosine directions of other matrix elements in the table. Note that swapping orbital indices amounts to taking ${\displaystyle (l,m,n)\rightarrow (-l,-m,-n)}$, i.e. ${\displaystyle E_{\alpha ,\beta }(l,m,n)=E_{\beta ,\alpha }(-l,-m,-n)}$. For example, ${\displaystyle E_{x,s}=-lV_{sp\sigma }}$.
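As an illustration (the bond-integral values below are invented), a few of the table entries can be coded directly as functions of the direction cosines; the last line demonstrates the index-swap rule just described:

```python
import numpy as np

# A few Slater-Koster two-centre matrix elements as functions of the direction cosines
# (l, m, n) and the bond integrals V (illustrative values only).
V = {"sss": -1.40, "sps": 1.84, "pps": 3.24, "ppp": -0.81}

def E_ss(l, m, n): return V["sss"]
def E_sx(l, m, n): return l * V["sps"]
def E_xx(l, m, n): return l**2 * V["pps"] + (1 - l**2) * V["ppp"]
def E_xy(l, m, n): return l * m * V["pps"] - l * m * V["ppp"]

# Direction cosines for a neighbour along the (1, 1, 0) direction.
l, m, n = 1 / np.sqrt(2), 1 / np.sqrt(2), 0.0

print(E_ss(l, m, n), E_sx(l, m, n), E_xx(l, m, n), E_xy(l, m, n))
# Swapping orbital indices: E_{x,s}(l, m, n) = E_{s,x}(-l, -m, -n) = -l * V_sps.
print("E_xs =", E_sx(-l, -m, -n))
```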
## References
1. ^ a b c J. C. Slater, G. F. Koster (1954). "Simplified LCAO method for the Periodic Potential Problem". Physical Review. 94 (6): 1498–1524. Bibcode:1954PhRv...94.1498S. doi:10.1103/PhysRev.94.1498.
2. ^ a b Walter Ashley Harrison (1989). Electronic Structure and the Properties of Solids. Dover Publications. ISBN 0-486-66021-4.
3. ^ As an alternative to neglecting overlap, one may choose as a basis instead of atomic orbitals a set of orbitals based upon atomic orbitals but arranged to be orthogonal to orbitals on other atomic sites, the so-called Löwdin orbitals. See PY Yu & M Cardona (2005). "Tight-binding or LCAO approach to the band structure of semiconductors". Fundamentals of Semiconductors (3 ed.). Springer. p. 87. ISBN 3-540-25470-6.
4. ^ Otfried Madelung, Introduction to Solid-State Theory (Springer-Verlag, Berlin Heidelberg, 1978).
5. ^ Alexander Altland and Ben Simons (2006). "Interaction effects in the tight-binding system". Condensed Matter Field Theory. Cambridge University Press. pp. 58 ff. ISBN 978-0-521-84508-3.
6. ^ Sir Nevill F Mott & H Jones (1958). "II §4 Motion of electrons in a periodic field". The theory of the properties of metals and alloys (Reprint of Clarendon Press (1936) ed.). Courier Dover Publications. pp. 56 ff. ISBN 0-486-60456-X.
• N. W. Ashcroft and N. D. Mermin, Solid State Physics (Thomson Learning, Toronto, 1976).
• Stephen Blundell Magnetism in Condensed Matter (Oxford, 2001).
• S. Maekawa et al. Physics of Transition Metal Oxides (Springer-Verlag Berlin Heidelberg, 2004).
• John Singleton Band Theory and Electronic Properties of Solids (Oxford, 2001).
https://www.lessonplanet.com/teachers/expressions-and-operations-628871-math-9th | # Expressions and Operations
In this expressions and operations worksheet, 9th graders solve and complete 7 different problems that include variable expressions and applying the order of operations. First, they determine the value of expressions given the integers to substitute. Then, students determine the expression that describes a number divided by the sum of another number.
https://brilliant.org/problems/stars-within-stars/ | # Stars Within Stars
Geometry Level 4
Let $$\{p/q\}$$ denote the $$p$$-pointed star formed by joining every $$q^{\text{th}}$$ vertex on a convex $$p$$-gon until you reach the starting point. This is repeated with different starting points if necessary until we form a $$p$$-pointed star.
Is it true that $$\{p/q\}$$ contains $$\{p/r\}$$ if $$r \le q$$?
Note:
• $$\{p/q\}$$ is defined for all $$p \ge 3$$, and $$1 \le q < \frac p2$$.
• As an explicit example, the following shows three possible $$7$$-pointed stars. Here $$\{7/2\}$$ contains $$\{7/1\}$$ because there is an instance of $$\{7/1\}$$ inside $$\{7/2\}$$.
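To make the construction concrete, here is a small editorial sketch (not part of the problem) that lists the edges of {p/q} by repeatedly stepping q vertices around the p-gon and restarting from an unvisited vertex whenever a cycle closes early:

```python
from math import gcd

def star_edges(p, q):
    """Edges of {p/q}: join every q-th vertex of a convex p-gon, restarting from a new
    starting point whenever the walk returns to where it began."""
    edges = []
    cycle_len = p // gcd(p, q)            # vertices visited before a cycle closes
    for start in range(gcd(p, q)):        # one cycle per distinct starting point
        v = start
        for _ in range(cycle_len):
            w = (v + q) % p
            edges.append((v, w))
            v = w
    return edges

print(star_edges(7, 2))   # the {7/2} heptagram
print(star_edges(6, 2))   # {6/2}: two interlocking triangles
```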
http://www.ma.utexas.edu/mp_arc-bin/mpa?yn=05-434 | 05-434 Marek Biskup and Roman Kotecky
Phase coexistence of gradient Gibbs states (318K, PDF) Dec 21, 05
Abstract. We consider the (scalar) gradient fields $\eta=(\eta_b)$--with $b$ denoting the nearest-neighbor edges in $\mathbb{Z}^2$--that are distributed according to the Gibbs measure proportional to $\mathrm{e}^{-\beta H(\eta)}\nu(\mathrm{d}\eta)$. Here $H=\sum_b V(\eta_b)$ is the Hamiltonian, $V$ is a symmetric potential, $\beta>0$ is the inverse temperature, and $\nu$ is the Lebesgue measure on the linear space defined by imposing the loop condition $\eta_{b_1}+\eta_{b_2}=\eta_{b_3}+\eta_{b_4}$ for each plaquette $(b_1,b_2,b_3,b_4)$ in $\mathbb{Z}^2$. For convex $V$, Funaki and Spohn have shown that ergodic infinite-volume Gibbs measures are characterized by their tilt. We describe a mechanism by which the gradient Gibbs measures with non-convex $V$ undergo a structural, order-disorder phase transition at some intermediate value of inverse temperature $\beta$. At the transition point, there are at least two distinct gradient measures with zero tilt, i.e., $E\,\eta_b=0$.
Files: 05-434.src( 05-434.keywords , grad-models-submit.pdf.mm )
https://chemistry.stackexchange.com/tags/hydrogen/hot | # Tag Info
60
As other answers have noted, the only gas lighter than helium is hydrogen, which has some flammability issues that make it more difficult to handle safely than helium. Also, in practice, hydrogen is not significantly "lighter" than helium. While the molecular mass (and thus, per the ideal gas law, the density) of hydrogen gas is about half that of helium, ...
49
Actually, hydrogen is the only gas that is lighter than helium. However, it has a very big disadvantage: It is highly flammable. On the other hand, helium is almost completely inert - this is why it is very much safer to use the latter. What might happen when you use hydrogen instead of helium was impressively proven by history when the "Hindenburg" ...
34
It depends on which definition of acids and bases you are using. According to the Arrhenius theory, acids are defined as a compound or element that releases hydrogen (H+) ions into the solution. Therefore, there are no Arrhenius acids without a hydrogen atom. According to the Brønsted–Lowry acid–base theory, an acid is any substance that can donate a ...
31
Harold Urey and George Murphy used spectroscopy to identify deuterium late in 1931, announcing it at the 1931 Christmas meeting of the American Physical Society. Picking up out of 'From Nuclear Transmutation to Nuclear Fission, 1932-1939" by Per F. Dahl: If anything, the naming of the new isotope proved more problematic than its isolation. At a special ...
26
In addition to the reasons ste listed, the isotopes of hydrogen have the greatest differences in mass compared to other elements. Consider that deuterium is twice as heavy as protium, and tritium is three-times as heavy as protium. Isotopes of all elements can be used in kinetic isotope experiments. The dramatic differences in mass among the hydrogen ...
25
We are discussing the following equilibrium We can make the acid a stronger acid by pushing the equilibrium to the right. To push the equilibrium to the right we can destabilize the starting acid pictured on the left side of the equation, and \ or stabilize the carboxylate anion pictured on the right side of the equation. Comparing acetic acid ($\ce{R~ =...
24
Water, as you may know, has a dualist nature between covalent and ionic bonding; oxygen is the second most electronegative element in the periodic table, while hydrogen's simplistic construction makes it very zen about how it forms bonds (it defines the center point of most electronegativity scales). While the bond between the hydrogens and oxygen of a water ...
23
$\ce{H2}$cannot be liquified at room temperature, whatever the pressure. Generally speaking, all gases can only be liquified when the temperature is under its critical value.
22
It can't work because of the fundamental thermodynamics What you are proposing is, basically, the plane carries water; the water is broken down into its components, hydrogen and oxygen; the components are recombined by burning them as fuel. Burning hydrogen and oxygen is a perfectly good way to create a lot of heat. But it doesn't much matter how you break ...
22
Hydrogen critical temperature is$\pu{32.938 K, resp. -240.21 ^{\circ}C}$. Above this temperature, it cannot be liquified. So to answer your question, you can get as high pressure as you can produce and the container can withstand, as there is no condensation reducing the pressure. WARNING: An accidental explosive container rupture can easily cause severe ...
21
There is no chemical difference, only a psychological one: how do you think about it. They are both the same thing, but many people associate$\ce{H+}$ions with chemical reactions and protons with particle physics. A hydrogen atom has one electron and a proton, no neutron. Therefore$\ce{H+}$is just a proton. That is why acids are sometimes referred as ...
21
Yes, sodium metal is also going to react exothermically with salt water or any other aqueous solution as long as it comes in contact with water: $$\ce{Na (s) + H2O -> Na+ (aq) + OH- (aq) + 0.5 H2 (g)}$$ eventually leading to explosion of hydrogen-oxygen mix forming near the water surface. Presence of sodium chloride in salt water isn't going to ...
20
As you have said, you are studying stoichiometry at High School Level. From this I can guess, that you have probably not studied the concept of Limiting Reagent yet. What is Limiting Reagent? In a chemical reaction, the limiting reagent, also known as the "limiting reactant", is the substance which is totally consumed when the chemical reaction is ...
17
First, let me say that I've enjoyed many times exploding soap bubbles of about one milliliter filled with hydrolysis gas. That is 1 cubic centimeter. That will give you a sound that rings in your ears in a decent sized living room. You may wish to use ear protection for the experiment. 50 ml will have an effect in a lecture hall that not only wakes up ...
17
This post deals with the mechanism that is observed in the gas phase. It is of course not as simple as the equation might suggest and you did suspect that already. $$\ce{2H2 + O2 -> 2H2O}$$ This will be divided into many different elementary sub reactions. Any mixture of oxygen and hydrogen is metastable (stable as long as you do not change the ...
17
I think there are two reasons. First, it is more convenient to categorize them under the actual element-name to which they belong. If I say "15-Beryllium" everyone knows immediately, what I'm talking about. If we add hundreds of isotope-names, it would be quite a mess. Leading to the second reason: Xenon for example has over known 30 isotopes. There are just ...
17
Your chemistry teacher is making a few simplifications there that make the statement false on a black-and-white true-and-false scale. Protons would repel each other electrostaticly due to their same charges. Neutrons interact with protons by the so-termed strong interaction (because it is stronger than the weak interaction; props to physicists for inventing ...
17
This is a rather interesting question because these names actually refer to classes of reactions (specific to certain reagents and products), and aren't constrained by specific proportions of substances or even the identity of these substances. A Rosenmund catalyst is used to reduce acyl chlorides to their corresponding aldehydes, and is ...
16
Water has formula H2O. Oxygen has 3 stable isotopes (99.76% 16O, 0.039% 17O, 0.201% 18O), and hydrogen has two (99.985% 1H, 0.015% 2H). Thus, there are 9 natural isotopic configurations for water: 3 possibilities for oxygen, multiplied by 3 possibilities for 2 hydrogens with 2 possible isotopes. Out of those 9 possible configurations, only 4 have a natural ...
16
Yes free$\ce{H+}$ions, protons, really exist. Protons are constantly emanating from the sun and reaching Earth. The proton flux is continuously monitored by satellite. However, in a solution such as water, instead of bare$\ce{H+}$ions, they are$\ce{H3O+}$or larger ions such as$\ce{H5O2+}$or$\ce{H9O4+}$. When$\ce{HCl}$dissolves, the ...
15
As the electrons fall from higher levels to lower levels, they release photons. Different "falls" create different colors of light. A larger transition releases higher energy (short wavelength) light, while smaller transitions release lower energies (longer wavelength). The visible wavelengths are caused a by single electron making the different ...
15
$\ce{Pd}$can dissociate$\ce{H2}$because the resulting$\ce{Pd-H}$bonds are more stable than the starting$\ce{H2}$. But the reason why$\ce{Pd}$is so good at dissociating$\ce{H2}$is related to the energy barrier to bond formation. The dissociation of$\ce{H2}$on a$\ce{Pd}$surface (and on$\ce{Pt}$and maybe several other metals) has no barrier. So ...
15
You hit it right on the nose. The real key piece of information is that given enough time, all the unsaturated bonds will be reduced. This tells you that though the reduction is thermodynamically favorable, it is the difference in the energy barriers ($\ce{\Delta \mathrm{G^{‡}}}$) that prevents the carbonyl reduction from occurring at the same rate as the ...
15
Is your book by chance very old? From the Wikipedia entry for "nascent hydrogen": Nascent hydrogen is purported to consist of a chemically reactive form of hydrogen that is freshly generated, hence nascent. Molecular hydrogen ($\ce{H2}$), which is the normal form of this element, is unreactive toward organic compounds, so a special state of ...
15
WHAT MAKES HYDROGEN ABUNDANT IN UNIVERSE: After few minutes of creation of the universe, protons and neutrons began to react with each other to form deuterium, an isotope of hydrogen. Deuterium, soon collected another neutron to form tritium. Rapidly following this reaction was the addition of another proton which produced a helium nucleus. Sources say ...
15
There are quite a number of theories regarding acidity and basicity, but in this case, will explain the Lewis acid. The Lewis Theory of acids and bases This theory extends well beyond the things you normally think of as acids and bases. The theory An acid is an electron pair acceptor. A base is an electron pair donor The Lewis acid-base theory explains ...
14
The way I understand it is (and my understanding is by no means perfect, or complete), as you pointed out correctly: a hydrogen ion is in fact a proton. The proton is a "bare charge" and as you rightly said, "tiny". a This makes it extremely reactive (in a sense), and thus in a chemical system of any sort would immediately seek out and associate with the ...
13
A water molecule is charge neutral because there is the same number of positive charges as there are negative charges. In this diagram, called a Lewis structure, the dots represent electrons while the lines or dashes represent a covalent bond of two electrons. When water ionizes one of the hydrogen atoms absconds with itself and leaves it's electron ...
13
It seems like an idea of using magnesium anthracene systems for the$\ce{MgH2}$production persisted since 1980s [1] till late 2000s, when new more efficient method with better scalability for industrial use was established. One of the recent reviews in hydrogen-storage applications [2, p. 220] compares the older two-step process of$\ce{MgH2}$synthesis: ...
12
You cannot apply the$\Delta{G}$equation to a single electrode potential. It can be applied to a cell though so if the hydrogen electrode is connected to another electrode (say copper dipped in copper sulfate solution) then you can find the free energy. It's better to remember the formula as:$\Delta{G} = -nFE_{cell}^o$Where$E_{cell}^o = E_{cathode}^...
http://mathhelpforum.com/pre-calculus/134531-need-help-magnitude.html | # Math Help - Need help with Magnitude!!!
1. ## Need help with Magnitude!!!
The magnitude M of a star is a measure of its brightness B, given by the formula M = -2.5 log (B/B0), where B0 is a constant, the brightness of an "average star."
a) If a star's brightness increases by a factor of 10, by how much does its magnitude increase or decrease?
I got the answer from the back of the book and it says "decreasing by -2.5." How did they get this answer? Please help me with this problem.
Thank you so much!
2. Originally Posted by florx
The magnitude M of a star is a measure of its brightness B, given by the formula M = -2.5 log (B/B0), where B0 is a constant, the brightness of an "average star."
a) If a star's brightness increases by a factor of 10, by how much does its magnitude increase or decrease?
I got the answer from the back of the book and it says "decreasing by -2.5." How did they get this answer? Please help me with this problem.
Thank you so much!
I assume that your logarithm is base 10...
You are told the brightness is magnified by a factor of 10.
This means $B \to 10B$.
So $M = -2.5\log{\frac{10B}{B_0}}$
$= -2.5\left(\log{10} + \log{\frac{B}{B_0}}\right)$
$= -2.5\left(1 + \log{\frac{B}{B_0}}\right)$
$= -2.5 - 2.5\log{\frac{B}{B_0}}$.
So the magnitude has changed by $-2.5$, i.e. it has decreased by $2.5$.
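A quick sanity check with concrete numbers (made up only to illustrate): a star of average brightness has $B = B_0$, so $M = -2.5\log{1} = 0$. Making it ten times brighter gives $M = -2.5\log{10} = -2.5$, so the magnitude drops from $0$ to $-2.5$: a decrease of $2.5$, independent of the starting brightness.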
https://labs.tib.eu/arxiv/?author=J.%20Molenda-%C5%BBakowicz | • ### High-resolution spectroscopy and abundance analysis of Delta Scuti stars near the Gamma Doradus instability strip(1706.04782)
June 15, 2017 astro-ph.SR
$\delta$ Scuti stars are remarkable objects for asteroseismology. In spite of decades of investigations, there are still important questions about these pulsating stars to be answered, such as their positions in $\log$$T_{\rm eff}$ $-$ $\log g$ diagram, or the dependence of the pulsation modes on atmospheric parameters and rotation. Therefore, we performed a detailed spectroscopic study of $41$ $\delta$ Scuti stars. The selected objects are located near the $\gamma$ Doradus instability strip to make a reliable comparison between both types of variables. Spectral classification, stellar atmospheric parameters ($T_{\rm eff}$, $\log g$, $\xi$) and $v \sin i$ values were determined. The spectral types and luminosity classes of stars were found to be A1$-$F5 and III$-$V, respectively. The $T_{\rm eff}$ ranges from $6600$ to $9400$ K, whereas the obtained $\log g$ values are from $3.4$ to $4.3$. The $v \sin i$ values were found between $10$ and $222$ km s$^{-1}$. The derived chemical abundances of $\delta$ Scuti stars were compared to those of the non-pulsating stars and $\gamma$ Doradus variables. It turned out that both $\delta$ Scuti and $\gamma$ Doradus variables have similar abundance patterns, which are slightly different from the non-pulsating stars. These chemical differences can help us to understand why there are non-pulsating stars in classical instability strip. Effects of the obtained parameters on pulsation period and amplitude were examined. It appears that the pulsation period decreases with increasing $T_{\rm eff}$. No significant correlations were found between pulsation period, amplitude and $v \sin i$.
• ### Detection of Solar-Like Oscillations, Observational Constraints, and Stellar Models for $\theta$ Cyg, the Brightest Star Observed by the {\it Kepler} Mission(1607.01035)
July 4, 2016 astro-ph.SR
$\theta$ Cygni is an F3 spectral-type main-sequence star with visual magnitude V=4.48. This star was the brightest star observed by the original Kepler spacecraft mission. Short-cadence (58.8 s) photometric data using a custom aperture were obtained during Quarter 6 (June-September 2010) and subsequently in Quarters 8 and 12-17. We present analyses of the solar-like oscillations based on Q6 and Q8 data, identifying angular degree $l$ = 0, 1, and 2 oscillations in the range 1000-2700 microHz, with a large frequency separation of 83.9 plus/minus 0.4 microHz, and frequency with maximum amplitude 1829 plus/minus 54 microHz. We also present analyses of new ground-based spectroscopic observations, which, when combined with angular diameter measurements from interferometry and Hipparcos parallax, give T_eff = 6697 plus/minus 78 K, radius 1.49 plus/minus 0.03 solar radii, [Fe/H] = -0.02 plus/minus 0.06 dex, and log g = 4.23 plus/minus 0.03. We calculate stellar models matching the constraints using several methods, including using the Yale Rotating Evolution Code and the Asteroseismic Modeling Portal. The best-fit models have masses 1.35-1.39 solar masses and ages 1.0-1.6 Gyr. theta Cyg's T_eff and log g place it cooler than the red edge of the gamma Doradus instability region established from pre-Kepler ground-based observations, but just at the red edge derived from pulsation modeling. The pulsation models show gamma Dor gravity-mode pulsations driven by the convective-blocking mechanism, with frequencies of 1 to 3 cycles/day (11 to 33 microHz). However, gravity modes were not detected in the Kepler data, one signal at 1.776 cycles/day (20.56 microHz) may be attributable to a faint, possibly background, binary. Asteroseismic studies of theta Cyg and other A-F stars observed by Kepler and CoRoT, will help to improve stellar model physics and to test pulsation driving mechanisms.
• ### Activity indicators and stellar parameters of the Kepler targets. An application of the ROTFIT pipeline to LAMOST-Kepler stellar spectra(1606.09149)
June 29, 2016 astro-ph.SR
The LAMOST-Kepler survey, whose spectra are analyzed in the present paper, is the first large spectroscopic project aimed at characterizing these sources. Our work is focused at selecting emission-line objects and chromospherically active stars and on the evaluation of the atmospheric parameters. We have used a version of the code ROTFIT that exploits a wide and homogeneous collection of real star spectra, i.e. the Indo US library. We provide a catalog with the atmospheric parameters (Teff, logg, [Fe/H]), the radial velocity (RV) and an estimate of the projected rotation velocity (vsini). For cool stars (Teff<6000 K) we have also calculated the H-alpha and CaII-IRT chromospheric fluxes. We have derived the RV and the atmospheric parameters for 61,753 spectra of 51,385 stars. Literature data for a few hundred stars have been used to do a quality control of our results. The final accuracy of RV, Teff, logg, and [Fe/H] measurements is about 14 km/s, 3.5%, 0.3 dex, and 0.2 dex, respectively. However, while the Teff values are in very good agreement with the literature, we noted some issues with the determination of [Fe/H] of metal poor stars and the tendency, for logg, to cluster around the values typical for main sequence and red giant stars. We propose correction relations based on these comparison. The RV distribution is asymmetric and shows an excess of stars with negative RVs which is larger at low metallicities. We could identify stars with variable RV, ultrafast rotators, and emission-line objects. Based on the H-alpha and CaII-IRT fluxes, we have found 442 chromospherically active stars, one of which is a likely accreting object. The availability of precise rotation periods from the Kepler photometry has allowed us to study the dependency of the chromospheric fluxes on the rotation rate for a quite large sample of field stars.
• ### LAMOST observations in the Kepler field. Database of low-resolution spectra(1508.06391)
Aug. 26, 2015 astro-ph.SR
The nearly continuous light curves with micromagnitude precision provided by the space mission Kepler are revolutionising our view of pulsating stars. They have revealed a vast sea of low-amplitude pulsation modes that were undetectable from Earth. The long time base of Kepler light curves allows an accurate determination of frequencies and amplitudes of pulsation modes needed for in-depth asteroseismic modeling. However, for an asteroseismic study to be successful, the first estimates of stellar parameters need to be known and they can not be derived from the Kepler photometry itself. The Kepler Input Catalog (KIC) provides values for the effective temperature, the surface gravity and the metallicity, but not always with a sufficient accuracy. Moreover, information on the chemical composition and rotation rate is lacking. We are collecting low-resolution spectra for objects in the Kepler field of view with the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST, Xinglong observatory, China). All of the requested fields have now been observed at least once. In this paper we describe those observations and provide a database of use to the whole astronomical community.
• ### The 2003-4 multisite photometric campaign for the Beta Cephei and eclipsing star 16 (EN) Lacertae with an Appendix on 2 Andromedae, the variable comparison star(1508.05250)
Aug. 21, 2015 astro-ph.SR
A multisite photometric campaign for the Beta Cephei and eclipsing variable 16 Lacertae is reported. 749 h of high-quality differential photoelectric Stromgren, Johnson and Geneva time-series photometry were obtained with ten telescopes during 185 nights. After removing the pulsation contribution, an attempt was made to solve the resulting eclipse light curve by means of the computer program EBOP. Although a unique solution was not obtained, the range of solutions could be constrained by comparing computed positions of the secondary component in the Hertzsprung-Russell diagram with evolutionary tracks. For three high-amplitude pulsation modes, the uvy and the Geneva UBG amplitude ratios are derived and compared with the theoretical ones for spherical-harmonic degrees l <= 4. The highest degree, l = 4, is shown to be incompatible with the observations. One mode is found to be radial, one is l = 1, while in the remaining case l = 2 or 3. The present multisite observations are combined with the archival photometry in order to investigate the long-term variation of the amplitudes and phases of the three high-amplitude pulsation modes. The radial mode shows a non-sinusoidal variation on a time-scale of 73 yr. The l = 1 mode is a triplet with unequal frequency spacing, giving rise to two beat-periods, 720.7 d and 29.1 yr. The amplitude and phase of the l = 2 or 3 mode vary on time-scales of 380.5 d and 43 yr. The light variation of 2 And, one of the comparison stars, is discussed in the Appendix.
• ### Seismic constraints on the radial dependence of the internal rotation profiles of six Kepler subgiants and young red giants(1401.3096)
Jan. 14, 2014 astro-ph.SR
Context : We still do not know which mechanisms are responsible for the transport of angular momentum inside stars. The recent detection of mixed modes that contain the signature of rotation in the spectra of Kepler subgiants and red giants gives us the opportunity to make progress on this issue. Aims: Our aim is to probe the radial dependance of the rotation profiles for a sample of Kepler targets. For this purpose, subgiants and early red giants are particularly interesting targets because their rotational splittings are more sensitive to the rotation outside the deeper core than is the case for their more evolved counterparts. Methods: We first extract the rotational splittings and frequencies of the modes for six young Kepler red giants. We then perform a seismic modeling of these stars using the evolutionary codes CESAM2k and ASTEC. By using the observed splittings and the rotational kernels of the optimal models, we perform inversions of the internal rotation profiles of the six stars. Results: We obtain estimates of the mean rotation rate in the core and in the convective envelope of these stars. We show that the rotation contrast between the core and the envelope increases during the subgiant branch. Our results also suggest that the core of subgiants spins up with time, contrary to the RGB stars whose core has been shown to spin down. For two of the stars, we show that a discontinuous rotation profile with a deep discontinuity reproduces the observed splittings significantly better than a smooth rotation profile. Interestingly, the depths that are found most probable for the discontinuities roughly coincide with the location of the H-burning shell, which separates the layers that contract from those that expand. These results will bring observational constraints to the scenarios of angular momentum transport in stars.
• ### Asteroseismic fundamental properties of solar-type stars observed by the NASA Kepler Mission(1310.4001)
Oct. 17, 2013 astro-ph.SR
We use asteroseismic data obtained by the NASA Kepler Mission to estimate the fundamental properties of more than 500 main-sequence and sub-giant stars. Data obtained during the first 10 months of Kepler science operations were used for this work, when these solar-type targets were observed for one month each in a survey mode. Stellar properties have been estimated using two global asteroseismic parameters and complementary photometric and spectroscopic data. Homogeneous sets of effective temperatures were available for the entire ensemble from complementary photometry; spectroscopic estimates of T_eff and [Fe/H] were available from a homogeneous analysis of ground-based data on a subset of 87 stars. [Abbreviated version... see paper for full abstract.]
• ### Kepler White Paper: Asteroseismology of Solar-Like Oscillators in a 2-Wheel Mission(1309.0702)
Sept. 3, 2013 astro-ph.SR
We comment on the potential for continuing asteroseismology of solar-type and red-giant stars in a 2-wheel Kepler Mission. Our main conclusion is that by targeting stars in the ecliptic it should be possible to perform high-quality asteroseismology, as long as favorable scenarios for 2-wheel pointing performance are met. Targeting the ecliptic would potentially facilitate unique science that was not possible in the nominal Mission, notably from the study of clusters that are significantly brighter than those in the Kepler field. Our conclusions are based on predictions of 2-wheel observations made by a space photometry simulator, with information provided by the Kepler Project used as input to describe the degraded pointing scenarios. We find that elevated levels of frequency-dependent noise, consistent with the above scenarios, would have a significant negative impact on our ability to continue asteroseismic studies of solar-like oscillators in the Kepler field. However, the situation may be much more optimistic for observations in the ecliptic, provided that pointing resets of the spacecraft during regular desaturations of the two functioning reaction wheels are accurate at the < 1 arcsec level. This would make it possible to apply a post-hoc analysis that would recover most of the lost photometric precision. Without this post-hoc correction---and the accurate re-pointing it requires---the performance would probably be as poor as in the Kepler-field case. Critical to our conclusions for both fields is the assumed level of pointing noise (in the short-term jitter and the longer-term drift). We suggest that further tests will be needed to clarify our results once more detail and data on the expected pointing performance becomes available, and we offer our assistance in this work.
• ### mu Eridani from MOST and from the ground: an orbit, the SPB component's fundamental parameters, and the SPB frequencies(1303.6812)
March 27, 2013 astro-ph.SR
MOST time-series photometry of mu Eri, an SB1 eclipsing binary with a rapidly-rotating SPB primary, is reported and analyzed. The analysis yields a number of sinusoidal terms, mainly due to the intrinsic variation of the primary, and the eclipse light-curve. New radial-velocity observations are presented and used to compute parameters of a spectroscopic orbit. Frequency analysis of the radial-velocity residuals from the spectroscopic orbital solution fails to uncover periodic variations with amplitudes greater than 2 km/s. A Rossiter-McLaughlin anomaly is detected from observations covering ingress. From archival photometric indices and the revised Hipparcos parallax we derive the primary's effective temperature, surface gravity, bolometric correction, and the luminosity. An analysis of a high signal-to-noise spectrogram yields the effective temperature and surface gravity in good agreement with the photometric values. From the same spectrogram, we determine the abundance of He, C, N, O, Ne, Mg, Al, Si, P, S, Cl, and Fe. The eclipse light-curve is solved by means of EBOP. For a range of mass of the primary, a value of mean density, very nearly independent of assumed mass, is computed from the parameters of the system. Contrary to a recent report, this value is approximately equal to the mean density obtained from the star's effective temperature and luminosity. Despite limited frequency resolution of the MOST data, we were able to recover the closely-spaced SPB frequency quadruplet discovered from the ground in 2002-2004. The other two SPB terms seen from the ground were also recovered. Moreover, our analysis of the MOST data adds 15 low-amplitude SPB terms with frequencies ranging from 0.109 c/d to 2.786 c/d.
• ### Characterizing two solar-type Kepler subgiants with asteroseismology: KIC10920273 and KIC11395018(1211.6650)
Nov. 28, 2012 astro-ph.SR
Determining fundamental properties of stars through stellar modeling has improved substantially due to recent advances in asteroseismology. Thanks to the unprecedented data quality obtained by space missions, particularly CoRoT and Kepler, invaluable information is extracted from the high-precision stellar oscillation frequencies, which provide very strong constraints on possible stellar models for a given set of classical observations. In this work, we have characterized two relatively faint stars, KIC10920273 and KIC11395018, using oscillation data from Kepler photometry and atmospheric constraints from ground-based spectroscopy. Both stars have very similar atmospheric properties; however, using the individual frequencies extracted from the Kepler data, we have determined quite distinct global properties, with increased precision compared to that of earlier results. We found that both stars have left the main sequence and characterized them as follows: KIC10920273 is a one-solar-mass star (M=1.00 +/- 0.04 M_sun), but much older than our Sun (t=7.12 +/- 0.47 Gyr), while KIC11395018 is significantly more massive than the Sun (M=1.27 +/- 0.04 M_sun) with an age close to that of the Sun (t=4.57 +/- 0.23 Gyr). We confirm that the high lithium abundance reported for these stars should not be considered to represent young ages, as we precisely determined them to be evolved subgiants. We discuss the use of surface lithium abundance, rotation and activity relations as potential age diagnostics.
• ### Spectroscopic and Photometric Observations of Kepler Asteroseismic Targets(1211.5247)
Nov. 22, 2012 astro-ph.SR
We summarize our ground-based program of spectroscopic and photometric observations of the asteroseismic targets of the Kepler space telescope. We have already determined atmospheric parameters, projected velocity of rotation, and radial velocity of 62 Kepler asteroseismic targets and 33 other stars in the Kepler field of view. We discovered six single-lined and two double-lined spectroscopic binaries, we determined the interstellar reddening for 29 stars in the Kepler field of view, and discovered three delta Sct, two gamma Dor and 14 other variable stars in the field of NGC 6866.
• ### Fundamental Properties of Stars using Asteroseismology from Kepler & CoRoT and Interferometry from the CHARA Array(1210.0012)
Sept. 28, 2012 astro-ph.SR
We present results of a long-baseline interferometry campaign using the PAVO beam combiner at the CHARA Array to measure the angular sizes of five main-sequence stars, one subgiant and four red giant stars for which solar-like oscillations have been detected by either Kepler or CoRoT. By combining interferometric angular diameters, Hipparcos parallaxes, asteroseismic densities, bolometric fluxes and high-resolution spectroscopy we derive a full set of near model-independent fundamental properties for the sample. We first use these properties to test asteroseismic scaling relations for the frequency of maximum power (nu_max) and the large frequency separation (Delta_nu). We find excellent agreement within the observational uncertainties, and empirically show that simple estimates of asteroseismic radii for main-sequence stars are accurate to <~4%. We furthermore find good agreement of our measured effective temperatures with spectroscopic and photometric estimates with mean deviations for stars between T_eff = 4600-6200 K of -22+/-32 K (with a scatter of 97K) and -58+/-31 K (with a scatter of 93 K), respectively. Finally we present a first comparison with evolutionary models, and find differences between observed and theoretical properties for the metal-rich main-sequence star HD173701. We conclude that the constraints presented in this study will have strong potential for testing stellar model physics, in particular when combined with detailed modelling of individual oscillation frequencies.
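The "angular diameter plus parallax gives a linear radius" step mentioned in this abstract is a simple geometric conversion. The sketch below is not the authors' pipeline; the function name and the sample numbers are made up, and the constants are the usual approximate values.

```python
import numpy as np

MAS_TO_RAD = np.pi / (180.0 * 3600.0 * 1000.0)  # 1 milliarcsecond in radians
PC_TO_M = 3.0857e16                             # 1 parsec in metres
R_SUN_M = 6.957e8                               # solar radius in metres (approximate)

def linear_radius_rsun(theta_mas, parallax_mas):
    """Linear radius in solar radii from an angular diameter (mas) and a parallax (mas)."""
    distance_m = (1.0e3 / parallax_mas) * PC_TO_M       # d [pc] = 1000 / parallax [mas]
    radius_m = 0.5 * theta_mas * MAS_TO_RAD * distance_m  # R = (theta / 2) * d
    return radius_m / R_SUN_M

# made-up example: theta = 0.5 mas, parallax = 20 mas  ->  roughly 2.7 solar radii
print(linear_radius_rsun(0.5, 20.0))
```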
• ### Magnetic activity and differential rotation in the young Sun-like stars KIC 7985370 and KIC 7765135(1205.5721)
May 25, 2012 astro-ph.SR
We present a detailed study of the two Sun-like stars KIC 7985370 and KIC 7765135, aimed at determining their activity level, spot distribution, and differential rotation. Both stars were discovered by us to be young stars and were observed by the NASA Kepler mission. The stellar parameters (vsini, spectral type, Teff, log g, and [Fe/H]) were derived from optical spectroscopy which allowed us also to study the chromospheric activity from the emission in the core of H\alpha\ and CaII IRT lines. The high-precision Kepler photometric data spanning over 229 days were then fitted with a robust spot model. Model selection and parameter estimation are performed in a Bayesian manner, using a Markov chain Monte Carlo method. Both stars came out to be Sun-like with an age of about 100-200 Myr, based on their lithium content and kinematics. Their youth is confirmed by the high level of chromospheric activity, comparable to that displayed by the early G-type stars in the Pleiades cluster. The flux ratio of the CaII-IRT lines suggests that the cores of these lines are mainly formed in optically-thick regions analogous to solar plages. The model of the light curves requires at least seven enduring spots for KIC 7985370 and nine spots for KIC 7765135 for a satisfactory fit. The assumption of longevity of the star spots, whose area is allowed to evolve in time, is at the heart of our approach. We found, for both stars, a rather high value of the equator-to-pole differential rotation (d\Omega~0.18 rad/day) which is in contrast with the predictions of some mean-field models of differential rotation for fast-rotating stars. Our results are instead in agreement with previous works on solar-type stars and with other models which predict a higher latitudinal shear, increasing with equatorial angular velocity.
• ### Accurate parameters of 93 solar-type Kepler targets(1203.0611)
March 3, 2012 astro-ph.SR
We present a detailed spectroscopic study of 93 solar-type stars that are targets of the NASA/Kepler mission and provide detailed chemical composition of each target. We find that the overall metallicity is well-represented by Fe lines. Relative abundances of light elements (CNO) and alpha-elements are generally higher for low-metallicity stars. Our spectroscopic analysis benefits from the accurately measured surface gravity from the asteroseismic analysis of the Kepler light curves. The log g parameter is known to better than 0.03 dex and is held fixed in the analysis. We compare our Teff determination with a recent colour calibration of V-K (TYCHO V magnitude minus 2MASS Ks magnitude) and find very good agreement and a scatter of only 80 K, showing that for other nearby Kepler targets this index can be used. The asteroseismic log g values agree very well with the classical determination using Fe1-Fe2 balance, although we find a small systematic offset of 0.08 dex (asteroseismic log g values are lower). The abundance patterns of metals, alpha elements, and the light elements (CNO) show that a simple scaling by [Fe/H] is adequate to represent the metallicity of the stars, except for the stars with metallicity below -0.3, where alpha-enhancement becomes important. However, this is only important for a very small fraction of the Kepler sample. We therefore recommend that a simple scaling with [Fe/H] be employed in the asteroseismic analyses of large ensembles of solar-type stars.
• ### A uniform asteroseismic analysis of 22 solar-type stars observed by Kepler(1202.2844)
Feb. 13, 2012 astro-ph.SR
Asteroseismology with the Kepler space telescope is providing not only an improved characterization of exoplanets and their host stars, but also a new window on stellar structure and evolution for the large sample of solar-type stars in the field. We perform a uniform analysis of 22 of the brightest asteroseismic targets with the highest signal-to-noise ratio observed for 1 month each during the first year of the mission, and we quantify the precision and relative accuracy of asteroseismic determinations of the stellar radius, mass, and age that are possible using various methods. We present the properties of each star in the sample derived from an automated analysis of the individual oscillation frequencies and other observational constraints using the Asteroseismic Modeling Portal (AMP), and we compare them to the results of model-grid-based methods that fit the global oscillation properties. We find that fitting the individual frequencies typically yields asteroseismic radii and masses to \sim1% precision, and ages to \sim2.5% precision (respectively 2, 5, and 8 times better than fitting the global oscillation properties). The absolute level of agreement between the results from different approaches is also encouraging, with model-grid-based methods yielding slightly smaller estimates of the radius and mass and slightly older values for the stellar age relative to AMP, which computes a large number of dedicated models for each star. The sample of targets for which this type of analysis is possible will grow as longer data sets are obtained during the remainder of the mission.
• ### Seismic analysis of four solar-like stars observed during more than eight months by Kepler(1110.0135)
Oct. 1, 2011 astro-ph.SR
Having started science operations in May 2009, the Kepler photometer has been able to provide exquisite data of solar-like stars. Five out of the 42 stars observed continuously during the survey phase show evidence of oscillations, even though they are rather faint (magnitudes from 10.5 to 12). In this paper, we present an overview of the results of the seismic analysis of 4 of these stars observed during more than eight months.
• ### Ensemble Asteroseismology of Solar-Type Stars with the NASA Kepler Mission(1109.4723)
Sept. 22, 2011 astro-ph.SR
In addition to its search for extra-solar planets, the NASA Kepler Mission provides exquisite data on stellar oscillations. We report the detections of oscillations in 500 solar-type stars in the Kepler field of view, an ensemble that is large enough to allow statistical studies of intrinsic stellar properties (such as mass, radius and age) and to test theories of stellar evolution. We find that the distribution of observed masses of these stars shows intriguing differences to predictions from models of synthetic stellar populations in the Galaxy.
• ### Testing Scaling Relations for Solar-Like Oscillations from the Main Sequence to Red Giants using Kepler Data(1109.3460)
Sept. 15, 2011 astro-ph.SR
We have analyzed solar-like oscillations in ~1700 stars observed by the Kepler Mission, spanning from the main-sequence to the red clump. Using evolutionary models, we test asteroseismic scaling relations for the frequency of maximum power (nu_max), the large frequency separation (Delta_nu) and oscillation amplitudes. We show that the difference of the Delta_nu-nu_max relation for unevolved and evolved stars can be explained by different distributions in effective temperature and stellar mass, in agreement with what is expected from scaling relations. For oscillation amplitudes, we show that neither (L/M)^s scaling nor the revised scaling relation by Kjeldsen & Bedding (2011) are accurate for red-giant stars, and demonstrate that a revised scaling relation with a separate luminosity-mass dependence can be used to calculate amplitudes from the main-sequence to red-giants to a precision of ~25%. The residuals show an offset particularly for unevolved stars, suggesting that an additional physical dependency is necessary to fully reproduce the observed amplitudes. We investigate correlations between amplitudes and stellar activity, and find evidence that the effect of amplitude suppression is most pronounced for subgiant stars. Finally, we test the location of the cool edge of the instability strip in the Hertzsprung-Russell diagram using solar-like oscillations and find the detections in the hottest stars compatible with a domain of hybrid stochastically excited and opacity driven pulsation.
• ### Asteroseismology from multi-month Kepler photometry: the evolved Sun-like stars KIC 10273246 and KIC 10920273(1108.3807)
Aug. 18, 2011 astro-ph.SR
The evolved main-sequence Sun-like stars KIC 10273246 (F-type) and KIC 10920273 (G-type) were observed with the NASA Kepler satellite for approximately ten months with a duty cycle in excess of 90%. Such continuous and long observations are unprecedented for solar-type stars other than the Sun. We aimed mainly at extracting estimates of p-mode frequencies - as well as of other individual mode parameters - from the power spectra of the light curves of both stars, thus providing scope for a full seismic characterization. The light curves were corrected for instrumental effects in a manner independent of the Kepler Science Pipeline. Estimation of individual mode parameters was based both on the maximization of the likelihood of a model describing the power spectrum and on a classic prewhitening method. Finally, we employed a procedure for selecting frequency lists to be used in stellar modeling. A total of 30 and 21 modes of degree l=0,1,2 - spanning at least eight radial orders - have been identified for KIC 10273246 and KIC 10920273, respectively. Two avoided crossings (l=1 ridge) have been identified for KIC 10273246, whereas one avoided crossing plus another likely one have been identified for KIC 10920273. Good agreement is found between observed and predicted mode amplitudes for the F-type star KIC 10273246, based on a revised scaling relation. Estimates are given of the rotational periods, the parameters describing stellar granulation and the global asteroseismic parameters $\Delta\nu$ and $\nu_{\rm{max}}$.
• ### Constructing a one-solar-mass evolutionary sequence using asteroseismic data from \textit{Kepler}(1108.2031)
Aug. 9, 2011 astro-ph.SR
Asteroseismology of solar-type stars has entered a new era of large surveys with the success of the NASA \textit{Kepler} mission, which is providing exquisite data on oscillations of stars across the Hertzprung-Russell (HR) diagram. From the time-series photometry, the two seismic parameters that can be most readily extracted are the large frequency separation ($\Delta\nu$) and the frequency of maximum oscillation power ($\nu_\mathrm{max}$). After the survey phase, these quantities are available for hundreds of solar-type stars. By scaling from solar values, we use these two asteroseismic observables to identify for the first time an evolutionary sequence of 1-M$_\odot$ field stars, without the need for further information from stellar models. Comparison of our determinations with the few available spectroscopic results shows an excellent level of agreement. We discuss the potential of the method for differential analysis throughout the main-sequence evolution, and the possibility of detecting twins of very well-known stars.
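The "scaling from solar values" referred to above is usually written in terms of the two global seismic observables. A minimal sketch of those standard relations follows; it is not the paper's code, and the solar reference values (which differ slightly between studies) are only approximate.

```python
NU_MAX_SUN = 3090.0   # muHz, approximate solar reference value
DNU_SUN = 135.1       # muHz, approximate solar reference value
TEFF_SUN = 5777.0     # K

def scaling_mass_radius(nu_max, delta_nu, teff):
    """Mass and radius (solar units) from the standard asteroseismic scaling relations:
    R ~ (nu_max) (delta_nu)^-2 (Teff)^(1/2),  M ~ (nu_max)^3 (delta_nu)^-4 (Teff)^(3/2),
    with each quantity expressed relative to its solar value."""
    x, y, t = nu_max / NU_MAX_SUN, delta_nu / DNU_SUN, teff / TEFF_SUN
    return x**3 * y**-4 * t**1.5, x * y**-2 * t**0.5   # (mass, radius)

print(scaling_mass_radius(3090.0, 135.1, 5777.0))  # solar input -> (1.0, 1.0)
```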
• ### An asteroseismic membership study of the red giants in three open clusters observed by Kepler: NGC6791, NGC6819, and NGC6811(1107.1234)
July 6, 2011 astro-ph.SR
Studying star clusters offers significant advances in stellar astrophysics due to the combined power of having many stars with essentially the same distance, age, and initial composition. This makes clusters excellent test benches for verification of stellar evolution theory. To fully exploit this potential, it is vital that the star sample is uncontaminated by stars that are not members of the cluster. Techniques for determining cluster membership therefore play a key role in the investigation of clusters. We present results on three clusters in the Kepler field of view based on a newly established technique that uses asteroseismology to identify fore- or background stars in the field, which demonstrates advantages over classical methods such as kinematic and photometry measurements. Four previously identified seismic non-members in NGC6819 are confirmed in this study, and three additional non-members are found -- two in NGC6819 and one in NGC6791. We further highlight which stars are, or might be, affected by blending, which needs to be taken into account when analysing these Kepler data.
• ### Amplitudes of solar-like oscillations: constraints from red giants in open clusters observed by Kepler(1107.0490)
July 3, 2011 astro-ph.SR
Scaling relations that link asteroseismic quantities to global stellar properties are important for gaining understanding of the intricate physics that underpins stellar pulsation. The common notion that all stars in an open cluster have essentially the same distance, age, and initial composition, implies that the stellar parameters can be measured to much higher precision than what is usually achievable for single stars. This makes clusters ideal for exploring the relation between the mode amplitude of solar-like oscillations and the global stellar properties. We have analyzed data obtained with NASA's Kepler space telescope to study solar-like oscillations in 100 red giant stars located in either of the three open clusters, NGC 6791, NGC 6819, and NGC 6811. By fitting the measured amplitudes to predictions from simple scaling relations that depend on luminosity, mass, and effective temperature, we find that the data cannot be described by any power of the luminosity-to-mass ratio as previously assumed. As a result we provide a new improved empirical relation which treats luminosity and mass separately. This relation turns out to also work remarkably well for main-sequence and subgiant stars. In addition, the measured amplitudes reveal the potential presence of a number of previously unknown unresolved binaries in the red clump in NGC 6791 and NGC 6819, pointing to an interesting new application for asteroseismology as a probe into the formation history of open clusters.
• ### Evidence for the impact of stellar activity on the detectability of solar-like oscillations observed by Kepler(1103.5570)
April 8, 2011 astro-ph.SR
We use photometric observations of solar-type stars, made by the NASA Kepler Mission, to conduct a statistical study of the impact of stellar surface activity on the detectability of solar-like oscillations. We find that the number of stars with detected oscillations fall significantly with increasing levels of activity. The results present strong evidence for the impact of magnetic activity on the properties of near-surface convection in the stars, which appears to inhibit the amplitudes of the stochastically excited, intrinsically damped solar-like oscillations.
• ### Predicting the detectability of oscillations in solar-type stars observed by Kepler(1103.0702)
March 3, 2011 astro-ph.SR
Asteroseismology of solar-type stars has an important part to play in the exoplanet program of the NASA Kepler Mission. Precise and accurate inferences on the stellar properties that are made possible by the seismic data allow very tight constraints to be placed on the exoplanetary systems. Here, we outline how to make an estimate of the detectability of solar-like oscillations in any given Kepler target, using rough estimates of the temperature and radius, and the Kepler apparent magnitude.
• ### Preparation of Kepler lightcurves for asteroseismic analyses(1103.0382)
March 2, 2011 astro-ph.SR
The Kepler mission is providing photometric data of exquisite quality for the asteroseismic study of different classes of pulsating stars. These analyses place particular demands on the pre-processing of the data, over a range of timescales from minutes to months. Here, we describe processing procedures developed by the Kepler Asteroseismic Science Consortium (KASC) to prepare light curves that are optimized for the asteroseismic study of solar-like oscillating stars in which outliers, jumps and drifts are corrected.
https://cris.bgu.ac.il/en/publications/new-lst-of-inter-departure-times-in-phg1-queue-and-extensions-to | # New LST of inter-departure times in PH/G/1 queue, and extensions to ME/G/1 and G/G/1 queues
Ruth Sagron, Yoav Kerner, Gad Rabinowitz, Israel Tirkel
Research output: Contribution to journal › Article › peer-review
## Abstract
In this paper, we provide a new approach to model the inter-departure times distribution in a PH/G/1 queue. This approach enables to further model the inter-departure times distribution in more general queues as well. Initially, we propose to express the Laplace–Stieltjes transform (LST) of inter-departure times in PH/G/1 queues by exploiting the probabilistic interpretation of phase-type distributions. Using this interpretation enables to eliminate the necessity of the matrix-geometric method, and thus significantly reduces the computational complexity. Then, we use the LST of inter-departure times distribution in a Cm/G/1 queue to express this LST in a ME/G/1 queue, where ME is a Matrix-Exponential distribution. We validate it in a few ME/G/1 examples. Finally, we propose to approximate the LST of inter-departure times distribution in a G/G/1 queue by employing the above LST of the proper PH/G/1 queue. Without loss of generality, we demonstrate our proposed approximation by using the LST as obtained in a Cm/G/1 queue, while illustrating by a few G/G/1 examples that the accuracy can be as good as one might want.
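For readers unfamiliar with the objects in the abstract: the LST of a non-negative random variable $T$ is $E[e^{-sT}]$, and phase-type distributions are built from exponential phases. The snippet below is not the paper's construction; it is only a minimal illustration, for an Erlang distribution (the simplest phase-type case), of a closed-form LST checked against direct numerical integration.

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

def erlang_lst(s, k, lam):
    """Closed-form LST of an Erlang(k, lam) time: k exponential phases in series."""
    return (lam / (lam + s)) ** k

def lst_numeric(s, pdf, upper=200.0):
    """LST by direct integration of exp(-s*t) * f(t) over [0, upper]."""
    value, _ = quad(lambda t: np.exp(-s * t) * pdf(t), 0.0, upper)
    return value

k, lam = 3, 2.0
pdf = lambda t: lam**k * t**(k - 1) * np.exp(-lam * t) / factorial(k - 1)

for s in (0.5, 1.0, 2.0):
    print(s, erlang_lst(s, k, lam), lst_numeric(s, pdf))  # the two values should agree
```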
Original language: English
Pages (from-to): 518-527
Number of pages: 10
Journal: Computers and Industrial Engineering
Volume: 135
DOI: https://doi.org/10.1016/j.cie.2019.06.029
Publication status: Published - 1 Sep 2019
## Keywords
• G/G/1 queue
• Laplace-Stieltjes transform
• ME/G/1 queue
• Matrix Geometric Method
• PH/G/1 queue
• Queueing, Departure process
## ASJC Scopus subject areas
• Computer Science (all)
• Engineering (all)
http://mathonline.wikidot.com/algebras-over-f | Algebras over F
# Algebras over F
Definition: An (Associative) Algebra over $\mathbf{F}$ is a linear space $\mathfrak{A}$ over the field $\mathbf{F}$ along with a binary function $\cdot : \mathfrak{A} \times \mathfrak{A} \to \mathfrak{A}$, sometimes called (vector) multiplication or product, that satisfies the following three properties:
1) $a \cdot (b \cdot c) = (a \cdot b) \cdot c$ for all $a, b, c \in \mathfrak{A}$ (Associativity of multiplication).
2) $a \cdot (b + c) = a \cdot b + a \cdot c$ for all $a, b, c \in \mathfrak{A}$ (Distributivity of multiplication over addition).
3) $(\alpha a) \cdot b = \alpha (a \cdot b) = a \cdot (\alpha b)$ for all $a, b \in \mathfrak{A}$ and all $\alpha \in \mathbf{F}$.
If $\mathbf{F} = \mathbb{R}$ then the above structure is called a Real Algebra, and if $\mathbf{F} = \mathbb{C}$ then the above structure is called a Complex Algebra.
Here, $\mathbf{F}$ denotes the field of real numbers $\mathbb{R}$ or the field of complex numbers $\mathbb{C}$ depending on the context.
The notation $\alpha x$ is used to denote scalar multiplication in $\mathfrak{A}$, while $x \cdot y$ is used to denote vector multiplication in $\mathfrak{A}$. When no ambiguity arises, we will omit writing the "$\cdot$" and simply write "$xy$".
Definition: Let $\mathfrak{A}$ be an algebra over $\mathbf{F}$. A Subalgebra of $\mathfrak{A}$ is a linear subspace $\mathfrak{B}$ of $\mathfrak{A}$ with the additional property that for all $b_1, b_2 \in \mathfrak{B}$ we have that $b_1 \cdot b_2 \in \mathfrak{B}$.
Equivalently, if $X$ is an algebra then a subset $Y$ of $X$ is a subalgebra of $X$ if it is closed under both addition, scalar multiplication, and multiplication.
## Example 1
The set of real numbers $\mathbb{R}$ is an algebra over $\mathbb{R}$ with the operations of standard number addition, scalar multiplication, and multiplication of real numbers (which is really the same as scalar multiplication in this case).
Similarly, the set of complex numbers $\mathbb{C}$ is an algebra over $\mathbb{C}$.
## Example 2
Let $X$ be a nonempty set and let $\mathfrak{A}$ be an algebra over $\mathbf{F}$. Consider the set of all functions from $X$ to $\mathfrak{A}$. We define the operations of pointwise function addition, pointwise scalar multiplication, and pointwise multiplication, respectively, for all $f, g : X \to \mathfrak{A}$ and all $\alpha \in \mathbf{F}$, by
(1)
\begin{align} \quad [f + g](x) &:= f(x) + g(x), \quad x \in X\\ \quad [\alpha f](x) &:= \alpha f(x), \quad x \in X \\ \quad [f \cdot g](x) &:= f(x) \cdot g(x), \quad x \in X \end{align}
(Note that $f(x), g(x) \in \mathfrak{A}$ and so $f(x) \cdot g(x)$ is a well-defined element of $\mathfrak{A}$.)
It is easy to check that this space is an algebra over $\mathbf{F}$.
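A concrete way to see Example 2 (here with $\mathfrak{A} = \mathbb{R}$, so the functions are real-valued) is to implement the three pointwise operations and spot-check the algebra axioms at a point. This is only an illustrative sketch, not part of the original page; the sample functions are arbitrary.

```python
def add(f, g): return lambda x: f(x) + g(x)          # pointwise addition
def scale(alpha, f): return lambda x: alpha * f(x)   # pointwise scalar multiplication
def mult(f, g): return lambda x: f(x) * g(x)         # pointwise multiplication

f, g, h = (lambda x: x + 1), (lambda x: x * x), (lambda x: 2 * x)
x0, alpha = 3.0, 2.5

print(mult(f, mult(g, h))(x0) == mult(mult(f, g), h)(x0))          # associativity
print(mult(f, add(g, h))(x0) == add(mult(f, g), mult(f, h))(x0))   # distributivity
lhs = mult(scale(alpha, f), g)(x0)
mid_ = scale(alpha, mult(f, g))(x0)
rhs = mult(f, scale(alpha, g))(x0)
print(lhs == mid_ == rhs)                                          # compatibility with scalars
```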
## Example 3
Let $X$ be a nonempty set. The set of all real-valued (or complex-valued) functions on $X$, with the operations of pointwise function addition, pointwise scalar multiplication, and pointwise function multiplication as defined in Example 2, is an algebra. (This is simply the special case of the class of algebras in Example 2 with $\mathfrak{A} = \mathbf{F}$.)
## Example 4
Let $X$ be a linear space. Let $\mathcal L(X, X)$ be the set of all linear operators from $X$ to $X$. We already know that $\mathcal L(X, X)$ is a linear space. We define multiplication on $\mathcal L(X, X)$ as composition, i.e., for all $S, T \in \mathcal L(X, X)$:
(2)
\begin{align} \quad (S \circ T)(x) = S(T(x)) \end{align}
Then $\mathcal L(X, X)$ with composition as the multiplication becomes an algebra.
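To make Example 4 concrete (this is an added illustration, not part of the original page): for $X = \mathbb{R}^2$, linear operators are just $2 \times 2$ matrices and composition is matrix multiplication, so the axioms can be spot-checked numerically.

```python
import numpy as np

S = np.array([[1.0, 2.0], [0.0, 1.0]])
T = np.array([[0.0, -1.0], [1.0, 0.0]])
U = np.array([[3.0, 0.0], [1.0, 2.0]])
alpha = 2.5

print(np.allclose(S @ (T @ U), (S @ T) @ U))            # associativity of composition
print(np.allclose(S @ (T + U), S @ T + S @ U))          # distributivity over addition
print(np.allclose((alpha * S) @ T, alpha * (S @ T)) and
      np.allclose(alpha * (S @ T), S @ (alpha * T)))    # compatibility with scalars
```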
https://robotics.stackexchange.com/questions/18733/question-about-ekf-slam-w-unknown-correspondence-update-step | # Question about EKF SLAM (w/ unknown correspondence) update step
In Probabilistic Robotics, page 322, the EKF SLAM update step is shown below.
My question is: why, for every observed feature, are $$\bar{\mu}_t,\bar{\Sigma}_t$$ overwritten in lines 23 and 24 by the most likely observation on every new loop iteration? It just seems to me that if there are N observed features, then the 0th to (N-1)th features aren't even accounted for in the final $$\mu_t, \Sigma_t$$ in lines 26 and 27; only the Nth feature and its ML correspondence ends up updating $$\mu_t, \Sigma_t$$. I think I might be wrong about my understanding, but I don't see how the subsequent loop iterations account for the previous iterations' ML correspondences.
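For what it's worth, here is a schematic of how that per-feature loop is usually structured (this is not the book's exact listing; `h`, `H_of`, `associate`, and `Q` are stand-ins for the measurement model, its Jacobian, the maximum-likelihood data association, and the measurement noise). The point it illustrates is that the belief is updated in place, so each later feature's update starts from a belief that already contains the earlier features.

```python
import numpy as np

def ekf_slam_correction(mu_bar, Sigma_bar, observations, h, H_of, associate, Q):
    """Schematic sequential correction step of EKF SLAM with ML correspondences."""
    for z in observations:                       # one pass per observed feature
        j = associate(mu_bar, Sigma_bar, z)      # maximum-likelihood correspondence
        H = H_of(mu_bar, j)                      # measurement Jacobian for landmark j
        S = H @ Sigma_bar @ H.T + Q
        K = Sigma_bar @ H.T @ np.linalg.inv(S)   # Kalman gain
        # "lines 23-24": the belief is overwritten here, so the next feature's
        # update is computed from a belief that already includes this feature
        mu_bar = mu_bar + K @ (z - h(mu_bar, j))
        Sigma_bar = (np.eye(len(mu_bar)) - K @ H) @ Sigma_bar
    # "lines 26-27": after the loop the belief reflects every observation, not only the last
    return mu_bar, Sigma_bar
```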
https://math.stackexchange.com/questions/328399/two-topology-questions-open-set-and-equivalence/328485 | # two topology questions (open set and equivalence)
Two metrics $d_1$ and $d_2$ are called equivalent if there exist positive constants $\alpha, \beta$ s.t $\forall x,y\in\mathbb R^n: \alpha d_2(x,y)\le d_1(x,y)\le\beta d_2(x,y)$
I already proved that $d_1$=Manhattan Metric and $d_\infty$ are equivalent.
I do not see the equivalence between $d_\infty=\max_j |x_j-y_j|$ and $d_2=\sqrt{\sum_{j=1}^{n}(x_j-y_j)^2}$; maybe you could help me.
2nd Question: The infinite intersection of open sets does not have to be open. Of course I know the well-known example $\bigcap(-1/n,1/n)$, but do you know other examples as well?
• These are two quite unrelated questions, I think you should split them up and post as two separate ones. – Herng Yi Mar 12 '13 at 13:19
The condition for two metrics to be equivalent that you have stated is sufficient but not necessary in general metric spaces. Two metrics are equivalent iff they induce the same topology; in particular, when we work in $\mathbb{R}^n$, your condition is also necessary.
Note that $d_\infty\leq d_2\leq \sqrt{n}d_\infty$. Then $d_\infty$ and $d_2$ are equivalent.
• We have better $d_2\leq \sqrt{n}d_{\infty}$. – user63181 Mar 12 '13 at 13:27
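Spelling that chain of inequalities out, if the maximum is attained at the index $k$:
$$d_\infty(x,y)=|x_k-y_k|=\sqrt{(x_k-y_k)^2}\le\sqrt{\sum_{j=1}^{n}(x_j-y_j)^2}=d_2(x,y)\le\sqrt{n\max_j(x_j-y_j)^2}=\sqrt{n}\,d_\infty(x,y),$$
so in the definition above one can take $d_1=d_\infty$, $d_2$ the Euclidean metric, $\alpha=\tfrac{1}{\sqrt{n}}$ and $\beta=1$.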
• Clearly your example can be generalised to $\Bbb R^n$ with the Euclidean (standard) topology: just choose a point and take the open balls of radius $\frac{1}{n}$ centred on that point. Actually, any sequence of real numbers converging to $0$ will do as radii.
• Consider $X=\Bbb R$ (a line) with the cofinite topology, i.e. where the open sets are only the complements of finite sets and the empty set. Then an intersection of infinitely many open sets can be the complement of an infinite set, hence not open: for instance $\bigcap_{n\in\Bbb N}(\Bbb R\setminus\{n\})=\Bbb R\setminus\Bbb N$ is nonempty and has infinite complement $\Bbb N$, so it isn't open.
http://math.stackexchange.com/questions/705096/why-is-this-allowed-fouriers-trick-finding-the-coefficients-in-a-fourier-s | # Why is this allowed? (“Fourier's Trick”; finding the coefficients in a Fourier Series)
In my textbook (Introduction to Electrodynamics, D. Griffiths), we derive the equation for some strange potential function. Eventually, we get to this (for $n \in \mathbb{Z}^+$):
$$V_0(y) = \sum_{n=0}^{\infty} C_n\sin{\frac{n\pi}{a}y} \tag{3.31}$$
Here's where things go awry for me.
... how do we actually determine the coefficients $C_n$, buried as they are in that infinite sum? The device for accomplishing this is so lovely it deserves a name—I call it Fourier's trick, though it seems Euler had used essentially the same idea somewhat earlier. Here's how it goes: Multiply Eq. 3.31 by $\sin{n'\pi y/a}$ (where $n'$ is a positive integer), and integrate from 0 to a:
$$\displaystyle \sum_{n=0}^{\infty} C_n \int_0^a\sin{\frac{n\pi}{a}y} \sin{\frac{n'\pi}{a}y} dy ~~~=~~~ \int_0^a V_0(y)\sin{\frac{n'\pi}{a}y} dy$$
The answer understandably comes out to something very nice and convenient. But... why is this something you can do? There's no obvious reason for why that doesn't intrinsically change the problem (in the same way that I can say "Multiply both sides by $0$. You've successfully reduced the problem to zero. Well done!)
(While typing out the above, I suspect that it has something to do with the inner product of a function and an orthonormal basis? The infinite $\sin$ functions create an orthonormal basis, and taking that integral over all possible values effectively extracts the coefficients for each basis function. When it is suggested that we multiply by $\sin{\frac{n'\pi}{a}y}$ and integrate, this isn't changing the basis at all, it's just (sneakily) extracting the coefficients, which only exist when $n = n'$ (because the $\sin$ functions are all orthogonal). It's like taking the coefficients of a basis with itself... right?
I think this may be one of those cases where, in the process of asking the question, I figure out the answer—but this is all fairly new to me, and I'd like to ask it anyway for confirmation and, possibly, a clearer explanation).
Multiplying both sides of an equation by $0$ is perfectly valid, but the resulting equation $0 = 0$ isn't helpful. With Fourier's trick, we multiply both sides by something non-zero, then integrate both sides, and obtain a result which is helpful. – littleO Mar 9 '14 at 9:15
I know you can do whatever you want to both sides of an equation, but you're still multiplying a lot of the terms by zero here (which is why so many of them drop out). There are a lot of $C_n$ terms there (infinite); the part about orthogonal basis was meant to explain why you're not losing any information when you multiply those terms by zero. Is it an accurate explanation? – AmagicalFishy Mar 9 '14 at 9:21
@AmagicalFishy you are not multiplying by zero , the function which you are multiplying when integrated makes all but one term zero. – happymath Mar 9 '14 at 9:22
I know what the function does—but I still don't understand why this is considered "not multiplying by zero". It just seems like a neat way of selectively multiplying by zero. – AmagicalFishy Mar 9 '14 at 9:26
What do you mean when you say we're multiplying "a lot" of the terms by zero? All the terms are getting multiplied by the same thing. Then, we integrate, and it turns out that all but one of the integrals on the left are equal to $0$. – littleO Mar 9 '14 at 9:28
Your suspicion about inner products is entirely correct. The trigonometric polynomials $\{\sin(nx), \cos(mx)\}$ (possibly translated and scaled) are known to form an orthogonal system with respect to the scalar product given by $\langle f, g\rangle:=\int fg$, and if you choose the function space correctly (usually one uses a space called $L^2$) and use properly chosen normalizing factors then one can show they actually form a complete orthonormal system $e_k$, which simply means you can express any function in that space as $$f = \sum_k\langle f, e_k \rangle e_k$$ (where convergence is to be understood in that space with respect to the norm derived from the scalar product). The coefficients in that sum are what you are looking at. You may know that kind of representation from the finite dimensional case.
I deliberately did not specify the constants which make the orthogonal system orthonormal, nor an interval as domain of definition -- by translation and scaling you can do something like that on any bounded interval in $\mathbb{R}$. There is also a complex version of this, in which case one would use the scalar product $\int f\bar{g}$, and $\{e^{ikx}\}$ as an orthogonal system.
If you want to look up the details, then most introductions to real analysis will have a section on that topic. Rudin's books, for example do explain this.
Awesome! I will do that (look up the details, that is). Thanks very much. – AmagicalFishy Mar 9 '14 at 18:06
It might help to break this up into smaller steps.
\begin{align*} &V_0(y) = \sum_{n=1}^{\infty} C_n \sin \left(\frac{n \pi y}{a} \right) \\ \implies & V_0(y) \sin \left(\frac{n' \pi y}{a} \right) = \sum_{n=1}^{\infty} C_n \sin \left(\frac{n \pi y}{a} \right)\sin \left(\frac{n' \pi y}{a} \right) \\ \implies & \int_0^a V_0(y) \sin \left(\frac{n' \pi y}{a} \right) \, dy = \sum_{n=1}^{\infty} C_n \int_0^a \sin \left(\frac{n \pi y}{a} \right)\sin \left(\frac{n' \pi y}{a} \right) \, dy = \frac{a}{2} C_{n'}. \end{align*}
We now solve for $C_{n'}$ to obtain \begin{equation*} C_{n'} = \frac{2}{a} \int_0^a V_0(y) \sin \left(\frac{n' \pi y}{a} \right) \, dy. \end{equation*}
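A quick numerical sanity check of the trick (illustrative only; the choice of $V_0$ and all names below are mine, not Griffiths'):

```python
import numpy as np

# Compute C_n' = (2/a) * integral_0^a V_0(y) sin(n' pi y / a) dy for a sample V_0,
# then check that the resulting sine series reproduces V_0.
a = 1.0
V0 = lambda y: y * (a - y)            # any smooth choice with V_0(0) = V_0(a) = 0

y = np.linspace(0.0, a, 20001)
dy = y[1] - y[0]
trap = lambda f: dy * (f[0] / 2 + f[1:-1].sum() + f[-1] / 2)   # trapezoidal rule

C = [(2.0 / a) * trap(V0(y) * np.sin(n * np.pi * y / a)) for n in range(1, 51)]
V_rec = sum(c * np.sin(n * np.pi * y / a) for n, c in zip(range(1, 51), C))
print(np.max(np.abs(V_rec - V0(y))))  # tiny: the extracted coefficients rebuild V_0
```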
https://www.physicsforums.com/threads/ages-of-starts-from-cmd.291844/ | # Ages of starts from CMD
1. Feb 12, 2009
### randa177
why is getting the age from CMD more accurate than getting it from photometry?
2. Feb 12, 2009
### randa177
I mean ages of star clusters!
3. Feb 16, 2009
### randa177
Why has no one answered my question? Does it make sense?
Why do we require color-magnitude diagrams to get ages? Why are they the best?
4. Feb 16, 2009
### mgb_phys
CMD isn't a very common abbreviation, most people would say HR diagram.
Do you mean - how to get the age of stars from a color magnitude (ie Hertzsprung–Russell) diagram?
5. Feb 16, 2009
### randa177
Actually I am reading papers about the Large Magellanic Cloud, and they always mention that getting the age using a CMD (H-R diagram) is more accurate than using photometry. Why do CMDs give the only truly accurate ages for star clusters?
6. Feb 16, 2009
### mgb_phys
Not sure I understand the question. You use photometry to get the color index of the star. Plotting it on the HR diagram then gives you the age (or at least the evolutionary state of the star); for an accurate age you would also need to know the metallicity and mass.
7. Feb 16, 2009
### randa177
Yes, that's right, but that uses the photometry of each star in the cluster, not the color/magnitude of the whole cluster, because the magnitude and integrated colors for a cluster as a whole don't give very accurate information... Oh, wait, I think I've just answered my original question!!! ... I got confused too now!
8. Feb 20, 2009
### randa177
Hi,
I just wanted to let you know that I was right about the reason CMDs (HR diagrams) give the most accurate ages for stellar clusters.
But can anyone let me know HOW we get the ages from CMDs (H-R diagrams)?
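Not from the thread, but to sketch the standard answer: in practice one fits theoretical isochrones (model CMDs for a fixed age and metallicity) to the observed cluster CMD, and the crudest version of that just reads off the main-sequence turnoff. Assuming the usual textbook scalings L ∝ M^3.5 and t_MS ≈ 10 Gyr (M/M_sun)^(-2.5), a very rough turnoff-age estimate looks like this (illustrative only, not a real isochrone fit):

```python
# Rough sketch: estimate a cluster age from the luminosity of its main-sequence
# turnoff, using the crude scalings
#   L/Lsun ~ (M/Msun)**3.5   and   t_MS ~ 10 Gyr * (M/Msun)**-2.5
def turnoff_age_gyr(L_turnoff_solar):
    mass = L_turnoff_solar ** (1.0 / 3.5)   # mass of stars now leaving the main sequence
    return 10.0 * mass ** (-2.5)            # their main-sequence lifetime = cluster age

print(turnoff_age_gyr(1.0))     # turnoff at solar luminosity -> ~10 Gyr (old cluster)
print(turnoff_age_gyr(100.0))   # much brighter turnoff      -> ~0.4 Gyr (young cluster)
```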
http://math.stackexchange.com/questions/673688/gcds-in-non-ufd-rings | # gcd's in non-UFD rings
In a UFD ring we have that for coprime $a,b \in R$, i.e. $(a,b)=1$:
$$a|cb \Rightarrow a|c$$
Does this property hold for non-UFD rings? I think not but do not recall a standard counter-example.
NOTE: In a non-UFD ring the elements $a,b$ may not even have a gcd, but I am assuming here they do have one.
As mentioned, this version of Euclid's Lemma is true in any ring if $\,(a,b) = 1\,$ is interpreted as an ideal equality, i.e. that $\,(a),(b)\,$ are comaximal. But I think you probably intend $\,(a,b)=1\,$ to mean $\,\gcd(a,b)=1,\,$ i.e. $\,d\mid a,b\,\Rightarrow\,d\mid 1.\,$ Then $\,a\mid bc\,\Rightarrow a\mid c\,$ yields, when $\,a\,$ is an atom (irreducible), that atoms are prime, so the domain is a UFD, assuming every nonunit $\ne 0$ has a factorization in atoms. Conversely any UFD satisfies Euclid's Lemma because it is an immediate consequence of the uniqueness of prime factorizations.
As for counterexamples, any domain with a nonunique factorization will have a non-prime atom $\,a\mid bc,\,\ a\nmid b,c,\,$ which yields a failure of Euclid's Lemma, e.g. $\ 2\mid \alpha \alpha',\ \alpha = 1+\sqrt{-5},\ \alpha' = 1-\sqrt{-5}\,\in\,\Bbb Z[\sqrt{-5}].\,$ As I elaborate here this immediately yields a nonexistent gcd, and a nonprincipal ideal. See also this answer for over $15$ closely related properties which all imply uniqueness of atomic factorizations.
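A small computational sketch (not part of the answer above) of that classical instance in $\Bbb Z[\sqrt{-5}]$: the irreducible $2$ divides $(1+\sqrt{-5})(1-\sqrt{-5}) = 6$ but divides neither factor, so Euclid's Lemma fails. Elements $a+b\sqrt{-5}$ are stored as pairs $(a,b)$:

```python
from fractions import Fraction

def mul(x, y):
    a, b = x; c, d = y
    return (a * c - 5 * b * d, a * d + b * c)

def divides(x, y):
    # x | y in Z[sqrt(-5)]  iff  y/x, computed in Q(sqrt(-5)), has integer coordinates
    a, b = x; c, d = y
    n = a * a + 5 * b * b                       # norm of x
    return (Fraction(a * c + 5 * b * d, n).denominator == 1 and
            Fraction(a * d - b * c, n).denominator == 1)

two, alpha, alpha_conj = (2, 0), (1, 1), (1, -1)
print(mul(alpha, alpha_conj))                         # (6, 0): alpha * alpha' = 6
print(divides(two, mul(alpha, alpha_conj)))           # True:  2 | 6
print(divides(two, alpha), divides(two, alpha_conj))  # False False: 2 divides neither factor
```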
http://www.open.edu/openlearn/science-maths-technology/mathematics-and-statistics/mathematics-education/using-scientific-calculator/content-section-9 | Using a scientific calculator
# Using a scientific calculator
Your calculator can be set to calculate trigonometric functions using the radian measure for angles, instead of degrees, by using the key sequence (SETUP) (Rad).
When in this mode, the display indicator is shown.
In this activity, the angles are measured in radians. Find the values of the following expressions, giving your answers correct to 3 significant figures.
Remember to set your calculator to work in radians before entering these calculations.
1. (to 3 significant figures).
2. .
Remember: can be input to the calculator using .
3. or 0.785 (to 3 significant figures).
Notice from the final example in this activity that where an answer is a simple (possibly fractional) multiple of π, the answer is displayed in terms of π rather than as a decimal number.
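For readers checking radian-mode activities like these on a computer rather than on the calculator, here is a small Python sketch (the particular expressions are illustrative, since the printed expressions did not survive in this copy; only the final answer 0.785 ≈ π/4 is taken from the activity):

```python
import math

# Working in radians and rounding to 3 significant figures.
print(f"{math.sin(1.2):.3g}")           # sine of 1.2 radians
print(f"{math.cos(math.pi / 3):.3g}")   # cos(pi/3) = 0.5
print(f"{math.atan(1):.3g}")            # arctan(1) = pi/4 ≈ 0.785, as in the final activity
```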
https://www.coin-or.org/CppAD/Doc/atomic_rev_sparse_hes.htm
$\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }$
Atomic Reverse Hessian Sparsity Patterns
Syntax
ok = afun.rev_sparse_hes(vx, s, t, q, r, u, v, x)
Deprecated 2016-06-27
ok = afun.rev_sparse_hes(vx, s, t, q, r, u, v)
Purpose
This function is used by RevSparseHes to compute Hessian sparsity patterns. If you are using RevSparseHes, one of the versions of this virtual function must be defined by the atomic_user class. There is an unspecified scalar valued function $g : B^m \rightarrow B$. Given a sparsity pattern for $R \in B^{n \times q}$, and information about the function $z = g(y)$, this routine computes the sparsity pattern for $$V(x) = (g \circ f)^{(2)}( x ) R$$
Implementation
If you are using RevSparseHes, this virtual function must be defined by the atomic_user class.
vx
The argument vx has prototype const CppAD::vector<bool>& vx, with vx.size() == n, and for $j = 0 , \ldots , n-1$, vx[j] is true if and only if ax[j] is a variable in the corresponding call to afun(ax, ay)
s
The argument s has prototype const CppAD::vector<bool>& s and its size is m. It is a sparsity pattern for $S(x) = g^{(1)} [ f(x) ] \in B^{1 \times m}$.
t
This argument has prototype CppAD::vector<bool>& t and its size is m. The input values of its elements are not specified (must not matter). Upon return, t is a sparsity pattern for $T(x) \in B^{1 \times n}$ where $$T(x) = (g \circ f)^{(1)} (x) = S(x) * f^{(1)} (x)$$
q
The argument q has prototype size_t q. It specifies the number of columns in $R \in B^{n \times q}$, $U(x) \in B^{m \times q}$, and $V(x) \in B^{n \times q}$.
r
This argument has prototype const atomic_sparsity& r and is a atomic_sparsity pattern for $R \in B^{n \times q}$.
u
This argument has prototype const atomic_sparsity& u and is a atomic_sparsity pattern for $U(x) \in B^{m \times q}$ which is defined by $$\begin{array}{rcl} U(x) & = & \{ \partial_u \{ \partial_y g[ y + f^{(1)} (x) R u ] \}_{y=f(x)} \}_{u=0} \\ & = & \partial_u \{ g^{(1)} [ f(x) + f^{(1)} (x) R u ] \}_{u=0} \\ & = & g^{(2)} [ f(x) ] f^{(1)} (x) R \end{array}$$
v
This argument has prototype atomic_sparsity& v The input value of its elements are not specified (must not matter). Upon return, v is a atomic_sparsity pattern for $V(x) \in B^{n \times q}$ which is defined by $$\begin{array}{rcl} V(x) & = & \partial_u [ \partial_x (g \circ f) ( x + R u ) ]_{u=0} \\ & = & \partial_u [ (g \circ f)^{(1)}( x + R u ) ]_{u=0} \\ & = & (g \circ f)^{(2)}( x ) R \\ & = & f^{(1)} (x)^\R{T} g^{(2)} [ f(x) ] f^{(1)} (x) R + \sum_{i=1}^m g_i^{(1)} [ f(x) ] \; f_i^{(2)} (x) R \\ & = & f^{(1)} (x)^\R{T} U(x) + \sum_{i=1}^m S_i (x) \; f_i^{(2)} (x) R \end{array}$$
x
The argument x has prototype const CppAD::vector<Base>& x and its size is equal to n. It holds the value corresponding to the parameters in the vector ax (when the atomic function was called). To be specific, if( Parameter(ax[i]) == true ) x[i] = Value( ax[i] ); else x[i] = CppAD::numeric_limits<Base>::quiet_NaN(); The version of this function without the x argument is deprecated; i.e., you should include the argument even if you do not use it.
Examples
The file atomic_rev_sparse_hes.cpp contains an example and test that uses this routine. It returns true if the test passes and false if it fails.
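As a rough illustration of the sparsity algebra above (in Python/NumPy rather than CppAD's C++ API; the dimensions and patterns below are made up), the pattern of $V(x)$ can be propagated as the union of the $f^{(1)}(x)^\mathrm{T} U(x)$ contribution with the terms $S_i(x) f_i^{(2)}(x) R$:

```python
import numpy as np

# Boolean sparsity propagation for  V = f'(x)^T U + sum_i S_i * f_i''(x) * R
# with made-up sizes: n = 3 (domain), m = 2 (range), q = 2 columns.
rng = np.random.default_rng(0)
n, m, q = 3, 2, 2
jac = rng.random((m, n)) < 0.5          # sparsity pattern of f'(x)
hes = rng.random((m, n, n)) < 0.3       # sparsity pattern of each f_i''(x)
S   = np.array([True, False])           # sparsity pattern of S(x) = g'(f(x))
R   = rng.random((n, q)) < 0.5          # sparsity pattern of R
U   = rng.random((m, q)) < 0.5          # sparsity pattern of U(x), supplied by the caller

def bmat(A, B):                         # boolean "matrix product"
    return (A.astype(int) @ B.astype(int)) > 0

V = bmat(jac.T, U)                      # f'(x)^T U contribution
for i in range(m):
    if S[i]:
        V |= bmat(hes[i], R)            # S_i * f_i''(x) * R contributions
print(V.astype(int))                    # resulting n x q pattern for V(x)
```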
https://www.physicsforums.com/threads/low-impact-energy-high-impact-toughness-comparisons.887057/ | # Low impact energy, high impact toughness - comparisons
1. Sep 28, 2016
### raniero
Hi,
I have been doing some research about impact toughness of steel and found a paper comparing the toughnesses of 4 different steels using the Charpy impact test. The following is a link to the paper mentioned.
At 24 °C, SAE 4140 has the lowest impact energy, 13 J.
In the conclusion it is stated that the fracture toughness of SAE 4140 is the largest of all, at 68.61 MPa·m^(1/2).
How come the mentioned steel absorbed the least amount of energy and ended up being the toughest? Is it because it has a higher ultimate tensile strength?
Could it be considered a ductile or a brittle material? The high fracture toughness suggests it is ductile, but it surely did not deform as much as the other steels.
Note: I am still unsure of how to calculate the impact toughness from a Charpy energy test although I understand how the energy to fracture the specimen is obtained.
Last edited: Sep 28, 2016
2. Oct 3, 2016
### Greg Bernhardt
Thanks for the thread! This is an automated courtesy bump. Sorry you aren't generating responses at the moment. Do you have any further information, come to any new conclusions or is it possible to reword the post? The more details the better.
https://socratic.org/questions/how-do-you-simplify-and-write-2-5-times-10-3-times-520-in-scientific-notation | Algebra
# How do you simplify and write 2.5 times 10^3 times 520 in scientific notation?
Jul 24, 2016
#### Answer:
$1.3 \times {10}^{6}$
#### Explanation:
Let's get rid of the decimal!
Write $2.5 \times {10}^{3} \text{ as } 25 \times {10}^{2}$
Now we have:
$25 \times 520 \times {10}^{2} = 13000 \times {10}^{2}$
'~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Rather than write the solution straight off I am going to do the following so you can see what the stages are and their relationships.
$\textcolor{b l u e}{\text{All the following have the same value. They just look different!}}$
$13000 \times {10}^{2}$
$1300 \times {10}^{3}$
$130 \times {10}^{4}$
$13 \times {10}^{5}$
$1.3 \times {10}^{6} \leftarrow \text{ This is Scientific notation}$
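A quick computational check of the same normalisation (illustrative, not part of the original answer):

```python
from math import floor, log10

# 2.5 x 10^3 times 520, normalised so the mantissa lies in [1, 10).
value = 2.5e3 * 520
exponent = floor(log10(abs(value)))
mantissa = value / 10 ** exponent
print(f"{mantissa} x 10^{exponent}")   # 1.3 x 10^6
```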
https://www.groundai.com/project/limiting-absorption-principle-for-the-magnetic-dirichlet-laplacian-in-a-half-plane/ | Limit absorption for the half-plane magnetic Dirichlet Laplacian
# Limiting absorption principle for the magnetic Dirichlet Laplacian in a half-plane.
Nicolas Popoff Université de Bordeaux, IMB, UMR 5251, 33405 TALENCE cedex, France and Eric Soccorsi Aix Marseille Université, Université de Toulon, CNRS, CPT UMR 7332, 13288, Marseille, France
###### Abstract.
We consider the Dirichlet Laplacian in the half-plane with constant magnetic field. Due to the translational invariance this operator admits a fiber decomposition and a family of dispersion curves, that are real analytic functions. Each of them is simple and monotonically decreasing from positive infinity to a finite value, which is the corresponding Landau level. These finite limits are thresholds in the purely absolutely continuous spectrum of the magnetic Laplacian. We prove a limiting absorption principle for this operator both outside and at the thresholds. Finally, we establish analytic and decay properties for functions lying in the absorption spaces. We point out that the analysis carried out in this paper is rather general and can be adapted to a wide class of fibered magnetic Laplacians with thresholds in their spectrum that are finite limits of their band functions..
AMS 2000 Mathematics Subject Classification: 35J10, 81Q10, 35P20.
Keywords: Two-dimensional Schrödinger operators, constant magnetic field, limit absorption, thresholds.
## 1. Introduction
In the present article we consider the Hamiltonian with magnetic potential $A(x,y) = (0,x)$, defined in the half-plane $\Omega = \{(x,y) \in \mathbb{R}^2,\ x>0\}$. We impose Dirichlet boundary conditions at $x=0$ and introduce the self-adjoint realization
$$H := -\partial_x^2 + (-i\partial_y - x)^2,$$
initially defined on $C_0^\infty(\Omega)$ and then closed in $L^2(\Omega)$. This operator models the planar motion of a quantum charged particle (the various physical constants are taken equal to $1$) constrained to $\Omega$ and submitted to an orthogonal magnetic field of strength $1$; it has already been studied in several articles (e.g., [8, 15, 6, 16, 20]).
The Schrödinger operator $H$ is translationally invariant in the $y$-direction and admits a fiber decomposition with fiber operators which have purely discrete spectrum. The corresponding dispersion curves (also named band functions in this text) are real analytic functions of the fiber variable $k$; the $n$-th one decreases monotonically from positive infinity to the $n$-th Landau level $E_n$. As a consequence, the spectrum of $H$ is purely absolutely continuous and equals the interval $[E_1,+\infty)$. Hence the resolvent operator $R(z) := (H-z)^{-1}$ depends analytically on $z$ in the resolvent set, and is well defined for every $z \in \mathbb{C}\setminus\mathbb{R}$.
Since has a continuous spectrum, the spectral projector of , associated with the interval , , expresses as
$$E(a,b) = \frac{1}{2i\pi} \lim_{\varepsilon \downarrow 0} \int_a^b \big( R(\lambda + i\varepsilon) - R(\lambda - i\varepsilon) \big)\, d\lambda,$$
by the spectral theorem. Suitable functions of the operator may therefore be expressed in terms of the limits of the resolvent operators for . As a matter of fact the Schrödinger propagator associated with reads
$$e^{-itH} = \frac{1}{2i\pi} \lim_{\varepsilon \downarrow 0} \int_{E_1}^{+\infty} e^{-it\lambda} \big( R(\lambda + i\varepsilon) - R(\lambda - i\varepsilon) \big)\, d\lambda, \quad t>0.$$
This motivates for a quantitative version of the convergence , for , known as the limiting absorption principle (abbreviated to LAP in the sequel). Notice moreover that a LAP is a useful tool for the analysis of the scattering properties of , and more specifically for the proof of the existence and the completeness of the wave operators (see e.g. [25, Chap. XI]). The main purpose of this article is to establish a LAP for . That is, for each , we aim to prove that has a limit as , in a suitable sense we shall make precise further.
There is actually a wide mathematical literature on LAP available for various operators of mathematical physics (see e.g. [29, 10, 19, 1, 28, 27, 3, 9]). More specifically, the case of analytically fibered self-adjoint operators was addressed in e.g. [7, 13, 26]. Such an operator is unitarily equivalent to the multiplier by a family of real analytic dispersion curves, so its spectrum is the closure of the range of its band functions. Generically, energies associated with a “flat” of any of the band functions , are thresholds in the spectrum of . More precisely, a threshold of the operator is any real number satisfying for some and all neighborhoods of in . We call the set of thresholds.
The occurrence of a LAP outside the thresholds of analytically fibered operators is a rather standard result. It is tied to the existence of a Mourre inequality at the prescribed energies (see [13, 12]), arising from the non-zero velocity of the dispersion curves for the corresponding frequencies. More precisely, given an arbitrary compact subset , we shall extend to a Hölder continuous function on in the norm-topology of for any . Here and henceforth the Hilbert space
$$L^{2,\sigma}(\Omega) := \{ u : \Omega \to \mathbb{C} \ \text{measurable},\ (x,y) \mapsto (1+y^2)^{\sigma/2} u(x,y) \in L^2(\Omega) \},$$
is endowed with the scalar product .
Evidently, local extrema of the dispersion curves are thresholds in the spectrum of fibered operators. Any such energy being a critical point of some band function, it is referred as an attained threshold. Actually, numerous operators of mathematical physics modeling the propagation of acoustic, elastic or electromagnetic waves in stratified media [7, 4, 5, 26] and various magnetic Hamiltonians [11, 15, 30] have all their thresholds among local minima of their band functions. A LAP at an attained threshold may be obtained upon imposing suitable vanishing condition (depending on the level of degeneracy of the critical point) on the Fourier transform of the functions in , at the corresponding frequency. See [4, 26] for the analysis of this problem in the general case.
Nevertheless, none of the above mentioned papers seems relevant for our operator . This comes from the unusual behavior of the band functions of at infinity: In the framework examined in this paper, there exists a countable set of thresholds in the spectrum of , but in contrast with the situations examined in [4, 26], none of these thresholds are attained. This peculiar behavior raises several technical problems in the derivation of a LAP for at , . Nevertheless, for any arbitrary compact subset (which may contain one or several thresholds for ) we shall establish in Theorem 2.5 a LAP for in for the topology of the norm in , where is a suitable subspace of , for an appropriate , which is dense in . The space is made of -functions, with smooth Fourier coefficients vanishing suitably at the thresholds of lying in . Otherwise stated there is an actual LAP at , , even though is a non attained threshold of . Moreover, it turns out that the method developed in the derivation of a LAP for is quite general and may be generalized to a wide class of fibered operators (such as the ones examined in [15, 30, 6, 17, 23]) with non attained thresholds in their spectrum.
Finally, functions in exhibit interesting geometrical properties. Namely, assuming that , it turns out that the asymptotic behavior of the -th band function of at positive infinity (computed in [16, Theorem 1.4]) translates into super-exponential decay in the -variable (orthogonal to the edge) of their -th harmonic, see Theorem 3.5. Such a behavior is typical of magnetic Laplacians, as explained in Remark 3.7.
### 1.1. Spectral decomposition associated with the model
Let us now collect some useful information on the fiber decomposition of the operator .
The Schrödinger operator is translationally invariant in the longitudinal direction and therefore allows a direct integral decomposition
(1) $$\mathcal{F}_y H \mathcal{F}_y^* = \int_{\mathbb{R}}^{\oplus} h(k)\, dk,$$
where $\mathcal{F}_y$ denotes the partial Fourier transform with respect to $y$ and the fiber operator $h(k) := -\partial_x^2 + (x-k)^2$ acts in $L^2(\mathbb{R}_+)$ with a Dirichlet boundary condition at $x=0$. Since the effective potential $(x-k)^2$ is unbounded as $x$ goes to infinity, each $h(k)$, $k \in \mathbb{R}$, has a compact resolvent, hence a purely discrete spectrum. We note $\{\lambda_n(k),\ n \in \mathbb{N}^*\}$ the non-decreasing sequence of the eigenvalues of $h(k)$, each of them being simple. Furthermore, for $k \in \mathbb{R}$ we introduce a family $\{u_n(\cdot,k),\ n \in \mathbb{N}^*\}$ of eigenfunctions of the operator $h(k)$, which satisfy
$$h(k)\, u_n(x,k) = \lambda_n(k)\, u_n(x,k), \quad x \in \mathbb{R}_+^*,$$
and form an orthonormal basis in .
As is a Kato analytic family, the functions , , are analytic (see e.g. [24, Theorem XII.12]). Moreover they are monotonically decreasing in according to [8, Lemma 2.1 (ii)] and the Max-Min principle yields (see [8, Lemma 2.1 (iii) and (v)])
$$\lim_{k \to -\infty} \lambda_n(k) = +\infty \quad \text{and} \quad \lim_{k \to +\infty} \lambda_n(k) = E_n, \quad n \in \mathbb{N}^*.$$
Therefore, the general theory of fibered operators (see e.g. [24, Section XIII.16]) implies that the spectrum of is purely absolutely continuous, with
$$\sigma(H) = \overline{\bigcup_{n \in \mathbb{N}^*} \lambda_n(\mathbb{R})} = [E_1, +\infty).$$
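Not part of the paper, but a quick finite-difference check of this picture: discretize the fiber operator $h(k) = -d^2/dx^2 + (x-k)^2$ on a truncated interval with a Dirichlet condition at $x=0$ and watch its lowest eigenvalues decrease, as $k$ grows, towards the Landau levels (with this normalization, $E_n = 2n-1$):

```python
import numpy as np

# Finite-difference approximation of h(k) = -d^2/dx^2 + (x - k)^2 on (0, L),
# with Dirichlet conditions at both ends (L chosen large enough to mimic the half-line).
L, N = 30.0, 600
x = np.linspace(0.0, L, N + 2)[1:-1]     # interior grid points
h = x[1] - x[0]

def band_values(k, how_many=3):
    main = 2.0 / h**2 + (x - k) ** 2
    off = -np.ones(N - 1) / h**2
    Hk = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(Hk)[:how_many]

for k in [-2.0, 0.0, 3.0, 6.0, 10.0]:
    print(k, band_values(k))             # lambda_n(k) decreases towards 1, 3, 5 as k grows
```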
For all $n \in \mathbb{N}^*$ we define the Fourier coefficient of $f \in L^2(\Omega)$ by
$$f_n(k) := \langle (\mathcal{F}_y f)(\cdot,k), u_n(\cdot,k) \rangle_{L^2(\mathbb{R}_+)}, \quad k \in \mathbb{R},$$
and introduce its harmonic as
(2) $$\Pi_n f(x,y) := \int_{\mathbb{R}} e^{iky} f_n(k)\, u_n(x,k)\, dk, \quad (x,y) \in \Omega.$$
In view of (1), we have the standard Fourier decomposition in and the following Parseval identity
(3) $$\|f\|^2_{L^2(\Omega)} = \sum_{n \in \mathbb{N}^*} \|f_n\|^2_{L^2(\mathbb{R})},$$
involving that the linear mapping is continuous from into for each . Let us now recall the following useful properties of the restriction of to for (see e.g. [7, Proposition 3.2]).
###### Lemma 1.1.
Fix . Then the operators are uniformly bounded with respect to from into :
(4) $$\exists C(s) > 0,\ \forall n \in \mathbb{N}^*,\ \forall f \in L^{2,s}(\Omega),\ \forall k \in \mathbb{R}, \quad |f_n(k)| \le C(s)\, \|f\|_{L^{2,s}(\Omega)}.$$
Moreover for any , each operator , , is bounded from into , the set of locally Hölder continuous functions in , of exponent . Namely, there exists a function such that,
(5) $$\forall f \in L^{2,s}(\Omega),\ \forall (k,k') \in \mathbb{R}^2, \quad |f_n(k') - f_n(k)| \le C_{n,\alpha,s}(k,k')\, \|f\|_{L^{2,s}(\Omega)}\, |k'-k|^{\alpha}.$$
## 2. Limiting absorption principle
For all $z \in \mathbb{C}\setminus\mathbb{R}$ and $f, g \in L^2(\Omega)$, standard functional calculus yields
(6) $$\langle R(z) f, g \rangle_{L^2(\mathbb{R}^2_+)} = \sum_{n \ge 1} r_n(z) \quad \text{with} \quad r_n(z) := \int_{\mathbb{R}} \frac{f_n(k)\, \overline{g_n(k)}}{\lambda_n(k) - z}\, dk, \quad n \in \mathbb{N}^*.$$
Since for each , the function is analytic on so is well defined. In light of (6), it suffices that each , with , be suitably extended to some locally Hölder continuous function in , to derive a LAP for the operator .
### 2.1. Singular Cauchy integrals
Let be fixed. Bearing in mind that is an analytic diffeomorphism from onto , we note the function inverse to and put for any function . Then, upon performing the change of variable in the integral appearing in (6), we get for every that
(7) $$r_n(z) = \int_{I_n} \frac{H_n(\lambda)}{\lambda - z}\, d\lambda \quad \text{with} \quad H_n := \frac{\tilde f_n\, \overline{\tilde g_n}}{\tilde \lambda'_n} = \frac{(f_n \circ \lambda_n^{-1})\, \overline{(g_n \circ \lambda_n^{-1})}}{\lambda'_n \circ \lambda_n^{-1}}.$$
Therefore, the Cauchy integral is singular for . Our main tool for extending singular Cauchy integrals of this type to locally Hölder continuous functions in is the Plemelj-Privalov Theorem (see e.g. [22, Chap. 2, §22]), stated below.
###### Lemma 2.1.
Let and let , where is an open bounded subinterval of . Then the mapping , defined in , satisfies for every :
$$\lim_{\varepsilon \downarrow 0} r(\lambda \pm i\varepsilon) = r^{\pm}(\lambda) := \mathrm{p.v.}\left( \int_I \frac{\psi(t)}{t - \lambda}\, dt \right) \pm i\pi\, \psi(\lambda).$$
Moreover the function
r±(λ):={r(λ)if λ∈¯¯¯¯¯¯¯C±∖¯¯¯Ir±(λ)if λ∈I,
is analytic in and locally Hölder continuous of order in in the sense that there exists such that
∀z,z′∈¯¯¯¯¯¯¯C±∖{a,b}, |r±(z′)−r±(z)|≤∥ψ∥C0,α(¯¯I)CI,α(z,z′)|z′−z|α.
In addition, if , then extends to a locally Hölder continuous function of order in .
### 2.2. Limiting absorption principle outside the thresholds
In this subsection we establish a LAP for outside its thresholds . This is a rather standard result that we state here for the convenience of the reader. For the sake of completeness we also recall its proof, which requires several ingredients that are useful in the derivation of the main result of subsection 2.3.
###### Proposition 2.2.
Let be a compact subset of . Then for all and any , the resolvent extends to in a Hölder continuous function of order , still denoted by , for the topology of the norm in . Namely there exists a constant , such that the estimate
$$\|(R^{\pm}(z') - R^{\pm}(z)) f\|_{L^{2,-s}(\Omega)} \le C\, \|f\|_{L^{2,s}(\Omega)}\, |z'-z|^{\alpha}$$
holds for all and all .
###### Proof.
Let and be in . The notations below refer to (6)–(7). Since is bounded there is necessarily such that . This and (3) entail through straightforward computations that is Lipschitz continuous in , with
(8) ∀z∈K, ∥∥ ∥∥∑m≥Nrm(z)∥∥ ∥∥C0,1(K)≤d−2K∥f∥L2(Ω)∥g∥L2(Ω).
Thus it suffices to examine each , for , separately. Using that is a compact subset of we pick an open bounded subinterval , with , such that . With reference to (7) we have the following decomposition for each ,
(9) rm(z)=rm(z;I)+rm(z;Im∖¯¯¯I) where rm(z,J):=∫JHm(z)λ−zdλ for any J⊂Im.
Since , and are both Lipschitz continuous in , and . Therefore we deduce from Lemma 1.1 that and verifies
(10) ∥Hm∥C0,α(¯¯I)≤cm∥f∥L2,s(Ω)∥g∥L2,s(Ω),
for some constant which is independent of and . From this and Lemma 2.1 then follows that extends to a locally Hölder continuous function of order in , satisfying
(11) ∥rm(⋅;I)∥C0,α(K∩¯¯¯¯¯¯¯C±)≤Cm∥Hm∥C0,α(¯¯I),
where the positive constant depends neither on nor on .
Next, as the Euclidean distance between and is positive, from the very definition of , we get that , since is compact. Therefore is Lipschitz continuous in and satisfies
(12) ∥rm(⋅;Im∖¯¯¯I)∥C0,1(K)≤δK(I)−2∥f∥L2(Ω)∥g∥L2(Ω),
according to (3).
Finally, putting (8)-(9) and (10)-(12) together, and recalling that the injection is continuous, we end up getting a constant , such that the estimate
|⟨R±(z)f,g⟩L2(Ω)−⟨R±(z′)f,g⟩L2(Ω)|≤C∥f∥L2,s(Ω)∥g∥L2,s(Ω)|z−z′|α,
holds uniformly in and . The result follows from this and the fact that and the space of continuous linear forms on are isometric, with the duality pairing
$$\forall f \in L^{2,-s}(\Omega),\ \forall g \in L^{2,s}(\Omega), \quad \langle f, g \rangle_{L^{2,-s}(\Omega), L^{2,s}(\Omega)} := \int_{\Omega} f(x,y)\, \overline{g(x,y)}\, dx\, dy.$$
### 2.3. Limiting absorption principle at the thresholds
We now examine the case of a compact subset containing one or several thresholds of , i.e. such that . Since then the bounded set contains at most a finite number of thresholds. For the sake of clarity, we first investigate the case where contains exactly one threshold:
(13) $$\exists n \in \mathbb{N}^*, \quad K \cap T = \{E_n\}.$$
The target here is the same as in subsection 2.2, that is to establish a LAP for in . Actually, for all and any , it is clear from the proof of Proposition 2.2 that can be regarded as a -Hölder continuous function in with values in .
Thus we are left with the task of suitably extending in . But, as may actually blow up as tends to , obviously the method used in the proof of Proposition 2.2 does not apply to when lies in the vicinity of . This is due to the vanishing of the denominator of in (7) as approaches , or, equivalently, to the flattening of when goes to . We shall compensate this peculiar asymptotic behavior of the dispersion curve by imposing appropriate conditions on the functions and so the numerator decays sufficiently fast at . This require that the following useful functional spaces be preliminarily introduced.
##### Suitable functional spaces.
For any open subset and the non vanishing function
(14) $$\mu_n := |\lambda'_n \circ \lambda_n^{-1}|^{-1/2},$$
on , we denote by 111Since is not defined at then it is understood in the peculiar case where that if and only if extends continuously to a function lying in .(resp. ), the -weighted space of Hölder continuous functions of order (resp., square integrable functions) in . Endowed with the norm (resp., ), (resp., ) is a Banach space since this is the case for (resp., ).
Further, the above definitions translate through the linear isometry from into , to
Kαn(R)=Λ−1n(C0,αμn(¯¯¯In)∩L2μn(In)):={f∈L2(R), Λnf∈C0,αμn(¯¯¯In)},
which is evidently a Banach space for the norm . As a consequence the set
equipped with its natural norm is a Banach space as well.
On we define the linear form . Notice from the embedding , that is well defined since extends to a continuous function in . Furthermore, we have for any so the linear form is continuous on . Let us now introduce the subspace
Xs,αn,0(Ω):= Xs,αn(Ω)∩(Λnπn)−1(kerδEn) = {f∈L2,s(Ω), μn~fn∈C0,α(¯¯¯In)∩L2(In) and (μn~fn)(En)=0},
where, as usual, stands for . Since is continuous then is closed in . Therefore it is a Banach space for the norm
∥f∥Xs,αn(Ω)=∥~fn∥C0,αμn(¯¯In)+∥~fn∥L2μn(In)+∥f∥L2,s(Ω).
Moreover, being dense in , we deduce from the imbedding that is dense in for the usual norm-topology.
Summing up, we have obtained the:
###### Lemma 2.3.
The set is a Banach space and is dense in .
##### Absorption at En.
Having defined for fixed, we now derive a LAP at for the restriction of the operator to associated with suitable values of and .
###### Proposition 2.4.
Let be a compact subset of obeying (13) and let . Then, for every and , both limits exist in the uniform operator topology on . Moreover the resolvent extends to a Hölder continuous on with order ; Namely there exists such that we have
∀z,z′∈K∩¯¯¯¯¯¯¯C±, ∀f∈Xs,α0,n(Ω), ∥(R±(z)−R±(z′))f∥(Xs,α0,n(Ω))′≤C|z−z′|s,α∥f∥Xs,αn(Ω).
###### Proof.
It is clear from (13) upon mimicking the proof of Proposition 2.2, that extends to an -Hölder continuous function in , denoted by , satisfying
(15) ∥∥ ∥∥∑m≠nr±m∥∥ ∥∥C0,α(K∩¯¯¯¯¯¯¯C±)≤c∥f∥L2,s(Ω)∥g∥L2,s(Ω),
for some constant that depends only on , and .
We turn now to examining . Taking into account that is bounded we pick so large that and refer once more to the proof of Proposition 2.2. We get that extends to a Hölder continuous function of exponent in , with
(16) ∥∥r±n(⋅;In∖¯¯¯¯J)∥∥C0,α(K∩¯¯¯¯¯¯¯C±)≤c′∥f∥L2,s(Ω)∥g∥L2,s(Ω),
where is a constant depending only on , and and .
Finally, since and are taken in then the function , defined in (7), is -Hölder continuous in , and we have
(17) ∥Hn∥C0,α(¯¯¯J)≤∥Λnfn∥C0,αμn(¯¯¯J)∥Λngn∥C0,αμn(¯¯¯J)≤∥fn∥Kαn(R)∥gn∥Kαn(R)≤∥f∥Xαn(Ω)∥g∥Xαn(Ω).
Bearing in mind that and , we deduce from (17) and Lemma 2.1 that extends to an -Hölder continuous function, still denoted by , in , obeying
(18) ∥r±n(⋅;J)∥C0,α(K∩¯¯¯¯¯¯¯C±)≤c′∥Hn∥C0,α(¯¯¯J)≤c′∥f∥Xs,αn(Ω)∥g∥Xs,αn(Ω),
where is the same as in (16). Finally, putting (15)-(16) and (18) together, we end up getting a constant , which is independent of and , such that we have
∀z,z′∈K∩¯¯¯¯¯¯¯C±, |⟨R±(z)f,g⟩L2(Ω)−⟨R±(z′)f,g⟩L2(Ω)|≤C|z−z′|α∥f∥Xs,αn(Ω)∥g∥Xs,αn(Ω).
Here we used the basic identity and the continuity of embedding . This entails the desired result. ∎
For any compact subset , the set is finite. Then upon substituting for in the proof of Proposition 2.4, it is apparent that we obtain the:
###### Theorem 2.5.
Let be compact, and let and be the same as in Proposition 2.4. Then the resolvent
https://gilkalai.wordpress.com/tag/borsuks-conjecture/ | # Around Borsuk’s Conjecture 3: How to Save Borsuk’s conjecture
Borsuk asked in 1933 if every bounded set K of diameter 1 in $R^d$ can be covered by d+1 sets of smaller diameter. A positive answer was referred to as the “Borsuk Conjecture,” and it was disproved by Jeff Kahn and me in 1993. Many interesting open problems remain. The first two posts in the series “Around Borsuk’s Conjecture” are here and here. See also these posts (I,II,III, IV), and the post “Surprises in mathematics and theory” on Lipton and Reagan’s blog GLL.
Can we save the conjecture? We can certainly try, and in this post I would like to examine the possibility that Borsuk’s conjecture is correct except from some “coincidental” sets. The question is how to properly define “coincidental.”
Let K be a set of points in $R^d$ and let A be a set of pairs of points in K. We say that the pair (K, A) is general if for every continuous deformation of the distances on A there is a deformation K’ of K which realizes the deformed distances.
(This condition is related to the “strong Arnold property” (aka “transversality”) in the theory of Colin de Verdière invariants of graphs; see also this paper by van der Holst, Lovasz and Schrijver.)
Conjecture 1: If D is the set of diameters in K and (K,D) is general then K can be partitioned into d+1 sets of smaller diameter.
We propose also (somewhat stronger) that this conjecture holds even when “continuous deformation” is replaced with “infinitesimal deformation”.
The finite case is of special interest:
A graph embedded in $R^d$ is stress-free if we cannot assign non-trivial weights to the edges so that the weighted sum of the edges containing any vertex v (regarded as vectors from v) is zero for every vertex v. (Here we embed the vertices and regard the edges as straight line segments. (Edges may intersect.) Such a graph is called a “geometric graph”.) When we restrict Conjecture 1 to finite configurations of points we get.
Conjecture 2: If G is a stress free geometric graph of diameters in $R^d$ then G is (d+1)-colorable.
A geometric graph of diameters is a geometric graph with all edges having the same length and all non edged having smaller lengths. The attempt for “saving” the Borsuk Conjecture presented here and Conjectures 1 and 2 first appeared in a 2002 collection of open problems dedicated to Daniel J. Kleitman, edited by Douglas West.
When we consider finite configurations of points we can make a similar conjecture for the minimal distances:
Conjecture 3: If the geometric graph of pairs of vertices realizing the minimal distances of a point-configuration in $R^d$ is stress-free, then it is (d+1)-colorable.
We can speculate that even the following stronger conjectures are true:
Conjecture 4: If G is a stress-free geometric graph in $R^d$ so that all edges in G are longer than all non-edges of G, then G is (d+1)-colorable.
Conjecture 5: If G is a stress-free geometric graph in $R^d$ so that all edges in G are shorter than all non-edges of G, then G is (d+1)-colorable.
We can even try to extend the condition further so edges in the geometric graph will be larger (or smaller) than non-edges only just “locally” for neighbors of each given vertex.
1) It is not true that every stress-free geometric graph in $R^d$ is (d+1)-colorable, and not even that every stress-free unit-distance graph is (d+1)-colorable. Here is the (well-known) example referred to as the Moser Spindle. Finding conditions under which stress-free graphs in $R^d$ are (d+1)-colorable is an interesting challenge.
2) Since a stress-free graph with n vertices has at most $dn - {{d+1} \choose {2}}$ edges it must have a vertex of degree 2d-1 or less and hence it is 2d colorable. I expect this to be best possible but I am not sure about it. This shows that our “saved” version of Borsuk’s conjecture is of very different nature from the original one. For graphs of diameters in $R^d$ the chromatic number can, by the work of Jeff and me be exponential in $\sqrt d$.
3) It would be interesting to show that conjecture 1 holds in the non-discrete case when d+1 is replaced by 2d.
4) Coloring vertices of geometric graphs where the edges correspond to the minimal distance is related also to the well-known Erdős–Faber–Lovász conjecture.
See also this 1994 article by Jeff Kahn on Hypergraphs matching, covering and coloring problems.
5) The most famous conjecture regarding coloring of graphs is, of course, the four-color conjecture asserting that every planar graph is 4-colorable that was proved by Appel and Haken in 1976. Thinking about the four-color conjecture is always both fascinating and frustrating. An embedding for maximal planar graphs as vertices of a convex 3-dimensional polytope is stress-free (and so is, therefore, also a generic embedding), but we know that this property alone does not suffice for 4-colorability. Finding further conditions for stress-free graphs in $R^d$ that guarantee (d+1)-colorability can be relevant to the 4CT.
An old conjecture of mine asserts that
Conjecture 6: Let G be a graph obtained from the graph of a d-polytope P by triangulating each (non-triangular) face with non-intersecting diagonals. If G is stress-free (in which case the polytope P is called “elementary”) then G is (d+1)-colorable.
Closer to the conjectures of this post we can ask:
Conjecture 7: If G is a stress-free geometric graph in $R^d$ so that for every edge e of G is tangent to the unit ball and every non edge of G intersect the interior of the unit ball, then G is (d+1)-colorable.
### A question that I forgot to include in part I.
What is the minimum diameter $d_n$ such that the unit ball in $R^n$ can be covered by n+1 sets of smaller diameter? It is known that $2-C'\log n/n \le d_n\le 2-C/n$ for some constants C and C’.
# Andriy Bondarenko Showed that Borsuk’s Conjecture is False for Dimensions Greater Than 65!
### The news in brief
Andriy V. Bondarenko proved in his remarkable paper The Borsuk Conjecture for two-distance sets that Borsuk's conjecture is false for all dimensions greater than 65. This is a substantial improvement on the earlier record (all dimensions above 298) by Aicke Hinrichs and Christian Richter.
### Borsuk’s conjecture
Borsuk’s conjecture asserted that every set of diameter 1 in d-dimensional Euclidean space can be covered by d+1 sets of smaller diameter. (Here are links to a post describing the disproof by Kahn and me and a post devoted to problems around Borsuk’s conjecture.)
### Two questions posed by David Larman
David Larman posed in the ’70s two basic questions about Borsuk’s conjecture:
1) Does the conjecture hold for collections of 0-1 vectors (of constant weight)?
2) Does the conjecture hold for 2-distance sets? 2-distance sets are sets of points such that the pairwise distances between any two of them have only two values.
### Reducing the dimensions for which Borsuk’s conjecture fails
In 1993 Jeff Kahn and I disproved Borsuk’s conjecture in dimension 1325 and all dimensions greater than 2014. Larman’s first conjecture played a special role in our work. While being a special case of Borsuk’s conjecture, it looked much less correct.
The lowest dimension for a counterexample were gradually reduced to 946 by A. Nilli, 561 by A. Raigorodskii, 560 by Weißbach, 323 by A. Hinrichs and 320 by I. Pikhurko. Currently the best known result is that Borsuk’s conjecture is false for n ≥ 298; The two last papers relies strongly on the Leech lattice.
Bondarenko proved that Borsuk's conjecture is false for all dimensions greater than 65. For this he disproved Larman's second conjecture.
### Bondarenko’s abstract
In this paper we answer Larman’s question on Borsuk’s conjecture for two-distance sets. We found a two-distance set consisting of 416 points on the unit sphere in the dimension 65 which cannot be partitioned into 83 parts of smaller diameter. This also reduces the smallest dimension in which Borsuk’s conjecture is known to be false. Other examples of two-distance sets with large Borsuk’s numbers will be given.
### Two-distance sets
There was much interest in understanding sets of points in $R^n$ which have only two pairwise distances (or K pairwise distances). Larman, Rogers and Seidel proved that the maximum number can be at most (n+1)(n+4)/2 and Aart Blokhuis improved the bound to (n+1)(n+2)/2. The set of all 0-1 vectors of length n+1 with two ones gives an example with n(n+1)/2 vectors.
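A quick brute-force check (illustrative) that these weight-two 0-1 vectors really form a two-distance set of size $n(n+1)/2$:

```python
import numpy as np
from itertools import combinations

n = 7
vecs = [np.eye(n + 1)[i] + np.eye(n + 1)[j] for i, j in combinations(range(n + 1), 2)]
dists = {round(np.linalg.norm(u - v), 6) for u, v in combinations(vecs, 2)}
print(len(vecs), n * (n + 1) // 2)   # 28 28 -- as many vectors as claimed
print(dists)                         # {1.414214, 2.0} -- exactly two distinct distances
```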
### Equiangular lines
This is a good opportunity to mention another question related to two-distance sets. Suppose that you have a set of lines through the origin in $R^n$ so that the angles between any two of them is the same. Such a set is called an equiangular set of lines. Given such a set of cardinality m, if we take on each line one unit vector, this gives us a 2-distance set. It is known that m ≤ n(n+1)/2 but for a long time it was unknown if a quadratic set of equiangular lines exists in high dimensions. An exciting breakthrough came in 2000 when Dom deCaen constructed a set of equiangular lines in $R^n$ with $2/9(n+1)^2$ lines for infinitely many values of n.
### Strongly regular graphs
Strongly regular graphs are central in the new examples. A graph is strongly regular if every vertex has k neighbors, every adjacent pair of vertices have λ common neighbors and every non-adjacent pair of vertices have μ common neighbors. The study of strongly regular graphs (and other notions of strong regularity/symmetry) is a very important area in graph theory which involves deep algebra and geometry. Andriy’s construction is based on a known strongly regular graph $G_2(4)$.
# Around Borsuk’s Conjecture 1: Some Problems
Greetings to all!
Karol Borsuk conjectured in 1933 that every bounded set in $R^d$ can be covered by $d+1$ sets of smaller diameter. In a previous post I described the counterexample found by Jeff Kahn and me. I will devote a few posts to related questions that are still open. I will list and discuss such questions in the first and second parts. In the third part I will describe an approach towards better examples which is related to interesting extremal combinatorics. (On cocycles; this post appeared already.) In the fourth part I will try to “save the conjecture”, namely to present a variation of the conjecture which might be true. Let f(d) be the smallest integer so that every set of diameter one in $R^d$ can be covered by f(d) sets of smaller diameter.
# The Combinatorics of Cocycles and Borsuk’s Problem.
## Cocycles
Definition: A $k$-cocycle is a collection of $(k+1)$-subsets such that every $(k+2)$-set $T$ contains an even number of sets in the collection.
Alternative definition: Start with a collection $\cal G$ of $k$-sets and consider all $(k+1)$-sets that contain an odd number of members in $\cal G$.
It is easy to see that the two definitions are equivalent. (This equivalence expresses the fact that the $k$-cohomology of a simplex is zero.) Note that the symmetric difference of two cocycles is a cocycle. In other words, the set of $k$-cocycles form a subspace over Z/2Z, i.e., a linear binary code.
1-cocycles correspond to the set of edges of a complete bipartite graph. (Or, in other words, to cuts in the complete graphs.) The number of edges of a complete bipartite graph on $n$ vertices is of the form $k(n-k)$. There are $2^{n-1}$ 1-cocycles on $n$ vertices altogether, and there are $n \choose k$ cocycles with $k(n-k)$ edges.
2-cocycles were studied under the name “two-graphs”. Their study was initiated by J. J. Seidel.
Let $e(k,n)$ be the number of $k$-cocycles.
Lemma: Two collections of $k$-sets (in the second definition) generate the same $k$-cocycle if and only if their symmetric difference is a $(k-1)$-cocycle.
It follows that $e(k,n)= 2^{{n}\choose {k}}/e(k-1,n).$ So $e(k,n)= 2^{{n-1} \choose {k}}$.
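A brute-force check of this count for small parameters (illustrative; it enumerates the cocycles generated as in the second definition above):

```python
from itertools import combinations

def cocycles(k, n):
    ksets = list(combinations(range(n), k))
    k1sets = list(combinations(range(n), k + 1))
    seen = set()
    for r in range(len(ksets) + 1):
        for G in combinations(ksets, r):             # a collection of k-sets
            Gset = set(G)
            coc = frozenset(T for T in k1sets         # (k+1)-sets meeting G in an odd number of k-sets
                            if sum(S in Gset for S in combinations(T, k)) % 2)
            seen.add(coc)
    return seen

for k, n in [(1, 4), (1, 5), (2, 4), (2, 5)]:
    predicted = 2 ** len(list(combinations(range(n - 1), k)))   # 2^{C(n-1, k)}
    print(k, n, len(cocycles(k, n)), predicted)                 # the counts agree
```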
A very basic question is:
Problem 1: For $k$ odd what is the maximum number $f(k,n)$ of $(k+1)$-sets of a $k$-cocycle with $n$ vertices?
When $k$ is even, the set of all $(k+1)$-subsets of {1,2,…,n} is a cocycle.
Problem 2: What is the value of $m$ such that the number $ef(k,n,m)$ of $k$-cocycles with $n$ vertices and $m$ $k$-sets is maximum?
When $k$ is even the complement of a cocycle is a cocycle and hence $ef(k,n,m)=ef(k,n,{{n}\choose{k+1}}-m)$. It is likely that in this case $ef(k,n,m)$ is a unimodal sequence (apart from zeroes), but I don’t know if this is known. When $k$ is odd it is quite possible that (again, ignoring zero entries) $ef(k,n,m)$ is unimodal, attaining its maximum when $m=\frac{1}{2} {{n} \choose {k+1}}$.
## Borsuk’s conjecture, Larman’s conjecture and bipartite graphs
Karol Borsuk conjectured in 1933 that every bounded set in $R^d$ can be covered by $d+1$ sets of smaller diameter. David Larman proposed a purely combinatorial special case (that looked much less correct than the full conjecture.)
Larman’s conjecture: Let $\cal F$ be an $r$-intersecting family of $k$-subsets of $\{1,2,\dots, n\}$, namely $\cal F$ has the property that every two sets in the family have at least $r$ elements in common. Then $\cal F$ can be divided into $n$ $(r+1)$-intersecting families.
Larman’s conjecture is a special case of Borsuk’s conjecture: Just consider the set of characteristic vectors of the sets in the family (and note that they all belong to a hyperplane.) The case $r=1$ of Larman’s conjecture is open and very interesting.
A slightly more general case of Borsuk’s conjecture is for sets of 0-1 vectors (or equivalently $\pm 1$ vectors. Again you can consider the question in terms of covering a family of sets by subfamilies. Instead of intersection we should consider symmetric differences.
Borsuk 0-1 conjecture: Let $\cal F$ be a family of subsets of $\{1,2,\dots, n\}$, and suppose that the symmetric difference between every two sets in $\cal F$ has at most $t$ elements. Then $\cal F$ can be divided into $n+1$ families such that the symmetric difference between any pair of sets in the same family is at most $t-1$.
## Cuts and complete bipartite graphs
The construction of Jeff Kahn and myself can be described as follows:
Construction 1: The ground set is the set of edges of the complete graph on $4p$ vertices. The family $\cal F$ consists of all subsets of edges which represent the edge sets of a complete bipartite graph with $2p$ vertices in every part. In this case, $n={{4p} \choose {2}}$, $k=4p^2$, and $r=2p^2$.
It turns out (as observed by A. Nilli) that there is no need to restrict ourselves to balanced bipartite graphs. A very similar construction which performs even slightly better is:
Construction 2: The ground set is the set of edges of the complete graph on $4p$ vertices. The family $\cal F$ consists of all subsets of edges which represent the edge set of a complete bipartite graph.
Let $f(d)$ be the smallest integer such that every set of diameter 1 in $R^d$ can be covered by $f(d)$ sets of smaller diameter. Constructions 1 and 2 show that $f(d) >exp (K\sqrt d)$. We would like to replace $d^{1/2}$ by a larger exponent.
## The proposed constructions.
To get better bounds for Borsuk’s problem we propose to replace complete bipartite graphs with higher odd-dimensional cocycles.
Construction A: Consider all $(2k-1)$-dimensional cocycles of maximum size (or of another fixed size) on the ground set $\{1,2,\dots,n\}$.
Construction B: Consider all $(2k-1)$-dimensional cocycles on the ground set $\{1,2,\dots,n\}$.
## A Frankl-Wilson/Frankl-Rodl type problem for cocycles
Conjecture: Let $\alpha$ be a positive real number. There is $\beta = \beta (k,\alpha)<1$ with the following property. Suppose that
(*) The number of $k$-cocycles on $n$ vertices with $m$ edges is not zero
and that
(**) $m>\alpha\cdot {{n}\choose {k+1}}$, and $m<(1-\alpha){{n}\choose {k+1}}$. (The second inequality is not needed for odd-dimensional cocycles.)
Let $\cal F$ be a family of $k$-cocycles such that no symmetric difference between two cocycles in $\cal F$ has precisely $m$ sets. Then
$|{\cal F}| \le 2^{\beta {{n}\choose {k}}}.$
If true even for 3-dimensional cocycles this conjecture will improve the asymptotic lower bounds for Borsuk’s problem. For example, if true for 3-cocycles it will imply that $f(d) \ge exp (K d^{3/4})$. The Frankl-Wilson and Frankl-Rodl theorems have a large number of other applications, and an extension to cocycles may also have other applications.
## Crossing number of complete graphs, Turan’s (2k+1,2k) problems, and cocycles
The question on the maximum number of sets in a $k$-cocycle when $k$ is odd is related to several other (notorious) open problems.
# Raigorodskii’s Theorem: Follow Up on Subsets of the Sphere without a Pair of Orthogonal Vectors
Andrei Raigorodskii
(This post follows an email by Aicke Hinrichs.)
In a previous post we discussed the following problem:
Problem: Let $A$ be a measurable subset of the $d$-dimensional sphere $S^d = \{x\in {\bf R}^{d+1}:\|x\|=1\}$. Suppose that $A$ does not contain two orthogonal vectors. How large can the $d$-dimensional volume of $A$ be?
Setting the volume of the sphere to be 1:
1) The Frankl-Wilson theorem gives an upper bound (for large $d$) of $1.203^{-d}$ on this volume,
2) The double cap conjecture would give an upper bound (for large $d$) of $1.414^{-d}$.
A result of A. M. Raigorodskii from 1999 gives a better bound of $1.225^{-d}$. (This has led to an improvement concerning the dimensions where a counterexample for Borsuk’s conjecture exists; we will come back to that.) Raigorodskii’s method supports the hope that by considering clever configurations of points instead of just $\pm 1$-vectors and applying the polynomial method (the method of proof we described for the Frankl-Wilson theorem) we may get closer to and perhaps even prove the double-cap conjecture.
What Raigorodskii did was to prove a Frankl-Wilson type result for vectors with $0,\pm1$ coordinates with a prescribed number of zeros. Here is the paper.
Now, how can we beat the $1.225^{-d}$ record???
# A Little Story Regarding Borsuk’s Conjecture
Jeff Kahn
Jeff and I worked on the problem for several years. Once he visited me with his family for two weeks. Before the visit I emailed him and asked: What should we work on in your visit?
Jeff answered: We should settle Borsuk’s problem!
I asked: What should we do in the second week?!
and Jeff answered: We should write the paper!
And so it was.
# Borsuk’s Conjecture
Karol Borsuk conjectured in 1933 that every bounded set in $R^d$ can be covered by $d+1$ sets of smaller diameter. Jeff Kahn and I found a counterexample in 1993. It is based on the Frankl-Wilson theorem.
Let $\cal G$ be the set of $\pm 1$ vectors of length $n$. Suppose that $n=4p$ and $p$ is a prime, as the conditions of Frankl-Wilson theorem require. Let ${\cal G'} = \{(1/\sqrt n)x:x \in {\cal G}\}$. All vectors in ${\cal G}'$ are unit vectors.
Consider the set $X=\{x \otimes x: x \in {\cal G}'\}$. $X$ is a subset of $R^{n^2}$.
Remark: If $x=(x_1,x_2,\dots,x_n)$, regard $x\otimes x$ as the $n$ by $n$ matrix with entries $(x_ix_j)$.
It is easy to verify that:
Claim: $\langle x \otimes x , y \otimes y \rangle = \langle x , y \rangle ^2$.
It follows that all vectors in $X$ are unit vectors, and that the inner product between every two of them is nonnegative. The diameter of $X$ is therefore $\sqrt 2$. (Here we use the fact that the square of the distance between two unit vectors $x$ and $y$ is 2 minus twice their inner product.)
Suppose that $Y \subset X$ has a smaller diameter. Write $Y=\{x \otimes x: x \in {\cal F}\}$ for some subset $\cal F$ of $\cal G$. This means that $Y$ (and hence also $\cal F$) does not contain two orthogonal vectors and therefore by the Frankl-Wilson theorem
$|{\cal F}| \le U=4({{n} \choose {0}}+{{n}\choose {1}}+\dots+{{n}\choose{p-1}})$.
It follows that the number of sets of smaller diameter needed to cover $X$ is at least $2^n / U$. This clearly refutes Borsuk’s conjecture for large enough $n$. Sababa.
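As a quick sanity check of this counting argument, here is a small sketch (my own addition; it simply applies the inequality $2^n/U > d+1$ with $d=n^2$ as stated above, checking only prime $p$ as the Frankl-Wilson theorem requires):

```python
from math import comb

def is_prime(m):
    return m > 1 and all(m % i for i in range(2, int(m**0.5) + 1))

p = 2
while True:
    if is_prime(p):
        n = 4 * p
        U = 4 * sum(comb(n, i) for i in range(p))  # Frankl-Wilson bound on |F|
        if 2**n / U > n**2 + 1:                    # more pieces needed than d+1, d = n^2
            print(p, n, n**2)                      # first success of this crude count: p=13, n=52, d=2704
            break
    p += 1
```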
Let me explain in a few more words Continue reading | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 165, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8839672207832336, "perplexity": 807.0036515521034}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246660628.16/warc/CC-MAIN-20150417045740-00185-ip-10-235-10-82.ec2.internal.warc.gz"} |
https://math.stackexchange.com/questions/58375/noetherian-integral-domain-such-that-m-m2-is-a-one-dimensional-vector-space-o | Noetherian integral domain such that $m/m^2$ is a one-dimensional vector space over $A/m$
I am having trouble doing the following question (I'm studying for quals, it isn't homework)
If $A$ is a noetherian integral domain such that for every maximal $m\subset A$, the quotient $m/m^2$ is a one-dimensional vector space over the field $A/m$
(a) Prove every nonzero prime ideal is maximal.
(b) Prove $A$ is integrally closed.
There is a hint which says that one should localize at maximal ideals. My problem is that I'm not really sure how to use the $m/m^2$ condition. A solution or hint in the right direction using a minimal amount of commutative algebra would be much appreciated (but clearly a decent amount should be used).
• As in the hint, you can assume $A$ is local. The dimension condition on $m/m^2$ implies $m$ is principal (use Nakayama's lemma); thus $m=(x)$, say. The Krull intersection theorem implies any prime in $m$ is then either $m$ or $0$. That's part (a). Part (b) follows because now $A$ is dimension one, and the condition on $m/m^2$ implies it's a DVR. – user641 Aug 18 '11 at 19:37
This is just a hint (since you are studying for a qualifying exam). I would suggest reading about Dedekind domains. For instance, read Dummit and Foote's Algebra. In particular, look at Section 16.3, and more concretely, the equivalence $(1) \longleftrightarrow (2)$ in Theorem 15.
Recall that if $\mathfrak{m}$ is a maximal ideal of $A$ then $A_\mathfrak{m} / \mathfrak{m} A_\mathfrak{m} \simeq A / \mathfrak{m} = k(\mathfrak{m})$ and $\mathfrak{m} A_\mathfrak{m} / \mathfrak{m}^2 A_\mathfrak{m} \simeq \mathfrak{m} / \mathfrak{m}^2$. Then the condition implies that $\mathfrak{m} A_\mathfrak{m} / \mathfrak{m}^2 A_\mathfrak{m}$ is a $k(\mathfrak{m})$-vector space of dimension $1$. Since $A_\mathfrak{m}$ is local, by Nakayama's lemma (Matsumura, Theorem 2.3) $\mathfrak{m} A_\mathfrak{m}$ is a non-zero cyclic $A_\mathfrak{m}$-module, i.e. a non-zero principal ideal of $A_\mathfrak{m}$. Now $A_\mathfrak{m}$ is a noetherian domain with principal maximal ideal and $A_\mathfrak{m}$ is not a field, so $A_\mathfrak{m}$ is a DVR (Matsumura, Theorem 11.2).
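To spell out the Nakayama step (my own elaboration, not part of the original answer): pick $x \in \mathfrak{m} A_\mathfrak{m}$ whose class spans the one-dimensional space $\mathfrak{m} A_\mathfrak{m} / \mathfrak{m}^2 A_\mathfrak{m}$, so that $\mathfrak{m} A_\mathfrak{m} = (x) + (\mathfrak{m} A_\mathfrak{m})^2$. The module $M = \mathfrak{m} A_\mathfrak{m} / (x)$ is finitely generated (by noetherianity) and satisfies $\mathfrak{m} A_\mathfrak{m} \cdot M = M$, so Nakayama's lemma gives $M = 0$, i.e. $\mathfrak{m} A_\mathfrak{m} = (x)$.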
We have proved that $A_\mathfrak{m}$ is a DVR for every maximal ideal $\mathfrak{m}$ of $A$. From this point, it is easy to show that $A$ is a Dedekind domain. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9886007905006409, "perplexity": 94.9183401875182}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038863420.65/warc/CC-MAIN-20210419015157-20210419045157-00450.warc.gz"} |
https://www.studypug.com/pre-calculus/complex-numbers-and-complex-plane/operations-on-complex-numbers-in-polar-form | # Operations on complex numbers in polar form
### Operations on complex numbers in polar form
Let's find out how to perform some basic operations on complex numbers in polar form! We will briefly introduce the notion of the exponential form of a complex number, then we will focus on multiplication and division on complex numbers in polar form.
#### Lessons
Note:
Polar form: $z=|z|(\cos \theta+i\sin \theta)$, with real part $a=|z|\cos \theta$ and imaginary part $b=|z|\sin \theta$.
When multiplying: multiply the absolute values, and add the angles.
When dividing: divide the absolute values, and subtract the angles.
Exponential form: $z=|z|e^{i \theta}$
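A quick worked illustration of the multiplication rule (an extra example added here for clarity; it is not one of the exercises below): $2(\cos\frac{\pi}{4}+i \sin\frac{\pi}{4}) \cdot 3(\cos\frac{\pi}{6}+i \sin\frac{\pi}{6}) = 6(\cos\frac{5\pi}{12}+i \sin\frac{5\pi}{12})$, since $2 \cdot 3 = 6$ and $\frac{\pi}{4}+\frac{\pi}{6}=\frac{5\pi}{12}$.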
• 1.
Multiplying complex numbers in polar form
a)
$4(\cos(\frac{5\pi}{3})+i \sin(\frac{5\pi}{3})) \cdot 8(\cos(\frac{2\pi}{3})+i \sin(\frac{2\pi}{3}))$
b)
$(\cos(170^{\circ})+i \sin(170^{\circ}))\cdot 5(\cos(45^{\circ})+i \sin(45^{\circ}))$
c)
$3(\cos(\pi)+i \sin(\pi))\cdot(\cos(\frac{\pi}{5})+i \sin(\frac{\pi}{5}))\cdot6(\cos(\frac{2\pi}{3})+i \sin(\frac{2\pi}{3}))$
• 2.
Dividing complex numbers in polar form
a)
$20(\cos(\frac{5\pi}{2})+i \sin(\frac{5\pi}{2}))\div 6(\cos(\frac{2\pi}{3})+i \sin(\frac{2\pi}{3}))$
b)
$3(\cos(\frac{3\pi}{4})+i \sin(\frac{3\pi}{4}))\div 12(\cos(\frac{\pi}{6})+i \sin(\frac{\pi}{6}))$
c)
$(\cos(262^{\circ})+i \sin(262^{\circ}))\div (\cos(56^{\circ})+i \sin(56^{\circ}))$
• 3.
Convert the following complex number to exponential form
$z=3+i$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 17, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9882820248603821, "perplexity": 994.4069303841026}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794863410.22/warc/CC-MAIN-20180520112233-20180520132233-00425.warc.gz"} |
http://math.stackexchange.com/questions/304300/solve-u-xu-y-1 | # Solve $u_x+u_y=1$
I am asked to solve $$u_x+u_y=1$$ If it was homogeneous, i.e., $u_x+u_y=0$, the answer would be $u(x,y)=f(y-x)$ where $f$ is an arbitrary function. I have found the following set of solutions: $$u(x,y)=\lambda x +(1-\lambda)y$$ where $\lambda$ is an arbitrary constant (real or imaginary). I just have no idea what method other than trial and error would have led me here. Any ideas? Thanks!
This is a linear first order PDE. Googling will provide lots of write-ups on how to solve it! – Mariano Suárez-Alvarez Feb 14 '13 at 21:04
Method of characteristics is used on general first-order PDEs. en.wikipedia.org/wiki/Method_of_characteristics – Ron Gordon Feb 14 '13 at 21:50
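As a brief sketch of that approach for this particular equation (my own illustration; $s$ parametrises the characteristic curves): the characteristic system is $\frac{dx}{ds}=1$, $\frac{dy}{ds}=1$, $\frac{du}{ds}=1$, so $y-x$ is constant along each characteristic and $u$ increases at unit rate; integrating gives $u(x,y)=x+f(y-x)$ for an arbitrary $f$, and indeed $u_x+u_y=(1-f')+f'=1$.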
You can observe that the only difference between homogeneous an inhomogeneous equations is $1$. So you can assume that particular solution is linear on both $x$ and $y$, or $u^p = ax + by$. $$u_x^p + u_y^p = a + b = 1$$ In your case $a = \lambda$ and $b = 1 - \lambda$. General solution of inhomogeneous PDE given is the sum of general solution of homogeneous PDE and particular solution of inhomogeneous PDE, so $$u = f(x-y)+\lambda x + (1-\lambda)y$$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9430042505264282, "perplexity": 243.05038733506498}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929256.27/warc/CC-MAIN-20150521113209-00292-ip-10-180-206-219.ec2.internal.warc.gz"} |
https://learn.careers360.com/engineering/question-answer-please-a-gas-is-compressed-from-a-volume-of-2-m3-to-a-volume-of-1-m3-at-a-constant-pressure-of-100-nm2-then-it-is-heated-at-constant-volume-by-supplying-150-j-of-energy-as-a-result-the-internal-energy-of-the-gas/ | ## Filters
Q&A - Ask Doubts and Get Answers
Q
# Answer please! A gas is compressed from a volume of 2 m3 to a volume of 1 m3 at a constant pressure of 100 N/m2. Then it is heated at constant volume by supplying 150 J. of energy. As a result, the internal energy of the gas :
A gas is compressed from a volume of 2 m3 to a volume of 1 m3 at a constant pressure of 100 N/m2. Then it is heated at constant volume by supplying 150 J. of energy. As a result, the internal energy of the gas :
• Option 1)
Increases by 250 J
• Option 2)
Decreases by 250 J
• Option 3)
Increases by 50 J
• Option 4)
Decreases by 50 J
As we have learned
First law of Thermodynamics -
Heat imparted to a body is in general used to increase the internal energy and to do work against external pressure.
- wherein
$dQ= dU+dW$
$\Delta U = \Delta Q - W \\ \Delta Q = 150 J \\ W = P \Delta V = 100 N/m^2 (1 m^3 - 2 m^3 ) = - 100 J \\ \Delta U = 150 J - (-100J ) = 250 J$
U is increased by 250 J
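A quick numeric check of this bookkeeping (a sketch added here; it just repeats the arithmetic above with the sign convention $dQ = dU + dW$):

```python
P = 100.0          # N/m^2, constant pressure during the compression
dV = 1.0 - 2.0     # m^3, final volume minus initial volume
W = P * dV         # work done by the gas = -100 J
dQ = 150.0         # J, heat supplied at constant volume
dU = dQ - W        # 150 - (-100) = 250 J
print(dU)          # 250.0 -> internal energy increases by 250 J
```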
Hence the correct answer is Option 1): Increases by 250 J.
Questions | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8574298620223999, "perplexity": 2517.634718168425}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250628549.43/warc/CC-MAIN-20200125011232-20200125040232-00249.warc.gz"} |
https://opus4.kobv.de/opus4-zib/frontdoor/index/index/docId/5359 | ## Fastest, average and quantile schedule
Please always quote using this URN: urn:nbn:de:0297-zib-53592
• We consider problems concerning the scheduling of a set of trains on a single track. For every pair of trains there is a minimum headway, which every train must wait before it enters the track after another train. The speed of each train is also given. Hence for every schedule - a sequence of trains - we may compute the time that is at least needed for all trains to travel along the track in the given order. We give the solution to three problems: the fastest schedule, the average schedule, and the problem of quantile schedules. The last problem is a question about the smallest upper bound on the time of a given fraction of all possible schedules. We show how these problems are related to the travelling salesman problem. We prove NP-completeness of the fastest schedule problem, NP-hardness of quantile of schedules problem, and polynomiality of the average schedule problem. We also describe some algorithms for all three problems. In the solution of the quantile problem we give an algorithm, based on a reverse search method, generating with polynomial delay all Eulerian multigraphs with the given degree sequence and a bound on the number of such multigraphs. A better bound is left as an open question.
$Rev: 13581$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9216261506080627, "perplexity": 413.10326458622853}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118713.1/warc/CC-MAIN-20170423031158-00531-ip-10-145-167-34.ec2.internal.warc.gz"} |
https://en.m.wikipedia.org/wiki/Operation_(mathematics) | # Operation (mathematics)
Elementary arithmetic operations:
• −, minus (subtraction)
• ÷, obelus (division)
• ×, times (multiplication)
In mathematics, an operation is a calculation from zero or more input values (called operands) to an output value. The number of operands is the arity of the operation. The most commonly studied operations are binary operations of arity 2, such as addition and multiplication, and unary operations of arity 1, such as additive inverse and multiplicative inverse. An operation of arity zero, or 0-ary operation is a constant. The mixed product is an example of an operation of arity 3, or ternary operation. Generally, the arity is supposed to be finite, but infinitary operations are sometimes considered. In this context, the usual operations, of finite arity are also called finitary operations.
## Types of operation
A binary operation takes two arguments $x$ and $y$ and returns the result $x\circ y$
There are two common types of operations: unary and binary. Unary operations involve only one value, such as negation and trigonometric functions. Binary operations, on the other hand, take two values, and include addition, subtraction, multiplication, division, and exponentiation.
Operations can involve mathematical objects other than numbers. The logical values true and false can be combined using logic operations, such as and, or, and not. Vectors can be added and subtracted. Rotations can be combined using the function composition operation, performing the first rotation and then the second. Operations on sets include the binary operations union and intersection and the unary operation of complementation. Operations on functions include composition and convolution.
Operations may not be defined for every possible value. For example, in the real numbers one cannot divide by zero or take square roots of negative numbers. The values for which an operation is defined form a set called its domain. The set which contains the values produced is called the codomain, but the set of actual values attained by the operation is its range. For example, in the real numbers, the squaring operation only produces non-negative numbers; the codomain is the set of real numbers, but the range is the non-negative numbers.
Operations can involve dissimilar objects. A vector can be multiplied by a scalar to form another vector. And the inner product operation on two vectors produces a scalar. An operation may or may not have certain properties, for example it may be associative, commutative, anticommutative, idempotent, and so on.
The values combined are called operands, arguments, or inputs, and the value produced is called the value, result, or output. Operations can have fewer or more than two inputs.
An operation is like an operator, but the point of view is different. For instance, one often speaks of "the operation of addition" or "addition operation" when focusing on the operands and result, but one says "addition operator" (rarely "operator of addition") when focusing on the process, or from the more abstract viewpoint, the function + : S × S → S.
## General description
An operation ω is a function of the form ω : V → Y, where V ⊆ X1 × ... × Xk. The sets Xk are called the domains of the operation, the set Y is called the codomain of the operation, and the fixed non-negative integer k (the number of arguments) is called the type or arity of the operation. Thus a unary operation has arity one, and a binary operation has arity two. An operation of arity zero, called a nullary operation, is simply an element of the codomain Y. An operation of arity k is called a k-ary operation. Thus a k-ary operation is a (k+1)-ary relation that is functional on its first k domains.
The above describes what is usually called a finitary operation, referring to the finite number of arguments (the value k). There are obvious extensions where the arity is taken to be an infinite ordinal or cardinal, or even an arbitrary set indexing the arguments.
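As an informal illustration of arity (a sketch of my own in Python; the names below are not taken from the article):

```python
# A k-ary operation on a set S can be modeled as a function S^k -> S.
add = lambda x, y: x + y   # binary operation (arity 2)
neg = lambda x: -x         # unary operation (arity 1)
zero = 0                   # nullary operation (arity 0): just a constant element of S

# A partial operation is defined only on part of S^k, e.g. division on the reals:
def div(x, y):
    if y == 0:
        raise ValueError("division is not defined for y = 0")
    return x / y
```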
Often, use of the term operation implies that the domain of the function is a power of the codomain (i.e. the Cartesian product of one or more copies of the codomain),[1] although this is by no means universal, as in the example of multiplying a vector by a scalar. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 3, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8579416275024414, "perplexity": 449.24952331612656}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814872.58/warc/CC-MAIN-20180223233832-20180224013832-00764.warc.gz"} |
https://blog.flyingcoloursmaths.co.uk/common-problem-not-reading-carefully/ | I’m a big advocate of error logs: notebooks in which students analyse their mistakes. I recommend a three-column approach: in the first, write the question, in the second, what went wrong, and in the last, how to do it correctly. Oddly, that’s the format for this post, too.
### The question
The densities of three metal alloys, A, B and C, are in the ratio 13:15:21.
1m³ of alloy B has a mass of 8600kg.
Work out the difference between 5m³ of alloy A and 3m³ of alloy C. Give your answer correct to 3 significant figures.
### What went wrong
Student tried to split 8600 in the given ratio, asserted that each ‘share’ was about 175kg, so A was 2,280kg and C 3,690kg (to 3sf), giving a difference of 1,410kg.
There are (I think) three errors here:
• The student has not dealt correctly with the ratio (we’re given the mass of alloy B, not the total mass)
• The student has not used the volumes of blocks A and C
• The student has rounded too early.
### How to do it right
We’re told that 1m³ of alloy B has a mass of 8600kg. Alloy A is $\frac{13}{15}$ as dense, so 1m³ has a mass $\frac{13}{15}$ as big - $7453 \frac{1}{3}$ kg. Similarly, the mass of 1m³ of alloy C is $\frac{21}{15}$ as large, which is 12,040kg.
The mass of 5m³ of alloy A is $37,266 \frac{2}{3}$kg, and 3m³ of alloy C gives 36,120kg. The difference is $1,146 \frac{2}{3}$ kg, which is 1,150kg to 3sf.
Notice that rounding to 3sf before subtracting gives an incorrect answer of $37,300 - 36,100 = 1,200$.
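A quick numeric check of that working (my own sketch):

```python
rho_B = 8600.0                 # kg/m^3, given for alloy B
rho_A = rho_B * 13 / 15        # 7453.33... kg/m^3
rho_C = rho_B * 21 / 15        # 12040.0 kg/m^3

diff = 5 * rho_A - 3 * rho_C   # 37266.67 - 36120 = 1146.67 kg
print(diff)                    # ~1146.67, i.e. 1150 kg to 3 s.f.
```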
### Oh, hello, Mathematical Ninja!
“What’s all this?”
“Well, sensei, I can expl… ow.”
“15 units of alloy B represents 8,600kg. 5m³ of alloy A is 65 units. 3m³ of alloy C is 63 units. The difference is two units, so you need two-fifteenths of 8,600.”
“But, sensei…”
“But me no buts. $8,600 \times \frac{2}{15} = 1,720 \times \frac{2}{3} = \frac{3,440}{3} = 1,146 \frac{2}{3}$kg, directly.”
“Thank you, s… oh, the Mathematical Ninja has left the building.”
“I thought that was a bit unnecessary.”
“Me, too.”
* Edited 2017-07-04 to correct a rounding error and a mistranscription of what the Mathematical Ninja said. Thanks to @ImMisterAl for mentioning that I should have, um, read more carefully. I’ll be watching my back. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8991257548332214, "perplexity": 3879.4332731853056}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178366959.54/warc/CC-MAIN-20210303104028-20210303134028-00235.warc.gz"} |
https://www.physicsforums.com/threads/question-about-orbitals-in-an-atom.513860/ | # Question about Orbitals in an Atom
1. Jul 13, 2011
### CRichard
For a summer class, I’ve been trying to get the quantum picture of an atom at least in conceptual terms, no math. I understand how, if you think about an atom like a one-dimensional box, an s orbital is like the first fundamental wave, and a p-orbital is like a sine wave, and I understand how the p orbital has higher energy because of greater frequency. But how can you picture 2s and 3s, etc. orbitals? Are they like first fundamental waves with greater amplitudes? I thought that the frequency of a fixed string depends mainly on its length and tension, not amplitude, so if that's the case how does E = hv?
Also, how are they quantized, since can’t you “pluck” that string in a one-dimensional box however far you want (at least as far as it can stretch), and give it an infinite number of amplitudes? In other words, I get how the need to fit a whole number of half wavelengths into the box justifies the quantization of energy into s, p, d, etc orbitals, but what limitation prevents the s orbitals from having a continuous spread in energy?
2. Jul 13, 2011
### Staff: Mentor
A string is one-dimensional whereas an atom is three-dimensional. Crudely speaking, the "vibrations" are quantized in each dimension.
Again crudely speaking, the primary quantum number (energy level number) corresponds to quantization in the radial dimension. The s, p, d, f correspond to quantization in the "longitude" dimension (think of going north/south on a globe). Finally, there's a "magnetic" quantum number that corresponds to quantization in the "latitude" dimension (think of going east/west around a globe).
It might help to try to imagine standing waves of sound in a spherical cavity, but I can't find any pictures of those at the moment.
Last edited: Jul 13, 2011
3. Jul 13, 2011
### CRichard
Thanks, that makes sense. The only thing I don't get is how E = hv is not violated for example in the case of a piano string. I remember reading that it doesn't matter how much energy you give to a piano string by striking it hard, the pitch is the same because frequency doesn't change. Shouldn't the added energy and loudness be the result of an increase in frequency?
4. Jul 13, 2011
### chogg
E=hv is the energy of a single quantum. If you pluck a string harder, you are putting more quanta in it.
If you like, the generalization of the formula is E=nhv, where n is the number of quanta.
5. Jul 13, 2011
### Drakkith
Staff Emeritus
I don't know for sure, but I think a similar effect happens in a pendulum. Pushing either one harder only causes the amplitude to increase. The way both are built and react, the frequency will not change when you add more energy.
6. Jul 13, 2011
### alxm
E = hv relates the frequency of light to the energy of a photon. It doesn't apply to a piano string.
No, frequency (pitch) and amplitude ('volume', intensity) are independent properties. A flute has a higher pitch than a kettle-drum, but it's not necessarily louder.
7. Jul 13, 2011
### chogg
Doesn't it, though? My impression was that for a given mode of vibration, E = hv applies to the phonons in the lattice.
Recently, single phonons were added and removed from a mechanical mode of oscillation:
http://www.nature.com/nature/journal/v464/n7289/abs/nature08967.html
8. Jul 14, 2011
### CRichard
"E = hv relates the frequency of light to the energy of a photon. It doesn't apply to a piano string."
That's what I was thinking too, because you can have many water ripples too with a high frequency but low energy. But then, does E = hv also apply to matter waves like an electron wave?
9. Jul 15, 2011
### CRichard
Sorry to bump this. I can't seem to find a clear answer online, but just to clarify: if you think about the analogy of standing waves on a circular drum, a 2p orbital has electron density farther from the nucleus than a 1s orbital. So, I think it would have a higher energy because of the charge separation. But can you also think about it a different way, and say that the higher energy is due to the higher frequency of the p-orbital by the equation E = hv, (that matter waves are fundamentally quantized) or does this equation only work for electromagnetic energy? | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8629583716392517, "perplexity": 779.2046699843647}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828507.57/warc/CC-MAIN-20181217113255-20181217135255-00103.warc.gz"} |
https://www.physicsforums.com/threads/principal-ideal-polynomial-generators.528686/ | # Principal Ideal, Polynomial generators
1. Sep 9, 2011
### nugget
1. The problem statement, all variables and given/known data
Suppose R is an integral domain and I is a principal ideal in R[x], and I $\neq$ {0}
a) Show I = <g(x)> for some g(x)$\in$R[x] that has minimal degree among all non-zero polynomials in I.
b) Is it necessarily true that I = <g(x)> for every g(x)$\in$R[x] that has minimal degree among all non-zero polynomials in I?
2. Relevant equations
Theorems, the division algorithm.
3. The attempt at a solution
For (a) we can take a g(x) of minimal degree as our generator; if deg(g) = 0, then g(x) is a unit in I and I = R[x], so we can assume deg(g)$\geq$1
Now we let f(x)$\in$I and apply the division algorithm.
f = qg + r, where q,r$\in$R[x] and deg(r)<deg(g)
This means that r = f - qg which is an element of I. By our choice of g having minimal degree, this means that r = 0. Otherwise r would be an element of I, and could be written as r = pg, p$\in$R[x]. (I'm not quite sure how best to explain the reasoning behind the fact that r must equal 0)
finishing the proof after this is fine.
b) I'm confused for this question, but I assume it's a contradiction type answer, although I can't imagine a g(x) that wouldn't generate a principal ideal...
2. Sep 10, 2011
### micromass
Why??
Did you prove that the division algorithm holds when R is simply an integral domain?
Consider $\mathbb{Z}$ and take the ideal generated by X+2. Then 2X+4 is in the ideal and is of minimal degree.
3. Sep 11, 2011
### nugget
Hey,
regarding that deg(g)=0 bit, I guess I just assumed that a constant would be a unit in I. This would only be the case if R was a field, right?
Now I'm really confused for this question too; don't know how to prove the division algorithm, let alone for an integral domain.
I think I understand what you're saying for b)
Z is an integral domain so we can use that for R. b) is not true because <x+2> generates I and is of minimal degree, but <2x+4> can't generate I (it can't generate x+2, as Z doesn't contain fractions) but is also of minimal degree. Hence not every g(x) in R[x] of minimal degree generates I.
4. Sep 11, 2011
### micromass
The trick for (a) is that you already know that I is principal. That is, you already know that I=<g(x)> for some g. The only thing you need to prove is that g is of minimal degree. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9814803004264832, "perplexity": 738.4632240847599}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583657510.42/warc/CC-MAIN-20190116134421-20190116160421-00122.warc.gz"} |
https://en.wikibooks.org/wiki/Circuit_Theory/1Initially_Excited | # Circuit Theory/1Initially Excited
Circuits are used to charge capacitors and inductors. Then the inductor/capacitor is switched out of the charging circuit and into the discharging circuit. There are three concepts:
• charging circuit ... assume on a long time if time on not specified ... assume steady state so capacitors are opens and inductors are shorts
• switching circuit (different for capacitors and inductors)
• discharging circuit ... assume initially capacitors are shorts and inductors are opens
There are two circuits to analyse: charging and discharging:
• Charging circuit ... Assume initial conditions zero, find steady state (particular) solution ... these become the initial conditions to the discharging circuit.
• Discharging circuit find the (homogeneous solution).
Solving the discharging circuit has these steps:
• assume the solution is an exponential
• find an exponential solution
• use the initial conditions and final value conditions to find the constants.
• If an exponential solution can be found (i.e. there is a real part), then the assumption was valid.
## Capacitors
### Charge Analysis
Charge Analysis is steady state analysis. Assume the circuit has been on a long time. The capacitor has had a long time to charge up.
What is different is that instead of a sinusoidal source, there is a DC voltage source. There is going to be a DC steady state voltage across the capacitor.
A capacitor looks like an open when fully charged, so the full source voltage is going to be across the capacitor. So in this case the initial voltage across the capacitor is 1 volt.
### Discharge Analysis
Looking at a capacitor discharging, marking up the current direction and voltage polarity consistent with the normal terminal relations .. for wikibook circuit theory
A trivial mesh or loop analysis (both result in the same equation) would be:
$V_R + V_c = 0$
$R*i + V_c = 0$
From the capacitor terminal relationship:
$i=C{d Vc \over dt}$
So substituting:
$RC{d Vc \over dt} + V_c =0$
Of course, after a long time, the capacitor discharges, the resistor dissipates all the energy and both current and voltage everywhere will be zero. But what is the equation in the time domain that describes how the voltage and current go to zero?
Notice that the current is going down while charging, but switches to going up when discharging. This can be instantaneous.
#### finding time constant
The general technique is to assume this form:
$V_c = A*e^{-\frac{t}{\tau}}$
Substituting into the above differential equation:
$-\frac{RCA}{\tau}e^{-\frac{t}{\tau}} + A*e^{-\frac{t}{\tau}} = 0$
Dividing through by A and the exponential, have:
$-\frac{RC}{\tau} + 1 = 0$
Solving for tau:
$\tau = RC$
#### finding the constants
So the formula for voltage across the capacitor is now:
$V_c = A*e^{-\frac{t}{RC}} + C$
This includes the steady state particular solution of 0 and the constant C that comes from solving a differential equation.
The initial voltage across the capacitor is +1 volt at t=0+. This means that:
$V_c(0_+) = 1 = A + C$
After a long period of time, V_c(t) = 0. This helps us find C:
$V_c(\infty) = A*0 + C = 0$
So C = 0 and A = 1 and:
$V_c(t) = e^{-\frac{t}{RC}}$
#### finding the current
Looking at a capacitor discharging, marking up the current direction and voltage polarity consistent with the normal terminal relations .. for wikibook circuit theory
Vc = -Vr, or one could plug into the capacitor terminal relation
$I = \frac{V_r}{R} = \frac{-V_c}{R} = -\frac{e^{-\frac{t}{RC}}}{R}$
Thus for this circuit:
$V_C = e^{-\frac{t}{RC}}=e^{-\frac{t}{10\mu s}}$
$V_r = -e^{-\frac{t}{10\mu s}}$
And:
$I = - \frac{e^{-\frac{t}{10\mu s}}}{10}$
#### Interpreting the results
Vc is in the polarity indicated. VR is opposite the polarity drawn. The current is going in the opposite direction drawn, which makes sense if the capacitor is acting like a voltage source and dumping its energy into the resistor. Both are going to be essentially zero after 5 time constants which is 50 μs.
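A quick numerical sketch of that claim (assuming R = 10 Ω and C = 1 μF, values consistent with the 10 μs time constant used above):

```python
import math

R, C = 10.0, 1e-6                # assumed values giving tau = RC = 10 us
tau = R * C

for n in range(6):               # 0 .. 5 time constants
    t = n * tau
    vc = math.exp(-t / tau)      # V_c(t), starting from 1 V
    i = -vc / R                  # discharge current
    print(f"t = {t*1e6:4.0f} us   Vc = {vc:6.4f} V   i = {i:9.5f} A")
# after 5*tau = 50 us, Vc has fallen below 1% of its initial value
```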
The homogeneous solution was easy to find because a DC source was used to charge the circuit. The particular solution of the discharge circuit was 0 because there was no forcing function.
## Inductors
### Charge Analysis
When the inductor has the same initial current as the current source, no current is going to flow through the short. Superposition says the currents are going to cancel in the short.
Charge Analysis is steady state analysis. Assume the circuit has been on a long time. The inductor has had a long time to build up its magnetic fields.
What is different is that instead of a sinusoidal source, there is a DC current source. There is going to be a DC steady state current through the inductor.
An inductor looks like a short when fully charged, so the full source current is going through the inductor. So in this case the initial current is 1 amp.
### Shorted source and inductor Analysis
An inductor stores its energy in a magnetic field; current has to continue to move to keep the magnetic field from collapsing ... this is the basis of a superconductor
Inductor with initial current discharging through resistor, for wikibook circuit theory
Shorting an inductor and current source by pushing the button of SW1 is safe. (It is dangerous to open wires to an inductor or current source). With ideal components, there will be no current through the short. However current through the inductor remains the same and the inductor remains charged up.
### Shorted inductor energy storage
The instant the SPDT switch cuts out the current source, the inductor's current appears in the short. The inductor and shorting wire do not store energy very long (unless frozen because the wire acts like a resistor).
### Discharge Analysis
Immediately after SW2 has completed moving the throw over to the second pole and the push button switch is released, the current of 1amp still flows through the inductor. A voltage instantly appears across the inductor and resistor. A trivial mesh or loop analysis (both result in the same equation) would be:
$V_R - V_L = 0$
Looking at the circuit markup, the current and voltage don't have the positive sign convention relationship so the inductor's terminal relationship has a negative sign:
$V_L= - L\frac{di}{dt}$
So:
$R*i + L\frac{di}{dt} = 0$
Of course, after a long time, the inductor discharges, the resistor dissipates all the energy and both current and voltage everywhere will be zero. But what is the equation in the time domain that describes how the voltage and current go to zero?
Notice that the voltage is positive on top of the inductor but switches polarity when discharging. This can be instantaneous.
#### finding time constant
The general technique is to assume this form:
$i = A*e^{-\frac{t}{\tau}}$
Substituting into the above differential equation:
$R*A*e^{-\frac{t}{\tau}} + L*\frac{d (A*e^{-\frac{t}{\tau}})}{dt} = 0$
Dividing through by A and then evaluating the derivative, have:
$R*e^{-\frac{t}{\tau}} - \frac{L*e^{-\frac{t}{\tau}}}{\tau} = 0$
Dividing through by the exponential:
$R - \frac{L}{\tau}=0$
Solving for tau:
$\tau = \frac{L}{R}$
#### finding the constants
So the formula for current now is:
$i = A*e^{-\frac{t}{\frac{L}{R}}} + C$
The steady state particular solution is 0, and the constant C comes from solving the differential equation.
The initial current is 1 amp at t=0+. This means that:
$1 = A + C$
After a long time, there is nothing going on in the circuit so:
$i(\infty) = A*0 + C$
So A=1 and C=0 thus:
$i(t) = e^{-\frac{t}{\frac{L}{R}}}$
#### finding the voltage across the Inductor/Resistor
VL = Vr, or one could plug into the inductor terminal relation
$V_R = R*i = R * e^{-\frac{t}{\frac{L}{R}}}$
or plugging into the terminal relation:
$V_L = - L*\frac{d (e^{-\frac{t}{\frac{L}{R}}})}{dt} = -L*(\frac{-R}{L})e^{-\frac{t}{\frac{L}{R}}} = Re^{-\frac{t}{\frac{L}{R}}}$
So in summary:
$V_R = V_L = 10*e^{-\frac{t}{0.1\mu s}}$
$i = e^{-\frac{t}{0.1\mu s}}$
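Before interpreting the result, a quick numerical cross-check of these expressions (assuming R = 10 Ω and L = 1 μH, consistent with the 0.1 μs time constant above):

```python
import math

R, L = 10.0, 1e-6               # assumed values giving tau = L/R = 0.1 us
tau = L / R

for n in range(6):              # 0 .. 5 time constants
    t = n * tau
    i = math.exp(-t / tau)      # inductor current, starting from 1 A
    v = R * i                   # V_R = V_L
    print(f"t = {t*1e9:5.0f} ns   i = {i:6.4f} A   V = {v:7.4f} V")
# after 5*tau = 0.5 us the current is below 1% of its initial 1 A
```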
#### Interpreting the results
Maintaining the current and magnetic field energy of an inductor requires shorting it during the switching between circuits. This would be just as simple as switching a capacitor, but a different type of switch is required: a make-before-break switch, so that two circuits are connected simultaneously.
The discharge circuit was marked up with the final directions and polarity of everything. But the terminal relationship for the inductor had to have a negative sign in it because the voltage and current directions did not follow the positive sign convention.
Again, the homogeneous solution was easy to find because a DC source was used to charge the circuit. The particular solution of the discharge circuit was 0 because there was no forcing function. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 32, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.84792560338974, "perplexity": 1018.4534410604946}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375096579.52/warc/CC-MAIN-20150627031816-00158-ip-10-179-60-89.ec2.internal.warc.gz"} |
http://math.stackexchange.com/users/34736/mike?tab=activity&sort=all&page=19 | # Mike
reputation: 532 · age 21 · member for 2 years, 5 months · seen yesterday · profile views: 298
Aug14 comment Convergent Subsequence in $\mathbb R$ Is it fair to say that since those subsequences are the only ones I have to consider because they are the only two convergent subsequences? Other than ones like $\langle a_{4k} \rangle$ which are contained within the one considered? Aug14 asked Convergent Subsequence in $\mathbb R$ Aug14 comment Proving that $\mu$ is $\sup S$ @JayeshBadwaik then why the $\Longleftrightarrow$? Aug14 comment Proving that $\mu$ is $\sup S$ Thank you. This is clear to me now @PeterTamaroff Aug14 accepted Proving that $\mu$ is $\sup S$ Aug14 revised Proving that $\mu$ is $\sup S$ redefined question Aug14 comment Proving that $\mu$ is $\sup S$ Ah! So Since we assumed that there is no $x \in [\mu -\epsilon, \mu]$ and arrived at $\mu \ne \sup S$ that means that $\mu$ is the supremum? Aug14 comment Proving that $\mu$ is $\sup S$ @BiditAcharya why does $\lambda \notin S \Longrightarrow \mu \ne \sup S$ Aug14 revised Proving that $\mu$ is $\sup S$ asked again Aug14 revised Proving that $\mu$ is $\sup S$ asked again Aug14 comment Proving that $\mu$ is $\sup S$ Ok I found my mistake there thank you @ArthurFischer Aug14 revised Proving that $\mu$ is $\sup S$ fixed mistake in forward direction Aug14 asked Proving that $\mu$ is $\sup S$ Aug14 comment Appropriate Notation: $\equiv$ versus $:=$ @AnonymousCoward see here: tex.stackexchange.com/questions/4216/how-to-typeset-correctly Aug14 revised Difference of two Cauchy Sequences edited body Aug13 revised Difference of two Cauchy Sequences added 16 characters in body Aug13 comment Difference of two Cauchy Sequences My mistake, I added the note that we are in $\mathbb R$ Aug13 revised Difference of two Cauchy Sequences added 16 characters in body Aug13 comment Are all infinities equal? I am interested by rigorizing "intuitive" statements such as "we can split these lines up and then 'add' them back together." For me, these are some of the hardest rigorous statements to make. How would you do it? Aug13 asked Difference of two Cauchy Sequences | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8429473638534546, "perplexity": 1338.7614242911895}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802766295.3/warc/CC-MAIN-20141217075246-00107-ip-10-231-17-201.ec2.internal.warc.gz"} |
https://www.math.ias.edu/seminars/abstract?event=44698 | # On the Complexity of Matrix Multiplication and Other Tensors
Computer Science/Discrete Mathematics Seminar II
Topic: On the Complexity of Matrix Multiplication and Other Tensors
Speaker: Joseph Landsberg
Affiliation: Texas A&M University
Date: Tuesday, November 20
Time/Room: 10:30am - 12:30pm/S-101
Video Link: https://video.ias.edu/csdm/landsberg
Many problems from complexity theory can be phrased in terms of tensors. I will begin by reviewing basic properties of tensors and discussing several measures of the complexity of a tensor. I'll then focus on the complexity of matrix multiplication. Since March 2012 there have been significant advances in our understanding of the complexity of matrix multiplication. This progress was made possible via tools from algebraic geometry and representation theory, and I'll explain why such techniques are useful without assuming any prior background in them. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9186939001083374, "perplexity": 507.54478867663954}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156096.0/warc/CC-MAIN-20180919083126-20180919103126-00007.warc.gz"} |
https://owenduffy.net/blog/?p=20138 | # nanoVNA – measure Transmission Loss – example 5
This article is demonstration of measurement of Transmission Loss in a section of two wire transmission line embedded in a common mode choke. The scenario is based on an online article MEASURING DM ATTENUATION of YOUR CMC USING THE NANOVNA AND NANOVNA SAVER.
The reference article publishes measured attenuation or loss being -1.45dB @ 28.4MHz. Of course, the -ve value hints that the author is lost in hamdom where all losses MUST be -ve dB.
The meaning of loss in a generic sense (ie without further qualification) is $$loss=\frac{Power_{in}}{Power_{out}}$$ and can be expressed in dB as $$loss_{dB}=10 log_{10}(loss)$$.
Some might interpret the result to imply that $$(1-10^{\frac{-loss}{10}})*100=28 \%$$ of input power is converted to heat in the choke.
The result given (and corrected) as 1.45dB was taken simply from the nanoVNA $$|s21|$$ result, and so it is actually InsertionLoss, not simply Loss.
What is the difference?
Above is a Simsmith model of a similar scenario. I have calculated |s21| and |s11|, and $$TL=10 log_{10}\frac{Power_{in}}{Power_{out}}$$. PVC insulation was assumed.
The type of source specified would result in 1W or 0dBW in a 50+j0Ω load, this is the AvailablePower.
So, firstly the power in the load is -1.39dBW, that is 1.39dB less than the AvailablePower from the source (ie into a matched load), so the InsertionLoss=1.39dB.
In fact the input power to T1 is not 1W or 0dBW, it is -1.28dBW. So, we can find the value of Loss (or TransmissionLoss for clarity) which is the ratio $$\frac{Power_{in}}{Power_{out}}$$ by subtracting the dBW figures to obtain $$Loss=Power_{dBWin}-Power_{dBWout}=-1.29 – -1.39=0.10dB$$. This is affected by rounding error, the more exact calculation given at TL of 0.09561dB is more correct.
So, how much input power is converted to heat? Easy, $$(1-10^{\frac{-TL}{10}})*100=2.1 \%$$ of PowerIn.
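To make the bookkeeping concrete, here is a small sketch (my own; the $$|s11|$$ figure is an assumed illustrative value chosen so the numbers land near the example above). With a matched 50 Ω source and load, $$Power_{in}=Power_{available}(1-|s11|^2)$$ and $$Power_{out}=Power_{available}|s21|^2$$, so:

```python
import math

s21_db = -1.39                 # measured |s21| in dB -> InsertionLoss = 1.39 dB
s11_db = -5.9                  # assumed |s11| in dB, for illustration only

s21 = 10 ** (s21_db / 20)
s11 = 10 ** (s11_db / 20)

insertion_loss = -20 * math.log10(s21)            # dB
mismatch = -10 * math.log10(1 - s11**2)           # dB not accepted by the input
transmission_loss = insertion_loss - mismatch     # Pin/Pout in dB

print(round(insertion_loss, 2), round(mismatch, 2), round(transmission_loss, 2))
# -> roughly 1.39, 1.29 and 0.10 dB with these assumed figures
```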
## How do we apply this to a ham transmitter scenario?
The first thing to note is that the VNA is intended to be:
• a near perfect 50+j0Ω Thévenin source; and
• a near perfect 50+j0Ω load.
Neither of those apply to most ham transmitter scenarios, especially one that uses 1m or so of twisted #14 insulated wires as in the reference article.
Such a choke might well be used with an ATU, and if we assume that it was adjacent to the choke, a good ATU would deliver most of the rated transmitter to the choke, ie for a 100W transmitter you would expect that PowerIn is almost 100W.
Now the TransmissionLoss above was calculated under a specific standing wave scenario, Zload=50+j0Ω. The actual TransmissionLoss will depend on the distribution of voltage and current on the choke’s two wire line section, so the figure calculated for the VNA scenario doesn’t apply and is not simply extended to a real transmitter scenario.
## Conclusions
The Transmission Loss of a line section may not be directly given by any measures displayed by a VNA, it may take some interpretation and some accounting for elements that can be measured.
The discovered TransmissionLoss of the line section in a common mode choke applies to the exact standing wave scenario and is not simply applied to other (practical) scenarios. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8428984880447388, "perplexity": 2275.1530969991213}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00192.warc.gz"} |
https://www.physicsforums.com/threads/the-hamiltonian.860483/ | # I The Hamiltonian
1. Mar 4, 2016
### Physgeek64
I'm a little confused about the hamiltonian.
Once you have the hamiltonian how can you find conserved quantities. I understand that if it has no explicit dependence on time then the hamiltonian itself is conserved, but how would you get specific conservation laws from this?
Many thanks
2. Mar 4, 2016
### vanhees71
Unfortunately you do not tell us about your level. Do you know Poisson brackets? If so, you question is quite easy to answer. Suppose you have an arbitrary phase-space function $f(t,q^k,p_j)$ ($k,j \in \{1,\ldots,f \}$) then the total time derivative is
$$\frac{\mathrm{d}}{ \mathrm{d} t} f=\dot{q}^k \frac{\partial f}{\partial q^k}+\dot{p}_j \frac{\partial f}{\partial p_j} + \partial_t f,$$
where the latter partial time derivative refers to the explicit time dependence of $f$ only. Now use the Hamilton equations of motion
$$\dot{q}^k=\frac{\partial H}{\partial p_k}, \quad \dot{p}_j=-\frac{\partial H}{\partial q^j}.$$
Plugging this in the time derivative you get
$$\frac{\mathrm{d}}{ \mathrm{d} t} f=\frac{\partial f}{\partial q^k} \frac{\partial H}{\partial p_k} - \frac{\partial f}{\partial p_j} \frac{\partial H}{\partial q^j} + \partial_t f=\{f,H \}+\partial_t f.$$
A quantity is thus obviously conserved by definition if this expression vanishes.
Applying this to the Hamiltonian itself you get
$$\frac{\mathrm{d}}{\mathrm{d} t} H=\{H,H\}+\partial_t H=\partial_t H,$$
i.e., $H$ is conserved (along the trajectory of the system) if and only if it is not explicitly time dependent.
Now an infinitesimal canonical transformation is generated by an arbitrary phase-space distribution function $G$,
$$\delta q^k=\frac{\partial G}{\partial p_k}\delta \alpha=\{q^k,G\} \delta \alpha, \quad \delta p_j=-\frac{\partial G}{\partial q^j} \delta \alpha =\{p_j,G \} \delta \alpha, \quad \delta H=\partial_t G \delta \alpha.$$
From this it is easy to show that
$$H'(t,q+\delta q,p+\delta p) = H(t,q,p),$$
i.e., that the infinitesimal canonical transformation is a symmetry of the Hamiltonian, if and only if
$$\{H,G \}+\partial_t G=0,$$
but that means that
$$\frac{\mathrm{d}}{\mathrm{d} t} G=0$$
along the trajectory of the system, i.e., the generator of a symmetry transformation is a conserved quantity, and also any conserved quantity is the generator of a symmetry transformation. That means that there's a one-to-one relation between the generators of symmetries and conserved quantities, which is one of Noether's famous theorems.
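As an aside not taken from the thread: the criterion above, that a phase-space function with vanishing Poisson bracket with H and no explicit time dependence is conserved, is easy to check symbolically. A minimal sketch in Python with SymPy, using the angular momentum of a particle in a two-dimensional central potential; the variable names and the choice of potential are illustrative only:

```python
import sympy as sp

x, y, px, py = sp.symbols('x y p_x p_y', real=True)
m, k = sp.symbols('m k', positive=True)

def poisson_bracket(f, g, qs, ps):
    """{f, g} = sum_i (df/dq_i * dg/dp_i - df/dp_i * dg/dq_i)."""
    return sum(sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)
               for q, p in zip(qs, ps))

# Hamiltonian of a particle in a central potential V = k*r^2/2 (2D oscillator)
H = (px**2 + py**2) / (2 * m) + k * (x**2 + y**2) / 2

# Angular momentum about the z-axis, the generator of rotations
Lz = x * py - y * px

# {Lz, H} = 0 and Lz has no explicit time dependence, so Lz is conserved
print(sp.simplify(poisson_bracket(Lz, H, [x, y], [px, py])))   # prints 0
```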
3. Mar 4, 2016
### Physgeek64
Sorry, no, I don't know about Poisson brackets; I'm a complete novice. I haven't encountered Hamiltonians before, nor do I know much about them.
Thank you for your response though! Unfortunately I can't see any of the maths you've included. For some reason my computer thinks it's an error.
4. Mar 4, 2016
### Edgardo
Right-click on the math error, go to Math Settings -> Math Renderer and try e.g. HTML-CSS (or some other renderer).
http://philpapers.org/s/C.%20J.%20Ash | ## Works by C. J. Ash
22 found
This book describes a program of research in computable structure theory. The goal is to find definability conditions corresponding to bounds on complexity which persist under isomorphism. The results apply to familiar kinds of structures (groups, fields, vector spaces, linear orderings, Boolean algebras, Abelian p-groups, models of arithmetic). There are many interesting results already, but there are also many natural questions still to be answered. The book is self-contained in that it includes necessary background material from recursion theory (ordinal notations, (...)
2. C. J. Ash & J. F. Knight (1990). Pairs of Recursive Structures. Annals of Pure and Applied Logic 46 (3):211-234.
3. C. J. Ash (1986). Stability of Recursive Structures in Arithmetical Degrees. Annals of Pure and Applied Logic 32 (2):113-135.
4. C. J. Ash (1990). Labelling Systems and R.E. Structures. Annals of Pure and Applied Logic 47 (2):99-119.
5. C. J. Ash (1987). Categoricity in Hyperarithmetical Degrees. Annals of Pure and Applied Logic 34 (1):1-14.
6. C. J. Ash, P. Cholak & J. F. Knight (1997). Permitting, Forcing, and Copying of a Given Recursive Relation. Annals of Pure and Applied Logic 86 (3):219-236.
7. C. J. Ash & J. F. Knight (1994). Ramified Systems. Annals of Pure and Applied Logic 70 (3):205-221.
8. C. J. Ash & J. F. Knight (1997). Possible Degrees in Recursive Copies II. Annals of Pure and Applied Logic 87 (2):151-165.
We extend results of Harizanov and Barker. For a relation R on a recursive structure 𝒜, we give conditions guaranteeing that the image of R in a recursive copy of 𝒜 can be made to have arbitrary ∑α0 degree over Δα0. We give stronger conditions under which the image of R can be made ∑α0 degree as well. The degrees over Δα0 can be replaced by certain more general classes. We also generalize the Friedberg-Muchnik Theorem, giving conditions on a pair (...)
9. C. J. Ash & J. F. Knight (1995). Possible Degrees in Recursive Copies. Annals of Pure and Applied Logic 75 (3):215-221.
Let 𝒜 be a recursive structure, and let R be a recursive relation on 𝒜. Harizanov isolated a syntactical condition which is necessary and sufficient for 𝒜 to have recursive copies in which the image of R is r.e. of arbitrary r.e. degree. We had conjectured that a certain extension of Harizanov's syntactical condition would be necessary and sufficient for 𝒜 to have recursive copies in which the image of R is ∑α0 of arbitrary ∑α0 degree, but this is not the case. Here (...)
10. C. J. Ash & John W. Rosenthal (1986). Intersections of Algebraically Closed Fields. Annals of Pure and Applied Logic 30 (2):103-119.
11. C. J. Ash & J. F. Knight (1994). Mixed Systems. Journal of Symbolic Logic 59 (4):1383-1399.
12. C. J. Ash & R. G. Downey (1984). Decidable Subspaces and Recursively Enumerable Subspaces. Journal of Symbolic Logic 49 (4):1137-1145.
A subspace V of an infinite dimensional fully effective vector space V ∞ is called decidable if V is r.e. and there exists an r.e. W such that $V \oplus W = V_\infty$ . These subspaces of V ∞ are natural analogues of recursive subsets of ω. The set of r.e. subspaces forms a lattice L(V ∞ ) and the set of decidable subspaces forms a lower semilattice S(V ∞ ). We analyse S(V ∞ ) and its relationship with L(V (...)
13. C. J. Ash, J. F. Knight & J. B. Remmel (1997). Quasi-Simple Relations in Copies of a Given Recursive Structure. Annals of Pure and Applied Logic 86 (3):203-218.
14. C. J. Ash (1991). A Construction for Recursive Linear Orderings. Journal of Symbolic Logic 56 (2):673-683.
We re-express a previous general result in a way which seems easier to remember, using the terminology of infinite games. We show how this can be applied to construct recursive linear orderings, showing, for example, that if there is a Δ^0_{2β+1} linear ordering of type τ, then there is a recursive ordering of type ω^β · τ.
15. C. J. Ash (1992). Generalizations of Enumeration Reducibility Using Recursive Infinitary Propositional Sentences. Annals of Pure and Applied Logic 58 (3):173-184.
Ash, C.J., Generalizations of enumeration reducibility using recursive infinitary propositional sentences, Annals of Pure and Applied Logic 58 173–184. We consider the relation between sets A and B that for every set S if A is Σ0α in S then B is Σ0β in S. We show that this is equivalent to the condition that B is definable from A in a particular way involving recursive infinitary propositional sentences. When α = β = 1, this condition is that B is (...)
16. C. J. Ash (1975). Sentences with Finite Models. Mathematical Logic Quarterly 21 (1):401-404.
18. C. J. Ash (1994). On Countable Fractions From an Elementary Class. Journal of Symbolic Logic 59 (4):1410-1413.
19. C. J. Ash, J. F. Knight, B. Balcar, T. Jech, J. Zapletal & D. Rubric (1997). Hella, L., Kolaitis, PG and Luosto, K., How to Define a Linear. Annals of Pure and Applied Logic 87:269.
20. C. J. Ash (1984). Harrington Leo. Recursively Presentable Prime Models. Journal of Symbolic Logic 49 (2):671-672.
21. C. J. Ash (1995). Knight, JF, See Ash, CJ (3). Annals of Pure and Applied Logic 75:313.
22. C. J. Ash & J. F. Knight (1990). Pairs of Computable Structures. Annals of Pure and Applied Logic 46:211-234.
https://www.physicsforums.com/threads/quantum-tunneling-for-dummies.651382/ | # Quantum tunneling for dummies
1. Nov 11, 2012
### paulzhen
Hi,
I am a physics student but still a layman in QM. I got a task to explain quantum tunneling and I did my research for 3 days (I have read almost all the threads here regarding the topic). I found many different interpretations but I believe the Schrödinger equation is the correct answer.
What I need help with is: how can I show with the equation that the tunneling probability to the other side is non-zero?
And is the actual "probability" |ψ|2 or the coefficient |T|2? Sorry, there were so many versions of the explanations, I really got confused....
Thanks
2. Nov 12, 2012
### tom.stoer
The interpretation is clear; the particle can tunnel through a potential barrier, which is forbidden in classical mechanics.
Have a look at http://en.wikipedia.org/wiki/Quantum_tunnelling
There are different ways to address the problem, but I think the most transparent one is start with a problem that can be solved exactly. Consider a free particle with a non-vanishing potential V(x) = V0 for x in [-L, +L] and V(x) = 0 otherwise. Let the energy of the particle (the energy as eigenvalue of the Schrödinger equation) be E < V0, so classically the particle would be confined to x < -L and could never be observed in the region x > L. You have plane waves outside the potential step, where the relation between momentum and energy is given by p²/2m = E. For an incoming particle from x = -∞ you have
1) a wave propagating solely in positive x-direction for x > L
2) an incoming wave travelling in positive x-direction for x < -L; and a reflected wave travelling into negative x-direction for x < -L
So for x > L you have a wave function exp(ikx), for negative x you have a superposition of exp(ikx) and exp(-ikx). Now you can solve the Schrödinger equation explicitly (this is standard and is described in many textbooks).
3) in the region with non-vanishing potential you have an exponential decay like exp(-λx) because the relation p²/2m = E-V0 results in an imaginary wave number k = iλ.
Another idea is to use the WKB approximation; one can think of an arbitrary potential as consisting of an infinite number of infinitesimal steps. The solution is then to integrate over the region with non-vanishing potential [x1, x2]; the tunneling is exponentially suppressed with
$$\exp\left[ -\frac{\sqrt{2m}}{\hbar}\int_{x_1}^{x_2}dx\,\sqrt{V(x)-E}\right]$$
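As an aside not taken from the thread, the suppression factor above is easy to evaluate numerically. A minimal sketch in Python for a rectangular barrier; the particle energy, barrier height and width are made-up illustrative numbers, and squaring the factor to get a probability follows the usual WKB convention:

```python
import numpy as np

hbar = 1.054571817e-34   # J*s
m_e = 9.1093837015e-31   # electron mass, kg
eV = 1.602176634e-19     # J

# Made-up illustrative numbers: a 1 eV electron meeting a 2 eV, 1 nm wide barrier.
E, V0, width = 1.0 * eV, 2.0 * eV, 1.0e-9

x = np.linspace(0.0, width, 2001)
V = np.full_like(x, V0)                 # V(x) inside the barrier (constant here)

# WKB exponent from the formula above: (sqrt(2m)/hbar) * integral of sqrt(V(x)-E) dx
integrand = np.sqrt(V - E)
integral = np.sum((integrand[1:] + integrand[:-1]) / 2 * np.diff(x))  # trapezoid rule
exponent = np.sqrt(2 * m_e) / hbar * integral

print("amplitude suppression exp(-exponent)   =", np.exp(-exponent))
print("tunneling probability exp(-2*exponent) =", np.exp(-2 * exponent))
```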
3. Nov 12, 2012
### paulzhen
Thanks a lot! And thanks not treating me as a QM layman!
Basically I know what you are trying to say; the questions are:
1) how can you assume that there is "a wave propagating solely in positive x-direction for x > L"? I mean, if you are the 1st person to investigate this problem, and you do not know the effect of tunneling, the wave with E<V0 should be reflected at x=-L, so why would there be a wave traveling at x>L?
2) another noob question is: what do you mean by "solve the Schrödinger equation"? To get what result? to get ψ? or ψ2 or the other coefficients? And once I "solved" the equation, how do I know this is the evidence of tunneling?
What I always trying to do is:
1st, to confirm what kind of quantities (some said |ψ|2? others said T2, or are these 2 the same thing?) represent the probability of finding a particle;
2nd, compute it in an equation under boundary conditions (i.e. Schrödinger's?) and get a non-zero result. This will satisfy and convince me.
I am sorry pls be patient with me, I really need to know these. Thanks again.
4. Nov 12, 2012
### Staff: Mentor
You assume for the sake of argument that there is a "source" of some kind on the left, at some far away x < -L, and no "source" on the right (x > L). You start by assuming the most general possible outcome, namely that some of the wave is reflected and some of it is transmitted. That is, you don't prejudice the outcome by assuming the classical one which is no wave on the right. If the classical outcome is indeed the correct one, your solution will show it, that is, the amplitude of the outgoing wave on the right will turn out to be zero.
The Schrödinger equation is a differential equation: it gives the relationship between ψ(x,t) and its derivatives. (Do you know any calculus?) The solution of a differential equation is a function, in this case ψ(x,t). In this case you find that ψ ≠ 0 for x > L, which leads to a probability density |ψ|2 > 0 for x > L. This indicates tunnelling because the situation was set up so that there is no "source" for the wave on that side of the barrier. It has to come from the left, through the barrier.
I'd better point out that ψ ≠ 0 inside the barrier, also. You get a continuous function to the left, inside and right of the barrier, as in the example here:
http://hyperphysics.phy-astr.gsu.edu/hbase/quantum/barr.html
In general, |ψ(x,t)|2 gives you the probability density for finding the particle at a particular point x at time t. We usually solve the tunneling problem using simple plane waves on both sides of the barrier. For these, |ψ(x,t)|2 is a constant, namely |A|2, the square of the amplitude of the wave. The transmission probability is the square of the ratio of the amplitudes of the wave going out to the right and the wave coming in from the left.
T = |A(outgoing to right) / A(incoming from left)|2
Some books may use T for the ratio of the amplitudes, in which case the transmission probability is |T|2.
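As an aside not taken from the thread: for the rectangular barrier discussed above, matching ψ and its derivative at the two edges gives the standard textbook closed form T = [1 + V0² sinh²(κa)/(4E(V0−E))]⁻¹ for E < V0, with κ = √(2m(V0−E))/ħ and a the barrier width. A minimal sketch, using the same made-up numbers as before:

```python
import numpy as np

hbar = 1.054571817e-34   # J*s
m_e = 9.1093837015e-31   # electron mass, kg
eV = 1.602176634e-19     # J

def transmission(E, V0, width, m=m_e):
    """Exact transmission probability through a rectangular barrier for E < V0."""
    kappa = np.sqrt(2 * m * (V0 - E)) / hbar
    return 1.0 / (1.0 + V0**2 * np.sinh(kappa * width)**2 / (4 * E * (V0 - E)))

# Same made-up numbers as in the WKB sketch: 1 eV electron, 2 eV barrier, 1 nm wide.
print(transmission(1.0 * eV, 2.0 * eV, 1.0e-9))   # small but non-zero (~1e-4)
```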
For further details, you should look for a QM textbook. Setting up the math is fairly simple, but getting the amplitudes and the transmission probability requires a lot of grinding algebra. I think most books leave the details of the algebra as an exercise for the student. This book shows a lot of the details (you still have to fill in some steps):
https://www.amazon.com/Understandin...2651&sr=8-1&keywords=morrison+quantum+physics
Somewhere on the web there might be lecture notes that present some of the details.
Last edited by a moderator: May 6, 2017
5. Nov 12, 2012
### cosmic dust
Since the potential is defined differently in three regions of the x axis, you have to solve Schrödinger's time-independent equation Hψ = Eψ for three regions: A) x<-L, B) -L ≤ x ≤ L and C) x>L. Since the solutions for regions A and C give a non-zero ψ (which exponentially decays as you get away from region B), you deduce that there is a possibility to detect the particle outside the classically allowed region B. This is the tunneling effect: it is a direct mathematical consequence of the Schrödinger equation. Note that as V0 goes to infinity, the solutions in regions A and C go to zero.
6. Nov 12, 2012
### paulzhen
Thanks for all your replies, especially jtell, very good explanation! I cried.
https://www.science.gov/topicpages/f/future+collider+experiments.html | #### Sample records for future collider experiments
1. Precision electroweak physics at future collider experiments
SciTech Connect
Baur, U.; Demarteau, M.
1996-11-01
We present an overview of the present status and prospects for progress in electroweak measurements at future collider experiments leading to precision tests of the Standard Model of Electroweak Interactions. Special attention is paid to the measurement of the W mass, the effective weak mixing angle, and the determination of the top quark mass. Their constraints on the Higgs boson mass are discussed.
2. Future colliders
SciTech Connect
Palmer, R.B.; Gallardo, J.C.
1996-10-01
The high energy physics advantages, disadvantages and luminosity requirements of hadron (pp, p̄p), of lepton (e+e−, μ+μ−) and photon-photon colliders are considered. Technical arguments for increased energy in each type of machine are presented. Their relative size, and the implications of size on cost are discussed.
3. Towards future circular colliders
Benedikt, Michael; Zimmermann, Frank
2016-09-01
The Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) presently provides proton-proton collisions at a center-of-mass (c.m.) energy of 13 TeV. The LHC design was started more than 30 years ago, and its physics program will extend through the second half of the 2030's. The global Future Circular Collider (FCC) study is now preparing for a post-LHC project. The FCC study focuses on the design of a 100-TeV hadron collider (FCC-hh) in a new ~100 km tunnel. It also includes the design of a high-luminosity electron-positron collider (FCC-ee) as a potential intermediate step, and a lepton-hadron collider option (FCC-he). The scope of the FCC study comprises accelerators, technology, infrastructure, detectors, physics, concepts for worldwide data services, international governance models, and implementation scenarios. Among the FCC core technologies figure 16-T dipole magnets, based on Nb3Sn superconductor, for the FCC-hh hadron collider, and a highly-efficient superconducting radiofrequency system for the FCC-ee lepton collider. Following the FCC concept, the Institute of High Energy Physics (IHEP) in Beijing has initiated a parallel design study for an e+e− Higgs factory in China (CEPC), which is to be succeeded by a high-energy hadron collider (SPPC). At present a tunnel circumference of 54 km and a hadron collider c.m. energy of about 70 TeV are being considered. After a brief look at the LHC, this article reports the motivation and the present status of the FCC study, some of the primary design challenges and R&D subjects, as well as the emerging global collaboration.
4. A Photon Collider Experiment based on SLC
SciTech Connect
Gronberg, J
2003-11-01
Technology for a photon collider experiment at a future TeV-scale linear collider has been under development for many years. The laser and optics technology has reached the point where a GeV-scale photon collider experiment is now feasible. We report on the photon-photon luminosities that would be achievable at a photon collider experiment based on a refurbished Stanford Linear Collider.
5. Availability modeling approach for future circular colliders based on the LHC operation experience
Niemi, Arto; Apollonio, Andrea; Gutleber, Johannes; Sollander, Peter; Penttinen, Jussi-Pekka; Virtanen, Seppo
2016-12-01
Reaching the challenging integrated luminosity production goals of a future circular hadron collider (FCC-hh) and the high luminosity LHC (HL-LHC) requires a thorough understanding of today's most powerful high energy physics research infrastructure, the LHC accelerator complex at CERN. FCC-hh, a 4 times larger collider ring, aims at delivering 10-20 ab⁻¹ of integrated luminosity at 7 times higher collision energy. Since the identification of the key factors that impact availability and cost is far from obvious, a dedicated activity has been launched in the frame of the future circular collider study to develop models to study possible ways to optimize accelerator availability. This paper introduces the FCC reliability and availability study, which takes a fresh new look at assessing and modeling reliability and availability of particle accelerator infrastructures. The paper presents a probabilistic approach for Monte Carlo simulation of the machine operational cycle, schedule and availability for physics. The approach is based on best-practice, industrially applied reliability analysis methods. It relies on failure rate and repair time distributions to calculate impacts on availability. The main source of information for the study is coming from CERN accelerator operation and maintenance data. Recent improvements in LHC failure tracking help improve the accuracy of modeling of LHC performance. The model accuracy and prediction capabilities are discussed by comparing obtained results with past LHC operational data.
6. Physics at future hadron colliders
SciTech Connect
U. Baur et al.
2002-12-23
We discuss the physics opportunities and detector challenges at future hadron colliders. As guidelines for energies and luminosities we use the proposed luminosity and/or energy upgrade of the LHC (SLHC), and the Fermilab design of a Very Large Hadron Collider (VLHC). We illustrate the physics capabilities of future hadron colliders for a variety of new physics scenarios (supersymmetry, strong electroweak symmetry breaking, new gauge bosons, compositeness and extra dimensions). We also investigate the prospects of doing precision Higgs physics studies at such a machine, and list selected Standard Model physics rates.
7. Challenges in future linear colliders
SciTech Connect
2002-09-02
For decades, electron-positron colliders have been complementing proton-proton colliders. But the circular LEP, the largest e-e+ collider, represented an energy limit beyond which energy losses to synchrotron radiation necessitate moving to e-e+ linear colliders (LCs), thereby raising new challenges for accelerator builders. Japanese-American, German, and European collaborations have presented options for the Future Linear Collider (FLC). Key accelerator issues for any FLC option are the achievement of high enough energy and luminosity. Damping rings, taking advantage of the phenomenon of synchrotron radiation, have been developed as the means for decreasing beam size, which is crucial for ensuring a sufficiently high rate of particle-particle collisions. Related challenges are alignment and stability in an environment where even minute ground motion can disrupt performance, and the ability to monitor beam size. The technical challenges exist within a wider context of socioeconomic and political challenges, likely necessitating continued development of international collaboration among parties involved in accelerator-based physics.
SciTech Connect
Litvinenko, V.
2010-05-23
Outstanding research potential of electron-hadron colliders (EHC) was clearly demonstrated by the first - and the only - electron-proton collider HERA (DESY, Germany). Physics data from HERA revealed new previously unknown facets of Quantum Chromo-Dynamics (QCD). EHC is an ultimate microscope probing QCD in its natural environment, i.e. inside the hadrons. In contrast with hadrons, electrons are elementary particles with known initial state. Hence, scattering electrons from hadrons provides the clearest path to their secrets. It turns EHC into an ultimate machine for high precision QCD studies and opens access to rich physics with a great discovery potential: solving the proton spin puzzle, observing gluon saturation or physics beyond the standard model. Access to this physics requires high-energy high-luminosity EHCs and a wide reach in the center-of-mass (CM) energies. This paper gives a brief overview of four proposed electron-hadron colliders: ENC at GSI (Darmstadt, Germany), ELIC/MEIC at TJNAF (Newport News, VA, USA), eRHIC at BNL (Upton, NY, USA) and LHeC at CERN (Geneva, Switzerland). Future electron-hadron colliders promise to deliver very rich physics not only in the quantity but also in the precision. They are aiming at very high luminosity two-to-four orders of magnitude beyond the luminosity demonstrated by the very successful HERA. While ENC and LHeC are on opposite sides of the energy spectrum, eRHIC and ELIC are competing for becoming an electron-ion collider (EIC) in the U.S. Administrations of BNL and Jlab, in concert with US DoE office of Nuclear Physics, work on the strategy for down-selecting between eRHIC and ELIC. The ENC, EIC and LHeC QCD physics programs to a large degree are complementary to each other and to the LHC physics. In the last decade, an Electron Ion Collider (EIC) collaboration held about 25 collaboration meetings to develop the physics program for an EIC with CM energy ~100 GeV. One of these meetings was held at GSI, where the ENC topic was in the
9. Status of the Future Circular Collider Study
Benedikt, Michael
2016-03-01
Following the 2013 update of the European Strategy for Particle Physics, the international Future Circular Collider (FCC) Study has been launched by CERN as host institute, to design an energy frontier hadron collider (FCC-hh) in a new 80-100 km tunnel with a centre-of-mass energy of about 100 TeV, an order of magnitude beyond the LHC's, as a long-term goal. The FCC study also includes the design of a 90-350 GeV high-luminosity lepton collider (FCC-ee) installed in the same tunnel, serving as Higgs, top and Z factory, as a potential intermediate step, as well as an electron-proton collider option (FCC-he). The physics cases for such machines will be assessed and concepts for experiments will be developed in time for the next update of the European Strategy for Particle Physics by the end of 2018. The presentation will summarize the status of machine designs and parameters and discuss the essential technical components to be developed in the frame of the FCC study. Key elements are superconducting accelerator-dipole magnets with a field of 16 T for the hadron collider and high-power, high-efficiency RF systems for the lepton collider. In addition the unprecedented beam power presents special challenges for the hadron collider for all aspects of beam handling and machine protection. First conclusions of geological investigations and implementation studies will be presented. The status of the FCC collaboration and the further planning for the study will be outlined.
10. Development of Large Area Gas Electron Multiplier Detector and Its Application to a Digital Hadron Calorimeter for Future Collider Experiments
SciTech Connect
Yu, Jaehoon; White, Andrew
2014-09-25
The UTA High Energy Physics Group conducted generic detector development based on large area, very thin and high sensitivity gas detector using gas electron multiplier (GEM) technology. This is in preparation for a use as a sensitive medium for sampling calorimeters in future collider experiments at the Energy Frontier as well as part of the tracking detector in Intensity Frontier experiments. We also have been monitoring the long term behavior of one of the prototype detectors (30cmx30cm) read out by the SLAC-developed 13-bit KPiX analog chip over three years and have made presentations of results at various APS meetings. While the important next step was the development of large area (1m x 1m) GEM planes, we also have looked into opportunities of applying this technology to precision tracking detectors to significantly improve the performance of the Range Stack detector for CP violation experiments and to provide an amplification layer for the liquid Argon Time Projection Chamber in the LBNE experiment. We have jointly developed 33cmx100cm large GEM foils with the CERN gas detector development group to construct 33cm x100cm unit chambers. Three of these unit chambers will be put together to form a 1m x 1m detector plane. Following characterization of one 33cmx100cm unit chamber prototype, a total of five 1m x 1m planes will be constructed and inserted into an existing 1m3 RPC DHCAL stack to test the performance of the new GEM DHCAL in particle beams. The large area GEM detector we planned to develop in this proposal not only gives an important option to DHCAL for future collider experiments but also the potential to expand its use to Intensity Frontier and Cosmic Frontier experiments as high efficiency, high amplification anode planes for liquid Argon time projection chambers. Finally, thanks to its sensitivity to X-rays and other neutral radiations and its light-weight characteristics, the large area GEM has a great potential for the use in medical imaging and
11. COLLIDE: Collisions into Dust Experiment
NASA Technical Reports Server (NTRS)
Colwell, Joshua E.
1999-01-01
The Collisions Into Dust Experiment (COLLIDE) was completed and flew on STS-90 in April and May of 1998. After the experiment was returned to Earth, the data and experiment were analyzed. Some anomalies occurred during the flight which prevented a complete set of data from being obtained. However, the experiment did meet its criteria for scientific success and returned surprising results on the outcomes of very low energy collisions into powder. The attached publication, "Low Velocity Microgravity Impact Experiments into Simulated Regolith," describes in detail the scientific background, engineering, and scientific results of COLLIDE. Our scientific conclusions, along with a summary of the anomalies which occurred during flight, are contained in that publication. We offer it as our final report on this grant.
12. Optimizing integrated luminosity of future hadron colliders
Benedikt, Michael; Schulte, Daniel; Zimmermann, Frank
2015-10-01
The integrated luminosity, a key figure of merit for any particle-physics collider, is closely linked to the peak luminosity and to the beam lifetime. The instantaneous peak luminosity of a collider is constrained by a number of boundary conditions, such as the available beam current, the maximum beam-beam tune shift with acceptable beam stability and reasonable luminosity lifetime (i.e., the empirical "beam-beam limit"), or the event pileup in the physics detectors. The beam lifetime at high-luminosity hadron colliders is largely determined by particle burn off in the collisions. In future highest-energy circular colliders synchrotron radiation provides a natural damping mechanism, which can be exploited for maximizing the integrated luminosity. In this article, we derive analytical expressions describing the optimized integrated luminosity, the corresponding optimum store length, and the time evolution of relevant beam parameters, without or with radiation damping, while respecting a fixed maximum value for the total beam-beam tune shift or for the event pileup in the detector. Our results are illustrated by examples for the proton-proton luminosity of the existing Large Hadron Collider (LHC) at its design parameters, of the High-Luminosity Large Hadron Collider (HL-LHC), and of the Future Circular Collider (FCC-hh).
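As an illustration not taken from the abstract: in the simplest version of this optimization, with proton burn-off as the only loss mechanism and no radiation damping, the luminosity decays as L(t) = L0/(1 + t/τ)² and the store length that maximizes the average luminosity is t_opt = √(τ·t_ta), where t_ta is the turnaround time. A minimal sketch with illustrative LHC-like numbers (the formulas are the standard burn-off-only result, not the paper's full treatment):

```python
import numpy as np

def optimal_store(L0, N0, sigma_tot, n_ip, t_turnaround):
    """Burn-off-only model: L(t) = L0 / (1 + t/tau)**2 with tau = N0 / (n_ip * sigma_tot * L0).
    The store length maximizing luminosity averaged over (store + turnaround) time
    is t_opt = sqrt(tau * t_turnaround)."""
    tau = N0 / (n_ip * sigma_tot * L0)
    t_opt = np.sqrt(tau * t_turnaround)
    lumi_per_store = L0 * tau * t_opt / (tau + t_opt)   # integral of L(t) over one store
    return tau, t_opt, lumi_per_store

# Illustrative LHC-like numbers (not taken from the paper):
L0 = 1.0e34            # peak luminosity, cm^-2 s^-1
N0 = 2808 * 1.15e11    # protons per beam
sigma_tot = 1.0e-25    # ~100 mb total pp cross section, cm^2
tau, t_opt, _ = optimal_store(L0, N0, sigma_tot, n_ip=2, t_turnaround=5 * 3600)
print(f"burn-off time tau = {tau/3600:.0f} h, optimal store length = {t_opt/3600:.0f} h")
```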
13. On the Future High Energy Colliders
SciTech Connect
2015-09-28
High energy particle colliders have been in the forefront of particle physics for more than three decades. At present the near term US, European and international strategies of the particle physics community are centered on full exploitation of the physics potential of the Large Hadron Collider (LHC) through its high-luminosity upgrade (HL-LHC). A number of the next generation collider facilities have been proposed and are currently under consideration for the medium and far-future of accelerator-based high energy physics. In this paper we offer a uniform approach to evaluation of various accelerators based on the feasibility of their energy reach, performance potential and cost range.
14. Nuclear collisions at the Future Circular Collider
Armesto, N.; Dainese, A.; d'Enterria, D.; Masciocchi, S.; Roland, C.; Salgado, C. A.; van Leeuwen, M.; Wiedemann, U. A.
2016-12-01
The Future Circular Collider is a new proposed collider at CERN with centre-of-mass energies around 100 TeV in the pp mode. Ongoing studies aim at assessing its physics potential and technical feasibility. Here we focus on updates in physics opportunities accessible in pA and AA collisions not covered in previous Quark Matter contributions, including Quark-Gluon Plasma and gluon saturation studies, novel hard probes of QCD matter, and photon-induced collisions.
15. Seismic studies for Fermilab future collider projects
SciTech Connect
Lauh, J.; Shiltsev, V.
1997-11-01
Ground motion can cause significant beam emittance growth and orbit oscillations in large hadron colliders due to a vibration of numerous focusing magnets. Larger accelerator ring circumference leads to smaller revolution frequency and, e.g. for the Fermilab Very Large Hadron Collider (VLHC), 50-150 Hz vibrations are of particular interest as they are resonant with the beam betatron frequency. Seismic measurements at an existing large accelerator under operation can help to estimate the vibrations generated by the technical systems in future machines. Comparison of noisy and quiet microseismic conditions might be useful for proper choice of technical solutions for future colliders. This article presents results of wide-band seismic measurements at the Fermilab site, namely, in the tunnel of the Tevatron and on the surface nearby, and in two deep tunnels in the Illinois dolomite which is thought to be a possible geological environment of the future accelerators.
16. RF pulse compression for future linear colliders
SciTech Connect
Wilson, P.B.
1995-05-01
Future (nonsuperconducting) linear colliders will require very high values of peak rf power per meter of accelerating structure. The role of rf pulse compression in producing this power is examined within the context of overall rf system design for three future colliders at energies of 1.0--1.5 TeV, 5 TeV and 25 TeV. In order to keep the average AC input power and the length of the accelerator within reasonable limits, a collider in the 1.0--1.5 TeV energy range will probably be built at an x-band rf frequency, and will require a peak power on the order of 150--200 MW per meter of accelerating structure. A 5 TeV collider at 34 GHz with a reasonable length (35 km) and AC input power (225 MW) would require about 550 MW per meter of structure. Two-beam accelerators can achieve peak powers of this order by applying dc pulse compression techniques (induction linac modules) to produce the drive beam. Klystron-driven colliders achieve high peak power by a combination of dc pulse compression (modulators) and rf pulse compression, with about the same overall rf system efficiency (30--40%) as a two-beam collider. A high gain (6.8) three-stage binary pulse compression system with high efficiency (80%) is described, which (compared to a SLED-II system) can be used to reduce the klystron peak power by about a factor of two, or alternately, to cut the number of klystrons in half for a 1.0--1.5 TeV x-band collider. For a 5 TeV klystron-driven collider, a high gain, high efficiency rf pulse compression system is essential.
17. When Waves Collide: Future Conflict
DTIC Science & Technology
1995-01-01
smaller but highly mobile. The Air Force will turn to space or run the risk of extinction. New weapons will be smarter, but some ancient varieties will...future will closely resemble the present or recent past. In other words, it appears that the dinosaur that we know as the Armed Forces hopes to escape... extinction or radical alteration by becoming a minidinosaur. It is unlikely that this approach will succeed. Things will change. The Armed Forces
18. From the LHC to Future Colliders
SciTech Connect
De Roeck, A.; Ellis, J.; Grojean, C.; Heinemeyer, S.; Jakobs, K.; Weiglein, G.; Azuelos, G.; Dawson, S.; Gripaios, B.; Han, T.; Hewett, J.; Lancaster, M.; Mariotti, C.; Moortgat, F.; Moortgat-Pick, G.; Polesello, G.; Riemann, S.; Assamagan, K.; Bechtle, P.; Carena, M.; Chachamis, G.; /more authors..
2010-06-11
Discoveries at the LHC will soon set the physics agenda for future colliders. This report of a CERN Theory Institute includes the summaries of Working Groups that reviewed the physics goals and prospects of LHC running with 10 to 300 fb⁻¹ of integrated luminosity, of the proposed sLHC luminosity upgrade, of the ILC, of CLIC, of the LHeC and of a muon collider. The four Working Groups considered possible scenarios for the first 10 fb⁻¹ of data at the LHC in which (i) a state with properties that are compatible with a Higgs boson is discovered, (ii) no such state is discovered either because the Higgs properties are such that it is difficult to detect or because no Higgs boson exists, (iii) a missing-energy signal beyond the Standard Model is discovered as in some supersymmetric models, and (iv) some other exotic signature of new physics is discovered. In the contexts of these scenarios, the Working Groups reviewed the capabilities of the future colliders to study in more detail whatever new physics may be discovered by the LHC. Their reports provide the particle physics community with some tools for reviewing the scientific priorities for future colliders after the LHC produces its first harvest of new physics from multi-TeV collisions.
19. Future high energy colliders symposium. Summary report
SciTech Connect
Parsa, Z.
1996-12-31
A Future High Energy Colliders Symposium was held October 21-25, 1996 at the Institute for Theoretical Physics (ITP) in Santa Barbara. This was one of the 3 symposia hosted by the ITP and supported by its sponsor, the National Science Foundation, as part of a 5 month program on New Ideas for Particle Accelerators. The long term program and symposia were organized and coordinated by Dr. Zohreh Parsa of Brookhaven National Laboratory/ITP. The purpose of the symposium was to discuss the future direction of high energy physics by bringing together leaders from the theoretical, experimental and accelerator physics communities. Their talks provided personal perspectives on the physics objectives and the technology demands of future high energy colliders. Collectively, they formed a vision for where the field should be heading and how it might best reach its objectives.
20. Searches for new gauge bosons at future colliders
SciTech Connect
Rizzo, T.G.
1996-09-01
The search reaches for new gauge bosons at future hadron and lepton colliders are summarized for a variety of extended gauge models. Experiments at these energies will vastly improve over present limits and will easily discover a Z′ and/or W′ in the multi-TeV range.
1. Future Accelerators, Muon Colliders, and Neutrino Factories
SciTech Connect
Richard A Carrigan, Jr.
2001-12-19
Particle physics is driven by five great topics. Neutrino oscillations and masses are now at the fore. The standard model with extensions to supersymmetry and a Higgs to generate mass explains much of the field. The origins of CP violation are not understood. The possibility of extra dimensions has raised tantalizing new questions. A fifth topic lurking in the background is the possibility of something totally different. Many of the questions raised by these topics require powerful new accelerators. It is not an overstatement to say that for some of the issues, the accelerator is almost the experiment. Indeed some of the questions require machines beyond our present capability. As this volume attests, there are parts of the particle physics program that have been significantly advanced without the use of accelerators such as the subject of neutrino oscillations and many aspects of the particle-cosmology interface. At this stage in the development of physics, both approaches are needed and important. This chapter first reviews the status of the great accelerator facilities now in operation or coming on within the decade. Next, midrange possibilities are discussed including linear colliders with the adjunct possibility of gamma-gamma colliders, muon colliders, with precursor neutrino factories, and very large hadron colliders. Finally visionary possibilities are considered including plasma and laser accelerators.
2. Testing electroweak baryogenesis with future colliders
Curtin, David; Meade, Patrick; Yu, Chiu-Tien
2014-11-01
Electroweak Baryogenesis (EWBG) is a compelling scenario for explaining the matter-antimatter asymmetry in the universe. Its connection to the electroweak phase transition makes it inherently testable. However, completely excluding this scenario can seem difficult in practice, due to the sheer number of proposed models. We investigate the possibility of postulating a "no-lose" theorem for testing EWBG in future e+e− or hadron colliders. As a first step we focus on a factorized picture of EWBG which separates the sources of a stronger phase transition from those that provide new sources of CP violation. We then construct a "nightmare scenario" that generates a strong first-order phase transition as required by EWBG, but is very difficult to test experimentally. We show that a 100 TeV hadron collider is both necessary and possibly sufficient for testing the parameter space of the nightmare scenario that is consistent with EWBG.
3. Searches for scalar and vector leptoquarks at future hadron colliders
SciTech Connect
Rizzo, T.G.
1996-09-01
The search reaches for both scalar (S) and vector (V) leptoquarks at future hadron colliders are summarized. In particular the authors evaluate the production cross sections of both leptoquark types at TeV33 and LHC as well as the proposed 60 and 200 TeV colliders through both quark-antiquark annihilation and gluon-gluon fusion: qq̄, gg → SS, VV. Experiments at these machines should easily discover such particles if their masses are not in excess of the few TeV range.
4. SiW ECAL for future e+e- collider
Balagura, V.; Bilokin, S.; Bonis, J.; Boudry, V.; Brient, J.-C.; Callier, S.; Cheng, T.; Cornat, R.; De La Taille, C.; Doan, T. H.; Frotin, M.; Gastaldi, F.; Hirai, H.; Jain, S.; Jain, Sh.; Lacour, D.; Lavergne, L.; Lleres, A.; Magniette, F.; Mastrolorenzo, L.; Nanni, J.; Poeschl, R.; Pozdnyakov, A.; Psallidas, A.; Ruan, M.; Rubio-Roy, M.; Seguin-Moreau, N.; Shpak, K.; Suehara, T.; Thiebault, A.; Wright, J.; Yu, D.
2017-07-01
Calorimeters with silicon detectors have many unique features and are proposed for several world-leading experiments. We discuss the tests of the first three 18×18 cm² layers segmented into 1024 pixels of the technological prototype of the silicon-tungsten electromagnetic calorimeter for a future e+e− collider. The tests have been performed in November 2015 at the CERN SPS beam line.
5. ep Collider experiments and physics
SciTech Connect
Atwood, D.; Baur, U.; Bluemlein, J.
1992-12-31
The physics prospects for detectors at ep colliders are examined. Colliders considered include the HERA facility at DESY, LEP I × LHC, and LEP II × LHC at CERN. Physics topics studied include machine energy and polarization, as well as detector resolution, calibration, jet identification and backgrounds from beam-gas interactions. QCD topics include measurements of the quark and gluon structure functions and parton distributions, as well as the expansion of the observable cross section into angular functions. Electroweak topics include measurements of the weak mixing angle, radiative corrections, and WWγ (WWZ) couplings. Topics beyond the standard model include observation of new Zs, indirect production of Leptoquarks, pair production of sfermions and searches for R-parity-violating SUSY particle production.
6. Research and Development of Future Muon Collider
SciTech Connect
Yonehara, K.; /Fermilab
2012-05-01
Muon collider is a considerable candidate of the next generation high-energy lepton collider machine. A novel accelerator technology must be developed to overcome several intrinsic issues of muon acceleration. Recent research and development of critical beam elements for a muon accelerator, especially muon beam phase space ionization cooling channel, are reviewed in this paper.
7. Search for lepton flavor violation at future lepton colliders
Cho, Gi-Chol; Shimo, Hanako
2017-08-01
Lepton flavor violating (LFV) processes, e+e-→ e+ℓ- and e-e-→ e-ℓ- (ℓ = μ or τ), via four-Fermi contact interactions at future International Linear Collider (ILC) are studied. Taking account of previous experimental results of LFV processes μ → 3e and τ → 3e, we find that the upper limits on the LFV parameters for ℓ = τ could be improved at the ILC experiment using the polarized electron beam. The improvement of the upper limits could be nearly an order of magnitude as compared to previous ones.
8. Beyond standard model physics at current and future colliders
Liu, Zhen
The Large Hadron Collider (LHC), a multinational experiment which began running in 2009, is highly expected to discover new physics that will help us understand the nature of the universe and begin to find solutions to many of the unsolved puzzles of particle physics. For over 40 years the Standard Model has been the accepted theory of elementary particle physics, except for one unconfirmed component, the Higgs boson. The experiments at the LHC have recently discovered this Standard-Model-like Higgs boson. This discovery is one of the most exciting achievements in elementary particle physics. Yet, a profound question remains: Is this rather light, weakly-coupled boson nothing but a Standard Model Higgs or a first manifestation of a deeper theory? Also, the recent discoveries of neutrino mass and mixing, experimental evidences of dark matter and dark energy, matter-antimatter asymmetry, indicate that our understanding of fundamental physics is currently incomplete. For the next decade and more, the LHC and future colliders will be at the cutting-edge of particle physics discoveries and will shed light on many of these unanswered questions. There are many promising beyond-Standard-Model theories that may help solve the central puzzles of particle physics. To fill the gaps in our knowledge, we need to know how these theories will manifest themselves in controlled experiments, such as high energy colliders. I discuss how we can probe fundamental physics at current and future colliders directly through searches for new phenomena such as resonances, rare Higgs decays, exotic displaced signatures, and indirectly through precision measurements on Higgs in this work. I explore beyond standard model physics effects from different perspectives, including explicit models such as supersymmetry, generic models in terms of resonances, as well as effective field theory approach in terms of higher dimensional operators. This work provides a generic and broad overview of the physics
9. Far Future Colliders and Required R&D Program
SciTech Connect
Shiltsev, V.; /Fermilab
2012-06-01
Particle colliders for high energy physics have been in the forefront of scientific discoveries for more than half a century. The accelerator technology of the collider has progressed immensely, while the beam energy, luminosity, facility size and the cost have grown by several orders of magnitude. The method of colliding beams has not fully exhausted its potential but its pace of progress has greatly slowed down. In this paper we very briefly review the R&D toward near future colliders and make an attempt to look beyond the current horizon and outline the changes in the paradigm required for the next breakthroughs.
10. Computing and data handling requirements for SSC (Superconducting Super Collider) and LHC (Large Hadron Collider) experiments
SciTech Connect
Lankford, A.J.
1990-05-01
A number of issues for computing and data handling in the online in environment at future high-luminosity, high-energy colliders, such as the Superconducting Super Collider (SSC) and Large Hadron Collider (LHC), are outlined. Requirements for trigger processing, data acquisition, and online processing are discussed. Some aspects of possible solutions are sketched. 6 refs., 3 figs.
11. Muon Collider Overview: Progress and Future Plans
SciTech Connect
Gallardo, J.; Palmer, R.; Sessler, A.; Tollestrup, A.
1998-06-01
Besides continued work on the parameters of a 3-4 and 0.5 TeV center of mass (COM) collider, many studies are now concentrating on a machine near 100 GeV (COM) that could be a factory for the s-channel production of Higgs particles. We mention the research on the various components in such muon colliders, starting from the proton accelerator needed to generate pions from a heavy-Z target and proceeding through the phase rotation and decay (π → μν_μ) channel, muon cooling, acceleration and storage in a collider ring and the collider detector. We also mention theoretical and experimental R & D plans for the next several years that should lead to a better understanding of the design and feasibility issues for all of the components. This note is a summary of a report[1] updating the progress on the R & D since the Feasibility Study of Muon Colliders presented at the Workshop Snowmass'96.[2]
12. COLLIDE-2: Collisions Into Dust Experiment-2
NASA Technical Reports Server (NTRS)
Colwell, Joshua E.
2002-01-01
The Collisions Into Dust Experiment (COLLIDE-2) was the second flight of the COLLIDE payload. The payload performs six low-velocity impact experiments to study the collisions that are prevalent in planetary ring systems and in the early stages of planet formation. Each impact experiment is into a target of granular material, and the impacts occur at speeds between 1 and 100 cm/s in microgravity and in a vacuum. The experiments are recorded on digital videotape which is later analyzed. During the period of performance a plan was developed to address some of the technical issues that prevented the first flight of COLLIDE from being a complete success, and also to maximize the scientific return based on the science results from the first flight. The experiment was modified following a series of reviews of the design plan, and underwent extensive testing. The data from the experiment show that the primary goal of identifying transition regimes for low-velocity impacts based on cratering versus accretion was achieved. Following a brief period of storage, the experiment flew as a Hitchhiker payload on the MACH-1 Hitchhiker bridge on STS-108 in December 2001. These data have been analyzed and submitted for publication. That manuscript is attached to this report. The experiment was retrieved in January 2002, and all six impact experiments functioned nominally. Preliminary results were reported at the Lunar and Planetary Science Conference.
13. Suppressing Electron Cloud in Future Linear Colliders
SciTech Connect
Pivi, M; Kirby, R.E.; Raubenheimer, T.O.; Le Pimpec, F.; /PSI, Villigen
2005-05-27
Any accelerator circulating positively charged beams can suffer from a build-up of an electron cloud (EC) in the beam pipe. The cloud develops through ionization of residual gases, synchrotron radiation and secondary electron emission and, when severe, can cause instability, emittance blow-up or loss of the circulating beam. The electron cloud is potentially a luminosity limiting effect for both the Large Hadron Collider (LHC) and the International Linear Collider (ILC). For the ILC positron damping ring, the development of the electron cloud must be suppressed. This paper discusses the state-of-the-art of the ongoing SLAC and international R&D program to study potential remedies.
14. DEPFET detectors for future electron-positron colliders
Marinas, C.
2015-11-01
The DEPFET Collaboration develops highly granular, ultra-thin pixel detectors for outstanding vertex reconstruction at future electron-positron collider experiments. A DEPFET sensor, by the integration of a field effect transistor on a fully depleted silicon bulk, provides simultaneous position sensitive detector capabilities and in-pixel amplification. The characterization of the latest DEPFET prototypes has proven that an adequate signal-to-noise ratio and excellent single point resolution can be achieved for a sensor thickness of 50 micrometers. The close to final auxiliary ASICs have been produced and found to operate a DEPFET pixel detector of the latest generation with the required read-out speed. A complete detector concept is being developed for the Belle II experiment at the new Japanese super flavor factory. DEPFET is not only the technology of choice for the Belle II vertex detector, but also a prime candidate for the ILC. Therefore, in this contribution, the status of the DEPFET R&D project is reviewed in the light of the requirements of the vertex detector at a future electron-positron collider.
15. The future of the Large Hadron Collider and CERN.
PubMed
Heuer, Rolf-Dieter
2012-02-28
This paper presents the Large Hadron Collider (LHC) and its current scientific programme and outlines options for high-energy colliders at the energy frontier for the years to come. The immediate plans include the exploitation of the LHC at its design luminosity and energy, as well as upgrades to the LHC and its injectors. This may be followed by a linear electron-positron collider, based on the technology being developed by the Compact Linear Collider and the International Linear Collider collaborations, or by a high-energy electron-proton machine. This contribution describes the past, present and future directions, all of which have a unique value to add to experimental particle physics, and concludes by outlining key messages for the way forward.
16. Towards a Future Linear Collider and The Linear Collider Studies at CERN
ScienceCinema
None
2016-07-12
During the week 18-22 October, more than 400 physicists will meet at CERN and in the CICG (International Conference Centre Geneva) to review the global progress towards a future linear collider. The 2010 International Workshop on Linear Colliders will study the physics, detectors and accelerator complex of a linear collider covering both the CLIC and ILC options. Among the topics presented and discussed will be the progress towards the CLIC Conceptual Design Report in 2011, the ILC Technical Design Report in 2012, physics and detector studies linked to these reports, and an increasing number of common working group activities. The seminar will give an overview of these topics and also CERN's linear collider studies, focusing on current activities and initial plans for the period 2011-16. n.b: The Council Chamber is also reserved for this colloquium with a live transmission from the Main Auditorium.
17. The signatures of doubly charged leptons in future linear colliders
Guo, Yu-Chen; Yue, Chong-Xing; Liu, Zhi-Cheng
2017-08-01
We discuss the production of the doubly charged leptons in future linear electron positron colliders, such as the International Linear Collider and Compact Linear Collider. Such states are introduced in extended weak-isospin multiplets by composite models. We discuss the production cross section of e⁻γ → L⁻⁻W⁺ and carry out analyses for hadronic, semi-leptonic and pure leptonic channels based on the full simulation performance of the silicon detector. The 3- and 5-sigma statistical significance exclusion curves are provided in the model parameter space. It is found that the hadronic channel could offer the most promising detectable signature.
18. RF power generation for future linear colliders
SciTech Connect
Fowkes, W.R.; Allen, M.A.; Callin, R.S.; Caryotakis, G.; Eppley, K.R.; Fant, K.S.; Farkas, Z.D.; Feinstein, J.; Ko, K.; Koontz, R.F.; Kroll, N.; Lavine, T.L.; Lee, T.G.; Miller, R.H.; Pearson, C.; Spalek, G.; Vlieks, A.E.; Wilson, P.B.
1990-06-01
The next linear collider will require 200 MW of rf power per meter of linac structure at relatively high frequency to produce an accelerating gradient of about 100 MV/m. The higher frequencies result in a higher breakdown threshold in the accelerating structure hence permit higher accelerating gradients per meter of linac. The lower frequencies have the advantage that high peak power rf sources can be realized. 11.42 GHz appears to be a good compromise and the effort at the Stanford Linear Accelerator Center (SLAC) is being concentrated on rf sources operating at this frequency. The filling time of the accelerating structure for each rf feed is expected to be about 80 ns. Under serious consideration at SLAC is a conventional klystron followed by a multistage rf pulse compression system, and the Crossed-Field Amplifier. These are discussed in this paper.
19. Crystal Ball: On the Future High Energy Colliders
SciTech Connect
2015-09-20
High energy particle colliders have been in the forefront of particle physics for more than three decades. At present the near term US, European and international strategies of the particle physics community are centered on full exploitation of the physics potential of the Large Hadron Collider (LHC) through its high-luminosity upgrade (HL-LHC). A number of next generation collider facilities have been proposed and are currently under consideration for the medium- and far-future of the accelerator-based high energy physics. In this paper we offer a uniform approach to evaluation of various accelerators based on the feasibility of their energy reach, performance reach and cost range. We briefly review such post-LHC options as linear e+e- colliders in Japan (ILC) or at CERN (CLIC), muon collider, and circular lepton or hadron colliders in China (CepC/SppC) and Europe (FCC). We conclude with a look into ultimate energy reach accelerators based on plasmas and crystals, and some perspectives for the far future of accelerator-based particle physics.
20. Displaced vertex searches for sterile neutrinos at future lepton colliders
Antusch, Stefan; Cazzato, Eros; Fischer, Oliver
2016-12-01
We investigate the sensitivity of future lepton colliders to displaced vertices from the decays of long-lived heavy (almost sterile) neutrinos with electroweak scale masses and detectable time of flight. As future lepton colliders we consider the FCC-ee, the CEPC, and the ILC, searching at the Z-pole and at the center-of-mass energies of 240, 350 and 500 GeV. For a realistic discussion of the detector response to the displaced vertex signal and the Standard Model background we consider the ILC's Silicon Detector (SiD) as benchmark for the future lepton collider detectors. We find that displaced vertices constitute a powerful search channel for sterile neutrinos, sensitive to squared active-sterile mixing angles as small as 10^-11.
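For orientation, the displaced-vertex signature described in the abstract above is governed by the lab-frame decay length L = βγcτ of the heavy neutrino. A minimal sketch of this relation, with purely illustrative numbers that are not taken from the paper:

```python
import math

def lab_decay_length(mass_gev, energy_gev, ctau_m):
    """Mean lab-frame decay length L = beta * gamma * c * tau, with the proper
    decay length c*tau supplied in metres."""
    gamma = energy_gev / mass_gev
    beta = math.sqrt(max(0.0, 1.0 - 1.0 / gamma**2))
    return beta * gamma * ctau_m

# Purely illustrative numbers (not from the paper): a 30 GeV heavy neutrino
# carrying about 45 GeV of energy at the Z pole, with a 1 mm proper decay length.
print(f"L = {lab_decay_length(30.0, 45.5, 1e-3) * 1e3:.2f} mm")
```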
1. Cooling of electronics in collider experiments
SciTech Connect
Richard P. Stanek et al.
2003-11-07
Proper cooling of detector electronics is critical to the successful operation of high-energy physics experiments. Collider experiments offer unique challenges based on their physical layouts and hermetic design. Cooling systems can be categorized by the type of detector with which they are associated, their primary mode of heat transfer, the choice of active cooling fluid, their heat removal capacity and the minimum temperature required. One of the more critical detector subsystems to require cooling is the silicon vertex detector, either pixel or strip sensors. A general design philosophy is presented along with a review of the important steps to include in the design process. Factors affecting the detector and cooling system design are categorized. A brief review of some existing and proposed cooling systems for silicon detectors is presented to help set the scale for the range of system designs. Fermilab operates two collider experiments, CDF & D0, both of which have silicon systems embedded in their detectors. A review of the existing silicon cooling system designs and operating experience is presented along with a list of lessons learned.
2. Status and future directions for advanced accelerator research - conventional and non-conventional collider concepts
SciTech Connect
Siemann, R.H.
1997-01-01
The relationship between advanced accelerator research and future directions for particle physics is discussed. Comments are made about accelerator research trends in hadron colliders, muon colliders, and e{sup +}e{sup {minus}} linear colliders.
3. ISR effects for resonant Higgs production at future lepton colliders
Greco, Mario; Han, Tao; Liu, Zhen
2016-12-01
We study the effects of the initial state radiation on the s-channel Higgs boson resonant production at μ+μ- and e+e- colliders by convoluting with the beam energy spread profile of the collider and the Breit-Wigner resonance profile of the signal. We assess their impact on both the Higgs signal and SM backgrounds for the leading decay channels h → bb̄, WW*. Our study improves the existing analyses of the proposed future resonant Higgs factories and provides further guidance for the accelerator designs with respect to the physical goals.
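The convolution described in the abstract above can be illustrated numerically: fold a Breit-Wigner resonance profile with a Gaussian beam-energy-spread profile and compare the on-peak rate for different spreads. The sketch below uses purely illustrative mass, width and spread values; none of them are taken from the paper.

```python
import numpy as np

def breit_wigner(e, m, gamma):
    """Non-relativistic Breit-Wigner line shape (arbitrary normalisation)."""
    return 1.0 / ((e - m) ** 2 + gamma ** 2 / 4.0)

def gaussian(e, mu, sigma):
    return np.exp(-0.5 * ((e - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def on_peak_rate(m, gamma, spread):
    """Breit-Wigner folded with a Gaussian beam-energy spread, evaluated on peak."""
    e = np.linspace(m - 20.0 * spread, m + 20.0 * spread, 4001)
    de = e[1] - e[0]
    return float(np.sum(breit_wigner(e, m, gamma) * gaussian(e, m, spread)) * de)

# Illustrative values only: a 125 GeV resonance with a 4 MeV width,
# folded with beam energy spreads of 1, 5 and 20 MeV.
m_h, gamma_h = 125.0, 0.004
for spread in (0.001, 0.005, 0.020):
    print(f"spread = {spread * 1e3:4.0f} MeV -> relative on-peak rate {on_peak_rate(m_h, gamma_h, spread):10.1f}")
```

The qualitative message is simply that a beam energy spread much wider than the resonance width washes out the on-peak rate, which is why the interplay between the two profiles matters for resonant Higgs factories.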
4. Alternate approaches to future electron-positron linear colliders
SciTech Connect
Loew, G.A.
1998-07-01
The purpose of this article is two-fold: to review the current international status of various design approaches to the next generation of e{sup +}e{sup {minus}} linear colliders, and on the occasion of his 80th birthday, to celebrate Richard B. Neal's many contributions to the field of linear accelerators. As it turns out, combining these two tasks is a rather natural enterprise because of Neal's long professional involvement and insight into many of the problems and options which the international e{sup +}e{sup {minus}} linear collider community is currently studying to achieve a practical design for a future machine.
5. ISR effects for resonant Higgs production at future lepton colliders
DOE PAGES
Greco, Mario; Han, Tao; Liu, Zhen
2016-11-04
We study the effects of the initial state radiation on the s-channel Higgs boson resonant production at μ+μ- and e+e- colliders by convoluting with the beam energy spread profile of the collider and the Breit-Wigner resonance profile of the signal. We assess their impact on both the Higgs signal and SM backgrounds for the leading decay channels h → bb̄, WW*. In conclusion, our study improves the existing analyses of the proposed future resonant Higgs factories and provides further guidance for the accelerator designs with respect to the physical goals.
6. Searching for doubly-charged Higgs bosons at future colliders
SciTech Connect
Gunion, J.F.; Loomis, C.; Pitts, K.T.
1996-10-01
Doubly-charged Higgs bosons ({Delta}{sup --}/{Delta}{sup ++}) appear in several extensions to the Standard Model and can be relatively light. We review the theoretical motivation for these states and present a study of the discovery reach in future runs of the Fermilab Tevatron for pair-produced doubly-charged Higgs bosons decaying to like-sign lepton pairs. We also comment on the discovery potential at other future colliders. 16 refs., 3 figs., 1 tab.
7. Deciphering the MSSM Higgs mass at future hadron colliders
DOE PAGES
Agrawal, Prateek; Fan, JiJi; Reece, Matthew; ...
2017-06-06
Here, future hadron colliders will have a remarkable capacity to discover massive new particles, but their capabilities for precision measurements of couplings that can reveal underlying mechanisms have received less study. In this work we study the capability of future hadron colliders to shed light on a precise, focused question: is the higgs mass of 125 GeV explained by the MSSM? If supersymmetry is realized near the TeV scale, a future hadron collider could produce huge numbers of gluinos and electroweakinos. We explore whether precision measurements of their properties could allow inference of the scalar masses and tan β with sufficient accuracy to test whether physics beyond the MSSM is needed to explain the higgs mass. We also discuss dark matter direct detection and precision higgs physics as complementary probes of tan β. For concreteness, we focus on the mini-split regime of MSSM parameter space at a 100 TeV pp collider, with scalar masses ranging from 10s to about 1000 TeV.
8. Deciphering the MSSM Higgs mass at future hadron colliders
Agrawal, Prateek; Fan, JiJi; Reece, Matthew; Xue, Wei
2017-06-01
Future hadron colliders will have a remarkable capacity to discover massive new particles, but their capabilities for precision measurements of couplings that can reveal underlying mechanisms have received less study. In this work we study the capability of future hadron colliders to shed light on a precise, focused question: is the higgs mass of 125 GeV explained by the MSSM? If supersymmetry is realized near the TeV scale, a future hadron collider could produce huge numbers of gluinos and electroweakinos. We explore whether precision measurements of their properties could allow inference of the scalar masses and tan β with sufficient accuracy to test whether physics beyond the MSSM is needed to explain the higgs mass. We also discuss dark matter direct detection and precision higgs physics as complementary probes of tan β. For concreteness, we focus on the mini-split regime of MSSM parameter space at a 100 TeV pp collider, with scalar masses ranging from 10s to about 1000 TeV.
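As background to why the scalar masses and tan β control the Higgs mass in the MSSM, the textbook one-loop approximation m_h² ≈ m_Z² cos²(2β) + (3 m_t⁴ / 4π²v²)[ln(M_S²/m_t²) + ...] can be evaluated numerically. The sketch below is a crude illustration only: it omits the renormalization-group resummation and higher-order corrections that matter for very heavy scalars, and none of its inputs are taken from the paper.

```python
import math

MZ, MT, V = 91.19, 173.0, 246.0  # GeV (approximate values)

def mh_one_loop(tan_beta, m_susy_gev, xt_gev=0.0):
    """Crude one-loop MSSM Higgs mass: tree-level m_Z^2 * cos^2(2*beta) plus the
    leading top/stop loop, with an optional stop-mixing parameter X_t."""
    cos2b = (tan_beta**2 - 1.0) / (tan_beta**2 + 1.0)
    tree = (MZ * cos2b) ** 2
    mix = (xt_gev**2 / m_susy_gev**2) * (1.0 - xt_gev**2 / (12.0 * m_susy_gev**2))
    loop = 3.0 * MT**4 / (4.0 * math.pi**2 * V**2) * (math.log(m_susy_gev**2 / MT**2) + mix)
    return math.sqrt(tree + loop)

# Illustrative scan of the scalar mass scale at fixed tan(beta) = 4; the output
# is qualitative only, but it shows how m_h rises with the scalar mass scale.
for m_susy in (1e3, 1e4, 1e5, 1e6):  # 1 TeV to 1000 TeV
    print(f"M_S = {m_susy / 1e3:7.0f} TeV -> m_h ~ {mh_one_loop(4.0, m_susy):5.1f} GeV")
```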
9. Heavy-ion physics studies for the Future Circular Collider
Armesto, N.; Dainese, A.; d'Enterria, D.; Masciocchi, S.; Roland, C.; Salgado, C. A.; van Leeuwen, M.; Wiedemann, U. A.
2014-11-01
The Future Circular Collider (FCC) design study is aimed at assessing the physics potential and the technical feasibility of a new collider with centre-of-mass energies, in the hadron-hadron collision mode including proton and nucleus beams, more than seven times larger than the nominal LHC energies. An electron-positron collider in the same tunnel is also considered as an intermediate step, which in the long term would allow for electron-hadron collisions. First ideas on the physics opportunities with heavy ions at the FCC are presented, covering the physics of quark-gluon plasma, gluon saturation, photon-induced collisions, as well as connections with the physics of ultra-high-energy cosmic rays.
10. Detectors for Linear Colliders: Calorimetry at a Future Electron-Positron Collider (3/4)
ScienceCinema
None
2016-07-12
Calorimetry will play a central role in determining the physics reach at a future e+e- collider. The requirements for calorimetry place the emphasis on achieving an excellent jet energy resolution. The currently favoured option for calorimetry at a future e+e- collider is the concept of high granularity particle flow calorimetry. Here granularity and a high pattern recognition capability are more important than the single particle calorimetric response. In this lecture I will describe the recent progress in understanding the reach of high granularity particle flow calorimetry and the related R&D efforts which concentrate on test beam demonstrations of the technological options for highly granular calorimeters. I will also discuss alternatives to particle flow, for example the technique of dual readout calorimetry.
11. Detectors for Linear Colliders: Calorimetry at a Future Electron-Positron Collider (3/4)
SciTech Connect
2010-02-17
Calorimetry will play a central role in determining the physics reach at a future e+e- collider. The requirements for calorimetry place the emphasis on achieving an excellent jet energy resolution. The currently favoured option for calorimetry at a future e+e- collider is the concept of high granularity particle flow calorimetry. Here granularity and a high pattern recognition capability are more important than the single particle calorimetric response. In this lecture I will describe the recent progress in understanding the reach of high granularity particle flow calorimetry and the related R&D efforts which concentrate on test beam demonstrations of the technological options for highly granular calorimeters. I will also discuss alternatives to particle flow, for example the technique of dual readout calorimetry.
12. Advances in beam physics and technology: Colliders of the future
SciTech Connect
1994-11-01
Beams may be viewed as directed and focussed flow of energy and information, carried by particles and electromagnetic radiation fields (i.e., photons). Often, they interact with each other (e.g., in high energy colliders) or with other forms of matter (e.g., in fixed targets, synchrotron radiation, neutron scattering, laser chemistry/physics, medical therapy, etc.). The whole art and science of beams revolve around the fundamental quest for, and ultimate implementation of, mechanisms of production, storage, control and observation of beams -- always directed towards studies of the basic structures and processes of the natural world and various practical applications. Tremendous progress has been made in all aspects of beam physics and technology in the last decades -- nonlinear dynamics, superconducting magnets and rf cavities, beam instrumentation and control, novel concepts and collider paradigms, to name a few. We illustrate this progress with a few examples and remark on the emergence of new collider scenarios where some of this progress might come into use -- the Gamma-Gamma Collider, the Muon Collider, laser acceleration, etc. We close with an outline of future opportunities and outlook.
13. Beyond the Large Hadron Collider: A First Look at Cryogenics for CERN Future Circular Colliders
Lebrun, Philippe; Tavian, Laurent
Following the first experimental discoveries at the Large Hadron Collider (LHC) and the recent update of the European strategy in particle physics, CERN has undertaken an international study of possible future circular colliders beyond the LHC. The study, conducted with the collaborative participation of interested institutes world-wide, considers several options for very high energy hadron-hadron, electron-positron and hadron-electron colliders to be installed in a quasi-circular underground tunnel in the Geneva basin, with a circumference of 80 km to 100 km. All these machines would make intensive use of advanced superconducting devices, i.e. high-field bending and focusing magnets and/or accelerating RF cavities, thus requiring large helium cryogenic systems operating at 4.5 K or below. Based on preliminary sets of parameters and layouts for the particle colliders under study, we discuss the main challenges of their cryogenic systems and present first estimates of the cryogenic refrigeration capacities required, with emphasis on the qualitative and quantitative steps to be accomplished with respect to the present state-of-the-art.
14. Beam tube vacuum in future superconducting proton colliders
SciTech Connect
Turner, W.
1994-10-01
The beam tube vacuum requirements in future superconducting proton colliders that have been proposed or discussed in the literature -- SSC, LHC, and ELN -- are reviewed. The main beam tube vacuum problem encountered in these machines is how to deal with the magnitude of gas desorption and power deposition by synchrotron radiation while satisfying resistivity, impedance, and space constraints in the cryogenic environment of superconducting magnets. A beam tube vacuum model is developed that treats photodesorption of tightly bound H, C, and O, photodesorption of physisorbed molecules, and the isotherm vapor pressure of H{sub 2}. Experimental data on cold tube photodesorption experiments are reviewed and applied to model calculations of beam tube vacuum performance for simple cold beam tube and liner configurations. Particular emphasis is placed on the modeling and interpretation of beam tube photodesorption experiments at electron synchrotron light sources. The paper also includes discussion of the constraints imposed by beam image current heating, the growth rate of the resistive wall instability, and single-bunch instability impedance limits.
15. Relic density and future colliders: inverse problem(s)
SciTech Connect
Arbey, Alexandre; Mahmoudi, Farvah
2010-06-23
Relic density calculations are often used to constrain particle physics models, and in particular supersymmetry. We will show that the presence of additional energy or entropy before the Big-Bang nucleosynthesis can however completely change the relic density constraints on the SUSY parameter space. Therefore one should be extremely careful when using the relic density to constrain supersymmetry as it could give misleading results, especially if combined with the future collider data. Alternatively, we will also show that combining the discoveries of the future colliders with relic density calculations can shed light on the inaccessible pre-BBN dark time physics. Finally we will present SuperIso Relic, a new relic density calculator code in Supersymmetry, which incorporates alternative cosmological models, and is publicly available.
16. Top quark FCNC couplings at future circular hadron electron colliders
Denizli, H.; Senol, A.; Yilmaz, A.; Cakir, I. Turk; Karadeniz, H.; Cakir, O.
2017-07-01
A study of single top quark production via flavor changing neutral current interactions at tqγ vertices is performed at the future circular hadron electron collider. The signal cross sections for the processes e⁻p → e⁻W±q + X and e⁻p → e⁻W±bq + X in the collision of an electron beam with energy Ee = 60 GeV and a proton beam with energy Ep = 50 TeV are calculated. In the analysis, the invariant mass distributions of three jets reconstructing the top quark mass, requiring one b-tagged jet and two other jets reconstructing the W mass, are used to count signal and background events after all selection cuts. The upper limits on the anomalous flavor changing neutral current tqγ couplings are found to be λ_q < 0.01 at the future circular hadron electron collider for L_int = 100 fb^-1 with the fast simulation of detector effects. Signal significance depending on the couplings λ_q is analyzed and an enhanced sensitivity is found to the branching ratio BR(t → qγ) at the future circular hadron electron collider when compared to the current experimental results.
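The event counting described above rests on reconstructing the W and top masses from jet four-momenta. A minimal sketch of such an invariant-mass reconstruction, with hypothetical jet four-vectors used only for illustration:

```python
import math

def invariant_mass(jets):
    """Invariant mass of a set of four-momenta given as (E, px, py, pz) in GeV."""
    E  = sum(j[0] for j in jets)
    px = sum(j[1] for j in jets)
    py = sum(j[2] for j in jets)
    pz = sum(j[3] for j in jets)
    return math.sqrt(max(0.0, E**2 - px**2 - py**2 - pz**2))

# Hypothetical jet four-vectors (E, px, py, pz), chosen only for illustration.
b_jet  = (95.0,  60.0,  40.0,  55.0)   # b-tagged jet
q_jet1 = (55.0, -30.0,  25.0,  35.0)   # light jet 1
q_jet2 = (50.0, -20.0, -40.0,  10.0)   # light jet 2

print(f"m(jj)  = {invariant_mass([q_jet1, q_jet2]):6.1f} GeV   # compared against the W mass")
print(f"m(bjj) = {invariant_mass([b_jet, q_jet1, q_jet2]):6.1f} GeV   # compared against the top mass")
```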
17. Impact of detector simulation in particle physics collider experiments
Daniel Elvira, V.
2017-06-01
Through the last three decades, accurate simulation of the interactions of particles with matter and modeling of detector geometries has proven to be of critical importance to the success of the international high-energy physics (HEP) experimental programs. For example, the detailed detector modeling and accurate physics of the Geant4-based simulation software of the CMS and ATLAS particle physics experiments at the European Center of Nuclear Research (CERN) Large Hadron Collider (LHC) was a determinant factor for these collaborations to deliver physics results of outstanding quality faster than any hadron collider experiment ever before. This review article highlights the impact of detector simulation on particle physics collider experiments. It presents numerous examples of the use of simulation, from detector design and optimization, through software and computing development and testing, to cases where the use of simulation samples made a difference in the precision of the physics results and publication turnaround, from data-taking to submission. It also presents estimates of the cost and economic impact of simulation in the CMS experiment. Future experiments will collect orders of magnitude more data with increasingly complex detectors, taxing heavily the performance of simulation and reconstruction software. Consequently, exploring solutions to speed up simulation and reconstruction software to satisfy the growing demand of computing resources in a time of flat budgets is a matter that deserves immediate attention. The article ends with a short discussion on the potential solutions that are being considered, based on leveraging core count growth in multicore machines, using new generation coprocessors, and re-engineering HEP code for concurrency and parallel computing.
18. Impact of detector simulation in particle physics collider experiments
DOE PAGES
Elvira, V. Daniel
2017-06-01
Through the last three decades, precise simulation of the interactions of particles with matter and modeling of detector geometries has proven to be of critical importance to the success of the international high-energy physics experimental programs. For example, the detailed detector modeling and accurate physics of the Geant4-based simulation software of the CMS and ATLAS particle physics experiments at the European Center of Nuclear Research (CERN) Large Hadron Collider (LHC) was a determinant factor for these collaborations to deliver physics results of outstanding quality faster than any hadron collider experiment ever before. This review article highlights the impact of detector simulation on particle physics collider experiments. It presents numerous examples of the use of simulation, from detector design and optimization, through software and computing development and testing, to cases where the use of simulation samples made a difference in the accuracy of the physics results and publication turnaround, from data-taking to submission. It also presents the economic impact and cost of simulation in the CMS experiment. Future experiments will collect orders of magnitude more data, taxing heavily the performance of simulation and reconstruction software for increasingly complex detectors. Consequently, it becomes urgent to find solutions to speed up simulation software in order to cope with the increased demand in a time of flat budgets. The study ends with a short discussion on the potential solutions that are being explored, by leveraging core count growth in multicore machines, using new generation coprocessors, and re-engineering of HEP code for concurrency and parallel computing.
19. Plasma Lens Backgrounds at a Future Linear Collider
SciTech Connect
Weidemann, Achim W
2002-04-29
A "plasma lens" might be used to enhance the luminosity of future linear colliders. However, its utility for this purpose depends largely on the potential backgrounds that may be induced by the insertion of such a device in the interaction region of the detector. In this note we identify different sources of such backgrounds, calculate their event rates from the elementary interaction processes, and evaluate their effects on the major parts of a hypothetical Next Linear Collider (NLC) detector. For plasma lens parameters which give a factor of seven enhancement of the luminosity, and using the NLC design for beam parameters as a reference, we find that the background yields are fairly high, and require further study and improvements in detector technology to avoid their impact.
20. Sterile neutrino searches at future e-e+, pp and e-p colliders
Antusch, Stefan; Cazzato, Eros; Fischer, Oliver
2017-05-01
Sterile neutrinos are among the most attractive extensions of the SM to generate the light neutrino masses observed in neutrino oscillation experiments. When the sterile neutrinos are subject to a protective symmetry, they can have masses around the electroweak scale and potentially large neutrino Yukawa couplings, which makes them testable at planned future particle colliders. We systematically discuss the production and decay channels at electron-positron, proton-proton and electron-proton colliders and provide a complete list of the leading order signatures for sterile neutrino searches. Among other things, we discuss several novel search channels, and present a first look at the possible sensitivities for the active-sterile mixings and the heavy neutrino masses. We compare the performance of the different collider types and discuss their complementarity.
1. Design of beam optics for the future circular collider e+e- collider rings
Oide, K.; Aiba, M.; Aumon, S.; Benedikt, M.; Blondel, A.; Bogomyagkov, A.; Boscolo, M.; Burkhardt, H.; Cai, Y.; Doblhammer, A.; Haerer, B.; Holzer, B.; Jowett, J. M.; Koop, I.; Koratzinos, M.; Levichev, E.; Medina, L.; Ohmi, K.; Papaphilippou, Y.; Piminov, P.; Shatilov, D.; Sinyatkin, S.; Sullivan, M.; Wenninger, J.; Wienands, U.; Zhou, D.; Zimmermann, F.
2016-11-01
A beam optics scheme has been designed for the future circular collider e+e- (FCC-ee). The main characteristics of the design are: beam energy 45 to 175 GeV, 100 km circumference with two interaction points (IPs) per ring, horizontal crossing angle of 30 mrad at the IP and the crab-waist scheme [P. Raimondi, D. Shatilov, and M. Zobov, arXiv:physics/0702033; P. Raimondi, M. Zobov, and D. Shatilov, in Proceedings of the 22nd Particle Accelerator Conference, PAC-2007, Albuquerque, NM (IEEE, New York, 2007), p. TUPAN037.] with local chromaticity correction. The crab-waist scheme is implemented within the local chromaticity correction system without additional sextupoles, by reducing the strength of one of the two sextupoles for vertical chromatic correction at each side of the IP. So-called "tapering" of the magnets is applied, which scales all fields of the magnets according to the local beam energy to compensate for the effect of synchrotron radiation (SR) loss along the ring. An asymmetric layout near the interaction region reduces the critical energy of SR photons on the incoming side of the IP to values below 100 keV, while matching the geometry to the beam line of the FCC proton collider (FCC-hh) [A. Chancé et al., Proceedings of IPAC'16, 9-13 May 2016, Busan, Korea, TUPMW020 (2016).] as closely as possible. Sufficient transverse/longitudinal dynamic aperture (DA) has been obtained, including major dynamical effects, to assure an adequate beam lifetime in the presence of beamstrahlung and top-up injection. In particular, a momentum acceptance larger than ±2% has been obtained, which is better than the momentum acceptance of typical collider rings by about a factor of 2. The effects of the detector solenoids including their compensation elements are taken into account as well as synchrotron radiation in all magnets. The optics presented in this paper is a step toward a full conceptual design for the collider. A number of issues have been identified for further study.
2. Design of beam optics for the future circular collider e+e- collider rings
DOE PAGES
Oide, Katsunobu; Aiba, M.; Aumon, S.; ...
2016-11-21
A beam optics scheme has been designed for the future circular collider e+e- (FCC-ee). The main characteristics of the design are: beam energy 45 to 175 GeV, 100 km circumference with two interaction points (IPs) per ring, horizontal crossing angle of 30 mrad at the IP and the crab-waist scheme [P. Raimondi, D. Shatilov, and M. Zobov, arXiv:physics/0702033; P. Raimondi, M. Zobov, and D. Shatilov, in Proceedings of the 22nd Particle Accelerator Conference, PAC-2007, Albuquerque, NM (IEEE, New York, 2007), p. TUPAN037.] with local chromaticity correction. The crab-waist scheme is implemented within the local chromaticity correction system without additional sextupoles, by reducing the strength of one of the two sextupoles for vertical chromatic correction at each side of the IP. So-called “tapering” of the magnets is applied, which scales all fields of the magnets according to the local beam energy to compensate for the effect of synchrotron radiation (SR) loss along the ring. An asymmetric layout near the interaction region reduces the critical energy of SR photons on the incoming side of the IP to values below 100 keV, while matching the geometry to the beam line of the FCC proton collider (FCC-hh) [A. Chancé et al., Proceedings of IPAC’16, 9–13 May 2016, Busan, Korea, TUPMW020 (2016).] as closely as possible. Sufficient transverse/longitudinal dynamic aperture (DA) has been obtained, including major dynamical effects, to assure an adequate beam lifetime in the presence of beamstrahlung and top-up injection. In particular, a momentum acceptance larger than ±2% has been obtained, which is better than the momentum acceptance of typical collider rings by about a factor of 2. The effects of the detector solenoids including their compensation elements are taken into account as well as synchrotron radiation in all magnets. The optics presented in this study is a step toward a full conceptual design for the collider. Finally, a number of issues have been identified for further study.
3. Higgs production from sterile neutrinos at future lepton colliders
Antusch, Stefan; Cazzato, Eros; Fischer, Oliver
2016-04-01
In scenarios with sterile (right-handed) neutrinos that are subject to an approximate "lepton-number-like" symmetry, the heavy neutrinos (i.e. the mass eigenstates) can have masses around the electroweak scale and couple to the Higgs boson with, in principle, unsuppressed Yukawa couplings while accounting for the smallness of the light neutrinos' masses. In these scenarios, the on-shell production of heavy neutrinos and their subsequent decays into a light neutrino and a Higgs boson constitutes a hitherto unstudied resonant contribution to the Higgs production mechanism. We investigate the relevance of this resonant mono-Higgs production mechanism in leptonic collisions, including the present experimental constraints on the neutrino Yukawa couplings, and we determine the sensitivity of future lepton colliders to the heavy neutrinos. With Monte Carlo event sampling and a simulation of the detector response we find that, at future lepton colliders, neutrino Yukawa couplings below the percent level can lead to observable deviations from the SM and, furthermore, the sensitivity improves with higher center-of-mass energies (for identical integrated luminosities).
4. TMDs and GPDs at a future Electron-Ion Collider
SciTech Connect
Ent, Rolf
2016-06-21
With two options studied at Brookhaven National Lab and Jefferson Laboratory in the U.S., an Electron-Ion Collider (EIC) of energy √s = 20-100 GeV is under design. Furthermore, the recent 2015 US Nuclear Science Long-Range Planning effort included a future EIC as a recommendation for future construction. The EIC will be unique in colliding polarised electrons off polarised protons and light nuclei, providing the spin degrees of freedom essential to pursue its physics program driven by spin structure, multi-dimensional tomographic images of protons and nuclei, and discovery of the role of collective effects of gluons in nuclei. The foreseen luminosity of the EIC, coupled with its energy variability and reach, will allow unprecedented three-dimensional imaging of the gluon and sea quark distributions, via both TMDs and GPDs, and the exploration of correlations amongst them. Its hermetic detection capability of correlated fragments promises to similarly allow for precise tomographic images of the quark-gluon landscape in nuclei, transcending from light few-body nuclei to the heaviest nuclei, and could uncover how the TMD and GPD landscape changes when gluons display an anticipated collective behavior at the higher energies.
5. TMDs and GPDs at a future Electron-Ion Collider
DOE PAGES
Ent, Rolf
2016-06-21
With two options studied at Brookhaven National Lab and Jefferson Laboratory in the U.S., an Electron-Ion Collider (EIC) of energy √s = 20-100 GeV is under design. Furthermore, the recent 2015 US Nuclear Science Long-Range Planning effort included a future EIC as a recommendation for future construction. The EIC will be unique in colliding polarised electrons off polarised protons and light nuclei, providing the spin degrees of freedom essential to pursue its physics program driven by spin structure, multi-dimensional tomographic images of protons and nuclei, and discovery of the role of collective effects of gluons in nuclei. The foreseen luminosity of the EIC, coupled with its energy variability and reach, will allow unprecedented three-dimensional imaging of the gluon and sea quark distributions, via both TMDs and GPDs, and the exploration of correlations amongst them. Its hermetic detection capability of correlated fragments promises to similarly allow for precise tomographic images of the quark-gluon landscape in nuclei, transcending from light few-body nuclei to the heaviest nuclei, and could uncover how the TMD and GPD landscape changes when gluons display an anticipated collective behavior at the higher energies.
7. Superconducting Magnet Technology for Future High Energy Proton Colliders
Gourlay, Stephen
2017-01-01
Interest in high field dipoles has been given a boost by new proposals to build a high-energy proton-proton collider to follow the LHC and programs around the world are taking on the task to answer the need. Studies aiming toward future high-energy proton-proton colliders at the 100 TeV scale are now being organized. The LHC and current cost models are based on technology close to four decades old and point to a broad optimum of operation using dipoles with fields between 5 and 12T when site constraints, either geographical or political, are not a factor. Site geography constraints that limit the ring circumference can drive the required dipole field up to 20T, which is more than a factor of two beyond state-of-the-art. After a brief review of current progress, the talk will describe the challenges facing future development and present a roadmap for moving high field accelerator magnet technology forward. This work was supported by the Director, Office of Science, High Energy Physics, US Department of Energy, under contract No. DE-AC02-05CH11231.
8. Flavour physics and the Large Hadron Collider beauty experiment.
PubMed
Gibson, Valerie
2012-02-28
An exciting new era in flavour physics has just begun with the start of the Large Hadron Collider (LHC). The LHCb (where b stands for beauty) experiment, designed specifically to search for new phenomena in quantum loop processes and to provide a deeper understanding of matter-antimatter asymmetries at the most fundamental level, is producing many new and exciting results. It gives me great pleasure to describe a selected few of the results here; in particular, the search for rare B⁰_s → μ⁺μ⁻ decays and the measurement of the B⁰_s charge-conjugation parity-violating phase, both of which offer high potential for the discovery of new physics at and beyond the LHC energy frontier in the very near future.
9. Fourth standard model family neutrino at future linear colliders
SciTech Connect
Ciftci, A.K.; Ciftci, R.; Sultansoy, S.
2005-09-01
It is known that flavor democracy favors the existence of the fourth standard model (SM) family. In order to give nonzero masses for the first three-family fermions, flavor democracy has to be slightly broken. A parametrization for democracy breaking, which gives the correct values for fundamental fermion masses and, at the same time, predicts quark and lepton Cabibbo-Kobayashi-Maskawa (CKM) matrices in a good agreement with the experimental data, is proposed. The pair productions of the fourth SM family Dirac ({nu}{sub 4}) and Majorana (N{sub 1}) neutrinos at future linear colliders with {radical}(s)=500 GeV, 1 TeV, and 3 TeV are considered. The cross section for the process e{sup +}e{sup -}{yields}{nu}{sub 4}{nu}{sub 4}(N{sub 1}N{sub 1}) and the branching ratios for possible decay modes of both neutrinos are determined. The decays of the fourth family neutrinos into muon channels ({nu}{sub 4}(N{sub 1}){yields}{mu}{sup {+-}}W{sup {+-}}) provide the cleanest signature at e{sup +}e{sup -} colliders. Meanwhile, in our parametrization this channel is dominant. W bosons produced in decays of the fourth family neutrinos will be seen in the detector as either di-jets or isolated leptons. As an example, we consider the production of 200 GeV mass fourth family neutrinos at {radical}(s)=500 GeV linear colliders by taking into account di-muon plus four jet events as signatures.
10. Beam Induced Hydrodynamic Tunneling in the Future Circular Collider Components
Tahir, N. A.; Burkart, F.; Schmidt, R.; Shutov, A.; Wollmann, D.; Piriz, A. R.
2016-08-01
A future circular collider (FCC) has been proposed as a post-Large Hadron Collider accelerator, to explore particle physics in unprecedented energy ranges. The FCC is a circular collider in a tunnel with a circumference of 80-100 km. The FCC study puts an emphasis on proton-proton high-energy and electron-positron high-intensity frontier machines. A proton-electron interaction scenario is also examined. According to the nominal FCC parameters, each of the 50 TeV proton beams will carry an amount of 8.5 GJ energy that is equivalent to the kinetic energy of an Airbus A380 (560 t) at a typical speed of 850 km/h. Safety of operation with such extremely energetic beams is an important issue, as off-nominal beam loss can cause serious damage to the accelerator and detector components with a severe impact on the accelerator environment. In order to estimate the consequences of an accident with the full beam accidentally deflected into equipment, we have carried out numerical simulations of interaction of a FCC beam with a solid copper target using an energy-deposition code (fluka) and a 2D hydrodynamic code (big2) iteratively. These simulations show that, although the penetration length of a single FCC proton and its shower in solid copper is about 1.5 m, the full FCC beam will penetrate up to about 350 m into the target because of the "hydrodynamic tunneling." These simulations also show that a significant part of the target is converted into high-energy-density matter. We also discuss this interesting aspect of the study.
11. Status and future directions for advanced accelerator research-conventional and non-conventional collider concepts
SciTech Connect
Siemann, R.H.
1997-03-01
The relationship between advanced accelerator research and future directions for particle physics is discussed. Comments are made about accelerator research trends in hadron colliders, muon colliders, and e{sup +}e{sup {minus}} linear colliders. © 1997 American Institute of Physics.
12. SLAC electron-positron colliders: present and future
SciTech Connect
Richter, B.
1986-09-01
Stanford University's colliding beam program is outlined, including the SPEAR and PEP colliders and the SLAC linear collider. The accelerator developments to be pursued on these facilities are discussed, as well as advanced accelerator research and development. The items covered in the advanced accelerator research include beamstrahlung, stability requirements, breakdown limits, and power sources. (LEW)
13. Laser ion source for isobaric heavy ion collider experiment.
PubMed
Kanesue, T; Kumaki, M; Ikeda, S; Okamura, M
2016-02-01
Heavy-ion collider experiment in isobaric system is under investigation at Relativistic Heavy Ion Collider. For this experiment, ion source is required to maximize the abundance of the intended isotope. The candidate of the experiment is (96)Ru + (96)Zr. Since the natural abundance of particular isotope is low and composition of isotope from ion source depends on the composites of the target, an isotope enriched material may be needed as a target. We studied the performance of the laser ion source required for the experiment for Zr ions.
14. Laser ion source for isobaric heavy ion collider experiment
SciTech Connect
Kanesue, T.; Okamura, M.; Kumaki, M.; Ikeda, S.
2016-02-15
Heavy-ion collider experiment in isobaric system is under investigation at Relativistic Heavy Ion Collider. For this experiment, ion source is required to maximize the abundance of the intended isotope. The candidate of the experiment is {sup 96}Ru + {sup 96}Zr. Since the natural abundance of particular isotope is low and composition of isotope from ion source depends on the composites of the target, an isotope enriched material may be needed as a target. We studied the performance of the laser ion source required for the experiment for Zr ions.
15. Top quark electroweak couplings at future lepton colliders
Englert, Christoph; Russell, Michael
2017-08-01
We perform a comparative study of the reach of future e^+e^- collider options for the scale of non-resonant new physics effects in the top quark sector, phrased in the language of higher-dimensional operators. Our focus is on the electroweak top quark pair production process e^+e^- → Z^*/γ → tt̄, and we study benchmark scenarios at the ILC and CLIC. We find that both are able to constrain mass scales up to the few TeV range in the most sensitive cases, improving by orders of magnitude on the forecast capabilities of the LHC. We discuss the role played by observables such as forward-backward asymmetries and the use of different beam polarisation settings, and highlight the possibility of lifting a degeneracy in the allowed parameter space by combining top observables with precision Z-pole measurements from LEP1.
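The forward-backward asymmetry mentioned above is conventionally defined as A_FB = (N_F - N_B)/(N_F + N_B), where forward and backward count events by the sign of the reconstructed polar angle. A minimal sketch, with hypothetical angles for illustration only:

```python
def forward_backward_asymmetry(cos_theta):
    """A_FB = (N_F - N_B) / (N_F + N_B); 'forward' means cos(theta) > 0."""
    n_f = sum(1 for c in cos_theta if c > 0)
    n_b = sum(1 for c in cos_theta if c <= 0)
    return (n_f - n_b) / (n_f + n_b)

# Hypothetical reconstructed top-quark polar angles (illustration only).
print(forward_backward_asymmetry([0.8, 0.3, -0.1, 0.6, -0.4, 0.2]))  # -> 0.333...
```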
16. Dump system concepts for the Future Circular Collider
Bartmann, W.; Atanasov, M.; Barnes, M. J.; Borburgh, J.; Burkart, F.; Goddard, B.; Kramer, T.; Lechner, A.; Ull, A. Sanz; Schmidt, R.; Stoel, L. S.; Ostojic, R.; Rodziewicz, J.; van Trappen, P.; Barna, D.
2017-03-01
The Future Circular Collider (FCC-hh) beam dump system must provide a safe and reliable extraction and dilution of the stored beam onto a dump absorber. Energy deposition studies show that damage limits of presently used absorber materials will already be reached for single bunches at 50 TeV. A fast field rise of the extraction kicker is required in order to sufficiently separate swept single bunches on the extraction protection absorbers in case of an asynchronous beam dump. In line with this demand is the proposal of a highly segmented extraction kicker system which allows for accepting a single kicker switch erratic and thus, significantly reduces the probability of an asynchronous beam dump. Superconducting septa are foreseen to limit the overall system length and power consumption. Two extraction system concepts are presented and evaluated regarding overall system length, energy deposition on absorbers, hardware requirements, radiation issues, and layout flexibility.
17. Direct determination of neutrino mass parameters at future colliders
SciTech Connect
Kadastik, M.; Raidal, M.; Rebane, L.
2008-06-01
If the observed light neutrino masses are induced by their Yukawa couplings to singlet right-handed neutrinos, the natural smallness of those makes direct collider tests of the electroweak scale neutrino mass mechanisms difficult in the simplest models. In the triplet Higgs seesaw scenario the smallness of light neutrino masses may come from the smallness of B-L breaking parameters, allowing sizable Yukawa couplings even for a TeV scale triplet. We show that, in this scenario, measuring the branching fractions of doubly charged Higgs to different same-charged lepton flavors at CERN LHC and/or ILC experiments will allow one to measure the neutrino mass parameters that neutrino oscillation experiments are insensitive to, including the neutrino mass hierarchy, lightest neutrino mass, and Majorana phases.
18. Probing charged Higgs boson couplings at a future circular hadron collider
Çakır, I. T.; Kuday, S.; Saygın, H.; Şenol, A.; Çakır, O.
2016-07-01
Many of the new physics models predict a light Higgs boson similar to the Higgs boson of the Standard Model (SM) and also extra scalar bosons. Beyond the search channels for a SM Higgs boson, the future collider experiments will explore additional channels that are specific to extended Higgs sectors. We study the charged Higgs boson production within the framework of two Higgs doublet models (THDM) in proton-proton collisions at a future circular hadron collider (FCC-hh). With an integrated luminosity of L_int = 500 fb^-1 at the very high energy frontier (√s = 100 TeV), we obtain a significant coverage of the parameter space and distinguish the charged Higgs-top-bottom interaction within the THDM or other new physics models with charged Higgs boson mass up to 1.5 TeV.
19. 120 MW, 800 MHz Magnicon for a Future Muon Collider
SciTech Connect
Jay L. Hirshfield
2005-12-15
Development of a pulsed magnicon at 800 MHz was carried out for the muon collider application, based on experience with similar amplifiers in the frequency range between 915 MHz and 34.3 GHz. Numerical simulations using proven computer codes were employed for the conceptual design, while established design technologies were incorporated into the engineering design. A cohesive design for the 800 MHz magnicon amplifier was carried out, including design of a 200 MW diode electron gun, design of the magnet system, optimization of beam dynamics including space charge effects in the transient and steady-state regimes, design of the drive, gain, and output cavities including an rf choke in the beam exit aperture, analysis of parasitic oscillations and design means to eliminate them, and design of the beam collector capable of 20 kW average power operation.
20. High energy density physics issues related to Future Circular Collider
Tahir, N. A.; Burkart, F.; Schmidt, R.; Shutov, A.; Wollmann, D.; Piriz, A. R.
2017-07-01
A design study for a post-Large Hadron Collider accelerator, named the Future Circular Collider (FCC), is being carried out by the International Scientific Community. A complete design report is expected to be ready by spring 2018. The FCC will accelerate two counter rotating beams of 50 TeV protons in a tunnel having a length (circumference) of 100 km. Each beam will be comprised of 10 600 proton bunches, with each bunch having an intensity of 10^11 protons. The bunch length is 0.5 ns, and two neighboring bunches are separated by 25 ns. Although there is an option for 5 ns bunch separation as well, in the present studies, we consider the former case only. The total energy stored in each FCC beam is about 8.5 GJ, which is equivalent to the kinetic energy of an Airbus A380 (560 t) flying at a speed of 850 km/h. Machine protection is a very important issue while operating with such powerful beams. It is important to have an estimate of the damage caused to the equipment and accelerator components due to the accidental release of a partial or total beam at a given point. For this purpose, we carried out numerical simulations of full impact of one FCC beam on an extended solid copper target. These simulations have been done employing an energy deposition code, FLUKA, and a two-dimensional hydrodynamic code, BIG2, iteratively. This study shows that although the static range of a single FCC proton and its shower is about 1.5 m in solid copper, the entire beam will penetrate around 350 m into the target. This substantial increase in the range is due to the hydrodynamic tunneling of the beam. Our calculations also show that a large part of the target will be converted into high energy density matter including warm dense matter and strongly coupled plasmas.
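The 8.5 GJ figure quoted in this abstract follows directly from the stated bunch parameters; a quick back-of-the-envelope check in Python:

```python
E_PROTON_TEV  = 50.0        # energy per proton [TeV], as quoted above
N_BUNCHES     = 10_600      # bunches per beam, as quoted above
PROTONS_BUNCH = 1.0e11      # protons per bunch, as quoted above
EV_TO_J       = 1.602176634e-19

stored_energy_j = (E_PROTON_TEV * 1e12) * N_BUNCHES * PROTONS_BUNCH * EV_TO_J
print(f"Stored energy per beam: {stored_energy_j / 1e9:.1f} GJ")  # about 8.5 GJ
```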
1. 2005 Final Report: New Technologies for Future Colliders
SciTech Connect
Peter McIntyre; Al McInturff
2005-12-31
This document presents an annual report on our long-term R&D grant for development of new technology for future colliders. The organizing theme of our development is to develop a compact high-field collider dipole, utilizing wind-and-react Nb3Sn coil fabrication, stress management, conductor optimization, bladder preload, and flux plate suppression of magnetization multipoles. The development trail for this new technology began over four years ago with the successful testing of TAMU12, a NbTi model in which we put to a first test many of the construction details of the high-field design. We have built TAMU2, a mirror-geometry dipole containing a single coil module of the 3-module set required for the 14 Tesla design. This first Nb3Sn model was built using ITER conductor which carries much less current than high-performance conductor but enables us to prove in practice our reaction bake and impregnation strategies with ‘free’ superconductor. TAMU2 has been shipped to LBNL for testing. Work is beginning on the construction of TAMU3, which will contain two coil modules of the 14 Tesla design. TAMU3 has a design field of 13.5 Tesla and will enable us to fully evaluate the issues of stress management that will be important to the full design. With the completion of TAMU2 and the construction of TAMU3 the Texas A&M group ‘comes of age’ in the family of superconducting magnet R&D laboratories. We have completed the phase of developing core technologies and fixtures and entered the phase of building and testing a succession of model dipoles that each build incrementally upon a proven core design.
2. Polarized Positrons at a Future Linear Collider and the Final Focus Test Beam
SciTech Connect
Weidemann, A
2004-07-28
Having both the positron and electron beams polarized in a future linear e{sup +}e{sup -} collider is a decisive improvement for many physics studies at such a machine. The motivation for polarized positrons, and a demonstration experiment for the undulator-based production of polarized positrons are reviewed. This experiment (E-166) uses the 50 GeV Final Focus Test electron beam at SLAC with a 1 m-long helical undulator to make {approx} 10 MeV polarized photons. These photons are then converted in a thin ({approx} 0.5 radiation length) target into positrons (and electrons) with about 50% polarization.
3. Resolving gluon fusion loops at current and future hadron colliders
Azatov, Aleksandr; Grojean, Christophe; Paul, Ayan; Salvioni, Ennio
2016-09-01
Inclusive Higgs measurements at the LHC have limited resolution on the gluon fusion loops, being unable to distinguish the long-distance contributions mediated by the top quark from possible short-distance new physics effects. Using an Effective Field Theory (EFT) approach we compare several proposed methods to lift this degeneracy, including tt̄h and boosted, off-shell and double Higgs production, and perform detailed projections to the High-Luminosity LHC and a future hadron collider. In addition, we revisit off-shell Higgs production. Firstly, we point out its sensitivity to modifications of the top-Z couplings, and by means of a general analysis we show that the reach is comparable to that of tree-level processes such as tt̄Z production. Implications for composite Higgs models are also discussed. Secondly, we assess the regime of validity of the EFT, performing an explicit comparison for a simple extension of the Standard Model containing one vector-like quark.
4. Strategies for using GAPDs as tracker detectors in future linear colliders
Vilella, Eva; Alonso, Oscar; Vilà, Anna; Diéguez, Angel
2016-04-01
This work presents the development of a Geiger-mode Avalanche PhotoDiode pixel detector in standard CMOS technologies aimed at the vertex and tracker regions of future linear colliders, i.e. the International Linear Collider and the Compact LInear Collider. In spite of all the advantages that characterize this technology, GAPD detectors suffer from noise pulses that cannot be distinguished from real events and low fill-factors that reduce the detection efficiency. To comply with the specifications imposed by the next generation of particle colliders, solutions to minimize the intrinsic noise pulses and increase the fill-factor have been thoroughly investigated.
5. Dynamical experiments on models of colliding disk galaxies
NASA Technical Reports Server (NTRS)
Gerber, Richard A.; Balsara, Dinshaw S.; Lamb, Susan A.
1990-01-01
Collisions between galaxies can induce large morphological changes in the participants and, in the case of colliding disk galaxies, bridges and tails are often formed. Observations of such systems indicate a wide variation in color (see Larson and Tinsley, 1978) and that some of the participants are experiencing enhanced rates of star formation, especially in their central regions (Bushouse 1986, 1987; Kennicutt et al., 1987, Bushouse, Lamb, and Werner, 1988). Here the authors describe progress made in understanding some of the dynamics of interacting galaxies using N-body stellar dynamical computer experiments, with the goal of extending these models to include a hydrodynamical treatment of the gas so that a better understanding of globally enhanced star formation will eventually be forthcoming. It was concluded that close interactions between galaxies can produce large perturbations in both density and velocity fields. The authors measured, via computational experiments that represent a galaxy's stars, average radial velocity flows as large as 100 km/sec and 400 percent density increases. These can occur in rings that move outwards through the disk of a galaxy, in roughly homologous inflows toward the nucleus, and in off center, non-axisymmetric regions. Here the authors illustrate where the gas is likely to flow during the early stages of interaction and in future work they plan to investigate the fate of the gas more realistically by using an N-body/Smoothed Particle Hydrodynamics code to model both the stellar and gaseous components of a disk galaxy during a collision. Specifically, they will determine the locations of enhanced gas density and the strength and location of shock fronts that form during the interaction.
6. Detectors for Superboosted τ-leptons at Future Circular Colliders
SciTech Connect
Sen, Sourav; Kotwal, Ashutosh; Chekanov, Sergei; Gray, Lindsey; Tran, Nhan Viet; Yu, Shin-Shan
2016-12-21
We study the detector performance of τ-lepton identification variables at very high energy proton colliders. We study hadronically-decaying τ-leptons with transverse momentum in the TeV range. Calorimeters are benchmarked in various configurations in order to understand the impact of granularity and resolution on boosted τ-lepton discrimination.
7. SLAC linear collider: the machine, the physics, and the future
SciTech Connect
Richter, B.
1981-11-01
The SLAC linear collider, in which beams of electrons and positrons are accelerated simultaneously, is described. Specifications of the proposed system are given, with calculated predictions of performance. New areas of research made possible by energies in the TeV range are discussed. (GHT)
8. GARLIC: GAmma Reconstruction at a LInear Collider experiment
Jeans, D.; Brient, J.-C.; Reinhard, M.
2012-06-01
The precise measurement of hadronic jet energy is crucial to maximise the physics reach of a future Linear Collider. An important ingredient required to achieve this is the efficient identification of photons within hadronic showers. One configuration of the ILD detector concept employs a highly granular silicon-tungsten sampling calorimeter to identify and measure photons, and the GARLIC algorithm described in this paper has been developed to identify photons in such a calorimeter. We describe the algorithm and characterise its performance using events fully simulated in a model of the ILD detector.
9. Phenomenology of a Higgs triplet model at future e+e- colliders
Blunier, Sylvain; Cottin, Giovanna; Díaz, Marco A.; Koch, Benjamin
2017-04-01
In this work, we investigate the prospects of future e+e- colliders in testing a Higgs triplet model with a scalar triplet and a scalar singlet under SU(2). The parameters of the model are fixed so that the lightest CP-even state corresponds to the Higgs particle observed at the LHC at around 125 GeV. This study investigates if the second heaviest CP-even, the heaviest CP-odd and the singly charged states can be observed at existing and future colliders by computing their accessible production and decay channels. In general, the LHC is not well equipped to produce a Higgs boson which is not mainly doubletlike, so we turn our focus to lepton colliders. We find distinctive features of this model in cases where the second heaviest CP-even Higgs is tripletlike, singletlike or a mixture. These features could distinguish the model from other scenarios at future e+e- colliders.
10. A tale of two portals: testing light, hidden new physics at future e+e- colliders
Liu, Jia; Wang, Xiao-Ping; Yu, Felix
2017-06-01
We investigate the prospects for producing new, light, hidden states at a future e+e- collider in a Higgsed dark U(1)_D model, which we call the Double Dark Portal model. The simultaneous presence of both vector and scalar portal couplings immediately modifies the Standard Model Higgsstrahlung channel, e+e- → Zh, at leading order in each coupling. In addition, each portal leads to complementary signals which can be probed at direct and indirect detection dark matter experiments. After accounting for current constraints from LEP and LHC, we demonstrate that a future e+e- Higgs factory will have unique and leading sensitivity to the two portal couplings by studying a host of new production, decay, and radiative return processes. Besides the possibility of exotic Higgs decays, we highlight the importance of direct dark vector and dark scalar production at e+e- machines, whose invisible decays can be tagged from the recoil mass method.
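The entry above mentions tagging invisible decays with the recoil mass method. Below is a minimal sketch of that relation; the center-of-mass energy and dilepton energy are illustrative inputs chosen for the example, not values from the paper.

```python
import math

def recoil_mass(sqrt_s, e_ll, m_ll):
    """Mass recoiling against a reconstructed Z -> l+l- system in e+e- -> ZX.

    m_recoil^2 = s + m_ll^2 - 2*sqrt(s)*E_ll  (center-of-mass frame).
    """
    m2 = sqrt_s ** 2 + m_ll ** 2 - 2.0 * sqrt_s * e_ll
    return math.sqrt(max(m2, 0.0))

# Illustrative numbers: sqrt(s) = 240 GeV running, a dilepton pair at the
# Z mass carrying E_ll = 104.8 GeV recoils against roughly 125 GeV.
print(recoil_mass(240.0, 104.8, 91.19))
```

Whatever the recoiling system decays to, visibly or invisibly, its mass is fixed by the lepton pair alone, which is why the method tags invisible decays.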
11. ACCELERATOR PHYSICS ISSUES FOR FUTURE ELECTRON ION COLLIDERS.
SciTech Connect
PEGGS,S.; BEN-ZVI,I.; KEWISCH,J.; MURPHY,J.
2001-06-18
Interest continues to grow in the physics of collisions between electrons and heavy ions, and between polarized electrons and polarized protons [1,2,3]. Table 1 compares the parameters of some machines under discussion. DESY has begun to explore the possibility of upgrading the existing HERA-p ring to store heavy ions, in order to collide them with electrons (or positrons) in the HERA-e ring, or from TESLA [4]. An upgrade to store polarized protons in the HERA-p ring is also under discussion [1]. BNL is considering adding polarized electrons to the RHIC repertoire, which already includes heavy and light ions, and polarized protons. The authors of this paper have made a first pass analysis of this ''eRHIC'' possibility [5]. MIT-BATES is also considering electron ion collider designs [6].
12. Photoinjectors R&D for future light sources & linear colliders
SciTech Connect
Piot, P.; /Northern Illinois U. /Fermilab
2006-08-01
Linac-driven light sources and proposed linear colliders require high brightness electron beams. In addition to the small emittances and high peak currents, linear colliders also require spin-polarization and possibly the generation of asymmetric beam in the two transverse degrees of freedom. Other applications (e.g., high-average-power free-electron lasers) call for high duty cycle and/or (e.g., electron cooling) angular-momentum-dominated electron beams. We review ongoing R&D programs aiming at the production of electron beams satisfying these various requirements. We especially discuss R&D on photoemission electron sources (with focus on radiofrequency guns) along with the possible use of emittance-manipulation techniques.
13. Single Anomalous Production of the Fourth SM Family Leptons at Future e+e-, ep and pp Colliders
SciTech Connect
Ciftci, A. K.; Ciftci, R.; Karadeniz, H.; Sultansoy, S.; Yildiz, H. Duran
2007-04-23
Possible single productions of fourth SM family charged and neutral leptons via anomalous interactions at the future e+e-, ep, and pp colliders are studied. Signatures of such anomalous processes are discussed comparatively at the above colliders.
14. The VEPP-2000 electron-positron collider: First experiments
SciTech Connect
Berkaev, D. E. Shwartz, D. B.; Shatunov, P. Yu.; Rogovskii, Yu. A.; Romanov, A. L.; Koop, I. A.; Shatunov, Yu. M.; Zemlyanskii, I. M.; Lysenko, A. P.; Perevedentsev, E. A.; Stankevich, A. S.; Senchenko, A. I.; Khazin, B. I.; Anisenkov, A. V.; Gayazov, S. E.; Kozyrev, A. N.; Ryzhenenkov, A. E.; Shemyakin, D. N.; Epshtein, L. B.; Serednyakov, S. I.; and others
2011-08-15
In 2007, at the Institute of Nuclear Physics (Novosibirsk), the construction of the VEPP-2000 electron-positron collider was completed. The first electron beam was injected into the accelerator structure with turned-off solenoids of the final focus. This mode was used to tune all subsystems of the facility and to train the vacuum chamber using synchrotron radiation at electron currents of up to 150 mA. The VEPP-2000 structure with small beta functions and partially turned-on solenoids was used for the first testing of the 'round beams' scheme at an energy of 508 MeV. Beam-beam effects were studied in strong-weak and strong-strong modes. Measurements of the beam sizes in both cases showed a dependence corresponding to model predictions for round colliding beams. Using a modernized SND (spherical neutral detector), the first energy calibration of the VEPP-2000 collider was performed by measuring the excitation curve of the phi-meson resonance; the phi-meson mass is known with high accuracy from previous experiments at VEPP-2M. In October 2009, a KMD-3 (cryogenic magnetic detector) was installed at the VEPP-2000 facility, and the physics program with both the SND and KMD-3 particle detectors was started in the energy range of 1-1.9 GeV. This first experimental season was completed in summer 2010 with precision energy calibration by resonant depolarization.
15. Quartified leptonic color, bound states, and future electron-positron collider
Kownacki, Corey; Ma, Ernest; Pollard, Nicholas; Popov, Oleg; Zakeri, Mohammadreza
2017-06-01
The [SU(3)]^4 quartification model of Babu, Ma, and Willenbrock (BMW), proposed in 2003, predicts a confining leptonic color SU(2) gauge symmetry, which becomes strong at the keV scale. It also predicts the existence of three families of half-charged leptons (hemions) below the TeV scale. These hemions are confined to form bound states which are not so easy to discover at the Large Hadron Collider (LHC). However, just as J/ψ and ϒ appeared as sharp resonances in e-e+ colliders of the 20th century, the corresponding 'hemionium' states are expected at a future e-e+ collider of the 21st century.
16. eRHIC, the BNL design for a future Electron-Ion Collider
Roser, Thomas
2016-03-01
With the addition of a 20 GeV polarized electron accelerator to the existing Brookhaven Relativistic Heavy Ion Collider (RHIC), the world's only high energy heavy ion and polarized proton collider, a future eRHIC facility will be able to produce polarized electron-nucleon collisions at center-of-mass energies of up to 145 GeV and cover the whole science case as outlined in the Electron-Ion Collider White Paper and endorsed by the 2015 Nuclear Physics Long Range Plan with high luminosity. The presentation will describe the eRHIC design concepts and recent efforts to reduce the technical risks of the project.
17. A High Field Magnet Design for A Future Hadron Collider
SciTech Connect
Gupta, R.; Chow, K.; Dietderich, D.; Gourlay, S.; Millos, G.; McInturff, A.; Scanlan, R.
1998-09-01
The US high energy physics community is exploring the possibilities of building a Very Large Hadron Collider (VLHC) after the completion of LHC. This paper presents a high field magnet design option based on Nb3Sn technology. A preliminary magnetic and mechanical design of a 14-16 T, 2-in-1 dipole based on the 'common coil design' approach is presented. The computer code ROXIE has been upgraded to perform the field quality optimization of magnets based on the racetrack coil geometry. A magnet R&D program to investigate the issues related to high field magnet designs is also outlined.
18. Signals from flavor changing scalar currents at the future colliders
SciTech Connect
Atwood, D.; Reina, L.; Soni, A.
1996-11-22
We present a general phenomenological analysis of a class of Two Higgs Doublet Models with Flavor Changing Neutral Currents arising at the tree level. The existing constraints mainly affect the couplings of the first two generations of quarks, leaving the possibility for non negligible Flavor Changing couplings of the top quark open. The next generation of lepton and hadron colliders will offer the right environment to study the physics of the top quark and to unravel the presence of new physics beyond the Standard Model. In this context we discuss some interesting signals from Flavor Changing Scalar Neutral Currents.
19. Thermal production of charm quarks in heavy ion collisions at the Future Circular Collider
Liu, Yunpeng; Ko, Che Ming
2016-12-01
By solving the rate equation in an expanding quark-gluon plasma (QGP), we study thermal production of charm quarks in central Pb + Pb collisions at the Future Circular Collider. With the charm quark production cross section taken from the perturbative QCD at the next-to-leading order, we find that charm quark production from the QGP can be appreciable compared to that due to initial hard scattering between colliding nucleons.
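The charm abstract above relies on solving a rate equation in an expanding quark-gluon plasma. The toy sketch below illustrates the general idea with a simple relaxation-type rate equation and ideal Bjorken cooling; the functional forms and all numbers are assumptions for illustration only, not the cross sections or temperatures used by Liu and Ko.

```python
import numpy as np

# Toy model: dN/dtau = (N_eq(T) - N) / tau_relax, with Bjorken cooling
# T(tau) = T0 * (tau0 / tau)**(1/3) and a schematic equilibrium yield.

def n_eq(T, m_c=1.5, norm=100.0):
    return norm * np.exp(-2.0 * m_c / T)    # schematic equilibrium charm-pair yield

T0, tau0, tau_f = 0.8, 0.5, 15.0            # GeV, fm/c, fm/c (assumed toy values)
tau_relax = 5.0                             # fm/c, assumed constant relaxation time
N = 0.0                                     # thermal yield only; initial hard yield excluded
dtau = 0.01
for tau in np.arange(tau0, tau_f, dtau):
    T = T0 * (tau0 / tau) ** (1.0 / 3.0)    # ideal longitudinal Bjorken expansion
    N += dtau * (n_eq(T) - N) / tau_relax   # forward-Euler step of the rate equation
print(f"thermal charm pairs in this toy model: {N:.2f}")
```

The point of such a calculation is that most of the thermal production happens early, while the temperature is still well above the charm threshold scale.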
20. eRHIC - Future Electron-Ion Collider at BNL
SciTech Connect
Ptitsyn, V.
2006-07-11
The work on the detailed design of electron-ion collider, eRHIC, on the basis of existing RHIC machine is underway. eRHIC aims to be an instrument for the exploration of important QCD aspects using collisions of polarized electrons and positrons on ions and polarized protons in the center of mass energy range of 30-100 GeV, with a luminosity of 10^32-10^34 cm^-2 s^-1 for e-p and 10^30-10^32 cm^-2 s^-1 for e-Au collisions. An electron accelerator, which delivers about 0.5A polarized electron beam current in the electron energy range of 5 to 10 GeV, would be constructed at BNL, near the existing RHIC complex and would intersect an ion ring in at least one of the available ion ring interaction regions. One design option considers the circular electron machine based on the accelerator technology similar to that of storage rings at the e+-e- B-factories. Another pursued design option employs an energy recovery linac for electron acceleration. This option paves the way to higher luminosities but meets challenges of developments of high current electron polarized source and high beam power ERL technologies. To maximize the collider luminosity certain upgrades are considered for RHIC ion rings.
1. ERHIC, A FUTURE ELECTRON-ION COLLIDER AT BNL
SciTech Connect
PTITSYN,V.; AHRENS L.; ET AL.
2004-07-05
The authors review recent progress in the design of eRHIC, a proposed high luminosity, polarized electron-ion collider which would make use of the existing RHIC machine. The eRHIC collider aims to provide collisions of electrons and positrons on ions and protons in the center-of-mass energy range of 30-100 GeV, with a luminosity of 10^32-10^34 cm^-2 s^-1 for e-p and 10^30-10^32 cm^-2 s^-1 for e-Au collisions. An essential design requirement is to provide longitudinally polarized beams of electrons and protons (and possibly lighter ions) at the collision point. An eRHIC ZDR [1] has been prepared which considers various aspects of the accelerator design. An electron accelerator, which delivers about 0.5A polarized electron beam current in the electron energy range of 5 to 10 GeV, would be constructed at BNL, near the existing RHIC complex and would intersect an ion ring in at least one of the available ion ring interaction regions. In order to reach the luminosity goals, some upgrades in the ion rings would also be required.
2. Linear polarization of gluons and photons in unpolarized collider experiments
SciTech Connect
Pisano, Cristian; Boer, Daniël; Brodsky, Stanley J.; Buffing, Maarten G. A.; Mulders, Piet J.
2013-10-01
We study azimuthal asymmetries in heavy quark pair production in unpolarized electron-proton and proton-proton collisions, where the asymmetries originate from the linear polarization of gluons inside unpolarized hadrons. We provide cross section expressions and study the maximal asymmetries allowed by positivity, for both charm and bottom quark pair production. The upper bounds on the asymmetries are shown to be very large depending on the transverse momentum of the heavy quarks, which is promising especially for their measurements at a possible future Electron-Ion Collider or a Large Hadron electron Collider. We also study the analogous processes and asymmetries in muon pair production as a means to probe linearly polarized photons inside unpolarized protons. For increasing invariant mass of the muon pair the asymmetries become very similar to the heavy quark pair ones. Finally, we discuss the process dependence of the results that arises due to differences in color flow and address the problem with factorization in case of proton-proton collisions.
3. New technologies for a future superconducting proton collider
SciTech Connect
Malamud, E.; Foster, G.W.
1996-06-01
New more economic approaches are required to continue the dramatic exponential rise in particle accelerator energies as represented by the well-known Livingston plot. The old idea of low-cost, low-field iron dominated magnets in a small diameter pipe may become feasible in the next decade with dramatic recent advances in technology: (1) high-Tc superconductors operating at liquid N2 or H2 temperatures, (2) advanced tunneling technologies for small diameter, non-human-accessible tunnels, (3) accurate remote guidance systems for boring machine steering, (4) industrial applications of remote manipulation and robotics, and (5) digitally multiplexed electronics to minimize cables. There is an opportunity for mutually beneficial partnerships between the High Energy Physics community and the commercial sector to develop the necessary technology. This will gain public support, a necessary part of the challenge of building a new, very high energy collider.
4. Future Experiments in Astrophysics
NASA Technical Reports Server (NTRS)
Krizmanic, John F.
2002-01-01
The measurement methodologies of astrophysics experiments reflect the enormous variation of the astrophysical radiation itself. The diverse nature of the astrophysical radiation, e.g. cosmic rays, electromagnetic radiation, and neutrinos, is further complicated by the enormous span in energy, from the 1.95 K relic neutrino background to cosmic rays with energy greater than 10^20 eV. The measurement of gravity waves and search for dark matter constituents are also of astrophysical interest. Thus, the experimental techniques employed to determine the energy of the incident particles are strongly dependent upon the specific particles and energy range to be measured. This paper summarizes some of the calorimetric methodologies and measurements planned by future astrophysics experiments. A focus will be placed on the measurement of higher energy astrophysical radiation. Specifically, future cosmic ray, gamma ray, and neutrino experiments will be discussed.
5. Future flavour physics experiments
PubMed Central
2015-01-01
The current status of flavour physics and the prospects for present and future experiments will be reviewed. Measurements in B‐physics, in which sensitive probes of new physics are the CKM angle γ, the Bs mixing phase ϕs, and the branching ratios of the rare decays B(s)0→μ+μ− , will be highlighted. Topics in charm and kaon physics, in which the measurements of ACP and the branching ratios of the rare decays K→πνν¯ are key measurements, will be discussed. Finally the complementarity of the future heavy flavour experiments, the LHCb upgrade and Belle‐II, will be summarised. PMID:26877543
6. Exotic decays of the 125 GeV Higgs boson at future e+e– colliders
DOE PAGES
Liu, Zhen; Wang, Lian -Tao; Zhang, Hao
2017-06-01
Discovery of unexpected properties of the Higgs boson offers an intriguing opportunity of shedding light on some of the most profound puzzles in particle physics. The Beyond Standard Model (BSM) decays of the Higgs boson could reveal new physics in a direct manner. Future electron-positron lepton colliders operating as Higgs factories, including CEPC, FCC-ee and ILC, with the advantages of a clean collider environment and large statistics, could greatly enhance the sensitivity in searching for these BSM decays. In this work, we perform a general study of Higgs exotic decays at future e+e- lepton colliders, focusing on the Higgs decays with hadronic final states and/or missing energy, which are very challenging for the High-Luminosity program of the Large Hadron Collider (HL-LHC). We show that with simple selection cuts, O(10^-3-10^-5) limits on the Higgs exotic decay branching fractions can be achieved using the leptonic decaying spectator Z boson in the associated production mode e+e- → ZH. We further discuss the interplay between the detector performance and Higgs exotic decay, and other possibilities of exotic decays. Finally, our work is a first step in a comprehensive study of Higgs exotic decays at future lepton colliders, which is a key ingredient of Higgs physics that deserves further investigation.
7. The Standard Model from LHC to future colliders.
PubMed
Forte, S; Nisati, A; Passarino, G; Tenchini, R; Calame, C M Carloni; Chiesa, M; Cobal, M; Corcella, G; Degrassi, G; Ferrera, G; Magnea, L; Maltoni, F; Montagna, G; Nason, P; Nicrosini, O; Oleari, C; Piccinini, F; Riva, F; Vicini, A
This review summarizes the results of the activities which have taken place in 2014 within the Standard Model Working Group of the "What Next" Workshop organized by INFN, Italy. We present a framework, general questions, and some indications of possible answers on the main issue for Standard Model physics in the LHC era and in view of possible future accelerators.
8. Tests of Scintillator+WLS Strips for Muon System at Future Colliders
SciTech Connect
Denisov, Dmitri; Evdokimov, Valery; Lukić, Strahinja
2015-10-11
Prototype scintillator+WLS strips with SiPM readout for a muon system at future colliders were tested for light yield, time resolution and position resolution. Depending on the configuration, light yield of up to 36 photoelectrons per muon per SiPM has been achieved, as well as time resolution of 0.5 ns and position resolution of ~7 cm.
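For context on the quoted light yields, the snippet below shows how a simple Poisson model translates a mean photoelectron yield into a single-SiPM efficiency above a discriminator threshold; the threshold values are assumptions for illustration, not settings used in the tests.

```python
from math import exp, factorial

def efficiency(mean_pe, threshold_pe):
    """Probability that a Poisson light yield fluctuates to >= threshold photoelectrons."""
    below = sum(exp(-mean_pe) * mean_pe ** k / factorial(k) for k in range(threshold_pe))
    return 1.0 - below

# Illustrative: with a mean of 36 p.e. per muon per SiPM, even a fairly high
# threshold leaves the single-channel detection efficiency essentially at 100%.
for thr in (5, 10, 20):
    print(thr, f"{efficiency(36.0, thr):.6f}")
```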
9. Operational plasma density and laser parameters for future colliders based on laser-plasma accelerators
SciTech Connect
Schroeder, C. B.; Esarey, E.; Leemans, W. P.
2012-12-21
The operational plasma density and laser parameters for future colliders based on laser-plasma accelerators are discussed. Beamstrahlung limits the charge per bunch at low plasma densities. Reduced laser intensity is examined to improve accelerator efficiency in the beamstrahlung-limited regime.
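Since the geometric luminosity scales with the square of the bunch charge, a beamstrahlung-driven cap on the charge per bunch directly costs luminosity unless smaller spot sizes or more bunches compensate. A minimal sketch of that scaling, with assumed ILC-like numbers rather than the parameters of this study:

```python
from math import pi

def luminosity(n_per_bunch, n_bunches, f_rep, sigma_x_cm, sigma_y_cm):
    """Geometric luminosity L = f_rep * n_b * N^2 / (4*pi*sigma_x*sigma_y) in cm^-2 s^-1,
    with no disruption or hourglass enhancement factors included."""
    return f_rep * n_bunches * n_per_bunch ** 2 / (4.0 * pi * sigma_x_cm * sigma_y_cm)

# Assumed illustrative numbers: N = 2e10 per bunch, 1312 bunches, 5 Hz,
# sigma_x = 500 nm, sigma_y = 6 nm at the interaction point.
print(f"{luminosity(2e10, 1312, 5.0, 500e-7, 6e-7):.1e} cm^-2 s^-1")   # ~7e33, quadratic in N
```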
10. Optical injection using colliding laser pulses: experiments at LBNL
Leemans, W. P.; Geddes, C. G. R.; Toth, C.; Faure, J.; van Tilborg, J.; Marcelis, B.; Esarey, E.; Schroeder, C. B.; Fubiani, G.; Shadwick, B. A.; Dugan, G.; Cary, J.; Giacone, R.
2002-11-01
Laser-driven acceleration in plasmas has succeeded in producing electron beams containing multi-nC's of charge, with some fraction of the electrons having energies in excess of tens of MeV but 100% energy spread. One of the current challenges is to produce electron beams with much reduced energy spread. We report on experimental progress in the laser-triggered injection of electrons in a laser wakefield accelerator using the colliding pulse method (E. Esarey et al., Phys. Rev. Lett. 79, 2682 (1997)), (C.B. Schroeder et al., Phys. Rev. E 59, 6037 (1999)). The experiments use the l'OASIS multi-beam 10 Hz high power Ti:Al2O3 laser system (W.P. Leemans et al., Phys. Plasmas 8, 2510 (2001)). In the present experiments, two counter-propagating beams (30° angle) are focused onto a high density gas jet. Preliminary results indicate that electron beam properties are affected by the second beam. Details of the experiments will be shown as well as comparisons with simulations.
11. Quartified leptonic color, bound states, and future electron–positron collider
DOE PAGES
Kownacki, Corey; Ma, Ernest; Pollard, Nicholas; ...
2017-04-04
The [SU(3)]^4 quartification model of Babu, Ma, and Willenbrock (BMW), proposed in 2003, predicts a confining leptonic color SU(2) gauge symmetry, which becomes strong at the keV scale. Also, it predicts the existence of three families of half-charged leptons (hemions) below the TeV scale. These hemions are confined to form bound states which are not so easy to discover at the Large Hadron Collider (LHC). But, just as J/ψ and Υ appeared as sharp resonances in e-e+ colliders of the 20th century, the corresponding 'hemionium' states are expected at a future e-e+ collider of the 21st century.
12. Worldwide Activities towards a Future Circular Collider: Physics and Detector Studies
Mangano, Michelangelo
2015-04-01
Collider rings with circumference in the range of 50-100 km could host electron-positron colliders with center-of-mass energies up to 350 GeV, and proton-proton colliders up to 100 TeV. Two-stage projects, along the lines of the LEP-LHC complex, are under study by the high-energy physics community worldwide. The physics potential of such a future facility spans from improving by orders of magnitude the precision study of the Higgs boson, to extending by a factor of 10 the mass reach for the search of new particles. The talk will review the physics opportunities and the challenges that are emerging from the current studies.
13. A new micro-strip tracker for the new generation of experiments at hadron colliders
SciTech Connect
Dinardo, Mauro E.
2005-12-01
This thesis concerns the development and characterization of a prototype Silicon micro-strip detector that can be used in the forward (high rapidity) region of a hadron collider. These detectors must operate in a high radiation environment without any important degradation of their performance. The innovative feature of these detectors is the readout electronics, which, being completely data-driven, allows for the direct use of the detector information at the lowest level of the trigger. All the particle hits on the detector can be read out in real time without any external trigger and any particular limitation due to dead-time. In this way, all the detector information is available to elaborate a very selective trigger decision based on a fast reconstruction of tracks and vertex topology. These detectors, together with the new approach to the trigger, have been developed in the context of the BTeV R&D program; our aim was to define the features and the design parameters of an optimal experiment for heavy flavour physics at hadron colliders. Application of these detectors goes well beyond the BTeV project and, in particular, involves the future upgrades of experiments at hadron colliders, such as Atlas, CMS and LHCb. These experiments, indeed, are already considering for their future high-intensity runs a new trigger strategy a la BTeV. Their aim is to select directly at trigger level events containing B hadrons, which, in several cases, come from the decay of Higgs bosons, Z0's or W±'s; the track information can also help in improving the performance of the electron and muon selection at the trigger level. For this reason, they are going to develop new detectors with practically the same characteristics as those of BTeV. To this extent, the work accomplished in this thesis could serve as a guideline for those upgrades.
14. Power converters for future LHC experiments
Alderighi, M.; Citterio, M.; Riva, M.; Latorre, S.; Costabeber, A.; Paccagnella, A.; Sichirollo, F.; Spiazzi, G.; Stellini, M.; Tenti, P.; Cova, P.; Delmonte, N.; Lanza, A.; Bernardoni, M.; Menozzi, R.; Baccaro, S.; Iannuzzo, F.; Sanseverino, A.; Busatto, G.; De Luca, V.; Velardi, F.
2012-03-01
The paper describes power switching converters suitable for possible power supply distribution networks for the upgraded detectors at the High Luminosity LHC collider. The proposed topologies have been selected by considering their tolerance to the highly hostile environment where the converters will operate as well as their limited electromagnetic noise emission. The analysis focuses on the description of the power supplies for noble liquid calorimeters, such as the Atlas LAr calorimeters, though several outcomes of this research can be applied to other detectors of the future LHC experiments. Experimental results carried on demonstrators are provided.
15. Physics opportunities at the future eRHIC electron-ion collider
Fazio, Salvatore
2017-03-01
The 2015 nuclear physics long-range plan endorsed the realization of an electron-ion collider as the next large construction project in the United States. This new collider will provide definite answers to the following questions: How are the sea quarks and gluons, and their spins, distributed in space and momentum inside the nucleon? How are these quark and gluon distributions correlated with overall nucleon properties, such as spin direction? What is the role of the orbital motion of sea quarks and gluons in building up the nucleon spin? The eRHIC project is the Brookhaven National Laboratory's vision for the realization of the future electron-ion collider. eRHIC, with its high luminosity (>10^33 cm^-2 s^-1), wide kinematic reach in center-of-mass energy (45 GeV to 145 GeV) since day-1 and highly polarized nucleon (P ≈ 70%) and electron (P ≈ 80%) beams provides an unprecedented opportunity to reach new frontiers in our understanding of the internal dynamic structure of nucleons. We give a brief description of the eRHIC project and highlight several key high precision measurements from the planned broad physics program at the future electron-ion collider and the expected impact on our current understanding of the spatial structure of nucleons and nuclei, and the transition from a non-saturated to a saturated state of nuclear matter.
16. 1995 second modulator-klystron workshop: A modulator-klystron workshop for future linear colliders
SciTech Connect
1996-03-01
This second workshop examined the present state of modulator design and attempted an extrapolation for future electron-positron linear colliders. These colliders are currently viewed as multikilometer-long accelerators consisting of a thousand or more RF sources with 500 to 1,000, or more, pulsed power systems. The workshop opened with two introductory talks that presented the current approaches to designing these linear colliders, the anticipated RF sources, and the design constraints for pulse power. The cost of main AC power is a major economic consideration for a future collider, consequently the workshop investigated efficient modulator designs. Techniques that effectively apply the art of power conversion, from the AC mains to the RF output, and specifically, designs that generate output pulses with very fast rise times as compared to the flattop. There were six sessions that involved one or more presentations based on problems specific to the design and production of thousands of modulator-klystron stations, followed by discussion and debate on the material.
17. The ATLAS Experiment at the CERN Large Hadron Collider
2008-08-01
The ATLAS detector as installed in its experimental cavern at point 1 at CERN is described in this paper. A brief overview of the expected performance of the detector when the Large Hadron Collider begins operation is also presented.
18. Exotic decays of the 125 GeV Higgs boson at future e+e- colliders
Liu, Zhen; Wang, Lian-Tao; Zhang, Hao
2017-06-01
The discovery of unexpected properties of the Higgs boson would offer an intriguing opportunity to shed light on some of the most profound puzzles in particle physics. Beyond Standard Model (BSM) decays of the Higgs boson could reveal new physics in a direct manner. Future electron-positron lepton colliders operating as Higgs factories, including CEPC, FCC-ee and ILC, with the advantages of a clean collider environment and large statistics, could greatly enhance sensitivity in searching for these BSM decays. In this work, we perform a general study of Higgs exotic decays at future e+e- lepton colliders, focusing on the Higgs decays with hadronic final states and/or missing energy, which are very challenging for the High-Luminosity program of the Large Hadron Collider (HL-LHC). We show that with simple selection cuts, O(10^-3-10^-5) limits on the Higgs exotic decay branching fractions can be achieved using the leptonic decaying spectator Z boson in the associated production mode e+e- → ZH. We further discuss the interplay between detector performance and Higgs exotic decays, and other possibilities of exotic decays. Our work is a first step in a comprehensive study of Higgs exotic decays at future lepton colliders, which is a key area of Higgs physics that deserves further investigation. Supported by Fermi Research Alliance, LLC (DE-AC02-07CH11359) with the U.S. Department of Energy, DOE (DE-SC0013642), IHEP(Y6515580U1), and IHEP Innovation (Y4545171Y2)
19. FUTURE SCIENCE AT THE RELATIVISTIC HEAVY ION COLLIDER.
SciTech Connect
LUDLAM, T.
2006-12-21
QCD was developed in the 1970's as a theory of the strong interaction describing the confinement of quarks in hadrons. An early consequence of this picture was the realization that at sufficiently high temperature, or energy density, the confining forces are overcome by color screening effects, resulting in a transition from hadronic matter to a new state--later named the Quark Gluon Plasma--whose bulk dynamical properties are determined by the quark and gluon degrees of freedom, rather than those of confined hadrons. The suggestion that this phase transition in a fundamental theory of nature might occur in the hot, dense nuclear matter created in heavy ion collisions triggered a series of experimental searches during the past two decades at CERN and at BNL, with successively higher-energy nuclear collisions. This has culminated in the present RHIC program. In their first five years of operation, the RHIC experiments have identified a new form of thermalized matter formed in Au+Au collisions at energy densities more than 100 times that of a cold atomic nucleus. Measurements and comparison with relativistic hydrodynamic models indicate that the matter thermalizes in an unexpectedly short time (<1 fm/c), has an energy density at least 15 times larger than needed for color deconfinement, has a temperature about 2 times the critical temperature of ~170 MeV predicted by lattice QCD, and appears to exhibit collective motion with ideal hydrodynamic properties--a 'perfect liquid' that appears to flow with a near-zero viscosity to entropy ratio - lower than any previously observed fluid and perhaps close to a universal lower bound. There are also indications that the new form of matter directly involves quarks. Comparison of measured relative hadron abundances with very successful statistical models indicates that hadrons chemically decouple at a temperature of 160-170 MeV. There is evidence suggesting that this happens very close to the quark-hadron phase
20. GUT models at current and future hadron colliders and implications to dark matter searches
Arcadi, Giorgio; Lindner, Manfred; Mambrini, Yann; Pierre, Mathias; Queiroz, Farinaldo S.
2017-08-01
Grand Unified Theories (GUT) offer an elegant and unified description of electromagnetic, weak and strong interactions at high energy scales. A phenomenological and exciting possibility to grasp GUT is to search for TeV scale observables arising from Abelian groups embedded in GUT constructions. That said, we use dilepton data (ee and μμ) that has been proven to be a golden channel for a wide variety of new phenomena expected in theories beyond the Standard Model to probe GUT-inspired models. Since heavy dilepton resonances feature high signal selection efficiencies and relatively well-understood backgrounds, stringent and reliable bounds can be placed on the mass of the Z′ gauge boson arising in such theories. In this work, we obtain 95% C.L. limits on the Z′ mass for several GUT models using current and future proton-proton colliders with √s = 13 TeV, 33 TeV, and 100 TeV, and put them into perspective with dark matter searches in light of the next generation of direct detection experiments.
1. High-Power X-Band Semiconductor RF Switch for Pulse Compression Systems of Future Colliders
Tantawi, Sami G.; Tamura, Fumihiko
2000-04-01
We describe the potential of semiconductor X-band RF switch arrays as a means of developing high power RF pulse compression systems for future linear colliders. The switch systems described here have two designs. Both designs consist of two 3dB hybrids and active modules. In the first design the module is composed of a cascaded active phase shifter. In the second design the module uses arrays of SPST (Single Pole Single Throw) switches. Each cascaded element of the phase shifter and the SPST switch has similar design. The active element consists of symmetrical three-port tee-junctions and an active waveguide window in the symmetrical arm of the tee-junction. The design methodology of the elements and the architecture of the whole switch system are presented. We describe the scaling law that governs the relation between power handling capability and number of elements. The design of the active waveguide window is presented. The waveguide window is a silicon wafer with an array of four hundred PIN/NIP diodes covering the surface of the window. This waveguide window is located in an over-moded TE01 circular waveguide. The results of high power RF measurements of the active waveguide window are presented. The experiment is performed at power levels of tens of megawatts at X-band.
2. Scintillator Based Tracking Detectors for a Muon System at Future Colliders
Denisov, Dmitri; Evdokimov, Valery; Lukic, Strahinja; Ujic, Predrag
2017-01-01
Extruded scintillator+WLS strips with SiPM readout for large muon detection systems were tested in the muon beam of the Fermilab Test Beam Facility. Light yield of up to 140 photoelectrons per muon per strip has been observed, as well as time resolution of 330 ps and position resolution along the strip of 5.4 cm. With such excellent performance parameters this detector is a natural option for large-scale muon systems at future colliders.
3. Time and position resolution of the scintillator strips for a muon system at future colliders
DOE PAGES
Denisov, Dmitri; Evdokimov, Valery; Lukic, Strahinja
2016-03-31
In this study, prototype scintillator+WLS strips with SiPM readout for a muon system at future colliders were tested for light yield, time resolution and position resolution. Depending on the configuration, light yield of up to 36 photoelectrons per muon per SiPM has been observed, as well as time resolution of 0.45 ns and position resolution along the strip of 7.7 cm.
4. Time and position resolution of the scintillator strips for a muon system at future colliders
SciTech Connect
Denisov, Dmitri; Evdokimov, Valery; Lukic, Strahinja
2016-03-31
In this study, prototype scintillator+WLS strips with SiPM readout for a muon system at future colliders were tested for light yield, time resolution and position resolution. Depending on the configuration, light yield of up to 36 photoelectrons per muon per SiPM has been observed, as well as time resolution of 0.45 ns and position resolution along the strip of 7.7 cm.
5. The Multi-Purpose Detector (MPD) of the collider experiment
Golovatyuk, V.; Kekelidze, V.; Kolesnikov, V.; Rogachevsky, O.; Sorin, A.
2016-08-01
The project NICA (Nuclotron-based Ion Collider fAcility) is aimed at studying dense baryonic matter in heavy-ion collisions in the energy range up to √s_NN = 11 GeV with an average luminosity of L = 10^27 cm^-2 s^-1 (for ^197Au^79+). The experimental program at the NICA collider will be performed with the Multi-Purpose Detector (MPD). We report on the main physics objectives of the NICA heavy-ion program and present the main detector components.
6. Experimental validation of a novel compact focusing scheme for future energy-frontier linear lepton colliders.
PubMed
White, G R; Ainsworth, R; Akagi, T; Alabau-Gonzalvo, J; Angal-Kalinin, D; Araki, S; Aryshev, A; Bai, S; Bambade, P; Bett, D R; Blair, G; Blanch, C; Blanco, O; Blaskovic-Kraljevic, N; Bolzon, B; Boogert, S; Burrows, P N; Christian, G; Corner, L; Davis, M R; Faus-Golfe, A; Fukuda, M; Gao, J; García-Morales, H; Geffroy, N; Hayano, H; Heo, A Y; Hildreth, M; Honda, Y; Huang, J Y; Hwang, W H; Iwashita, Y; Jang, S; Jeremie, A; Kamiya, Y; Karataev, P; Kim, E S; Kim, H S; Kim, S H; Kim, Y I; Komamiya, S; Kubo, K; Kume, T; Kuroda, S; Lam, B; Lekomtsev, K; Liu, S; Lyapin, A; Marin, E; Masuzawa, M; McCormick, D; Naito, T; Nelson, J; Nevay, L J; Okugi, T; Omori, T; Oroku, M; Park, H; Park, Y J; Perry, C; Pfingstner, J; Phinney, N; Rawankar, A; Renier, Y; Resta-López, J; Ross, M; Sanuki, T; Schulte, D; Seryi, A; Shevelev, M; Shimizu, H; Snuverink, J; Spencer, C; Suehara, T; Sugahara, R; Takahashi, T; Tanaka, R; Tauchi, T; Terunuma, N; Tomás, R; Urakawa, J; Wang, D; Warden, M; Wendt, M; Wolski, A; Woodley, M; Yamaguchi, Y; Yamanaka, T; Yan, J; Yokoya, K; Zimmermann, F
2014-01-24
A novel scheme for the focusing of high-energy leptons in future linear colliders was proposed in 2001 [P. Raimondi and A. Seryi, Phys. Rev. Lett. 86, 3779 (2001)]. This scheme has many advantageous properties over previously studied focusing schemes, including being significantly shorter for a given energy and having a significantly better energy bandwidth. Experimental results from the ATF2 accelerator at KEK are presented that validate the operating principle of such a scheme by demonstrating the demagnification of a 1.3 GeV electron beam down to below 65 nm in height using an energy-scaled version of the compact focusing optics designed for the ILC collider.
7. Evaluation of the radiation field in the future circular collider detector
Besana, M. I.; Cerutti, F.; Ferrari, A.; Riegler, W.; Vlachoudis, V.
2016-11-01
The radiation load on a detector at a 100 TeV proton-proton collider, which is being investigated within the Future Circular Collider (FCC) study, is presented. A first concept of the detector has been modeled and relevant fluence and dose distributions have been calculated using the FLUKA Monte Carlo code. Distributions of fluence rates are discussed separately for charged particles, neutrons and photons. Dose and 1 MeV neutron equivalent fluence, for the accumulated integrated luminosity, are presented. The peak values of these quantities in the different subdetectors are highlighted, in order to define the radiation tolerance requirements for the choice of possible technologies. The effect of the magnetic field is also discussed. Two shielding solutions have been conceived to minimize the backscattering from the forward calorimeters to the muon chambers and the forward tracking stations. The two possible designs are presented and their effectiveness is discussed.
8. ATF2 for Final Focus Test Beam for Future Linear Colliders
Kuroda, S.; ATF2 Collaboration
2016-04-01
In future linear colliders, an extremely small beam size is required at the collision point for high luminosity. For example, it is of the order of nanometers in the ILC (International Linear Collider). ATF2 is a project at the ATF (Accelerator Test Facility) at KEK which demonstrates the performance of the final focus system experimentally. The ATF2 beam line is a prototype of the ILC final focus system, where the local chromaticity correction scheme is adopted. The optics is basically the same, and the natural chromaticity, too. Thus the tolerance of magnet alignment and field error is similar for both beam lines. We report here the observation of a small beam size of about 45 nm there. We also report plans for smaller beam size with higher beam intensity.
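The nanometer spot sizes quoted for ATF2 and the ILC follow from the usual relation between normalized emittance, the beta function at the focus, and beam energy. A small sketch with assumed round numbers (β*_y ≈ 0.1 mm, γε_y ≈ 3×10^-8 m rad) close to the ATF2 vertical plane, not the exact machine settings:

```python
import math

def spot_size(norm_emittance_m, beta_star_m, energy_gev, m_e_gev=0.000511):
    """RMS spot size sigma* = sqrt(beta* x eps_geom), with eps_geom = eps_N / gamma."""
    gamma = energy_gev / m_e_gev
    return math.sqrt(beta_star_m * norm_emittance_m / gamma)

# Illustrative ATF2-like vertical plane: 1.3 GeV beam energy.
print(f"{spot_size(3e-8, 1e-4, 1.3) * 1e9:.0f} nm")
```

This lands in the few-tens-of-nanometer range, consistent with the ~45 nm observation quoted above.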
9. HIGH ENERGY PHYSICS POTENTIAL AT MUON COLLIDERS
SciTech Connect
PARSA,Z.
2000-04-07
In this paper, high energy physics possibilities and future colliders are discussed. The μ+μ- collider and experiments with high intensity muon beams as the stepping phase towards building Higher Energy Muon Colliders (HEMC) are briefly reviewed and encouraged.
10. Shedding Light on Dark Matter at Colliders
Mitsou, Vasiliki A.
2013-12-01
Dark matter remains one of the most puzzling mysteries in Fundamental Physics of our times. Experiments at high-energy physics colliders are expected to shed light to its nature and determine its properties. This review focuses on recent searches for dark matter signatures at the Large Hadron Collider, also discussing related prospects in future e+e- colliders.
11. Testing B-violating signatures from exotic instantons in future colliders
Addazi, Andrea; Kang, Xian-Wei; Khlopov, Maxim Yu.
2017-09-01
We discuss possible implications of exotic stringy instantons for baryon-violating signatures in future colliders. In particular, we discuss high-energy quark collisions and transitions. In principle, the process can be probed by high-luminosity electron-positron colliders. However, we find that an extremely high luminosity is needed in order to provide a (somewhat) stringent bound compared to the current data on NN → ππ,KK. On the other hand, (exotic) instanton-induced six-quark interactions can be tested in near future high-energy colliders beyond LHC, at energies around 20–100 TeV. The Super proton-proton Collider (SppC) would be capable of such measurement given the proposed energy level of 50–90 TeV. Comparison with other channels is made. In particular, we show the compatibility of our model with neutron-antineutron and NN → ππ,KK bounds. A. A.’s work was Supported in part by the MIUR research grant “Theoretical Astroparticle Physics" PRIN 2012CPPYP7. XWK's work is partly Supported by the DFG and the NSFC through funds provided to the Sino-German CRC 110 “Symmetries and the Emergence of Structure in QCD” when he was in Jülich, and by MOST, Taiwan, (104-2112-M-001-022) from April 2017. The work by MK was performed within the framework of the Center FRPP Supported by MEPhI Academic Excellence Project (contract 02.03.21.0005, 27.08.2013), Supported by the Ministry of Education and Science of Russian Federation, project 3.472.2014/K and grant RFBR 14-22-03048
12. Future Outlook: Experiment
Suzuki, Yoichiro
2008-11-01
A personal view of the next-to-next neutrino detector, the ultimate experiment, is discussed. Considering the size, cost and headwinds against basic science, the ultimate experiment will be the only experiment in the world. Here two such experiments, one for neutrino oscillation and the other for double beta decay, are discussed. The ultimate experiment needs to include bread-and-butter science and to have a discovery potential for an unexpected phenomenon. There are many technical challenges and international co-operation is absolutely necessary.
13. Future reactor experiments
SciTech Connect
Wen, Liangjian
2015-07-15
The non-zero neutrino mixing angle θ13 has been discovered and precisely measured by the current-generation short-baseline reactor neutrino experiments. It opens the gate to measuring the leptonic CP-violating phase and enables the determination of the neutrino mass ordering. The JUNO and RENO-50 proposals aim at resolving the neutrino mass ordering using reactors. The experiment design, physics sensitivity, technical challenges as well as the progress of those two proposed experiments are reviewed in this paper.
14. Physics of e+e- colliders: present, future, and far future
SciTech Connect
Peskin, M.E.
1984-10-01
The presentation of this lecture will proceed as follows: Section 2 reviews the features of e+e- collisions according to the standard gauge theory of strong, weak, and electromagnetic interactions. This discussion reviews a few of the most important features of e+e- collisions at currently accessible energies and the expectations for e+e- reactions which produce the intermediate vector bosons Z0 and W±. Section 3 reviews some of the experimental work done at the current generation of e+e- colliders; this discussion emphasizes the search for new types of elementary particles. Section 4 is a theoretical digression, introducing a number of ideas about physics at the energy scale of 1 TeV. Section 5 discusses (rather superficially) a number of technical aspects of electron-positron colliders designed to reach the TeV energies. Finally, Section 6 discusses various possible effects which could appear in e+e- collisions as the result of new physics appearing at 1 TeV or above. 41 refs., 35 figs.
15. Neutrinos from colliding wind binaries: future prospects for PINGU and ORCA
Becker Tjus, J.
2014-05-01
Massive stars play an important role in explaining the cosmic ray spectrum below the knee, possibly even up to the ankle, i.e. up to energies of 10^15 or 10^18.5 eV, respectively. In particular, Supernova Remnants are discussed as one of the main candidates to explain the cosmic ray spectrum. Even before their violent deaths, during the stars' regular lifetimes, cosmic rays can be accelerated in wind environments. High-energy gamma-ray measurements indicate hadronic acceleration in binary systems, leading to both periodic gamma-ray emission from binaries like LSI +61 303 and continuous emission from colliding wind environments like η Carinae. The detection of neutrinos and photons from hadronic interactions is one of the most promising methods to identify particle acceleration sites. In this paper, future prospects to detect neutrinos from colliding wind environments in massive stars are investigated. In particular, the seven most promising candidates for emission from colliding wind binaries are investigated to provide an estimate of the signal strength. The expected signal of a single source is about a factor of 5-10 below the current IceCube sensitivity and it is therefore not accessible at the moment. What is discussed in addition is the future possibility of measuring low-energy neutrino sources with detectors like PINGU and ORCA: the minimum of the atmospheric neutrino flux at around 25 GeV from neutrino oscillations provides an opportunity to reduce the background and increase the significance of searches for GeV-TeV neutrino sources. This paper presents the first idea; detailed studies including the detectors' effective areas will be necessary in the future to test the feasibility of such an approach.
16. Gaugino physics of split supersymmetry spectra at the LHC and future proton colliders
Jung, Sunghoon; Wells, James D.
2014-04-01
Discovery of the Higgs boson and lack of discovery of superpartners in the first run at the LHC are both predictions of split supersymmetry with thermal dark matter. We discuss what it would take to find gluinos at hadron supercolliders, including the LHC at 14 TeV center-of-mass energy, and future pp colliders at 100 TeV and 200 TeV. We generalize the discussion by reexpressing the search capacity in terms of the gluino to lightest superpartner mass ratio and apply results to other scenarios, such as gauge mediation and mirage mediation.
17. Testing Contact Interactions of Quarks and Gluons at Future pp Colliders
Argyres, E. N.; Katsilieris, G. A.; Papadopoulos, C. G.; Vlassopulos, S. D. P.
We calculate the contributions of the allowed qqqq, GGG, GGGG, qqG and qqGG contact interactions of the standard QCD quarks and gluons, at a common scale Λ, to jet cross sections at the future hadron colliders. Assuming that the two-jet normalized angular-distribution measurements will be consistent with QCD, to 95% CL we obtain bounds Λ>35-40 TeV at LHC or Λ>50-80 TeV at SSC. A similar analysis of the three-jet events would give Λ>13-15 TeV or Λ>10-25 TeV, respectively.
18. Investigation of beam self-polarization in the future e+e- circular collider
DOE PAGES
Gianfelice-Wendt, E.
2016-10-24
The use of resonant depolarization has been suggested for precise beam energy measurements (better than 100 keV) in the e+e- Future Circular Collider (FCC-e+e-) for Z and WW physics at 45 and 80 GeV beam energy respectively. Longitudinal beam polarization would benefit the Z peak physics program; however it is not essential and therefore it will not be investigated here. In this paper the possibility of self-polarized leptons is considered. Preliminary results of simulations in the presence of quadrupole misalignments and beam position monitor (BPM) errors for a simplified FCC-e+e- ring are presented.
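Resonant depolarization converts a measured spin tune into a beam energy through E = ν·(m_e c²/a_e). The constant below is the standard value used for electron rings; the spin tune in the example is an assumed illustrative number near Z-pole running, not a result from this study.

```python
def beam_energy_from_spin_tune(spin_tune, mc2_over_a=0.4406486):
    """E [GeV] = nu_spin * (m_e c^2 / a_e), where a_e is the electron anomalous
    magnetic moment; 0.4406486 GeV is the conventional conversion constant."""
    return spin_tune * mc2_over_a

# Illustrative: a fractional spin tune near 103.5 corresponds to ~45.6 GeV per beam.
print(beam_energy_from_spin_tune(103.5))
```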
19. Machine detector interface studies: Layout and synchrotron radiation estimate in the future circular collider interaction region
Boscolo, Manuela; Burkhardt, Helmut; Sullivan, Michael
2017-01-01
The interaction region layout for the e+e- future circular collider FCC-ee is presented together with a preliminary estimate of synchrotron radiation that affects this region. We describe in this paper the main guidelines of this design and the estimate of synchrotron radiation coming from the last bending magnets and from the final focus quadrupoles, with the software tools developed for this purpose. The design follows the asymmetric optics layout as far as incoming bend radiation is concerned, with the maximum foreseen beam energy of 175 GeV, and we present a feasible initial layout with an indication of tolerable synchrotron radiation.
20. Testing the handedness of a heavy W′ at future hadron colliders
SciTech Connect
Cvetic, M.; Langacker, P.; Liu, J.
1994-03-01
We show that the associated production pp → W′W and the rare dec[ays ...] at future hadron colliders. For M_W′ ~ (1-3) TeV they would allow a clean determination of whether the W′ couples to V−A or V+A currents. As an illustration, a model in which the W′± couples only to V−A currents is contrasted with the left-right-symmetric models which involve V+A currents.
1. Design considerations for the semi-digital hadronic calorimeter (SDHCAL) for future leptonic colliders
Pingault, A.
2016-07-01
The first technological SDHCAL prototype having been successfully tested, a new phase of R&D, to validate completely the SDHCAL option for the International Linear Detector (ILD) project of the International Linear Collider (ILC), has started with the conception and the realisation of a new prototype. The new one is intended to host a few but large active layers of the future SDHCAL. The new active layers, made of Glass Resistive Plate Chambers (GRPC) with sizes larger than 2 m2, will be equipped with a new version of the electronic readout, fulfilling the requirements of the future ILD detector. The new GRPC are conceived to improve the homogeneity with a new gas distribution scheme. Finally the mechanical structure will be achieved using the electron beam welding technique. The progress realised will be presented and future steps will be discussed.
2. Electron density and plasma dynamics of a colliding plasma experiment
SciTech Connect
Wiechula, J. Schönlein, A.; Iberler, M.; Hock, C.; Manegold, T.; Bohlender, B.; Jacoby, J.
2016-07-15
We present experimental results of two head-on colliding plasma sheaths accelerated by pulsed-power-driven coaxial plasma accelerators. The measurements have been performed in a small vacuum chamber with a neutral-gas prefill of ArH2 at gas pressures between 17 Pa and 400 Pa and load voltages between 4 kV and 9 kV. As the plasma sheaths collide, the electron density is significantly increased. The electron density reaches maximum values of ≈8·10^15 cm^-3 for a single accelerated plasma and a maximum value of ≈2.6·10^16 cm^-3 for the plasma collision. Overall an increase of the plasma density by a factor of 1.3 to 3.8 has been achieved. A scaling behavior has been derived from the values of the electron density which shows a disproportionately high increase of the electron density in the collisional case for higher applied voltages in comparison to a single accelerated plasma. Sequences of the plasma collision have been taken, using a fast framing camera to study the plasma dynamics. These sequences indicate a maximum collision velocity of 34 km/s.
3. Probing the Higgs with angular observables at future e+e– colliders
DOE PAGES
Liu, Zhen
2016-10-24
In this paper, I summarize our recent works on using differential observables to explore the physics potential of future e+e- colliders in the framework of Higgs effective field theory. This proceeding is based upon Refs. 1 and 2. We study angular observables in the e+e- → ZH → ℓ+ℓ-bb̄ channel at future circular e+e- colliders such as CEPC and FCC-ee. Taking into account the impact of realistic cut acceptance and detector effects, we forecast the precision of six angular asymmetries at CEPC (FCC-ee) with center-of-mass energy √s = 240 GeV and 5 (30) ab^-1 integrated luminosity. We then determine the projected sensitivity to a range of operators relevant for the Higgsstrahlung process in the dimension-6 Higgs EFT. Our results show that angular observables provide complementary sensitivity to rate measurements when constraining various tensor structures arising from new physics. We further find that angular asymmetries provide a novel means of constraining the "blind spot" in indirect limits on supersymmetric scalar top partners. Finally, we also discuss the possibility of using ZZ-fusion at e+e- machines at different energies to probe new operators.
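For a rough feel of how asymmetry precisions like those forecast above scale with statistics, the standard counting-asymmetry error formula can be evaluated directly; the event count and central value below are assumptions for illustration, not the CEPC or FCC-ee projections.

```python
import math

def asymmetry_uncertainty(asymmetry, n_events):
    """Statistical error on a counting asymmetry A = (N+ - N-)/(N+ + N-):
    sigma_A = sqrt((1 - A^2) / N)."""
    return math.sqrt((1.0 - asymmetry ** 2) / n_events)

# Illustrative: ~1e5 selected Z(ll)H(bb) events and a small central asymmetry
# give an absolute precision of roughly 0.003 on the asymmetry.
print(f"{asymmetry_uncertainty(0.05, 1e5):.4f}")
```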
4. The generation and acceleration of low emittance flat beams for future linear colliders
SciTech Connect
Raubenheimer, Tor O.
1991-11-01
Many future linear collider designs call for electron and positron beams with normalized rms horizontal and vertical emittances of γεx = 3×10^-6 m-rad and γεy = 3×10^-8 m-rad; these are a factor of 10 to 100 below those observed in the Stanford Linear Collider. In this dissertation, we examine the feasibility of achieving beams with these very small vertical emittances. We examine the limitations encountered during both the generation and the subsequent acceleration of such low emittance beams. We consider collective limitations, such as wakefields, space charge effects, scattering processes, and ion trapping; and also low intensity limitations, such as anomalous dispersion, betatron coupling, and pulse-to-pulse beam jitter. In general, the minimum emittance in both the generation and the acceleration stages is limited by the transverse misalignments of the accelerator components. We describe a few techniques of correcting the effect of these errors, thereby easing the alignment tolerances by over an order of magnitude. Finally, we also calculate fundamental limitations on the minimum vertical emittance; these do not constrain the current designs but may prove important in the future.
5. The generation and acceleration of low emittance flat beams for future linear colliders
SciTech Connect
Raubenheimer, T.O.
1991-11-01
Many future linear collider designs call for electron and positron beams with normalized rms horizontal and vertical emittances of γεx = 3×10^-6 m-rad and γεy = 3×10^-8 m-rad; these are a factor of 10 to 100 below those observed in the Stanford Linear Collider. In this dissertation, we examine the feasibility of achieving beams with these very small vertical emittances. We examine the limitations encountered during both the generation and the subsequent acceleration of such low emittance beams. We consider collective limitations, such as wakefields, space charge effects, scattering processes, and ion trapping; and also low intensity limitations, such as anomalous dispersion, betatron coupling, and pulse-to-pulse beam jitter. In general, the minimum emittance in both the generation and the acceleration stages is limited by the transverse misalignments of the accelerator components. We describe a few techniques of correcting the effect of these errors, thereby easing the alignment tolerances by over an order of magnitude. Finally, we also calculate "fundamental" limitations on the minimum vertical emittance; these do not constrain the current designs but may prove important in the future.
6. Preservation of Ultra Low Emittances Using Adiabatic Matching in Future Plasma Wakefield-based Colliders
SciTech Connect
Gholizadeh, Reza; Muggli, Patric; Katsouleas, Tom; Mori, Warren
2009-01-22
The Plasma Wakefield Accelerator is a promising technique to lower the cost of the future high energy colliders by offering orders of magnitude higher gradients than the conventional accelerators. It has been shown that ion motion is an important issue to account for in the extreme regime of ultra high energies and ultra low emittances, characteristics of future high energy collider beams. In this regime, the transverse electric field of the beam is so high that in simulations, the plasma ions cannot be considered immobile at the time scale of electron plasma oscillation, thereby leading to a nonlinear focusing force. Therefore, the transverse emittance of a beam will not be preserved under these circumstances. However, we show that matched profile in case of a nonlinear focusing force still exists and can be derived from Vlasov equation. Furthermore, we introduce a plasma section that can reduce the emittance growth by adiabatically reducing the ion mass and hence increasing the nonlinear term in the focusing force. Simulation results are presented.
7. Diffractive ρ production at small x in future electron-ion colliders
Gonçalves, V. P.; Navarra, F. S.; Spiering, D.
2016-09-01
The future electron-ion (eA) collider is expected to probe the high energy regime of quantum chromodynamics (QCD), with the exclusive vector meson production cross section being one of the most promising observables. In this paper we complement previous studies of exclusive processes presenting a comprehensive analysis of diffractive ρ production at small x. We compute the coherent and incoherent cross sections taking into account non-linear QCD dynamical effects and considering different models for the dipole-proton scattering amplitude and vector meson wave function. The dependence of these cross sections on the energy, photon virtuality, nuclear mass number and squared momentum transfer is analysed in detail. Moreover, we compare the non-linear predictions with those obtained in the linear regime. Finally, we also estimate the exclusive photon, J/Ψ and ϕ production and compare with the results obtained for ρ production. Our results demonstrate that the analysis of diffractive ρ production in future electron-ion colliders will be important in understanding the non-linear QCD dynamics.
8. Probing the Higgs with angular observables at future e+e- colliders
Liu, Zhen
2016-10-01
I summarize our recent works on using differential observables to explore the physics potential of future e+e- colliders in the framework of Higgs effective field theory. This proceeding is based upon Refs. 1 and 2. We study angular observables in the e+e- → ZH → ℓ+ℓ- b b̄ channel at future circular e+e- colliders such as CEPC and FCC-ee. Taking into account the impact of realistic cut acceptance and detector effects, we forecast the precision of six angular asymmetries at CEPC (FCC-ee) with center-of-mass energy √s = 240 GeV and 5 (30) ab-1 integrated luminosity. We then determine the projected sensitivity to a range of operators relevant for the Higgsstrahlung process in the dimension-6 Higgs EFT. Our results show that angular observables provide complementary sensitivity to rate measurements when constraining various tensor structures arising from new physics. We further find that angular asymmetries provide a novel means of constraining the “blind spot” in indirect limits on supersymmetric scalar top partners. We also discuss the possibility of using ZZ-fusion at e+e- machines at different energies to probe new operators.
9. 100 kW CW highly-efficient multi-beam klystron for a future electron-ion collider
Teryaev, Vladimir E.; Shchelkunov, Sergey V.; Jiang, Yong; Hirshfield, Jay L.
2017-03-01
Initial results are presented for the development of a CW highly-efficient RF source needed for operation of a future electron-ion collider. The design of this compact multi-beam klystron yields high efficiency (above 70%) for the power output of 125 kW at 952.6 MHz. The klystron is to work for the RF systems for ion acceleration in the polarized Medium-energy Electron Ion Collider as being developed at Thomas Jefferson National Accelerator Facility.
10. Constraining RS Models by Future Flavor and Collider Measurements: A Snowmass Whitepaper
SciTech Connect
Agashe, Kaustubh; Bauer, Martin; Goertz, Florian; Lee, Seung J.; Vecchi, Luca; Wang, Lian-Tao; Yu, Felix
2013-10-03
Randall-Sundrum models are models of quark flavor, because they explain the hierarchies in the quark masses and mixings in terms of order one localization parameters of extra dimensional wavefunctions. The same small numbers which generate the light quark masses suppress contributions to flavor violating tree level amplitudes. In this note we update universal constraints from electroweak precision parameters and demonstrate how future measurements of flavor violation in ultra rare decay channels of Kaons and B mesons will constrain the parameter space of this type of models. We show how collider signatures are correlated with these flavor measurements and compute projected limits for direct searches at the 14 TeV LHC run, a 14 TeV LHC luminosity upgrade, a 33 TeV LHC energy upgrade, and a potential 100 TeV machine. We further discuss the effects of a warped model of leptons in future measurements of lepton flavor violation.
11. Exclusive vector meson photoproduction at the LHC and a future circular collider: A closer look on the final state
da Silveira, G. Gil; Gonçalves, V. P.; Jaime, M. M.
2017-02-01
Over the past years, the LHC experiments have reported experimental evidence for processes associated to photon-photon and photon-hadron interactions, showing their potential to investigate the production of low- and high-mass systems in exclusive events. In the particular case of the photoproduction of vector mesons, the experimental study of this final state is expected to shed light on the description of the QCD dynamics at small values of the Bjorken-x variable. In this paper, we extend previous studies for the exclusive J/Ψ and ϒ photoproduction in pp collisions based on the nonlinear QCD dynamics by performing a detailed study of the final-state distributions that can be measured experimentally at the LHC and at a future circular collider. Predictions for the rapidity and transverse momentum distributions of the vector mesons and of final-state dimuons are presented for pp collisions at √s = 7, 13, and 100 TeV.
12. Development of High Power X-Band Semiconductor RF Switch for Pulse Compression Systems of Future Linear Colliders
SciTech Connect
Tantawi, Sami
2000-11-06
We describe development of semiconductor X-band high-power RF switches. The target applications are high-power RF pulse compression systems for future linear colliders. We describe the design methodology of the architecture of the whole switch systems. We present the scaling law that governs the relation between power handling capability and number of elements. We designed and built several active waveguide windows for the active element. The waveguide window is a silicon wafer with an array of four hundred PIN/NIP diodes covering the surface of the window. This waveguide window is located in an over-moded TE01 circular waveguide. The results of high power RF measurements of the active waveguide window are presented. The experiment is performed at power levels of a few megawatts at X-band.
13. Measuring CP nature of top-Higgs couplings at the future Large Hadron electron Collider
2017-07-01
We investigate the sensitivity of the top-Higgs coupling by considering the associated vertex as CP phase (ζt) dependent through the process p e- → t̄ h νe in the future Large Hadron electron Collider. In particular the decay modes are taken to be h → b b̄ and t̄ → leptonic mode. Several distinct ζt dependent features are demonstrated by considering observables like cross sections, top-quark polarisation, rapidity difference between h and t̄ and different angular asymmetries. Luminosity (L) dependent exclusion limits are obtained for ζt by considering significance based on fiducial cross sections at different σ-levels. For electron and proton beam-energies of 60 GeV and 7 TeV respectively, at L = 100 fb-1, the regions above π/5 < ζt ≤ π are excluded at 2σ confidence level, which reflects better sensitivity than expected at the Large Hadron Collider. With appropriate error fitting methodology we find that the accuracy of the SM top-Higgs coupling could be measured to be κ = 1.00 ± 0.17 (0.08) at √s = 1.3 (1.8) TeV for an ultimate L = 1 ab-1.
14. Thallium-based high-temperature superconductors for beam impedance mitigation in the Future Circular Collider
Calatroni, S.; Bellingeri, E.; Ferdeghini, C.; Putti, M.; Vaglio, R.; Baumgartner, T.; Eisterer, M.
2017-07-01
CERN has recently started a design study for a possible next-generation high-energy hadron-hadron collider (Future Circular Collider—FCC-hh). The FCC-hh study calls for an unprecedented center-of-mass collision energy of 100 TeV, achievable by colliding counter-rotating proton beams with an energy of 50 TeV steered in a 100 km circumference tunnel by superconducting magnets which produce a dipole field of 16 T. The beams emit synchrotron radiation at high power levels, which, to optimize cryogenic efficiency, is absorbed by a beam-facing screen, coated with copper, and held at 50 K in the current design. The surface impedance of this screen has a strong impact on beam stability, and copper at 50 K allows for a limited beam stability margin only. This motivates the exploration of whether high-temperature superconductors (HTS), the only known materials possibly having a surface impedance lower than copper under the required operating conditions, would represent a viable alternative. This paper summarizes the FCC-hh requirements and focuses on identifying the best possible HTS material for this purpose. It reviews in particular the properties of Tl-based HTS, and discusses the consequent motivation for developing a deposition process for such compounds, which should be scalable to the size of the FCC components.
15. Detection of heavy charged Higgs bosonsin e+ e- -> t b H- production at future Linear Colliders
Moretti, S.
2004-05-01
Heavy charged Higgs bosons (H±) of a Type II 2-Higgs doublet model (2HDM) can be detected at future electron-positron Linear Colliders (LCs) even when their mass is larger than half the collider energy. The single Higgs mode e+e− → t b̄ H− + c.c. → 4b + jj + ℓ + pT^miss (where j represents a jet and ℓ = e, μ) contributes to extend the discovery reach of H± states into the mass region M_H± ≳ √s/2, where the well studied pair production channel e+e− → H−H+ is no longer available. With a technique that allows one to reconstruct the neutrino four-momentum in the decay t → b W+ → b ℓ+ ν, one can suppress the initially overwhelming main irreducible background due to e+e− → t t̄ b b̄ (via a gluon splitting into b b̄ pairs) to a negligible level. However, for currently foreseen luminosities, one can establish a statistically significant H± signal only over a rather limited mass region, of 20 GeV or so, beyond M_H± ≈ √s/2, for very large or very small values of tanβ and provided high b-tagging efficiency can be achieved.
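To put the quoted reach in numbers (an illustrative arithmetic; the abstract does not fix the collider energy, so √s = 500 GeV is assumed here):
$$M_{H^\pm} \lesssim \frac{\sqrt{s}}{2} = 250\ \mathrm{GeV}\ \text{(pair production)}, \qquad M_{H^\pm} \lesssim 250 + 20 \approx 270\ \mathrm{GeV}\ \text{(single-Higgs mode)},$$
with the extension applying only for very large or very small tanβ and high b-tagging efficiency.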
16. Effect of 3D Polarization profiles on polarization measurements and colliding beam experiments
SciTech Connect
Fischer, W.; Bazilevsky, A.
2011-08-18
The development of polarization profiles is the primary reason for the loss of average polarization. Polarization profiles have been parametrized with a Gaussian distribution. We derive the effect of 3-dimensional polarization profiles on the measured polarization in polarimeters, as well as the observed polarization and the figure of merit in single and double spin experiments. Examples from RHIC are provided. The Relativistic Heavy Ion Collider (RHIC) is the only collider of spin polarized protons. During beam acceleration and storage profiles of the polarization P develop, which affect the polarization measured in a polarimeter, and the polarization and figure of merit (FOM) in colliding beam experiments. We calculate these for profiles in all dimensions, and give examples for RHIC. Like in RHIC we call the two colliding beams Blue and Yellow. We use the overbar to designate intensity-weighted averages in polarimeters (e.g. P̄), and angle brackets to designate luminosity-weighted averages in colliding beam experiments (e.g. ⟨P⟩).
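Schematically, the two averages referred to in the abstract can be written as follows (a one-dimensional sketch with polarization profile P(x), beam intensity I(x) and luminosity density L(x); the paper treats full 3-dimensional Gaussian profiles of both beams):
$$\bar{P} = \frac{\int P(x)\,I(x)\,dx}{\int I(x)\,dx}, \qquad \langle P \rangle = \frac{\int P(x)\,\mathcal{L}(x)\,dx}{\int \mathcal{L}(x)\,dx},$$
with the double-spin figure of merit involving the luminosity-weighted product ⟨P_B P_Y⟩ of the Blue and Yellow beam polarizations.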
17. Exploring CP-even scalars of a Two Higgs-doublet model in future e ‑ p colliders
Mosomane, Chuene; Kumar, Mukesh; Cornell, Alan S.; Mellado, Bruce
2017-09-01
In this proceeding we shall explore the potential of a future e ‑ p collider to probe the CP-even scalars in a two Higgs doublet model. We consider Type-I in this study. The mass of the lighter scalar particle is considered to be the Higgs-boson, mh = 125 GeV, and a heavy scalar mH = 270 GeV. The centre of mass energy for the e ‑ p collision is considered as in the Large Hadron Electron Collider and the Future Circular Hadron Electron Collider configurations, by fixing the proton beam energy to be Ep = 7 and 50 TeV, respectively, and an electron beam energy of Ee = 60 GeV. Production cross sections of these scalars are also shown at higher electron beam energies. Future prospects of these studies are also discussed.
18. Cartography with Locating Fermions in Extra Dimensions at Future Lepton Colliders
SciTech Connect
Rizzo, Thomas G.
2001-01-24
In the model of Arkani-Hamed and Schmaltz the various chiral fermions of the Standard Model(SM) are localized at different points on a thick wall which forms an extra dimension. Such a scenario provides a way of understanding the absence of proton decay and the fermion mass hierarchy in models with extra dimensions. In this paper we explore the capability of future lepton colliders to determine the location of these fermions in the extra dimension through precision measurements of conventional scattering processes both below and on top of the lowest lying Kaluza-Klein gauge boson resonance. We show that for some classes of models the locations of these fermions can be very precisely determined while in others only their relative positions can be well measured.
19. Split Fermions in Extra Dimensions and Exponentially Small Cross-Sections at Future Colliders
SciTech Connect
Grossman, yuval
1999-09-22
We point out a dramatic new experimental signature for a class of theories with extra dimensions, where quarks and leptons are localized at slightly separated parallel “walls” whereas gauge and Higgs fields live in the bulk of the extra dimensions. The separation forbids direct local couplings between quarks and leptons, allowing for an elegant solution to the proton decay problem. We show that scattering cross sections for collisions of fermions which are separated in the extra dimensions vanish at energies high enough to probe the separation distance. This is because the separation puts a lower bound on the attainable impact parameter in the collision. We present cross sections for two body high energy scattering and estimate the power with which future colliders can probe this scenario, finding sensitivity to inverse fermion separations of order 10-70 TeV.
20. Heavy Majorana neutrino production and decay in future e{sup +}e{sup {minus}} colliders
SciTech Connect
Gluza, J.; Zralek, M.
1997-06-01
The production of heavy and light neutrinos in e+e− future colliders is considered. The cross section for the process e+e− → νN and then the heavy neutrino decay N → W± e∓ is determined for experimentally possible values of mixing matrix elements. The bound on the heavy neutrino-electron mixing is estimated in models without right-handed currents. The role of neutrino CP eigenvalues and the mass of the lightest Higgs particle are investigated. The angular distributions of charged leptons in the total c.m. frame resulting from the heavy neutrino decay and from the main W+W− production background process are briefly compared. © 1997 The American Physical Society
1. Machine detector interface studies: Layout and synchrotron radiation estimate in the future circular collider interaction region
DOE PAGES
Boscolo, Manuela; Burkhardt, Helmut; Sullivan, Michael
2017-01-27
The interaction region layout for the e+e– future circular collider FCC-ee is presented together with a preliminary estimate of synchrotron radiation that affects this region. We describe in this paper the main guidelines of this design and the estimate of synchrotron radiation coming from the last bending magnets and from the final focus quadrupoles, with the software tools developed for this purpose. Here, the design follows the asymmetric optics layout as far as incoming bend radiation is concerned with the maximum foreseen beam energy of 175 GeV and we present a feasible initial layout with an indication of tolerable synchrotron radiation.
2. Beyond Higgs couplings: Probing the Higgs with angular observables at future e+e− colliders
SciTech Connect
Craig, Nathaniel; Gu, Jiayin; Liu, Zhen; Wang, Kechen
2016-03-09
Here, we study angular observables in the e+e− → ZH → ℓ+ℓ− b b̄ channel at future circular e+e− colliders such as CEPC and FCC-ee. Taking into account the impact of realistic cut acceptance and detector effects, we forecast the precision of six angular asymmetries at CEPC (FCC-ee) with center-of-mass energy √s = 240 GeV and 5 (30) ab−1 integrated luminosity. We then determine the projected sensitivity to a range of operators relevant for the Higgs-strahlung process in the dimension-6 Higgs EFT. Our results show that angular observables provide complementary sensitivity to rate measurements when constraining various tensor structures arising from new physics. We further find that angular asymmetries provide a novel means of both probing BSM corrections to the HZγ coupling and constraining the “blind spot” in indirect limits on supersymmetric scalar top partners.
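For orientation, an angular asymmetry of the type counted here has the generic form (given only for illustration; the six asymmetries of the paper are built from specific production and decay angles of the ZH → ℓ+ℓ− b b̄ system):
$$A_\theta = \frac{N(\cos\theta > 0) - N(\cos\theta < 0)}{N(\cos\theta > 0) + N(\cos\theta < 0)},$$
with a statistical uncertainty of roughly $\sqrt{(1 - A_\theta^2)/N}$ for N selected events, which is what sets the scale of the 5 (30) ab⁻¹ forecasts.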
3. Boost to h → Zγ: From LHC to future e+e- colliders
No, Jose Miguel; Spannowsky, Michael
2017-04-01
A precise measurement of the Higgs h → Zγ decay is very challenging at the LHC, due to the very low branching fraction and the shortage of kinematic handles to suppress the large SM Zγ background. We show how such a measurement would be significantly improved by considering Higgs production in association with a hard jet. We compare the prospective HL-LHC sensitivity in this channel with other Higgs production modes where h is fairly boosted, e.g. weak boson fusion, and also to the potential h → Zγ measurement achievable with a future e+e- circular collider (FCC-ee). Finally, we discuss new physics implications of a precision measurement of h → Zγ.
4. The Higgs sector of the minimal B−L model at future Linear Colliders
Basso, Lorenzo; Moretti, Stefano; Pruna, Giovanni Marco
2011-08-01
We investigate the phenomenology of the Higgs sector of the minimal B−L extension of the Standard Model at a future e+e− Linear Collider. We consider the discovery potential of both a sub-TeV and a multi-TeV machine. We show that, within such a theoretical scenario, several novel production and decay channels involving the two physical Higgs states, precluded at the LHC, could experimentally be accessed at such machines. Amongst these, several Higgs signatures have very distinctive features with respect to those of other models with enlarged Higgs sector, as they involve interactions of Higgs bosons between themselves, with Z' bosons as well as with heavy neutrinos. In particular, we present the scope of the Z' strahlung process for single and double Higgs production, the only suitable mechanism enabling one to access an almost decoupled heavy scalar state (therefore outside the LHC range).
5. High field septum magnet using a superconducting shield for the Future Circular Collider
Barna, Dániel
2017-04-01
A zero-field cooled superconducting shield is proposed to realize a high-field (3-4 T) septum magnet for the Future Circular Collider hadron-hadron (FCC-hh) ring. Three planned prototypes using different materials and technical solutions are presented, which will be used to evaluate the feasibility of this idea as a part of the FCC study. The numerical simulation methods are described to calculate the field patterns around such a shield. A specific excitation current configuration is presented that maintains a fairly homogeneous field outside of a rectangular shield in a wide range of field levels from 0 to 3 Tesla. It is shown that a massless septum configuration (with an opening in the shield) is also possible and gives satisfactory field quality with realistic superconducting material properties.
6. Exotic Decays of the 125 GeV Higgs Boson at Future e+e− Lepton Colliders
SciTech Connect
Liu, Zhen; Wang, Lian-Tao; Zhang, Hao
2016-12-29
Discovery of unexpected properties of the Higgs boson offers an intriguing opportunity of shedding light on some of the most profound puzzles in particle physics. The Beyond Standard Model (BSM) decays of the Higgs boson could reveal new physics in a direct manner. Future electron-positron lepton colliders operating as Higgs factories, including CEPC, FCC-ee and ILC, with the advantages of a clean collider environment and large statistics, could greatly enhance the sensitivity in searching for these BSM decays. In this work, we perform a general study of Higgs exotic decays at future e+e− lepton colliders, focusing on the Higgs decays with hadronic final states and/or missing energy, which are very challenging for the High-Luminosity program of the Large Hadron Collider (HL-LHC). We show that with simple selection cuts, O(10⁻³–10⁻⁵) limits on the Higgs exotic decay branching fractions can be achieved using the leptonically decaying spectator Z boson in the associated production mode e+e− → ZH. We further discuss the interplay between the detector performance and Higgs exotic decay, and other possibilities of exotic decays. Our work is a first step in a comprehensive study of Higgs exotic decays at future lepton colliders, which is a key ingredient of Higgs physics that deserves further investigation.
7. Quadrupole Alignment and Trajectory Correction for Future Linear Colliders: SLC Tests of a Dispersion-Free Steering Algorithm
SciTech Connect
Assmann, R
2004-06-08
and the fiducials. Beam-based alignment methods ideally only depend upon the BPM resolution and generally provide much better precision. Many of those techniques are described in other contributions to this workshop. In this paper we describe our experiences with a dispersion-free steering algorithm for linacs. This algorithm was first suggested by Raubenheimer and Ruth in 1990 [5]. It has been studied in simulations for NLC [5], TESLA [6], the S-BAND proposal [7] and CLIC [8]. The dispersion-free steering technique can be applied to the whole linac at once and returns the alignment (or trajectory) that minimizes the dispersive emittance growth of the beam. Thus it allows an extremely fast alignment of the beam-line. As we will show dispersion-free steering is only sensitive to quadrupole misalignments. Wakefield-free steering [3] as mentioned before is a closely related technique that minimizes the emittance growth caused by both dispersion and wakefields. Due to hardware limitations (i.e. insufficient relative range of power supplies) we could not study this method experimentally in the SLC. However, its systematics are very similar to those of dispersion-free steering. The studies of dispersion-free steering which are presented made extensive use of the unique potential of the SLC as the only operating linear collider. We used it to study the performance and problems of advanced beam-based optimization tools in a real beam-line environment and on a large scale. We should mention that the SLC has utilized beam-based alignment for years [9], using the difference of electron and positron trajectories. This method, however, cannot be used in future linear colliders. The goal of our work is to demonstrate the performance of advanced beam-based alignment techniques in linear colliders and to anticipate possible reality-related problems. Those can then be solved in the design state for the next generation of linear colliders.
8. Interplay and characterization of Dark Matter searches at colliders and in direct detection experiments
DOE PAGES
Malik, Sarah A.; McCabe, Christopher; Araujo, Henrique; ...
2015-05-18
In our White Paper we present and discuss a concrete proposal for the consistent interpretation of Dark Matter searches at colliders and in direct detection experiments. Furthermore, based on a specific implementation of simplified models of vector and axial-vector mediator exchanges, this proposal demonstrates how the two search strategies can be compared on an equal footing.
9. Fourth workshop on Experiments and Detectors for a Relativistic Heavy Ion Collider
NASA Technical Reports Server (NTRS)
Fatyga, M. (Editor); Moskowitz, B. (Editor)
1992-01-01
We present a description of an experiment which can be used to search for effects of strong electromagnetic fields on the production of e(sup +) e(sup -) pairs in the elastic scattering of two heavy ions at the Relativistic Heavy Ion Collider (RHIC). A very brief discussion of other possible studies of electromagnetic phenomena at RHIC is also presented.
10. Overview of results from the Fermilab fixed target and collider experiments
SciTech Connect
Montgomery, H.E.
1997-06-01
In this paper we present a review of recent QCD related results from Fermilab fixed target and collider experiments. Topics covered range from structure functions through W/Z production, heavy quark production and jet angular distributions. We also include the current state of knowledge about leptoquark pair production in hadronic collisions.
11. Pair Production of the Doubly Charged Leptons Associated with a Gauge Boson γ or Z in e+e- and γγ Collisions at Future Linear Colliders
Zeng, Qing-Guo; Ji, Li; Yang, Shuo
2015-03-01
In this paper, we investigate the production of a pair of doubly charged leptons associated with a gauge boson V(γ or Z) at future linear colliders via e+e- and γγ collisions. The numerical results show that the possible signals of the doubly charged leptons may be detected via the processes e+e- → VX++X-- and γγ → VX++X-- at future ILC or CLIC experiments. Supported in part by the National Natural Science Foundation of China under Grants Nos. 11275088, 11205023, 11375248 and the Program for Liaoning Excellent Talents in University under Grant No. LJQ2014135
12. Preliminary design of CERN Future Circular Collider tunnel: first evaluation of the radiation environment in critical areas for electronics
Infantino, Angelo; Alía, Rubén García; Besana, Maria Ilaria; Brugger, Markus; Cerutti, Francesco
2017-09-01
As part of its post-LHC high energy physics program, CERN is conducting a study for a new proton-proton collider, called Future Circular Collider (FCC-hh), running at center-of-mass energies of up to 100 TeV in a new 100 km tunnel. The study includes a 90-350 GeV lepton collider (FCC-ee) as well as a lepton-hadron option (FCC-he). In this work, FLUKA Monte Carlo simulation was extensively used to perform a first evaluation of the radiation environment in critical areas for electronics in the FCC-hh tunnel. The model of the tunnel was created based on the original civil engineering studies already performed and further integrated in the existing FLUKA models of the beam line. The radiation levels in critical areas, such as the racks for electronics and cables, power converters, service areas, local tunnel extensions was evaluated.
13. The future collider physics program at Fermilab: Run II and TeV33
SciTech Connect
Signore, K.D.
1998-07-01
High luminosity collider running at Fermilab is scheduled to occur during the period 2000-2005. Requisite collider detector upgrades are underway. An outline of the physics that can be realized with the upgraded Tevatron and CDF/D0 detectors is presented.
14. SNiPER: an offline software framework for non-collider physics experiments
Zou, J. H.; Huang, X. T.; Li, W. D.; Lin, T.; Li, T.; Zhang, K.; Deng, Z. Y.; Cao, G. F.
2015-12-01
SNiPER (Software for Non-collider Physics ExpeRiments) has been developed based on common requirements from both nuclear reactor neutrino and cosmic ray experiments. The design and implementation of SNiPER is described in this proceeding. Compared to the existing offline software frameworks in the high energy physics domain, the design of SNiPER is more focused on execution efficiency and flexibility. SNiPER has an open structure. User applications are executed as plug-ins based on it. The framework contains a compact kernel for software components management, event execution control, job configuration, common services, etc. Some specific features are attractive to non-collider physics experiments.
15. Unveiling the proton spin decomposition at a future electron-ion collider
DOE PAGES
Aschenauer, Elke C.; Sassot, Rodolfo; Stratmann, Marco
2015-11-24
We present a detailed assessment of how well a future electron-ion collider could constrain helicity parton distributions in the nucleon and, therefore, unveil the role of the intrinsic spin of quarks and gluons in the proton’s spin budget. Any remaining deficit in this decomposition will provide the best indirect constraint on the contribution due to the total orbital angular momenta of quarks and gluons. Specifically, all our studies are performed in the context of global QCD analyses based on realistic pseudodata and in the light of the most recent data obtained from polarized proton-proton collisions at BNL-RHIC that have provided evidence for a significant gluon polarization in the accessible, albeit limited range of momentum fractions. We also present projections on what can be achieved on the gluon’s helicity distribution by the end of BNL-RHIC operations. As a result, all estimates of current and projected uncertainties are performed with the robust Lagrange multiplier technique.
16. Damped accelerator structures for future linear e± colliders
SciTech Connect
Deruyter, H.; Hoag, H.A.; Lisin, A.V.; Loew, G.A.; Palmer, R.B.; Paterson, J.M.; Rago, C.E.; Wang, J.W.
1989-03-01
This paper describes preliminary work on accelerator structures for future TeV linear colliders which use trains of e± bunches to reach the required luminosity. These bunch trains, if not perfectly aligned with respect to the accelerator axis, induce transverse wake field modes into the structure. Unless they are sufficiently damped, these modes cause cumulative beam deflections and emittance growth. The envisaged structures, originally proposed by R. B. Palmer, are disk-loaded waveguides in which the disks are slotted radially into quadrants. Wake field energy is coupled via the slots and double-ridged waveguides into a lossy region which is external to the accelerator structure. The requirement is that the Q of the HEM11 mode be reduced to a value of less than 30. The work done so far includes MAFIA code computations and low power rf measurements to study the fields. A four-cavity 2π/3 mode standing-wave structure has been built to find whether the slots lower the electric breakdown thresholds below those reached with conventional disk-loaded structures. We set out to assess the microwave properties of the structure and the problems which might be encountered in fabricating it. 4 refs., 7 figs.
17. Unveiling the proton spin decomposition at a future electron-ion collider
SciTech Connect
Aschenauer, Elke C.; Sassot, Rodolfo; Stratmann, Marco
2015-11-24
We present a detailed assessment of how well a future electron-ion collider could constrain helicity parton distributions in the nucleon and, therefore, unveil the role of the intrinsic spin of quarks and gluons in the proton’s spin budget. Any remaining deficit in this decomposition will provide the best indirect constraint on the contribution due to the total orbital angular momenta of quarks and gluons. Specifically, all our studies are performed in the context of global QCD analyses based on realistic pseudodata and in the light of the most recent data obtained from polarized proton-proton collisions at BNL-RHIC that have provided evidence for a significant gluon polarization in the accessible, albeit limited range of momentum fractions. We also present projections on what can be achieved on the gluon’s helicity distribution by the end of BNL-RHIC operations. As a result, all estimates of current and projected uncertainties are performed with the robust Lagrange multiplier technique.
18. Iron-free detector magnet options for the future circular collider
Mentink, Matthias; Dudarev, Alexey; Da Silva, Helder Filipe Pais; Rolando, Gabriella; Cure, Benoit; Gaddi, Andrea; Klyukhin, Vyacheslav; Gerwig, Hubert; Wagner, Udo; ten Kate, Herman
2016-11-01
In this paper, several iron-free solenoid-based designs of a detector magnet for the future circular collider for hadron-hadron collisions (FCC-hh) are presented. The detector magnet designs for FCC-hh aim to provide bending power for particles over a wide pseudorapidity range (0 ≤ |η| ≤ 4). To achieve this goal, the main solenoidal detector magnet is combined with a forward magnet system, such as the previously presented force-and-torque-neutral dipole. Here, a solenoid-based alternative, the so-called balanced forward solenoid, is presented which comprises a larger inner solenoid for providing bending power to particles at |η| ≥ 2.5, in combination with a smaller balancing coil for ensuring that the net force and torque on each individual coil is minimized. The balanced forward solenoid is compared to the force-and-torque-neutral dipole and advantages and disadvantages are discussed. In addition, several conceptual solenoid-based detector magnet designs are shown, and quantitatively compared. The main difference between these designs is the amount of stray field reduction that is achieved. The main conclusion is that shielding coils can be used to dramatically reduce the stray field, but that this comes at the cost of increased complexity, magnet volume, and magnet weight and reduced affordability.
19. Beyond Higgs couplings: Probing the Higgs with angular observables at future e+e− colliders
DOE PAGES
Craig, Nathaniel; Gu, Jiayin; Liu, Zhen; ...
2016-03-09
Here, we study angular observables in the e+e− → ZH → ℓ+ℓ− b b̄ channel at future circular e+e− colliders such as CEPC and FCC-ee. Taking into account the impact of realistic cut acceptance and detector effects, we forecast the precision of six angular asymmetries at CEPC (FCC-ee) with center-of-mass energy √s = 240 GeV and 5 (30) ab−1 integrated luminosity. We then determine the projected sensitivity to a range of operators relevant for the Higgs-strahlung process in the dimension-6 Higgs EFT. Our results show that angular observables provide complementary sensitivity to rate measurements when constraining various tensor structures arising from new physics. We further find that angular asymmetries provide a novel means of both probing BSM corrections to the HZγ coupling and constraining the “blind spot” in indirect limits on supersymmetric scalar top partners.
20. A New Chicane Experiment in PEP-II to Test Mitigations of the Electron Cloud Effect for Linear Colliders
SciTech Connect
Pivi, M.T.F.; Ng, J.S.T.; Arnett, D.; Cooper, F.; Kharakh, D.; King, F.K.; Kirby, R.E.; Kuekan, B.; Lipari, J.J.; Munro, M.; Olszewski, J.; Raubenheimer, T.O.; Seeman, J.; Spencer, C.M.; Wang, L.; Wittmer, W.; Celata, C.M.; Furman, M.A.; Smith, B.
2008-06-11
Beam instability caused by the electron cloud has been observed in positron and proton storage rings, and it is expected to be a limiting factor in the performance of future colliders [1-3]. The effect is expected to be particularly severe in magnetic field regions. To test possible mitigation methods in magnetic fields, we have installed a new 4-dipole chicane experiment in the PEP-II Low Energy Ring (LER) at SLAC with both bare and TiN-coated aluminum chambers. In particular, we have observed a large variation of the electron flux at the chamber wall as a function of the chicane dipole field. We infer this is a new high order resonance effect where the energy gained by the electrons in the positron beam depends on the phase of the electron cyclotron motion with respect to the bunch crossing, leading to a modulation of the secondary electron production. Presumably the cloud density is modulated as well and this resonance effect could be used to reduce its magnitude in future colliders. We present the experimental results obtained during January 2008 until the April final shut-down of the PEP-II machine.
1. A New Chicane Experiment In PEP-II to Test Mitigations of the Electron Cloud Effect for Linear Colliders
SciTech Connect
Pivi, M.T.F.; Ng, J.S.T.; Arnett, D.; Cooper, F.; Kharakh, D.; King, F.K.; Kirby, R.E.; Kuekan, B.; Lipari, J.J.; Munro, M.; Olszewski, J.; Raubenheimer, T.O.; Seeman, J.; Smith, B.; Spencer, C.M.; Wang, L.; Wittmer, W.; Celata, C.M.; Furman, M.A. (SLAC; LBL, Berkeley)
2008-07-03
Beam instability caused by the electron cloud has been observed in positron and proton storage rings, and it is expected to be a limiting factor in the performance of future colliders [1-3]. The effect is expected to be particularly severe in magnetic field regions. To test possible mitigation methods in magnetic fields, we have installed a new 4-dipole chicane experiment in the PEP-II Low Energy Ring (LER) at SLAC with both bare and TiN-coated aluminum chambers. In particular, we have observed a large variation of the electron flux at the chamber wall as a function of the chicane dipole field. We infer this is a new high order resonance effect where the energy gained by the electrons in the positron beam depends on the phase of the electron cyclotron motion with respect to the bunch crossing, leading to a modulation of the secondary electron production. Presumably the cloud density is modulated as well and this resonance effect could be used to reduce its magnitude in future colliders. We present the experimental results obtained during January 2008 until the April final shut-down of the PEP-II machine.
2. Precision muon tracking detectors and read-out electronics for operation at very high background rates at future colliders
Kortner, O.; Kroha, H.; Nowak, S.; Richter, R.; Schmidt-Sommerfeld, K.; Schwegler, Ph.
2016-07-01
The experience of the ATLAS MDT muon spectrometer shows that drift-tube chambers provide highly reliable precision muon tracking over large areas. The ATLAS muon chambers are exposed to unprecedentedly high background of photons and neutrons induced by the proton collisions. Still higher background rates are expected at future high-energy and high-luminosity colliders beyond HL-LHC. Therefore, drift-tube detectors with 15 mm tube diameter (30 mm in ATLAS), optimised for high rate operation, have been developed for such conditions. Several such full-scale sMDT chambers have been constructed with unprecedentedly high sense wire positioning accuracy of better than 10 μm. The chamber design and assembly methods have been optimised for large-scale production, reducing considerably cost and construction time while maintaining the high mechanical accuracy and reliability. Tests at the Gamma Irradiation Facility at CERN showed that the rate capability of sMDT chambers is improved by more than an order of magnitude compared to the MDT chambers. By using read-out electronics optimised for high counting rates, the rate capability can be further increased.
3. Numerical modeling of laser-driven experiments of colliding jets: Turbulent amplification of seed magnetic fields
Tzeferacos, Petros; Fatenejad, Milad; Flocke, Norbert; Graziani, Carlo; Gregori, Gianluca; Lamb, Donald; Lee, Dongwook; Meinecke, Jena; Scopatz, Anthony; Weide, Klaus
2014-10-01
In this study we present high-resolution numerical simulations of laboratory experiments that study the turbulent amplification of magnetic fields generated by laser-driven colliding jets. The radiative magneto-hydrodynamic (MHD) simulations discussed here were performed with the FLASH code and have assisted in the analysis of the experimental results obtained from the Vulcan laser facility. In these experiments, a pair of thin Carbon foils is placed in an Argon-filled chamber and is illuminated to create counter-propagating jets. The jets carry magnetic fields generated by the Biermann battery mechanism and collide to form a highly turbulent region. The interaction is probed using a wealth of diagnostics, including induction coils that are capable of providing the field strength and directionality at a specific point in space. The latter have revealed a significant increase in the field's strength due to turbulent amplification. Our FLASH simulations have allowed us to reproduce the experimental findings and to disentangle the complex processes and dynamics involved in the colliding flows. This work was supported in part at the University of Chicago by DOE NNSA ASC.
4. Studies of strong electroweak symmetry breaking at future e+e− linear colliders
SciTech Connect
Barklow, T.L.
1994-08-01
Methods of studying strong electroweak symmetry breaking at future e+e− linear colliders are reviewed. Specifically, we review precision measurements of triple gauge boson vertex parameters and the rescattering of longitudinal W bosons in the process e+e− → W+W−. Quantitative estimates of the sensitivity of each technique to strong electroweak symmetry breaking are included.
5. PROCEEDINGS OF THE 1983 DPF WORKSHOP ON COLLIDER DETECTORS: PRESENT CAPABILITIES AND FUTURE POSSIBILITIES, FEB. 28 - MARCH 4, 1983.
SciTech Connect
Loken Ed, S.C.; Nemethy Ed, P.
1983-04-01
It is useful before beginning our work here to restate briefly the purpose of this workshop in the light of the present circumstances of elementary particle physics in the U.S. The goal of our field is easily stated in a general way: it is to reach higher center of mass energies and higher luminosities while employing more sensitive and more versatile event detectors, all in order to probe more deeply into the physics of elementary particles. The obstacles to achieving this goal are equally apparent. Escalating costs of construction and operation of our facilities limit alternatives and force us to make hard choices among those alternatives. The necessity to be highly selective in the choice of facilities, in conjunction with the need for increased manpower concentrations to build accelerators and mount experiments, leads to complex social problems within the science. As the frontier is removed ever further, serious technical difficulties and limitations arise. Finally, competition, much of which is usually healthy, now manifests itself with greater intensity on a regional basis within our country and also on an international scale. In the far (≥20 yr) future, collaboration on physics facilities by two or more of the major economic entities of the world will possibly be forthcoming. In the near future, we are left to bypass or overcome these obstacles on a regional scale as best we can. The choices we face are in part indicated in the list of planned and contemplated accelerators shown in Table I. The facilities indicated with an asterisk pose immediate questions: (1) Do we need them all and what should be their precise properties? (2) How are the ones we choose to be realized? (3) What is the nature of the detectors to exploit those facilities? (4) How do we respond to the challenge of higher luminosity as well as higher energy in those colliders? The decision-making process in this country and elsewhere depends on the answers to these technical questions.
6. Estimates of Hadronic Backgrounds in Future e+e- Linear Colliders
SciTech Connect
Ohgaki, Tomomi
1998-05-01
We have estimated hadronic backgrounds for an e+e- linear collider at a center-of-mass energy of 5 TeV. In order to achieve the required luminosity in TeV e+e- colliders, a very high beamstrahlung parameter ϒ, of order several thousand, results. In the high ϒ regime, the γγ luminosities due to the collision of beamstrahlung photons are calculated by using the CAIN code. According to the γγ luminosity distribution, we have estimated the hadronic backgrounds of γγ → minijets based on the parton distributions of the Drees and Grassie model by the PYTHIA 5.7 code. The Japan Linear Collider (JLC-1) detector simulator is applied for selection performances in the detector.
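For context, the beamstrahlung parameter mentioned above is conventionally defined relative to the Schwinger critical field (a standard definition assumed here, not a formula quoted from this report):
$$\Upsilon \simeq \gamma\,\frac{B_{\mathrm{eff}}}{B_c}, \qquad B_c = \frac{m_e^2 c^3}{e\hbar} \approx 4.4\times10^{9}\ \mathrm{T},$$
where B_eff is the combined electric and magnetic field of the oncoming bunch as seen by a beam particle; ϒ of several thousand therefore places the photon emission deep in the quantum regime, which is why a full simulation such as CAIN is used.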
7. Future e+e- linear colliders and beam-beam effects
SciTech Connect
Wilson, P.B.
1986-05-01
Numerous concepts, ranging from conventional to highly exotic, have been proposed for the acceleration of electrons and positrons to very high energies. For any such concept to be viable, it must be possible to produce from it a set of consistent parameters for one of these “benchmark” machines. Attention is directed to the choice of parameters for a collider in the 300 GeV energy range, operating at a gradient on the order of 200 MV/m, using X-band power sources to drive a conventional disk-loaded accelerating structure. These rf power sources, while not completely conventional, represent a reasonable extrapolation from present technology. The choice of linac parameters is strongly coupled to various beam-beam effects which take place when the electron and positron bunches collide. We summarize these beam-beam effects, and then return to the rf design of a 650 GeV center-of-mass collider. 14 refs.
8. The Fermilab PBAR-P Collider: Present status and future plans
SciTech Connect
Johnson, R.
1988-11-01
The Tevatron Collider is performing beyond expectations for its first physics run. The peak luminosity is already 1.6 times the design goal of 10³⁰ cm⁻² s⁻¹. The anticipated integrated luminosity recorded by the major detector, CDF, is 3 inverse picobarns which should be sufficient to see the top quark if its mass is less than 110 GeV. The next two Collider runs will have improved performance with luminosity approaching 10³¹ at two interaction regions. In the years between 1993 and 2000, the Collider energy will be increased by using the highest field superconducting magnets then available, where 8.8 T would give 2 TeV on 2 TeV pbar-p collisions with a luminosity above 10³¹. To facilitate this possibility and to improve the general Collider capabilities, a new 150 GeV Main Injector is now being designed. 3 figs., 2 tabs.
9. Ion colliders
SciTech Connect
Fischer, W.
2011-12-01
Ion colliders are research tools for high-energy nuclear physics, and are used to test the theory of Quantum Chromo Dynamics (QCD). The collisions of fully stripped high-energy ions create matter of a temperature and density that existed only microseconds after the Big Bang. Ion colliders can reach higher densities and temperatures than fixed target experiments although at a much lower luminosity. The first ion collider was the CERN Intersecting Storage Ring (ISR), which collided light ions [77Asb1, 81Bou1]. The BNL Relativistic Heavy Ion Collider (RHIC) is in operation since 2000 and has collided a number of species at numerous energies. The CERN Large Hadron Collider (LHC) started the heavy ion program in 2010. Table 1 shows all previous and the currently planned running modes for ISR, RHIC, and LHC. All three machines also collide protons, which are spin-polarized in RHIC. Ion colliders differ from proton or antiproton colliders in a number of ways: the preparation of the ions in the source and the pre-injector chain is limited by other effects than for protons; frequent changes in the collision energy and particle species, including asymmetric species, are typical; and the interaction of ions with each other and accelerator components is different from protons, which has implications for collision products, collimation, the beam dump, and intercepting instrumentation devices such a profile monitors. In the preparation for the collider use the charge state Z of the ions is successively increased to minimize the effects of space charge, intrabeam scattering (IBS), charge change effects (electron capture and stripping), and ion-impact desorption after beam loss. Low charge states reduce space charge, intrabeam scattering, and electron capture effects. High charge states reduce electron stripping, and make bending and acceleration more effective. Electron stripping at higher energies is generally more efficient. Table 2 shows the charge states and energies in the
10. Nucleon Decay and Neutrino Experiments, Experiments at High Energy Hadron Colliders, and String Theor
SciTech Connect
Jung, Chang Kee; Douglas, Michael; Hobbs, John; McGrew, Clark; Rijssenbeek, Michael
2013-07-29
This is the final report of the DOE grant DEFG0292ER40697 that supported the research activities of the Stony Brook High Energy Physics Group from November 15, 1991 to April 30, 2013. During the grant period, the grant supported the research of three Stony Brook particle physics research groups: The Nucleon Decay and Neutrino group, the Hadron Collider Group, and the Theory Group.
11. Low velocity impacts into dust: results from the COLLIDE-2 microgravity experiment
Colwell, Joshua E.
2003-07-01
We present the results of the second flight of the Collisions Into Dust Experiment (COLLIDE-2), a space shuttle payload that performs six impact experiments into simulated planetary regolith at speeds between 1 and 100 cm/s. COLLIDE-2 flew on the STS-108 mission in December 2001 following an initial flight in April 1998. The experiment was modified since the first flight to provide higher quality data, and the impact parameters were varied. Spherical quartz projectiles of 1-cm radius were launched into quartz sand and JSC-1 lunar regolith simulant targets 2-cm deep. At impact speeds below ˜20 cm/s the projectile embedded itself in the target material and did not rebound. Some ejecta were produced at ˜10 cm/s. At speeds >25 cm/s the projectile rebounded and significant ejecta was produced. We present coefficients of restitution, ejecta velocities, and limits on ejecta masses. Ejecta velocities are typically less than 10% of the impact velocity, and the fraction of impact kinetic energy partitioned into ejecta kinetic energy is also less than 10%. Taken together with a proposed aerodynamic planetesimal growth mechanism, these results support planetesimal growth at impact speeds above the nominal observed threshold of about 20 cm/s.
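The two reported quantities follow standard definitions (assumed here; the paper's exact conventions may differ in detail): the coefficient of restitution compares rebound and impact speeds, and the ejecta energy fraction compares ejecta to projectile kinetic energy,
$$e = \frac{|v_{\mathrm{rebound}}|}{|v_{\mathrm{impact}}|}, \qquad f_{KE} = \frac{\sum_j \tfrac{1}{2} m_j v_j^2}{\tfrac{1}{2} m_p v_{\mathrm{impact}}^2} \lesssim 0.1,$$
consistent with ejecta speeds typically below 10% of the impact speed.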
12. Investigation and performance assessment of hydraulic schemes for the beam screen cooling for the Future Circular Collider of hadron beams
Kotnig, C.; Tavian, L.; Brenn, G.
2017-02-01
The international study at CERN of a possible future circular collider (FCC) considers an option for a very high energy hadron-hadron collider located in a quasi-circular underground tunnel of about 100 km of length. The technical segmentation of the collider foresees continuously cooled sections of up to 10.4 km; throughout the entire section length, more than 600 kW of heat mainly generated by the beam synchrotron radiation must be removed from the beam screen circuits at a mean temperature of 50 K. The cryogenic system has to be designed to extract the heat load dependably with a high-efficiency refrigeration process. Reliable and efficient cooling of the FCC beam screen in all possible operational modes requires a solid basic design as well as well-matched components in the final arrangement. After illustrating the decision making process leading to the selection of an elementary hydraulic scheme, this paper presents preliminary conceptual designs of the FCC beam screen cooling system and compares the different schemes regarding the technical advantages and disadvantages with respect to the exergetic efficiency.
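The figures quoted above imply an average linear heat load of roughly (a back-of-envelope estimate only, not a design number from the paper)
$$\frac{\dot{Q}}{L} \gtrsim \frac{600\ \mathrm{kW}}{10.4\ \mathrm{km}} \approx 58\ \mathrm{W/m}$$
summed over the beam-screen circuits of one continuously cooled section, all to be removed at a mean temperature of about 50 K.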
13. Di-Higgs decay of stoponium at a future photon-photon collider
Ito, Hayato; Moroi, Takeo; Takaesu, Yoshitaro
2016-05-01
We study the detectability of the stoponium in the di-Higgs decay mode at the photon-photon collider option of the International e+e- Linear Collider, the center-of-mass energy of which is planned to reach ~1 TeV. We find that 5σ detection of the di-Higgs decay mode is possible with the integrated electron-beam luminosity of 1 ab-1 if the signal cross section, σ(γγ → σ_t̃1 → hh), of O(0.1) fb is realized for the stoponium mass smaller than ~800 GeV at 1 TeV ILC. Such a value of the cross section can be realized in the minimal supersymmetric standard model with relatively large trilinear stop-stop-Higgs coupling constant. The implication of the stoponium cross section measurement for the minimal supersymmetric standard model stop sector is also discussed.
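The quoted benchmark corresponds to a simple event count (illustrative arithmetic only; it takes the integrated γγ luminosity to be of the same order as the quoted 1 ab⁻¹ electron-beam luminosity and ignores the efficiencies and backgrounds treated in the paper):
$$N_{\mathrm{signal}} \sim \sigma(\gamma\gamma\to\sigma_{\tilde{t}_1}\to hh)\times\mathcal{L} \approx 0.1\ \mathrm{fb}\times 1000\ \mathrm{fb}^{-1} = 100\ \text{events}.$$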
14. Monitoring in future e/sup +/e/sup /minus// colliders
SciTech Connect
Erickson, R.A.
1989-05-01
Study groups throughout the world have recently been examining possible parameter choices for a TeV-class linear collider. In all cases, they have concluded that in order to achieve useful luminosity within plausible cost constraints, the opposing beams of electrons and positrons must be focused to extraordinarily small spots and steered into collision with an unprecedented degree of accuracy. Some means of monitoring these beam parameters will be essential in order to guide the focusing and steering. In this talk, examples will be presented which illustrate the nature of these new requirements, along with a discussion of the limitations of conventional techniques for monitoring such beams and some recent measurements from the SLAC Linear Collider that show how the next level of resolution in beam monitoring will be achieved. 19 refs., 12 figs., 1 tab.
15. Semiconductor devices as track detectors in high energy colliding beam experiments
SciTech Connect
Ludlam, T
1980-01-01
In considering the design of experiments for high energy colliding beam facilities one quickly sees the need for better detectors. The full exploitation of machines like ISABELLE will call for detector capabilities beyond what can be expected from refinements of the conventional approaches to particle detection in high energy physics experiments. Over the past year or so there has been a general realization that semiconductor device technology offers the possibility of position sensing detectors having resolution elements with dimensions of the order of 10 microns or smaller. Such a detector could offer enormous advantages in the design of experiments, and the purpose of this paper is to discuss some of the possibilities and some of the problems.
16. Overview and Actual Understanding of the Electron Cloud Effect and Instabilities in the Future Linear Colliders
SciTech Connect
Pivi, M.
2004-12-03
The electron cloud is potentially an important effect in linear colliders. Many of the effects have been evaluated. Actions to suppress the electron cloud are required for the GLC/NLC positron main damping ring (MDR or DR) and the low emittance transport lines as well as for the TESLA damping ring. There is an ongoing R&D program studying a number of possible remedies to reduce the secondary electron yield below that required.
17. Signatures of extra gauge bosons in the littlest Higgs model with T parity at future colliders
SciTech Connect
Cao, Qing-Hong; Chen, Chuan-Ren
2007-10-01
We study the collider signatures of a T-odd gauge boson W_H pair production in the littlest Higgs model with T parity (LHT) at the Large Hadron Collider (LHC) and a Linear Collider (LC). At the LHC, we search for the W_H boson using its leptonic decay, i.e. pp → W_H+ W_H− → A_H A_H ℓ+ νℓ ℓ'− νℓ', which gives rise to a collider signature of ℓ+ℓ'− plus missing transverse energy. We demonstrate that the LHC not only has a great potential of discovering the W_H boson in this channel, but also can probe enormous parameter space of the LHT. Because of four missing particles in the final state, one cannot reconstruct the mass of W_H at the LHC. But such a mass measurement can be easily achieved at the LC in the process of e+e− → W_H+ W_H− → A_H A_H W+W− → A_H A_H jjjj. We present an algorithm of measuring the mass and spin of the W_H boson at the LC. Furthermore, we illustrate that the spin correlation between the W boson and its mother particle (W_H) can be used to distinguish the LHT from other new physics models.
18. Relativistic-Klystron two-beam accelerator as a power source for future linear colliders
Lidia, S. M.; Anderson, D. E.; Eylon, S.; Henestroza, E.; Houck, T. L.; Westenskow, G. A.; Vanecek, D. L.; Yu, S. S.
1999-05-01
The technical challenge for making two-beam accelerators into realizable power sources for high-energy colliders lies in the creation of the drive beam and in its propagation over long distances through multiple extraction sections. This year we have been constructing a 1.2-kA, 1-MeV, induction gun for a prototype relativistic klystron two-beam accelerator (RK-TBA). The electron source will be an 8.9 cm diameter, thermionic, flat-surface cathode with a maximum shroud field stress of approximately 165 kV/cm. Additional design parameters for the injector include a pulse length of over 150-ns flat top (1% energy variation), and a normalized edge emittance of less than 300 pi-mm-mr. The prototype accelerator will be used to study physics, engineering, and costing issues involved in the application of the RK-TBA concept to linear colliders. We have also been studying optimization parameters, such as frequency, for the application of the RK-TBA concept to multi-TeV linear colliders. As an rf power source the RK-TBA scales favorably up to frequencies around 35 GHz. An overview of this work with details of the design and performance of the prototype injector, beam line, and diagnostics will be presented.
19. Relativistic-Klystron two-beam accelerator as a power source for future linear colliders
SciTech Connect
Lidia, S.M.; Anderson, D.E.; Eylon, S.; Henestroza, E.; Vanecek, D.L.; Yu, S.S.; Westenskow, G.A.
1999-05-01
The technical challenge for making two-beam accelerators into realizable power sources for high-energy colliders lies in the creation of the drive beam and in its propagation over long distances through multiple extraction sections. This year we have been constructing a 1.2-kA, 1-MeV, induction gun for a prototype relativistic klystron two-beam accelerator (RK-TBA). The electron source will be an 8.9 cm diameter, thermionic, flat-surface cathode with a maximum shroud field stress of approximately 165 kV/cm. Additional design parameters for the injector include a pulse length of over 150-ns flat top (1% energy variation), and a normalized edge emittance of less than 300 pi-mm-mr. The prototype accelerator will be used to study physics, engineering, and costing issues involved in the application of the RK-TBA concept to linear colliders. We have also been studying optimization parameters, such as frequency, for the application of the RK-TBA concept to multi-TeV linear colliders. As an rf power source the RK-TBA scales favorably up to frequencies around 35 GHz. An overview of this work with details of the design and performance of the prototype injector, beam line, and diagnostics will be presented. © 1999 American Institute of Physics.
20. Relativistic-klystron two-beam accelerator as a power source for future linear colliders
SciTech Connect
Anderson, D E; Eylon, S; Henestroza, E; Houck, T L; Lidia, M; Vanecek, D L; Westenskow, G A; Yu, S S
1998-10-05
The technical challenge for making two-beam accelerators into realizable power sources for high-energy colliders lies in the creation of the drive beam and in its propagation over long distances through multiple extraction sections. This year we have been constructing a 1.2-kA, 1-MeV, induction gun for a prototype relativistic klystron two-beam accelerator (RK-TBA). The electron source will be an 8.9 cm diameter, thermionic, flat-surface cathode with a maximum shroud field stress of approximately 165 kV/cm. Additional design parameters for the injector include a pulse length of over 150-ns flat top (1% energy variation), and a normalized edge emittance of less than 300 pi-mm-mr. The prototype accelerator will be used to study physics, engineering, and costing issues involved in the application of the RK-TBA concept to linear colliders. We have also been studying optimization parameters, such as frequency, for the application of the RK-TBA concept to multi-TeV linear colliders. As an rf power source the RK-TBA scales favorably up to frequencies around 35 GHz. An overview of this work with details of the design and performance of the prototype injector, beam line, and diagnostics will be presented.
1. Relativistic-Klystron two-beam accelerator as a power source for future linear colliders
SciTech Connect
Lidia, S. M.; Anderson, D. E.; Eylon, S.; Henestroza, E.; Vanecek, D. L.; Yu, S. S.; Houck, T. L.; Westenskow, G. A.
1999-05-07
The technical challenge for making two-beam accelerators into realizable power sources for high-energy colliders lies in the creation of the drive beam and in its propagation over long distances through multiple extraction sections. This year we have been constructing a 1.2-kA, 1-MeV, induction gun for a prototype relativistic klystron two-beam accelerator (RK-TBA). The electron source will be an 8.9 cm diameter, thermionic, flat-surface cathode with a maximum shroud field stress of approximately 165 kV/cm. Additional design parameters for the injector include a pulse length of over 150-ns flat top (1% energy variation), and a normalized edge emittance of less than 300 pi-mm-mr. The prototype accelerator will be used to study physics, engineering, and costing issues involved in the application of the RK-TBA concept to linear colliders. We have also been studying optimization parameters, such as frequency, for the application of the RK-TBA concept to multi-TeV linear colliders. As an rf power source the RK-TBA scales favorably up to frequencies around 35 GHz. An overview of this work with details of the design and performance of the prototype injector, beam line, and diagnostics will be presented.
2. Single and double SM-like Higgs boson production at future electron-positron colliders in composite 2HDMs
De Curtis, Stefania; Moretti, Stefano; Yagyu, Kei; Yildirim, Emine
2017-05-01
We investigate single- and double-h production, where h is the discovered Standard-Model (SM)-like Higgs boson, at future e+e- colliders in composite 2-Higgs doublet models (C2HDMs) and elementary 2-Higgs doublet models (E2HDMs) with a softly broken Z2 symmetry. We first survey their parameter spaces allowed by theoretical bounds from perturbative unitarity and vacuum stability as well as by future data at the Large Hadron Collider with an integrated luminosity up to 3000 fb⁻¹, under the assumption that no new Higgs boson is detected. We then discuss how different the cross sections can be between the two scenarios when κ_V, the hVV (V = W±, Z) coupling normalized to the SM value, is taken to have the same value in both scenarios. We find that if κ_V² is found to be, e.g., 0.98, then the cross sections in C2HDMs with f (the compositeness scale) in the TeV region can be changed by up to about -15%, -18%, -50% and -35% for the e+e- → tt̄h, e+e- → Zhh, e+e- → e+e-hh and e+e- → tt̄hh processes, respectively, with respect to those in E2HDMs. Thus, a future electron-positron collider has the potential to discriminate between E2HDMs and C2HDMs, even when only h event rates are measured.
3. Heavy-ion physics with the ALICE experiment at the CERN Large Hadron Collider.
PubMed
Schukraft, J
2012-02-28
After close to 20 years of preparation, the dedicated heavy-ion experiment A Large Ion Collider Experiment (ALICE) took first data at the CERN Large Hadron Collider (LHC) accelerator with proton collisions at the end of 2009 and with lead nuclei at the end of 2010. After a short introduction into the physics of ultra-relativistic heavy-ion collisions, this article recalls the main design choices made for the detector and summarizes the initial operation and performance of ALICE. Physics results from this first year of operation concentrate on characterizing the global properties of typical, average collisions, both in proton-proton (pp) and nucleus-nucleus reactions, in the new energy regime of the LHC. The pp results differ, to a varying degree, from most quantum chromodynamics-inspired phenomenological models and provide the input needed to fine tune their parameters. First results from Pb-Pb are broadly consistent with expectations based on lower energy data, indicating that high-density matter created at the LHC, while much hotter and larger, still behaves like a very strongly interacting, almost perfect liquid.
4. Spherical Neutral Detector tracking system for experiments at VEPP-2000 e+e- collider
Aulchenko, V. M.; Bogdanchikov, A. G.; Botov, A. A.; Bukin, A. D.; Bukin, D. A.; Dimova, T. V.; Druzhinin, V. P.; Filatov, P. V.; Golubev, V. B.; Kharlamov, A. G.; Korol, A. A.; Koshuba, S. V.; Obrazovsky, A. E.; Pakhtusova, E. V.; Serednyakov, S. I.; Sirotkin, A. A.; Usov, Yu. V.; Vasiljev, A. V.
2007-10-01
The new tracking system of the Spherical Neutral Detector for experiments at the VEPP-2000 e+e- collider in Novosibirsk is described. The system consists of a 9-layer drift chamber with 24 jet cells and a proportional chamber in a common gas volume. The main system features are its small size and its high density of readout channels for a gaseous tracking system at colliding-beam experiments. The drift chamber provides at least four measurements along the track for charged particles within 94% of the solid angle and nine measurements for particles propagating at large angle relative to the beam axis. R-ϕ coordinates (in a plane perpendicular to the beam axis) are obtained using the ionization drift time measurement. Longitudinal coordinates are measured using charge division on anode wires and the charge distribution on cathode strips. Design angular resolutions for radial tracks are σ_ϕ = 0.26°, σ_θ = 0.3°; the vertex resolution is σ_R = 0.2 mm. The full-size prototype of the tracking system has been assembled and tested. The wire structure of the prototype represents one quadrant of the chamber. Results of the prototype assembly quality control and tests with radioactive sources and cosmic rays are presented. They are in good agreement with the expected system parameters.
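As an aside on the charge-division readout mentioned in the abstract above, the hit position along a resistive anode wire is inferred from the imbalance of the charges collected at its two ends. A minimal sketch follows; the function name and the numbers are illustrative and are not taken from the SND design.

```python
# Illustrative charge-division position reconstruction on a resistive anode wire.

def z_from_charge_division(q_a, q_b, wire_length):
    """Return the hit coordinate (same units as wire_length), measured from the wire centre.

    q_a, q_b    -- charges collected at the two wire ends (arbitrary but common units)
    wire_length -- full electrical length of the wire
    """
    asymmetry = (q_a - q_b) / (q_a + q_b)
    return 0.5 * wire_length * asymmetry

# Example: on a 400 mm wire, 60/40 charge sharing places the hit 40 mm off centre.
print(z_from_charge_division(0.6, 0.4, 400.0))  # -> 40.0
```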
5. Efficient twin aperture magnets for the future circular e+ /e_ collider
Milanese, A.
2016-11-01
We report preliminary designs for the arc dipoles and quadrupoles of the FCC-ee double-ring collider. After recalling cross sections and parameters of warm magnets used in previous large accelerators, we focus on twin aperture layouts, with a magnetic coupling between the gaps, which minimizes construction cost and reduces the electrical power required for operation. We also indicate how the designs presented may be further optimized so as to optimally address any further constraints related to beam physics, vacuum system, and electric power consumption.
6. Probing the Higgs with angular observables at future e+e colliders
SciTech Connect
Liu, Zhen
2016-10-24
In this paper, I summarize our recent works on using differential observables to explore the physics potential of future e+e- colliders in the framework of Higgs effective field theory. This proceeding is based upon Refs. 1 and 2. We study angular observables in the e+e- → ZH → ℓ+ℓ- bb̄ channel at future circular e+e- colliders such as CEPC and FCC-ee. Taking into account the impact of realistic cut acceptance and detector effects, we forecast the precision of six angular asymmetries at CEPC (FCC-ee) with center-of-mass energy √s = 240 GeV and 5 (30) ab⁻¹ integrated luminosity. We then determine the projected sensitivity to a range of operators relevant for the Higgsstrahlung process in the dimension-6 Higgs EFT. Our results show that angular observables provide complementary sensitivity to rate measurements when constraining various tensor structures arising from new physics. We further find that angular asymmetries provide a novel means of constraining the "blind spot" in indirect limits on supersymmetric scalar top partners. Finally, we also discuss the possibility of using ZZ-fusion at e+e- machines at different energies to probe new operators.
7. High-Power Multimode X-Band RF Pulse Compression System for Future Linear Colliders
SciTech Connect
Tantawi, S.G.; Nantista, C.D.; Dolgashev, V.A.; Pearson, C.; Nelson, J.; Jobe, K.; Chan, J.; Fant, K.; Frisch, J.; Atkinson, D.; /LLNL, Livermore
2005-08-10
We present a multimode X-band rf pulse compression system suitable for a TeV-scale electron-positron linear collider such as the Next Linear Collider (NLC). The NLC main linac operating frequency is 11.424 GHz. A single NLC rf unit is required to produce 400 ns pulses with 475 MW of peak power. Each rf unit should power approximately 5 m of accelerator structures. The rf unit design consists of two 75 MW klystrons and a dual-moded resonant-delay-line pulse compression system that produces a flat output pulse. The pulse compression system components are all overmoded, and most components are designed to operate with two modes. This approach allows high-power-handling capability while maintaining a compact, inexpensive system. We detail the design of this system and present experimental cold test results. We describe the design and performance of various components. The high-power testing of the system is verified using four 50 MW solenoid-focused klystrons run off a common 400 kV solid-state modulator. The system has produced 400 ns rf pulses of greater than 500 MW. We present the layout of our system, which includes a dual-moded transmission waveguide system and a dual-moded resonant line (SLED-II) pulse compression system. We also present data on the processing and operation of this system, which has set high-power records in coherent and phase controlled pulsed rf.
8. Effects and tolerances of injection jitter in the SLC and future linear colliders
SciTech Connect
Limberg, T.; Seeman, J.T.; Spence, W.L.
1990-05-01
The bunch injected into the main linac of a linear collider may have offsets in transverse angle and position, may have a phase error (longitudinal position offset) and, furthermore, may be optically mismatched. Each of these injection errors reduces the luminosity and must be held within tolerances. The effect of optical mismatches on the emittance at the end of the linac is calculated analytically. The tightest tolerances on magnetic elements stemming from these effects are listed. The phase tolerance is determined by the energy acceptance of the final focus system. It imposes tolerances on the integrated field strength of the damping ring and RTL bending magnets and on the bunch compressor rf phase. In this paper, measurements of injection jitter and the effect of betatron oscillations caused by changes of the angle or position of the incoming beam are described. These measurements were taken with BNS damping, which relaxes certain tolerances by an order of magnitude. The injection jitter tolerances for a linac of the next generation are given. As an example, parameters for the Next Linear Collider (NLC) being designed at SLAC are used.
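For a rough sense of why position and angle offsets at injection matter, a textbook estimate (not a formula quoted in the abstract) is that after filamentation a static offset dilutes the emittance by half the Courant-Snyder invariant of that offset. The Twiss parameters and numbers below are assumed, illustrative values.

```python
# Back-of-the-envelope emittance dilution from a static injection offset.
# Standard filamentation estimate; all numbers are illustrative.

def emittance_growth(x0, xp0, beta, alpha):
    """Emittance growth (m rad) from an offset x0 (m) and angle error xp0 (rad)
    in a lattice with Twiss parameters beta (m) and alpha at the injection point."""
    gamma_twiss = (1.0 + alpha**2) / beta
    return 0.5 * (gamma_twiss * x0**2 + 2.0 * alpha * x0 * xp0 + beta * xp0**2)

# 100 um position offset, no angle error, beta = 50 m: ~1e-10 m rad of growth.
print(emittance_growth(100e-6, 0.0, 50.0, 0.0))
```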
9. The upgrade programme of the major experiments at the Large Hadron Collider
La Rocca, P.; Riggi, F.
2014-05-01
After a successful data taking period at the CERN LHC by the major physics experiments (ALICE, ATLAS, CMS and LHCb) since 2009, a long-term plan is already envisaged to fully exploit the vast physics potential of the Large Hadron Collider (LHC) within the next two decades. The CERN accelerator complex will undergo a series of upgrades leading ultimately to increase both the collision energy and the luminosity, thus maximizing the amount of data delivered to all experiments. As a consequence, the experiments have also to cope with very high detector occupancies and operate in the hard radiation environment caused by a huge multiplicity of particles produced in each beam crossing. In parallel to the accelerator upgrades, the LHC experiments are planning various upgrades to their detector, trigger, and data acquisition systems. The main motivation for the upgrades is to extend and to improve their physics programme also in the increasingly challenging LHC environment. In this paper a general overview of the upgrade programme of the major experiments at LHC will be given, with some additional details concerning specifications and physics programme of new detector subsystems.
10. Single Anomalous Production of the Fourth SM Family Quarks at Future e+e-, ep, and pp Colliders
SciTech Connect
Ciftci, A. K.; Ciftci, R.; Sultansoy, S.; Yildiz, H. Duran
2007-04-23
Possible single production of fourth-SM-family u4 and d4 quarks via anomalous interactions at e+e-, ep, and pp colliders is investigated. Signatures of such anomalous processes at the above colliders are discussed comparatively.
11. Potential Remedies for the High Synchrotron-Radiation-Induced Heat Load for Future Highest-Energy-Proton Circular Colliders
Cimino, R.; Baglin, V.; Schäfers, F.
2015-12-01
We propose a new method for handling the high synchrotron radiation (SR) induced heat load of future circular hadron colliders (like FCC-hh). FCC-hh are dominated by the production of SR, which causes a significant heat load on the accelerator walls. Removal of such a heat load in the cold part of the machine, as done in the Large Hadron Collider, will require more than 100 MW of electrical power and a major cooling system. We studied a totally different approach, identifying an accelerator beam screen whose illuminated surface is able to forward reflect most of the photons impinging onto it. Such a reflecting beam screen will transport a significant part of this heat load outside the cold dipoles. Then, in room temperature sections, it could be more efficiently dissipated. Here we will analyze the proposed solution and address its full compatibility with all other aspects an accelerator beam screen must fulfill to keep under control beam instabilities as caused by electron cloud formation, impedance, dynamic vacuum issues, etc. If experimentally fully validated, a highly reflecting beam screen surface will provide a viable and solid solution to be eligible as a baseline design in FCC-hh projects to come, rendering them more cost effective and sustainable.
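The scale of the heat load discussed above can be estimated from the classical synchrotron-radiation formula for the energy loss per turn. The sketch below uses FCC-hh-like parameters (50 TeV protons, roughly 10 km bending radius, 0.5 A stored current) purely as assumed, illustrative inputs; it reproduces the few-MW-per-beam radiated power whose removal at cryogenic temperature drives the large electrical power quoted in the abstract.

```python
# Rough synchrotron-radiation power per beam (standard classical formula,
# illustrative FCC-hh-like parameters assumed).
e    = 1.602176634e-19    # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m

def sr_power_per_beam(E_TeV, mass_MeV, bend_radius_m, beam_current_A):
    gamma = E_TeV * 1e6 / mass_MeV                        # Lorentz factor
    u0 = e**2 * gamma**4 / (3.0 * eps0 * bend_radius_m)   # energy loss per turn, J
    return u0 * beam_current_A / e                        # radiated power per beam, W

# 50 TeV protons, ~10.4 km bending radius, 0.5 A: roughly 2.3 MW per beam.
print(sr_power_per_beam(50.0, 938.272, 10.4e3, 0.5) / 1e6, "MW per beam")
```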
12. Luminosity Limitations of Linear Colliders Based on Plasma Acceleration
SciTech Connect
Lebedev, Valeri; Burov, Alexey; Nagaitsev, Sergei
2016-01-01
Particle acceleration in plasma creates the possibility of exceptionally high accelerating gradients and appears to be a very attractive option for future linear electron-positron and/or photon-photon colliders. These high accelerating gradients have already been demonstrated in a number of experiments. Furthermore, a linear collider requires exceptionally high beam brightness, which still needs to be demonstrated. In this article we discuss major phenomena which limit the brightness of the accelerated beam and, consequently, the collider luminosity.
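The connection between beam brightness and luminosity referred to above comes from the usual Gaussian-beam luminosity expression, L = H_D f N² / (4π σx σy). The snippet below is a generic textbook estimate with made-up, linear-collider-like numbers; nothing in it is taken from the paper.

```python
# Standard Gaussian-beam collider luminosity estimate (illustrative numbers).
import math

def luminosity(n_per_bunch, f_collision_hz, sigma_x_m, sigma_y_m, enhancement=1.0):
    """Instantaneous luminosity in m^-2 s^-1 for round-number Gaussian beams."""
    return enhancement * f_collision_hz * n_per_bunch**2 / (4.0 * math.pi * sigma_x_m * sigma_y_m)

# 2e10 particles per bunch, 15 kHz effective collision rate, 500 nm x 5 nm spot sizes.
L = luminosity(2e10, 1.5e4, 500e-9, 5e-9)
print(f"{L:.2e} m^-2 s^-1  =  {L * 1e-4:.2e} cm^-2 s^-1")   # ~2e34 cm^-2 s^-1
```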
13. Physics requirements for the design of the ATLAS and CMS experiments at the Large Hadron Collider.
PubMed
Virdee, T S
2012-02-28
The ATLAS and CMS experiments at the CERN Large Hadron Collider are discovery experiments. Thus, the aim was to make them sensitive to the widest possible range of new physics. New physics is likely to reveal itself in addressing questions such as: how do particles acquire mass; what is the particle responsible for dark matter; what is the path towards unification; do we live in a world with more space-time dimensions than the familiar four? The detection of the Higgs boson, conjectured to give mass to particles, was chosen as a benchmark to test the performance of the proposed experiment designs. Higgs production is one of the most demanding hypothesized processes in terms of required detector resolution and background discrimination. ATLAS and CMS feature full coverage, 4π-detectors to measure precisely the energies, directions and identity of all the particles produced in proton-proton collisions. Realizing this goal has required the collaborative efforts of enormous teams of people from around the world.
14. Quench protection analysis integrated in the design of dipoles for the Future Circular Collider
Salmi, Tiina; Stenvall, Antti; Prioli, Marco; Ruuskanen, Janne; Verweij, Arjan; Auchmann, Bernhard; Tommasini, Davide; Schoerling, Daniel; Lorin, Clement; Toral, Fernando; Durante, Maria; Farinon, Stefania; Marinozzi, Vittorio; Fabbricatore, Pasquale; Sorbi, Massimo; Munilla, Javier
2017-03-01
The EuroCirCol collaboration is designing a 16 T Nb3Sn dipole that can be used as the main bending magnet in a 100 km long 100 TeV hadron-hadron collider. For economic reasons, the magnets need to be as compact as possible, requiring optimization of the cable cross section in different magnetic field regions. This leads to very high stored energy density and poses serious challenges for the magnet protection in case of a quench, i.e., sudden loss of superconductivity in the winding. The magnet design therefore must account for the limitations set by quench protection from the earliest stages of the design. In this paper we describe how the aspect of quench protection has been accounted for in the process of developing different options for the 16 T dipole designs. We discuss the assumed safe values for hot spot temperatures and voltages, and the efficiency of the protection system. We describe the developed tools for the quench analysis, and how their usage in the magnet design will eventually ensure a secure magnet operation.
15. Computing at h1 - Experience and Future
Eckerlin, G.; Gerhards, R.; Kleinwort, C.; KrÜNer-Marquis, U.; Egli, S.; Niebergall, F.
The H1 experiment has now been successfully operating at the electron proton collider HERA at DESY for three years. During this time the computing environment has gradually shifted from a mainframe oriented environment to the distributed server/client Unix world. This transition is now almost complete. Computing needs are largely determined by the present amount of 1.5 TB of reconstructed data per year (1994), corresponding to 1.2 × 10^7 accepted events. All data are centrally available at DESY. In addition to data analysis, which is done in all collaborating institutes, most of the centrally organized Monte Carlo production is performed outside of DESY. New software tools to cope with offline computing needs include CENTIPEDE, a tool for the use of distributed batch and interactive resources for Monte Carlo production, and H1 UNIX, a software package for automatic updates of H1 software on all UNIX platforms.
16. Testing C P violation in the scalar sector at future e+e- colliders
Li, Gang; Mao, Ying-nan; Zhang, Chen; Zhu, Shou-hua
2017-02-01
We propose a model-independent method to test CP violation in the scalar sector by measuring the inclusive cross sections of the e+e- → Zh1, Zh2 and h1h2 processes with the recoil mass technique, where h1 and h2 stand for the 125 GeV standard-model-like Higgs boson and a new lighter scalar, respectively. This method effectively measures a quantity K proportional to the product of the three couplings of the h1ZZ, h2ZZ and h1h2Z vertices. The value of K encodes part of the information about CP violation in the scalar sector. We simulate the signal and backgrounds for the processes mentioned above with m2 = 40 GeV at the Circular Electron-Positron Collider (CEPC) with an integrated luminosity of 5 ab⁻¹. We find that the discovery of both the Zh2 and h1h2 processes at the 5σ level indicates an O(10⁻²) value of K that can be measured to 16% precision. The method is applied to the weakly coupled Lee model, in which CP violation can be tested either before or after applying a "pT balance" cut (see Sec. II B for the definition). Lastly, we point out that K ≠ 0 is a sufficient but not a necessary condition for the existence of CP violation in the scalar sector; namely, K = 0 does not imply CP conservation in the scalar sector.
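The recoil mass technique mentioned above rests on four-momentum conservation alone: with a known centre-of-mass energy and a reconstructed visible system (e.g. a Z decaying to a lepton pair), the mass of whatever recoils against it follows directly. A minimal sketch with illustrative numbers (nothing here is taken from the CEPC study):

```python
# Recoil mass against a reconstructed visible system at an e+e- collider.
import math

def recoil_mass(sqrt_s, e_vis, px, py, pz):
    """Recoil mass (GeV) given sqrt(s) and the visible system's energy and momentum (GeV)."""
    m2 = (sqrt_s - e_vis)**2 - (px**2 + py**2 + pz**2)
    return math.sqrt(max(m2, 0.0))

# A Z with E ~ 104.8 GeV and |p| ~ 51.6 GeV at sqrt(s) = 240 GeV recoils against ~125 GeV.
print(recoil_mass(240.0, 104.8, 0.0, 0.0, 51.6))
```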
17. Prospects for the Simultaneous Operation of the Tevatron Collider and pp Experiments in the Antiproton Source Accumulator
SciTech Connect
Werkema, Steven J.; /Fermilab
2001-06-07
This document is a slightly expanded version of a portion of the Proton Driver design report. The Proton Driver group gets the credit for the original idea of running an Accumulator experiment in the BTeV era. The work presented here is a study of the feasibility of this idea. The addition of the Recycler Ring to the Fermilab accelerator complex provides an opportunity to continue the program of antiproton-proton physics in the Antiproton Source Accumulator that was started by Fermilab experiments E760 and E835. The operational scenario presented here utilizes the Recycler Ring as an antiproton bank from which the Collider makes 'withdrawals' as needed to maintain the required luminosity in the Tevatron. The Accumulator is only needed to re-supply the bank in between withdrawals. When the antiproton stacking rate is sufficiently high, and the luminosity requirements of the Collider experiments are sufficiently low, there will be time between Collider fills and subsequent refilling of the Recycler to deliver beam to an experiment in the Accumulator. In the scenario envisioned here, the impact of the Accumulator experiment on the luminosity delivered to the Collider experiments is very small. If the Run II antiproton stacking rate goals are met, the operational conditions required for running Accumulator-based experiments will be met during the BTeV era. A simple model of the operation of the Fermilab accelerator complex for BTeV and an experiment in the Accumulator has been developed. The model makes predictions of the rate at which luminosity is delivered to BTeV and an Accumulator experiment. This model was used to examine the impact of the proton driver on this experimental program.
18. Journey in the search for the Higgs boson: the ATLAS and CMS experiments at the Large Hadron Collider.
PubMed
Della Negra, M; Jenni, P; Virdee, T S
2012-12-21
The search for the standard model Higgs boson at the Large Hadron Collider (LHC) started more than two decades ago. Much innovation was required and diverse challenges had to be overcome during the conception and construction of the LHC and its experiments. The ATLAS and CMS Collaboration experiments at the LHC have discovered a heavy boson that could complete the standard model of particle physics.
19. The upgraded Pixel Detector of the ATLAS Experiment for Run 2 at the Large Hadron Collider
Backhaus, M.
2016-09-01
During Run 1 of the Large Hadron Collider (LHC), the ATLAS Pixel Detector has shown excellent performance. The ATLAS collaboration took advantage of the first long shutdown of the LHC during 2013 and 2014 and extracted the ATLAS Pixel Detector from the experiment, brought it to surface and maintained the services. This included the installation of new service quarter panels, the repair of cables, and the installation of the new Diamond Beam Monitor (DBM). Additionally, a completely new innermost pixel detector layer, the Insertable B-Layer (IBL), was constructed and installed in May 2014 between a new smaller beam pipe and the existing Pixel Detector. With a radius of 3.3 cm the IBL is located extremely close to the interaction point. Therefore, a new readout chip and two new sensor technologies (planar and 3D) are used in the IBL. In order to achieve best possible physics performance the material budget was improved with respect to the existing Pixel Detector. This is realized using lightweight staves for mechanical support and a CO2 based cooling system. This paper describes the improvements achieved during the maintenance of the existing Pixel Detector as well as the performance of the IBL during the construction and commissioning phase. Additionally, first results obtained during the LHC Run 2 demonstrating the distinguished tracking performance of the new Four Layer ATLAS Pixel Detector are presented.
20. Issues and experience with controlling beam loss at the Tevatron collider
SciTech Connect
Annala, Gerald; /Fermilab
2007-07-01
Controlling beam loss in the Tevatron collider is of great importance because of the delicate nature of the cryogenic magnet system and the collider detectors. Maximizing the physics potential requires optimized performance as well as protection of all equipment. The operating history of the Tevatron has significantly influenced the way losses are managed. The development of beam loss management in the Tevatron will be presented.
1. SLAC Linear Collider
SciTech Connect
Richter, B.
1985-12-01
A report is given on the goals and progress of the SLAC Linear Collider. The status of the machine and the detectors are discussed and an overview is given of the physics which can be done at this new facility. Some ideas on how (and why) large linear colliders of the future should be built are given.
2. Demonstration of a high-field short-period superconducting helical undulator suitable for future TeV-scale linear collider positron sources.
PubMed
Scott, D J; Clarke, J A; Baynham, D E; Bayliss, V; Bradshaw, T; Burton, G; Brummitt, A; Carr, S; Lintern, A; Rochford, J; Taylor, O; Ivanyushenkov, Y
2011-10-21
The first demonstration of a full-scale working undulator module suitable for future TeV-scale positron-electron linear collider positron sources is presented. Generating sufficient positrons is an important challenge for these colliders, and using polarized e(+) would enhance the machine's capabilities. In an undulator-based source polarized positrons are generated in a metallic target via pair production initiated by circularly polarized photons produced in a helical undulator. We show how the undulator design is developed by considering impedance effects on the electron beam, modeling and constructing short prototypes before the successful fabrication, and testing of a final module. © 2011 American Physical Society
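The photon energy quoted as "a few MeV" in undulator-based positron source studies follows from the standard on-axis first-harmonic formula for a helical undulator, λ₁ = λ_u (1 + K²) / (2γ²). The undulator period and strength parameter K in the sketch below are assumed, illustrative values (only the 46.6 GeV beam energy appears in the related E166 abstracts elsewhere in this listing).

```python
# First-harmonic photon energy of a helical undulator (standard on-axis formula).
h_c_eV_m = 1.23984193e-6    # h*c in eV*m
m_e_eV   = 0.51099895e6     # electron rest energy in eV

def first_harmonic_energy(beam_energy_eV, period_m, K):
    gamma = beam_energy_eV / m_e_eV
    wavelength = period_m * (1.0 + K**2) / (2.0 * gamma**2)   # helical undulator, on axis
    return h_c_eV_m / wavelength                              # photon energy in eV

# Assumed parameters: 46.6 GeV beam, 2.54 mm period, K = 0.17 -> roughly 8 MeV photons.
print(first_harmonic_energy(46.6e9, 2.54e-3, 0.17) / 1e6, "MeV")
```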
3. Constraining fundamental physics with future CMB experiments
Galli, Silvia; Martinelli, Matteo; Melchiorri, Alessandro; Pagano, Luca; Sherwin, Blake D.; Spergel, David N.
2010-12-01
The Planck experiment will soon provide a very accurate measurement of cosmic microwave background anisotropies. This will let cosmologists determine most of the cosmological parameters with unprecedented accuracy. Future experiments will improve and complement the Planck data with better angular resolution and better polarization sensitivity. This unexplored region of the CMB power spectrum contains information on many parameters of interest, including neutrino mass, the number of relativistic particles at recombination, the primordial helium abundance, and the injection of additional ionizing photons by dark matter self-annihilation. We review the imprint of each parameter on the CMB and forecast the constraints achievable by future experiments by performing a Monte Carlo analysis on synthetic realizations of simulated data. We find that next generation satellite missions such as CMBPol could provide valuable constraints with a precision close to that expected in current and near future laboratory experiments. Finally, we discuss the implications of this intersection between cosmology and fundamental physics.
4. Constraining fundamental physics with future CMB experiments
SciTech Connect
Galli, Silvia; Martinelli, Matteo; Melchiorri, Alessandro; Pagano, Luca; Sherwin, Blake D.; Spergel, David N.
2010-12-15
The Planck experiment will soon provide a very accurate measurement of cosmic microwave background anisotropies. This will let cosmologists determine most of the cosmological parameters with unprecedented accuracy. Future experiments will improve and complement the Planck data with better angular resolution and better polarization sensitivity. This unexplored region of the CMB power spectrum contains information on many parameters of interest, including neutrino mass, the number of relativistic particles at recombination, the primordial helium abundance, and the injection of additional ionizing photons by dark matter self-annihilation. We review the imprint of each parameter on the CMB and forecast the constraints achievable by future experiments by performing a Monte Carlo analysis on synthetic realizations of simulated data. We find that next generation satellite missions such as CMBPol could provide valuable constraints with a precision close to that expected in current and near future laboratory experiments. Finally, we discuss the implications of this intersection between cosmology and fundamental physics.
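A common ingredient of the forecasting exercise described above is the per-multipole uncertainty on a measured power spectrum from cosmic variance plus instrument noise, ΔC_ℓ = sqrt(2 / ((2ℓ+1) f_sky)) (C_ℓ + N_ℓ). The sketch below uses a toy spectrum and a toy noise level as placeholders; it is not meant to reproduce Planck or CMBPol sensitivities.

```python
# Per-multipole CMB power-spectrum uncertainty (cosmic variance + noise),
# using toy placeholder inputs.
import numpy as np

def delta_cl(cl, nl, ell, f_sky):
    """1-sigma uncertainty on C_ell for a survey covering a sky fraction f_sky."""
    return np.sqrt(2.0 / ((2.0 * ell + 1.0) * f_sky)) * (cl + nl)

ell = np.arange(2, 2501)
cl  = 1.0 / (ell * (ell + 1.0))     # toy spectrum, roughly CMB-like in shape
nl  = 1e-7 * np.ones_like(cl)       # toy white-noise level
print(delta_cl(cl, nl, ell, f_sky=0.7)[:3])
```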
5. BTEV: a dedicated B physics detector at the Fermilab Tevatron Collider
SciTech Connect
Butler, J.N.
1996-11-01
The capabilities of future Dedicated Hadron Collider B Physics experiments are discussed and compared to experiments that will run in the next few years. The design for such an experiment at the Tevatron Collider is presented and an evolutionary path for developing it is outlined. 9 refs., 3 figs., 4 tabs.
6. Experimental characterization of a coaxial plasma accelerator for a colliding plasma experiment
SciTech Connect
Wiechula, J.; Hock, C.; Iberler, M.; Manegold, T.; Schönlein, A.; Jacoby, J.
2015-04-15
We report experimental results of a single coaxial plasma accelerator in preparation for a colliding plasma experiment. The utilized device consisted of a coaxial pair of electrodes, accelerating the plasma due to J×B forces. A pulse forming network, composed of three capacitors connected in parallel with a total capacitance of 27 μF, was set up. A thyratron allowed switching of the maximum applied voltage of 9 kV. Under these conditions, the pulsed currents reached peak values of about 103 kA. The measurements were performed in a small vacuum chamber with a neutral-gas prefill at gas pressures between 10 Pa and 14 000 Pa. A gas mixture of ArH2 with 2.8% H2 served as the discharge medium. H2 was chosen in order to observe the broadening of the H_β emission line and thus estimate the electron density. The electron density for a single plasma accelerator reached peak values on the order of 10^16 cm^-3. Electrical parameters, inter alia inductance and resistance, were determined for the LCR circuit during the plasma acceleration as well as in a short-circuit case. Depending on the applied voltage, the inductance and resistance reached values ranging from 194 nH to 216 nH and 13 mΩ to 23 mΩ, respectively. Furthermore, the plasma velocity was measured using a fast CCD camera. Plasma velocities of 2 km/s up to 17 km/s were observed, the magnitude being highly correlated with gas pressure and applied voltage.
7. Probing the Z' sector of the minimal B-L model at future Linear Colliders in the e+e- → μ+μ- process
Basso, L.; Belyaev, A.; Moretti, S.; Pruna, G. M.
2009-10-01
We study the capabilities of future electron-positron Linear Colliders, with centre-of-mass energy at the TeV scale, in accessing the parameter space of a Z' boson within the minimal B-L model. In such a model, wherein the Standard Model gauge group is augmented by a broken U(1)B-L symmetry - with B(L) being the baryon(lepton) number — the emerging Z' mass is expected to be in the above energy range. We carry out a detailed comparison between the discovery regions mapped over a two-dimensional configuration space (Z' mass and coupling) at the Large Hadron Collider and possible future Linear Colliders for the case of di-muon production. As known in the literature for other Z' models, we confirm that leptonic machines, as compared to the CERN hadronic accelerator, display an additional potential in discovering a B-L Z' boson as well as in allowing one to study its properties at a level of precision well beyond that of any of the existing colliders.
8. The development of colliders
SciTech Connect
Sessler, A.M.
1997-03-01
During the period of the 50s and the 60s colliders were developed. Prior to that time there were no colliders, and by 1965 a number of small devices had worked, good understanding had been achieved, and one could speculate, as Gersh Budker did, that in a few years 20% of high energy physics would come from colliders. His estimate was an under-estimate, for now essentially all of high energy physics comes from colliders. The author presents a brief review of that history: sketching the development of the concepts, the experiments, and the technological advances which made it all possible.
9. Current and future liquid argon neutrino experiments
SciTech Connect
Karagiorgi, Georgia S.
2015-05-15
The liquid argon time projection chamber (LArTPC) detector technology provides an opportunity for precision neutrino oscillation measurements, neutrino cross section measurements, and searches for rare processes, such as SuperNova neutrino detection. These proceedings review current and future LArTPC neutrino experiments. Particular focus is paid to the ICARUS, MicroBooNE, LAr1, 2-LArTPC at CERN-SPS, LBNE, and 100 kton at Okinoshima experiments.
10. Highlights from LHC experiments and future perspectives
SciTech Connect
Campana, P.
2016-01-22
The experiments at LHC are collecting a large amount of data in a kinematic region of the (x, Q²) variables never accessed before. Boosted by LHC analyses, Quantum Chromodynamics (QCD) has experienced impressive progress in the last few years, and even brighter perspectives can be foreseen for the future data taking. A subset of the most recent results from the LHC experiments in the area of QCD (both perturbative and soft) is reviewed.
11. Fast feedback for linear colliders
SciTech Connect
Hendrickson, L.; Adolphsen, C.; Allison, S.; Gromme, T.; Grossberg, P.; Himel, T.; Krauter, K.; MacKenzie, R.; Minty, M.; Sass, R.
1995-05-01
A fast feedback system provides beam stabilization for the SLC. As the SLC is in some sense a prototype for future linear colliders, this system may be a prototype for future feedbacks. The SLC provides a good base of experience for feedback requirements and capabilities as well as a testing ground for performance characteristics. The feedback system controls a wide variety of machine parameters throughout the SLC and associated experiments, including regulation of beam position, angle, energy, intensity and timing parameters. The design and applications of the system are described, in addition to results of recent performance studies.
12. Search for heavy neutral CP-even Higgs within lepton-specific 2HDM at a future linear collider
Hashemi, Majid; Haghighat, Gholamhossein
2017-09-01
In this paper, the production process e-e+ → AH is analyzed in the context of the type IV 2HDM, and the question of the observability of a neutral CP-even Higgs boson H at a linear collider operating at √s = 1 TeV is addressed. The CP-odd Higgs is assumed to undergo a gauge-Higgs decay A → ZH, with hadronic decay of the Z boson as the signature of signal events. The production chain is thus e+e- → AH → ZHH → jjℓℓℓℓ, where ℓ is a τ or μ. Four benchmark points with different mass hypotheses are assumed for the analysis. The Higgs mass m_H is assumed to vary within the range 150-300 GeV in increments of 50 GeV. The anti-k_t algorithm is used to perform the jet reconstruction. Results indicate that the neutral CP-even Higgs H is observable through this production mechanism using the di-muon invariant mass distribution, with the possibility of a mass measurement. The corresponding signal significances exceed 5σ at an integrated luminosity of 3000 fb⁻¹.
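The statement that a significance "exceeds 5σ at 3000 fb⁻¹" is, at its simplest, a counting-experiment statement. The snippet below shows the usual S/√(S+B) scaling with integrated luminosity; the signal and background cross sections are invented placeholders, not values from the 2HDM study.

```python
# Simple counting-experiment significance, S / sqrt(S + B), versus luminosity.
import math

def significance(sigma_sig_fb, sigma_bkg_fb, lumi_fb):
    s = sigma_sig_fb * lumi_fb   # expected signal events
    b = sigma_bkg_fb * lumi_fb   # expected background events
    return s / math.sqrt(s + b)

# With these placeholder rates, the 5-sigma threshold is crossed near 3000 fb^-1.
for lumi in (500, 1000, 3000):
    print(lumi, "fb^-1 :", round(significance(0.1, 1.0, lumi), 2), "sigma")
```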
13. Exotic colliders
SciTech Connect
1994-11-01
The motivation, feasibility and potential for two unconventional collider concepts - the Gamma-Gamma Collider and the Muon Collider - are described. The importance of the development of associated technologies such as high average power, high repetition rate lasers and ultrafast phase-space techniques are outlined.
14. Development of Micro-Pattern Gas Detectors for the Upgrade of the Muon System of the CMS Experiment at the Large Hadron Collider
Bouhali, Othmane
2017-06-01
After the discovery of the long-awaited Higgs boson in 2012, the Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) and its two general-purpose experiments (ATLAS and CMS) are preparing to break new ground in High Energy Physics (HEP). The international HEP collaboration has established a rigorous research program for exploring new physics at the high-energy frontier. The program includes a substantial increase in the luminosity of the LHC, putting detectors into a completely new and unprecedentedly harsh environment. In order to maintain their excellent performance, an upgrade of the existing detectors is mandatory. In this work we will describe ongoing efforts for the upgrade of the CMS muon detection system, in particular the addition of detection layers based on the Gas Electron Multiplier (GEM) technology. We will summarize the past 5-year R&D program and the future installation and operation plans.
15. The "Roman pot" spectrometer and the vertex detector of experiment UA4 at the CERN SPS collider
Battiston, R.; Bechini, A.; Bosi, F.; Bozzo, M.; Braccini, P. L.; Buskens, J.; Carbonara, F.; Carrara, R.; Castaldi, R.; Cazzola, U.; Cervelli, F.; Chiefari, G.; Drago, E.; Gorini, R.; Haguenauer, M.; Koene, B.; Maleyran, R.; Manna, F.; Matthiae, G.; Merola, L.; Morelli, A.; Napolitano, M.; Palladino, V.; Rewiersma, P.; Robert, M.; Roiron, G.; Sanguinetti, G.; Schuijlenburg, H.; Sciacca, G.; Sette, G.; Van Swol, R.; Timmermans, J.; Traspedini, L.; Vannini, C.; Velasco, J.; Visco, F.
1985-07-01
We describe the apparatus used in experiment UA4 to study proton-antiproton elastic and inelastic interactions at the CERN SPS Collider. Elastically scattered particles, travelling at very small angles, are observed by detectors placed inside movable sections ("Roman pots") of the SPS vacuum chamber. The deflection in the field of the machine quadrupoles allows the measurement of the particle momentum. Inelastic interactions are observed by a left-right symmetric system of trigger counter hodoscopes and drift-chamber telescopes. The apparatus reconstructs the interaction vertex and measures the pseudorapidity η of charged particles in the range 2.5 < |η| < 5.6.
16. An AGS experiment to test bunching for the proton driver of the muon collider.
SciTech Connect
Norem, J.
1998-04-27
The proton driver for the muon collider must produce short pulses of protons in order to facilitate muon cooling and operation with polarized beams. In order to test methods of producing these bunches, they have operated the AGS near transition and studied procedures which involved moving the transition energy γ_t to the beam energy. They were able to produce stable bunches with RMS widths of σ = 2.2-2.7 ns for longitudinal bunch areas of about 1.5 eV s, in addition to making measurements of the lowest two orders of the momentum compaction factor.
17. Design of beam optics for the future circular collider e+e- collider rings
SciTech Connect
Oide, Katsunobu; Aiba, M.; Aumon, S.; Benedikt, M.; Blondel, A.; Bogomyagkov, A.; Boscolo, M.; Burkhardt, H.; Cai, Y.; Doblhammer, A.; Haerer, B.; Holzer, B.; Jowett, J. M.; Koop, I.; Koratzinos, M.; Levichev, E.; Medina, L.; Ohmi, K.; Papaphilippou, Y.; Piminov, P.; Shatilov, D.; Sinyatkin, S.; Sullivan, M.; Wenninger, J.; Wienands, U.; Zhou, D.; Zimmermann, F.
2016-11-21
A beam optics scheme has been designed for the future circular collider- e+e- (FCC-ee). The main characteristics of the design are: beam energy 45 to 175 GeV, 100 km circumference with two interaction points (IPs) per ring, horizontal crossing angle of 30 mrad at the IP and the crab-waist scheme [P. Raimondi, D. Shatilov, and M. Zobov, arXiv:physics/0702033; P. Raimondi, M. Zobov, and D. Shatilov, in Proceedings of the 22nd Particle Accelerator Conference, PAC-2007, Albuquerque, NM (IEEE, New York, 2007), p. TUPAN037.] with local chromaticity correction. The crab-waist scheme is implemented within the local chromaticity correction system without additional sextupoles, by reducing the strength of one of the two sextupoles for vertical chromatic correction at each side of the IP. So-called “tapering” of the magnets is applied, which scales all fields of the magnets according to the local beam energy to compensate for the effect of synchrotron radiation (SR) loss along the ring. An asymmetric layout near the interaction region reduces the critical energy of SR photons on the incoming side of the IP to values below 100 keV, while matching the geometry to the beam line of the FCC proton collider (FCC-hh) [A. Chancé et al., Proceedings of IPAC’16, 9–13 May 2016, Busan, Korea, TUPMW020 (2016).] as closely as possible. Sufficient transverse/longitudinal dynamic aperture (DA) has been obtained, including major dynamical effects, to assure an adequate beam lifetime in the presence of beamstrahlung and top-up injection. In particular, a momentum acceptance larger than ±2% has been obtained, which is better than the momentum acceptance of typical collider rings by about a factor of 2. The effects of the detector solenoids including their compensation elements are taken into account as well as synchrotron radiation in all magnets. The optics presented in this study is a step toward a full conceptual design for the collider. Finally, a number of
18. Muon colliders
SciTech Connect
Palmer, R.B.; Sessler, A.; Skrinsky, A.
1996-01-01
Muon Colliders have unique technical and physics advantages and disadvantages when compared with both hadron and electron machines. They should thus be regarded as complementary. Parameters are given of 4 TeV and 0.5 TeV high luminosity μ+μ- colliders, and of a 0.5 TeV lower luminosity demonstration machine. We discuss the various systems in such muon colliders, starting from the proton accelerator needed to generate the muons and proceeding through muon cooling, acceleration and storage in a collider ring. Problems of detector background are also discussed.
19. Muon colliders
Palmer, R. B.; Sessler, A.; Skrinsky, A.; Tollestrup, A.; Baltz, A. J.; Chen, P.; Cheng, W.-H.; Cho, Y.; Courant, E.; Fernow, R. C.; Gallardo, J. C.; Garren, A.; Green, M.; Kahn, S.; Kirk, H.; Lee, Y. Y.; Mills, F.; Mokhov, N.; Morgan, G.; Neuffer, D.; Noble, R.; Norem, J.; Popovic, M.; Schachinger, L.; Silvestrov, G.; Summers, D.; Stumer, I.; Syphers, M.; Torun, Y.; Trbojevic, D.; Turner, W.; Van Ginneken, A.; Vsevolozhskaya, T.; Weggel, R.; Willen, E.; Winn, D.; Wurtele, J.
1996-05-01
Muon Colliders have unique technical and physics advantages and disadvantages when compared with both hadron and electron machines. They should thus be regarded as complementary. Parameters are given of 4 TeV and 0.5 TeV high luminosity μ+μ- colliders, and of a 0.5 TeV lower luminosity demonstration machine. We discuss the various systems in such muon colliders, starting from the proton accelerator needed to generate the muons and proceeding through muon cooling, acceleration and storage in a collider ring. Problems of detector background are also discussed.
20. Endovascular Neurosurgery: Personal Experience and Future Perspectives.
PubMed
Raymond, Jean
2016-09-01
From Luessenhop's early clinical experience until the present day, experimental methods have been introduced to make progress in endovascular neurosurgery. A personal historical narrative, spanning the 1980s to 2010s, with a review of past opportunities, current problems, and future perspectives. Although the technology has significantly improved, our clinical culture remains a barrier to methodologically sound and safe innovative care and progress. We must learn how to safely practice endovascular neurosurgery in the presence of uncertainty and verify patient outcomes in real time. Copyright © 2016 Elsevier Inc. All rights reserved.
1. Solving the problem of anomalous J/ψ suppression by the MPD experiment on the NICA collider
Kurepin, A. B.; Topilskaya, N. S.
2016-08-01
The measurement of charmonium production via decays to lepton pairs by the MPD experiment at the NICA collider, at energies √s_NN = 4-11 GeV per nucleon, could provide important data for solving the problem of anomalous J/ψ suppression, first observed in central Pb-Pb collisions by the NA50 Collaboration at 158 GeV/nucleon. The anomalous J/ψ suppression could be due to the formation of the QGP in central heavy-ion collisions. However, this effect could also be interpreted as the result of comover interactions in nuclear matter. The recent experiments at the SPS, RHIC, and the LHC reviewed in this article indicate a more complicated picture of J/ψ production, including recombination, medium effects, parton shadowing, and the coherent energy loss mechanism. A simpler production mechanism could be expected at low colliding energies. However, no data have been obtained at energies below √s_NN = 17 GeV for heavy-ion collisions. After a short review of the whole set of charmonium observation data, an estimate of the production rate for MPD/NICA is made.
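The basic observable behind a dilepton charmonium measurement is the invariant mass of the lepton pair, M² = (E₁+E₂)² − |p₁+p₂|². A minimal sketch, with four-vectors invented so that the pair reconstructs near the J/ψ mass:

```python
# Invariant mass of a dimuon pair (four-vectors are illustrative only).
import math

def invariant_mass(p1, p2):
    """p1, p2 are (E, px, py, pz) in GeV; returns the pair invariant mass in GeV."""
    e  = p1[0] + p2[0]
    px = p1[1] + p2[1]
    py = p1[2] + p2[2]
    pz = p1[3] + p2[3]
    return math.sqrt(max(e**2 - px**2 - py**2 - pz**2, 0.0))

mu_plus  = (2.941,  1.545, 0.0, 2.500)
mu_minus = (2.941, -1.545, 0.0, 2.500)
print(invariant_mass(mu_plus, mu_minus))   # ~3.10 GeV, i.e. a J/psi candidate
```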
2. Retrieval of past and future positive and negative autobiographical experiences.
PubMed
García-Bajos, Elvira; Migueles, Malen
2017-09-01
We studied retrieval-induced forgetting for past or future autobiographical experiences. In the study phase, participants were given cues to remember past autobiographical experiences or to think about experiences that may occur in the future. In both conditions, half of the experiences were positive and half negative. In the retrieval-practice phase, for past and future experiences, participants retrieved either half of the positive or negative experiences using cued recall, or capitals of the world (control groups). Retrieval practice produced recall facilitation and enhanced memory for the practised positive and negative past and future experiences. While retrieval practice on positive experiences did not impair the recall of other positive experiences, we found inhibition for negative past and future experiences when participants practised negative experiences. Furthermore, retrieval practice on positive future experiences inhibited negative future experiences. These positivity biases for autobiographical memory may have practical implications for treatment of emotional disorders.
3. The E166 experiment: Development of an undulator-based polarized positron source for the international linear collider
Kovermann, J.; Stahl, A.; Mikhailichenko, A. A.; Scott, D.; Moortgat-Pick, G. A.; Gharibyan, V.; Pahl, P.; Põschl, R.; Schüler, K. P.; Laihem, K.; Riemann, S.; Schälicke, A.; Dollan, R.; Kolanoski, H.; Lohse, T.; Schweizer, T.; McDonald, K. T.; Batygin, Y.; Bharadwaj, V.; Bower, G.; Decker, F.-J.; Hast, C.; Iverson, R.; Sheppard, J. C.; Szalata, Z.; Walz, D.; Weidemann, A.; Alexander, G.; Reinherz-Aronis, E.; Berridge, S.; Bugg, W.; Efrimenko, Y.
2007-12-01
A longitudinally polarized positron beam is foreseen for the international linear collider (ILC). A proof-of-principle experiment has been performed in the final focus test beam at SLAC to demonstrate the production of polarized positrons for implementation at the ILC. The E166 experiment uses a 1 m long helical undulator in a 46.6 GeV electron beam to produce few-MeV photons with a high degree of circular polarization. These photons are then converted in a thin target to generate longitudinally polarized e+ and e-. The positron polarization is measured using a Compton transmission polarimeter. The data analysis has shown asymmetries in the expected vicinity of 3.4% and ~1% for photons and positrons, respectively, and the expected positron longitudinal polarization covers a range from 50% to 90%.
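In transmission polarimetry of this kind, the measured asymmetry between the two analyzer-magnetization states is converted to a polarization by dividing out an analyzing power obtained from simulation. The relation sketched below is generic; the counts and the analyzing power are placeholders, not E166 results.

```python
# Turning a transmission asymmetry into a polarization value (generic relation).

def polarization_from_asymmetry(n_plus, n_minus, analyzing_power):
    """n_plus/n_minus: transmitted counts for the two analyzer magnetization states."""
    delta = (n_plus - n_minus) / (n_plus + n_minus)
    return delta / analyzing_power

# A ~3.4% raw asymmetry with an assumed analyzing power of 0.043 -> ~79% polarization.
print(polarization_from_asymmetry(103400, 96600, analyzing_power=0.043))
```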
4. Towards Future RLVs: the USV Flight Experiments
Russo, Gennaro
2002-01-01
Future generations of Reusable Launch Vehicles (RLVs) need to be developed through extensive use of flight demonstration. The focus of the Italian USV Program is on flight demonstration of a specific set of technologies and on efficient, cost-effective operations rather than on full-scale vehicle system development of a production, mission-sized vehicle. As a consequence, the approach emphasizes sub-scale, unmanned, autonomous flying laboratories used to test technology advancements at reduced cost and risk. The USV Program has been identified based on the belief that, in the long run, space access and re-entry will be guaranteed by aviation-like vehicles (sometimes called aerospaceplanes). Among other, no less important needs, such vehicles will require innovation and maturation in three main areas: atmospheric re-entry, reusability, and hypersonic flight. USV thus includes technology developments along these three directions, up to their validation both on the ground and on board Flying Test Beds. Taking into account the experience gained over many years in the US, Japan and Europe, and assuming as a reference scenario the one currently considered most probable for next-generation RLVs, USV identified a two-stage-system experimental vehicle as the best compromise between vehicle performance, test objectives and program costs. This system is considered at CIRA as both the obvious flying complement to the available on-ground facilities (e.g. the 70 MW Plasma Wind Tunnel known as SCIROCCO) and the necessary system focus for the coherent development of specific technologies. The principal guidelines for the design of USV have been defined as: The first flight experiment is planned for summer 2003 and will consist of a Dropped Transonic Flight Test (DTFT). In preparation for it, some simple scaled flight experiments are planned for summer 2002. The paper will report on the results of these preliminary flight experiments as well as on the status of development of the entire
5. Muon Colliders and Neutrino Factories
SciTech Connect
Geer, Steve; /Fermilab
2009-11-01
Over the past decade, there has been significant progress in developing the concepts and technologies needed to produce, capture, and accelerate O(10^21) muons per year. These developments have paved the way for a new type of neutrino source (neutrino factory) and a new type of very high energy lepton-antilepton collider (muon collider). This article reviews the motivation, design, and research and development for future neutrino factories and muon colliders.
6. Muon Colliders and Neutrino Factories
SciTech Connect
Kaplan, Daniel M.
2015-05-29
Muon colliders and neutrino factories are attractive options for future facilities aimed at achieving the highest lepton-antilepton collision energies and precision measurements of Higgs boson and neutrino mixing matrix parameters. The facility performance and cost depend on how well a beam of muons can be cooled. Recent progress in muon cooling design studies and prototype tests nourishes the hope that such facilities could be built starting in the coming decade. The status of the key technologies and their various demonstration experiments is summarized. Prospects "post-P5" are also discussed.
7. The E166 experiment: Development of an Undulator-Based Polarized Positron Source for the International Linear Collider
SciTech Connect
Kovermann, J.; Stahl, A.; Mikhailichenko, A.A.; Scott, D.; Moortgat-Pick, G.A.; Gharibyan, V.; Pahl, P.; Poschl, R.; Schuler, K.P.; Laihem, K.; Riemann, S.; Schalicke, A.; Dollan, R.; Kolanoski, H.; Lohse, T.; Schweizer, T.; McDonald, K.T.; Batygin, Y.; Bharadwaj, V.; Bower, G.; Decker, F.J.; /SLAC /Tel Aviv U. /Tennessee U.
2011-11-14
A longitudinally polarized positron beam is foreseen for the international linear collider (ILC). A proof-of-principle experiment has been performed in the final focus test beam at SLAC to demonstrate the production of polarized positrons for implementation at the ILC. The E166 experiment uses a 1 m long helical undulator in a 46.6 GeV electron beam to produce few-MeV photons with a high degree of circular polarization. These photons are then converted in a thin target to generate longitudinally polarized e+ and e-. The positron polarization is measured using a Compton transmission polarimeter. The data analysis has shown asymmetries in the expected vicinity of 3.4% and ~1% for photons and positrons, respectively, and the expected positron longitudinal polarization covers a range from 50% to 90%. The full exploitation of the physics potential of an international linear collider (ILC) will require the development of polarized positron beams. Having both e+ and e- beams polarized will provide new insight into the structures of couplings and thus give access to physics beyond the standard model [1]. The concept for a polarized positron source is based on circularly polarized photon sources. These photons are then converted to longitudinally polarized e+ e- pairs. While an experiment at KEK [1a] uses Compton backscattering [2], the E166 experiment uses a helical undulator to produce polarized photons. An undulator-based positron source for the ILC has been proposed in [3,4]. The proposed scheme for an ILC positron source is illustrated in figure 1. In this scheme, a 150 GeV electron beam passes through a 120 m long helical undulator to produce an intense photon beam with a high degree of circular polarization. These photons are converted in a thin target to e+ e- pairs. The polarized positrons are then collected, pre-accelerated, sent to the damping ring and injected into the main linac. The E166 experiment is
8. Fourth Annual Large Hadron Collider Physics
The fourth annual Large Hadron Collider Physics (LHCP2016) conference will be held in Lund, Sweden, in the period of June 13-18, 2016. The conference is hosted by Lund University. The LHCP conference series emerged in 2013 from the successful fusion of two international conferences, the Physics at Large Hadron Collider Conference and the Hadron Collider Physics Symposium. The program will be devoted to a detailed review of the latest experimental and theoretical results on collider physics, particularly the first results of the LHC Run II, and discussions on further research directions within the high energy particle physics community, both in theory and experiment. The main goal of the conference is to provide intense and lively discussions between experimenters and theorists in such research areas as the Standard Model Physics and Beyond, the Higgs Boson, Supersymmetry, Heavy Quark Physics and Heavy Ion Physics, as well as to share recent progress in the high luminosity upgrades and future collider developments. Chairpersons: Gregorio Bernardi (LPNHE-Paris CNRS/IN2P3), Guenakh Mitselmakher (University of Florida (US)), Leif Lönnblad (Lund University (SE)), Torsten Akesson (Lund University (SE)). Editorial Board: Johan Bijnens (Lund University), Andreas Hoecker (CERN), Jim Olsen (Princeton University).
9. When Rubrics Collide: One Undergraduate Writing Tutor's Experience Negotiating Faculty and Institutional Assessments
ERIC Educational Resources Information Center
Martin, Kelli
2013-01-01
This article recounts one undergraduate writing tutor's experience helping a fellow peer navigate an institutional assessment rubric that seemed to contrast the assessment criteria provided by the student's instructor. This article presents a reflection on that experience, framed by Hutchings, Huber, and Ciccone's (2011) work on…
10. A search for $B^0_s$ oscillations at the Tevatron collider experiment D0
SciTech Connect
Krop, Dan N.
2007-04-01
We present a search for $B^0_s$ oscillations using semileptonic $B^0_s \to D_s\mu X$ ($D_s \to K^0_S K$) decays. The data were collected using the D0 detector from events produced in √s = 1.96 TeV proton-antiproton collisions at the Fermilab Tevatron. The Tevatron is currently the only place in the world that produces $B^0_s$ mesons and will be until early 2008 when the Large Hadron Collider begins operating at CERN. One of the vital ingredients for the search for $B^0_s$ oscillations is the determination of the flavor of the $B^0_s$ candidate ($B^0_s$ or $\bar{B}^0_s$) at the time of its production, called initial state flavor tagging. We develop a likelihood-based initial state flavor tagger that uses objects on the side of the event opposite to the reconstructed B meson candidate. To improve the performance of this flavor tagger, we have made it multidimensional so that it takes correlations between discriminants into account. This tagging is then certified by applying it to a sample of semimuonic B(0,+) decays and measuring the well-known oscillation frequency Δm_d. We obtain Δm_d = 0.486 ± 0.021 ps^{-1}, consistent with the world average. The tagging performance is characterized by the effective efficiency, εD^2 = (1.90 ± 0.41)%. We then turn to the search for $B^0_s$ oscillations in the above-named channel. A special two-dimensional mass fitting procedure is developed to separate kinematic reflections from signal events. Using this mass fitting procedure in an unbinned likelihood framework, we obtain a 95% C.L. limit of Δm_s > 1.10 ps^{-1} and a sensitivity of 1.92 ps^{-1}. This result is combined with other analyzed $B^0_s$ decay channels at D0 to obtain a combined 95% C.L. limit of Δm_s > 14.9 ps^{-1} and a sensitivity of 16.5 ps^{-1}. The corresponding log likelihood scan has a preferred value of
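For context on the quoted effective efficiency, the flavour-tagged mixing asymmetry in such searches takes the standard form (conventional notation, not specific to this thesis)

  A_mix(t) = [N_unmixed(t) − N_mixed(t)] / [N_unmixed(t) + N_mixed(t)] = D cos(Δm_s t),  with dilution D = 1 − 2w

for a mistag probability w; since the statistical significance of an oscillation signal scales as √(εD²) for tagging efficiency ε, the combination εD² is the figure of merit quoted for the tagger.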
11. Environmental futures research: experiences, approaches, and opportunities
Treesearch
David N., comp. Bengston
2012-01-01
These papers, presented in a special session at the International Symposium on Society and Resource Management in June 2011, explore the transdisciplinary field of futures research and its application to long-range environmental analysis, planning, and policy. Futures research began in the post-World War II era and has emerged as a mature research field. Although the...
12. Probing triple-W production and anomalous WWWW coupling at the CERN LHC and future TeV proton-proton collider
Wen, Yiwen; Qu, Huilin; Yang, Daneng; Yan, Qi-shu; Li, Qiang; Mao, Yajun
2015-03-01
Triple gauge boson production at the LHC can be used to test the robustness of the Standard Model and provide useful information for VBF di-boson scattering measurements. In particular, any deviations from the SM prediction would indicate possible new physics. In this paper we present a detailed Monte Carlo study on measuring W^±W^±W^∓ production in purely leptonic and semileptonic decays, and on probing anomalous quartic WWWW gauge couplings at the CERN LHC and a future hadron collider, with parton shower and detector simulation effects taken into account. Apart from a cut-based method, a multivariate boosted decision tree method has been exploited for possible improvement. For the leptonic decay channel, our results show that at the TeV pp collider with an integrated luminosity of 20(100)[3000] fb^{-1}, one can reach a significance of 0.4(1.2)[10] σ to observe SM W^±W^±W^∓ production. For the semileptonic decay channel, one can reach 0.5(2)[14] σ to observe SM W^±W^±W^∓ production. We also give constraints on the relevant dimension-8 anomalous WWWW coupling parameters.
13. When Worlds Collide: Identity, Culture and the Lived Experiences of Research When "Teaching-Led"
ERIC Educational Resources Information Center
Sharp, John G.; Hemmings, Brian; Kay, Russell; Callinan, Carol
2015-01-01
This article presents detailed findings from the qualitative or interpretive phase of a mixed-methods case study focusing on the professional identities and lived experiences of research among six lecturers working in different capacities across the field of education in a "teaching-led" higher education institution. Building upon the…
14. The development of colliders
SciTech Connect
Sessler, A.M.
1993-02-01
Don Kerst, Gersh Budker, and Bruno Touschek were the individuals, and the motivating force, which brought about the development of colliders, while the laboratories at which it happened were Stanford, MURA, the Cambridge Electron Accelerator, Orsay, Frascati, CERN, and Novosibirsk. These laboratories supported, during many years, this rather speculative activity. Of course, many hundreds of physicists contributed to the development of colliders but the men who started it, set it in the right direction, and forcefully made it happen, were Don, Gersh, and Bruno. Don was instrumental in the development of proton-proton colliders, while Bruno and Gersh spearheaded the development of electron-positron colliders. In this brief review of the history, I will sketch the development of the concepts, the experiments, and the technological developments which made possible the development of colliders. It may look as if the emphasis is on theoretical concepts, but that is really not the case, for in this field -- the physics of beams -- the theory and experiment go hand in hand; theoretical understanding and advances are almost always motivated by the need to explain experimental results or the desire to construct better experimental devices.
15. PHENIX Conceptual Design Report. An experiment to be performed at the Brookhaven National Laboratory Relativistic Heavy Ion Collider
SciTech Connect
Nagamiya, Shoji; Aronson, Samuel H.; Young, Glenn R.; Paffrath, Leo
1993-01-29
The PHENIX Conceptual Design Report (CDR) describes the detector design of the PHENIX experiment for Day-1 operation at the Relativistic Heavy Ion Collider (RHIC). The CDR presents the physics capabilities, technical details, cost estimate, construction schedule, funding profile, management structure, and possible upgrade paths of the PHENIX experiment. The primary goals of the PHENIX experiment are to detect the quark-gluon plasma (QGP) and to measure its properties. Many of the potential signatures for the QGP are measured as a function of a well-defined common variable to see if any or all of these signatures show a simultaneous anomaly due to the formation of the QGP. In addition, basic quantum chromodynamics phenomena, collision dynamics, and thermodynamic features of the initial states of the collision are studied. To achieve these goals, the PHENIX experiment measures lepton pairs (dielectrons and dimuons) to study various properties of vector mesons, such as the mass, the width, and the degree of yield suppression due to the formation of the QGP. The effect of thermal radiation on the continuum is studied in different regions of rapidity and mass. The e{mu} coincidence is measured to study charm production, and aids in understanding the shape of the continuum dilepton spectrum. Photons are measured to study direct emission of single photons and to study {pi}{sup 0} and {eta} production. Charged hadrons are identified to study the spectrum shape, production of antinuclei, the {phi} meson (via K{sup +}K{sup {minus}} decay), jets, and two-boson correlations. The measurements are made down to small cross sections to allow the study of high p{sub T} spectra, and J/{psi} and {Upsilon} production. The PHENIX collaboration consists of over 300 scientists, engineers, and graduate students from 43 institutions in 10 countries. This large international collaboration is supported by US resources and significant foreign resources.
16. Study the effect of beam energy spread and detector resolution on the search for Higgs boson decays to invisible particles at a future e^+e^- circular collider
Cerri, Olmo; de Gruttola, Michele; Pierini, Maurizio; Podo, Alessandro; Rolandi, Gigi
2017-02-01
We study the expected sensitivity to measure the branching ratio of Higgs boson decays to invisible particles at a future circular e^+e^- collider (FCC-ee) in the process e^+e^- → HZ with Z → ℓ^+ℓ^- (ℓ = e or μ), using an integrated luminosity of 3.5 ab^{-1} at a center-of-mass energy √{s} = 240 GeV. The impact of the energy spread of the FCC-ee beam and of the resolution in the reconstruction of the leptons is discussed. The minimum branching ratio for a 5σ observation after 3.5 ab^{-1} of data taking is 1.7 ± 0.1% (stat+syst). The branching ratio exclusion limit at 95% CL is 0.63 ± 0.22% (stat+syst).
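For orientation, analyses of this channel typically count invisible Higgs decays through the mass recoiling against the lepton pair (standard kinematics, not specific to this study):

  M_recoil² = s + M_ℓℓ² − 2√s (E_ℓ⁺ + E_ℓ⁻),

so the beam-energy spread (entering through √s) and the lepton momentum resolution (entering through E_ℓ and M_ℓℓ) both smear the recoil-mass peak used to select signal events.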
17. 3D integration of Geiger-mode avalanche photodiodes aimed to very high fill-factor pixels for future linear colliders
Vilella, E.; Alonso, O.; Diéguez, A.
2013-12-01
This paper presents an analysis of the maximum fill-factor achievable by a pixel detector of Geiger-mode avalanche photodiodes with the Chartered 130 nm/Tezzaron 3D process. The analysis shows that fill-factors between 66% and 96% can be obtained with different array architectures and a time-gated readout circuit of minimum area. The maximum fill-factor is achieved when the two-layer vertical stack is used to overlap the non-sensitive areas of one layer with the sensitive areas of the other one. Moreover, different sensor areas are used to further increase the fill-factor. A chip containing a pixel detector of Geiger-mode avalanche photodiodes aimed at future linear colliders has been designed with the Chartered 130 nm/Tezzaron 3D process to increase the fill-factor.
18. The development of an annular-beam, high power free-electron maser for future linear colliders
SciTech Connect
Fazio, M.V.; Carlsten, B.E.; Earley, L.M.; Fortgang, C.M.; Haddock, P.C.; Haynes, W.B.
1996-09-01
Work is under way to develop a 17 GHz free electron maser (FEM) for producing a 500 MW output pulse with a phase stability appropriate for linear collider applications. We plan to use a 500 keV, 5 kV, 6 cm diameter annular electron beam to excite a TM{sub 02} mode Raman FEM amplifier in a corrugated cylindrical waveguide. The annular beam will run close to the interaction device walls to reduce the power density in the fields, and to greatly reduce the kinetic energy loss caused by beam potential depression associated with the space charge, which is a significant advantage in comparison with conventional solid beam microwave tubes at the same beam current. A key advantage of the annular beam is that the reduced plasma wave number can be tuned to achieve phase stability for an arbitrary correlation of interaction strength with beam velocity. It should be noted that this technique for improving the phase stability of an FEM is not possible with a solid beam klystron. The annular beam FEM provides the opportunity to extend the output power of sources in the 17 GHz regime by well over an order of magnitude with enhanced phase stability. The design and experimental status are discussed.
19. The Pixel Detector of the ATLAS Experiment for the Run 2 at the Large Hadron Collider
Mandelli, B.; ATLAS Collaboration
2016-04-01
The Pixel Detector of the ATLAS experiment has shown excellent performance during the whole Run 1 of the LHC. Taking advantage of the long shutdown, the detector was extracted from the experiment and brought to the surface, to equip it with new service quarter panels, to repair modules and to ease installation of the Insertable B-Layer (IBL). The IBL is a fourth layer of pixel detectors, and was installed in May 2014 between the existing Pixel Detector and a new smaller radius beam-pipe. To cope with the high radiation and pixel occupancy due to the proximity to the interaction point, a new read-out chip and two different silicon sensor technologies (planar and 3D) have been developed. Furthermore, the physics performance will be improved through the reduction of the pixel size while, targeting a low material budget, a new mechanical support using lightweight staves and a CO2 based cooling system have been adopted. The IBL construction and installation in the ATLAS experiment has been completed very successfully. The IBL qualification has shown outstanding detector performance with less than 0.09% of bad pixels. The final commissioning is now on-going and the ATLAS Pixel Detector is ready to join the LHC Run 2 with an improved configuration and a new pixel layer.
20. An experiment of X-ray photon-photon elastic scattering with a Laue-case beam collider
Yamaji, T.; Inada, T.; Yamazaki, T.; Namba, T.; Asai, S.; Kobayashi, T.; Tamasaku, K.; Tanaka, Y.; Inubushi, Y.; Sawada, K.; Yabashi, M.; Ishikawa, T.
2016-12-01
We report a search for photon-photon elastic scattering in vacuum in the X-ray region at a center-of-mass energy of ω_cms = 6.5 keV, for which the QED cross section is σ_QED = 2.5 × 10^{-47} m^2. An X-ray beam provided by the SACLA X-ray Free Electron Laser is split and the two beamlets are made to collide at right angles, with a total integrated luminosity of (1.24 ± 0.08) × 10^{28} m^{-2}. No signal X rays from elastic scattering that satisfy the correlation between energy and scattering angle were detected. We obtain a 95% C.L. upper limit on the scattering cross section of 1.9 × 10^{-27} m^2 at ω_cms = 6.5 keV. This is the lowest upper limit obtained so far by keV experiments.
1. Fourth workshop on experiments and detectors for a relativistic heavy ion collider
SciTech Connect
Fatyga, M.; Moskowitz, B.
1990-01-01
This report contains papers on the following topics: physics at RHIC; flavor flow from quark-gluon plasma; space-time quark-gluon cascade; jets in relativistic heavy ion collisions; parton distributions in hard nuclear collisions; experimental working groups, two-arm electron/photon spectrometer collaboration; total and elastic pp cross sections; a 4{pi} tracking TPC magnetic spectrometer; hadron spectroscopy; efficiency and background simulations for J/{psi} detection in the RHIC dimuon experiment; the collision regions and beam crossing geometries; Monte Carlo simulations of interactions and detectors; proton-nucleus interactions; the physics of strong electromagnetic fields in collisions of relativistic heavy ions; a real time expert system for experimental high energy/nuclear physics; the development of silicon multiplicity detectors; a pad readout detector for CRID/tracking; RHIC TPC R&D progress and goals; development of analog memories for RHIC detector front-end electronic systems; calorimeter/absorber optimization for a RHIC dimuon experiment; construction of a highly segmented high resolution TOF system; progress report on a fast, particle-identifying trigger based on ring-imaging and highly integrated electronics for a TPC detector.
2. Characterising dark matter searches at colliders and direct detection experiments: Vector mediators
DOE PAGES
Buchmueller, Oliver; Dolan, Matthew J.; Malik, Sarah A.; ...
2015-01-09
We introduce a Minimal Simplified Dark Matter (MSDM) framework to quantitatively characterise dark matter (DM) searches at the LHC. We study two MSDM models where the DM is a Dirac fermion which interacts with a vector and axial-vector mediator. The models are characterised by four parameters: mDM, Mmed, gDM and gq, the DM and mediator masses, and the mediator couplings to DM and quarks respectively. The MSDM models accurately capture the full event kinematics, and the dependence on all masses and couplings can be systematically studied. The interpretation of mono-jet searches in this framework can be used to establish an equal-footing comparison with direct detection experiments. For theories with a vector mediator, LHC mono-jet searches possess better sensitivity than direct detection searches for light DM masses (≲5 GeV). For axial-vector mediators, LHC and direct detection searches generally probe orthogonal directions in the parameter space. We explore the projected limits of these searches from the ultimate reach of the LHC and multi-ton xenon direct detection experiments, and find that the complementarity of the searches remains. In conclusion, we provide a comparison of limits in the MSDM and effective field theory (EFT) frameworks to highlight the deficiencies of the EFT framework, particularly when exploring the complementarity of mono-jet and direct detection searches.
3. Characterising dark matter searches at colliders and direct detection experiments: Vector mediators
SciTech Connect
Buchmueller, Oliver; Dolan, Matthew J.; Malik, Sarah A.; McCabe, Christopher
2015-01-09
We introduce a Minimal Simplified Dark Matter (MSDM) framework to quantitatively characterise dark matter (DM) searches at the LHC. We study two MSDM models where the DM is a Dirac fermion which interacts with a vector and axial-vector mediator. The models are characterised by four parameters: mDM, Mmed , gDM and gq, the DM and mediator masses, and the mediator couplings to DM and quarks respectively. The MSDM models accurately capture the full event kinematics, and the dependence on all masses and couplings can be systematically studied. The interpretation of mono-jet searches in this framework can be used to establish an equal-footing comparison with direct detection experiments. For theories with a vector mediator, LHC mono-jet searches possess better sensitivity than direct detection searches for light DM masses (≲5 GeV). For axial-vector mediators, LHC and direct detection searches generally probe orthogonal directions in the parameter space. We explore the projected limits of these searches from the ultimate reach of the LHC and multi-ton xenon direct detection experiments, and find that the complementarity of the searches remains. In conclusion, we provide a comparison of limits in the MSDM and effective field theory (EFT) frameworks to highlight the deficiencies of the EFT framework, particularly when exploring the complementarity of mono-jet and direct detection searches.
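For context, in the heavy-mediator limit the four MSDM parameters collapse onto the usual EFT contact-operator scale (standard relation, not specific to these papers):

  1/Λ² = g_q g_DM / M_med²,  i.e.  Λ = M_med / √(g_q g_DM),

which is why the EFT description, and with it the naive collider/direct-detection comparison, breaks down once the momentum transfer at the LHC becomes comparable to M_med.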
4. Results from colliding magnetized plasma jet experiments executed at the Trident laser facility
Manuel, M. J.-E.; Rasmus, A. M.; Kurnaz, C. C.; Klein, S. R.; Davis, J. S.; Drake, R. P.; Montgomery, D. S.; Hsu, S. C.; Adams, C. S.; Pollock, B. B.
2015-11-01
The interaction of high-velocity plasma flows in a background magnetic field has applications in pulsed-power and fusion schemes, as well as astrophysical environments, such as accretion systems and stellar mass ejections into the magnetosphere. Experiments recently executed at the Trident Laser Facility at the Los Alamos National Laboratory investigated the effects of an expanding aluminum plasma flow into a uniform 4.5-Tesla magnetic field created using a solenoid designed and manufactured at the University of Michigan. Opposing-target experiments demonstrate interesting collisional behavior between the two magnetized flows. Preliminary interferometry and Faraday rotation measurements will be presented and discussed. This work is funded by the U.S Department of Energy, through the NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas, grant number DE-NA0001840. Support for this work was provided by NASA through Einstein Postdoctoral Fellowship grant number PF3-140111 awarded by the Chandra X-ray Center, which is operated by the Astrophysical Observatory for NASA under contract NAS8-03060.
5. Limits of scintillation materials for future experiments at high luminosity LHC and FCC
Korjik, M.
2017-08-01
This paper gives a summary of the systematic study of radiation damage phenomena in scintillation materials caused by γ-quanta and energetic hadrons, the main contributors to the irradiation environment in future hadron collider experiments.
6. Galactic scale gas flows in colliding galaxies: 3-dimensional, N-body/hydrodynamics experiments
NASA Technical Reports Server (NTRS)
Lamb, Susan A.; Gerber, Richard A.; Balsara, Dinshaw S.
1994-01-01
We present some results from three dimensional computer simulations of collisions between models of equal mass galaxies, one of which is a rotating, disk galaxy containing both gas and stars and the other is an elliptical containing stars only. We use fully self consistent models in which the halo mass is 2.5 times that of the disk. In the experiments we have varied the impact parameter between zero (head on) and 0.9R (where R is the radius of the disk), for impacts perpendicular to the disk plane. The calculations were performed on a Cray 2 computer using a combined N-body/smooth particle hydrodynamics (SPH) program. The results show the development of complicated flows and shock structures in the direction perpendicular to the plane of the disk and the propagation outwards of a density wave in both the stars and the gas. The collisional nature of the gas results in a sharper ring than obtained for the star particles, and the development of high volume densities and shocks.
7. Topics in Collider Physics
SciTech Connect
Petriello, Frank J
2003-08-27
It is an exciting time for high energy physics. Several experiments are currently exploring uncharted terrain; the next generation of colliders will begin operation in the coming decade. These experiments will together help us understand some of the most puzzling issues in particle physics: the mechanism of electroweak symmetry breaking and the generation of flavor physics. It is clear that the primary goal of theoretical particle physics in the near future is to support and guide this experimental program. These tasks can be accomplished in two ways: by developing experimental signatures for new models which address outstanding problems, and by improving Standard Model predictions for precision observables. We present here several results which advance both of these goals. We begin with a study of non-commutative field theories. It has been suggested that TeV-scale non-commutativity could explain the origin of CP violation in the SM. We identify several distinct signatures of non-commutativity in high energy processes. We also demonstrate the one-loop quantum consistency of a simple spontaneously broken non-commutative U(1) theory; this result is an important preface to any attempt to embed the SM within a non-commutative framework. We then investigate the phenomenology of extra-dimensional theories, which have been suggested recently as solutions to the hierarchy problem of particle physics. We first examine the implications of allowing SM fields to propagate in the full five-dimensional spacetime of the Randall-Sundrum model, which solves the hierarchy problem via an exponential ''warping'' of the Planck scale induced by a five-dimensional anti de-Sitter geometry. In an alternative extra-dimensional theory, in which all SM fields are permitted to propagate in flat extra dimensions, we show that properties of the Higgs boson are significantly modified. Finally, we discuss the next-to-next-to leading order QCD corrections to the dilepton rapidity distribution in
8. Accelerator Challenges and Opportunities for Future Neutrino Experiments
SciTech Connect
Zisman, Michael S
2010-12-24
There are three types of future neutrino facilities currently under study, one based on decays of stored beta-unstable ion beams ("Beta Beams"), one based on decays of stored muon beams ("Neutrino Factory"), and one based on the decays of an intense pion beam ("Superbeam"). In this paper we discuss the challenges each design team must face and the R&D being carried out to turn those challenges into technical opportunities. A new program, the Muon Accelerator Program, has begun in the U.S. to carry out the R&D for muon-based facilities, including both the Neutrino Factory and, as its ultimate goal, a Muon Collider. The goals of this program will be briefly described.
9. Developing future plant experiments for spaceflight
NASA Technical Reports Server (NTRS)
Dreschel, T. W.; Brown, C. S.; Hinkle, C. R.; Sager, J. C.; Knott, W. M.
1990-01-01
Experiments are described which were designed to support the construction and use of clinostats for studies of microgravity effects and for measuring photosynthesis and respiration in plants in clinostat experiments. Particular attention is given to the development and testing of a clinostat for rotating the Space Shuttle Mid-Deck Locker Plant Growth Unit (PGU), a sealed chamber for plant growth and gas exchange measurements on a clinostat, and a porous tube plant nutrient delivery system for the PGU. Design diagrams of these items are presented together with the results of tests.
11. When worlds collide: medicine, business, the Affordable Care Act and the future of health care in the U.S.
PubMed
Wicks, Andrew C; Keevil, Adrian A C
2014-01-01
The dialogue about the future of health care in the US has been impeded by flawed conceptions about medicine and business. The present paper re-examines some of the underlying assumptions about both medicine and business, and uses more nuanced readings of both terms to frame debates about the ACA and the emerging health care environment.
12. Search for the Production of Gluinos and Squarks with the CDF II Experiment at the Tevatron Collider
SciTech Connect
De Lorenzo, Gianluca
2010-05-19
sbottom decays exclusively as $\tilde{b}_1 \to b\tilde{\chi}^0_1$. The expected signal for direct sbottom pair production is characterized by the presence of two jets of hadrons from the hadronization of the bottom quarks and missing transverse energy from the two LSPs in the final state. The events are selected with large missing transverse energy and two energetic jets in the final state, and at least one jet is required to be associated with a b quark. The measurements are in good agreement with SM predictions for backgrounds. The results are translated into 95% CL exclusion limits on production cross sections and sbottom and neutralino masses in the given MSSM scenario. Cross sections down to 0.1 pb are excluded for the sbottom mass range considered. Sbottom masses up to 230 GeV/c^2 are excluded at 95% CL for neutralino masses below 70 GeV/c^2. This analysis increases the previous CDF limit by more than 40 GeV/c^2. The sensitivity of both the inclusive and the exclusive search is dominated by systematic effects and the results of the two analyses can be considered as conclusive for CDF Run II. With the new energy frontier of the newly commissioned Large Hadron Collider in Geneva, the experience from the Tevatron will be of crucial importance in developing effective strategies to search for SUSY in the next era of particle physics experiments.
13. Diamond sensors for future high energy experiments
Bachmair, Felix
2016-09-01
With the planned upgrade of the LHC to the High-Luminosity-LHC [1], the general purpose experiments ATLAS and CMS are planning to upgrade their innermost tracking layers with more radiation tolerant technologies. Chemical Vapor Deposition (CVD) diamond is one such technology. CVD diamond sensors are an established technology as beam condition monitors in the highest radiation areas of all LHC experiments. The RD42 collaboration at CERN is leading the effort to use CVD diamond as a material for tracking detectors operating in extreme radiation environments. An overview of the latest developments from RD42 is presented, including the present status of diamond sensor production, a study of pulse height dependencies on incident particle flux and the development of 3D diamond sensors.
14. Simulation of ionization effects for high-density positron drivers in future plasma wakefield experiments
SciTech Connect
Bruhwiler, D.L.; Dimitrov, D.A.; Cary, J.R.; Esarey, E.; Leemans, W.P.
2003-05-12
The plasma wakefield accelerator (PWFA) concept has been proposed as a potential energy doubler for present or future electron-positron colliders. Recent particle-in-cell (PIC) simulations have shown that the self-fields of the required electron beam driver can tunnel ionize neutral Li, leading to plasma wake dynamics differing significantly from that of a preionized plasma. It has also been shown, for the case of a preionized plasma, that the plasma wake of a positron driver differs strongly from that of an electron driver. We will present new PIC simulations, using the OOPIC code, showing the effects of tunneling ionization on the plasma wake generated by high-density positron drivers. The results will be compared to previous work on electron drivers with tunneling ionization and positron drivers without ionization. Parameters relevant to the energy doubler and the upcoming E-164x experiment at the Stanford Linear Accelerator Center will be considered.
15. FermiGrid - experience and future plans
SciTech Connect
Chadwick, K.; Berman, E.; Canal, P.; Hesselroth, T.; Garzoglio, G.; Levshina, T.; Sergeev, V.; Sfiligoi, I.; Timm, S.; Yocum, D.; /Fermilab
2007-09-01
Fermilab supports a scientific program that includes experiments and scientists located across the globe. In order to better serve this community, Fermilab has placed its production computer resources in a Campus Grid infrastructure called 'FermiGrid'. The FermiGrid infrastructure allows the large experiments at Fermilab to have priority access to their own resources, enables sharing of these resources in an opportunistic fashion, and movement of work (jobs, data) between the Campus Grid and National Grids such as Open Science Grid and the WLCG. FermiGrid resources support multiple Virtual Organizations (VOs), including VOs from the Open Science Grid (OSG), EGEE and the Worldwide LHC Computing Grid Collaboration (WLCG). Fermilab also makes leading contributions to the Open Science Grid in the areas of accounting, batch computing, grid security, job management, resource selection, site infrastructure, storage management, and VO services. Through the FermiGrid interfaces, authenticated and authorized VOs and individuals may access our core grid services, the 10,000+ Fermilab resident CPUs, near-petabyte (including CMS) online disk pools and the multi-petabyte Fermilab Mass Storage System. These core grid services include a site wide Globus gatekeeper, VO management services for several VOs, Fermilab site authorization services, grid user mapping services, as well as job accounting and monitoring, resource selection and data movement services. Access to these services is via standard and well-supported grid interfaces. We will report on the user experience of using the FermiGrid campus infrastructure interfaced to a national cyberinfrastructure--the successes and the problems.
16. FermiGrid—experience and future plans
Chadwick, K.; Berman, E.; Canal, P.; Hesselroth, T.; Garzoglio, G.; Levshina, T.; Sergeev, V.; Sfiligoi, I.; Sharma, N.; Timm, S.; Yocum, D. R.
2008-07-01
Fermilab supports a scientific program that includes experiments and scientists located across the globe. In order to better serve this community, Fermilab has placed its production computer resources in a Campus Grid infrastructure called 'FermiGrid'. The FermiGrid infrastructure allows the large experiments at Fermilab to have priority access to their own resources, enables sharing of these resources in an opportunistic fashion, and movement of work (jobs, data) between the Campus Grid and National Grids such as Open Science Grid (OSG) and the Worldwide LHC Computing Grid Collaboration (WLCG). FermiGrid resources support multiple Virtual Organizations (VOs), including VOs from the OSG, EGEE, and the WLCG. Fermilab also makes leading contributions to the Open Science Grid in the areas of accounting, batch computing, grid security, job management, resource selection, site infrastructure, storage management, and VO services. Through the FermiGrid interfaces, authenticated and authorized VOs and individuals may access our core grid services, the 10,000+ Fermilab resident CPUs, near-petabyte (including CMS) online disk pools and the multi-petabyte Fermilab Mass Storage System. These core grid services include a site wide Globus gatekeeper, VO management services for several VOs, Fermilab site authorization services, grid user mapping services, as well as job accounting and monitoring, resource selection and data movement services. Access to these services is via standard and well-supported grid interfaces. We will report on the user experience of using the FermiGrid campus infrastructure interfaced to a national cyberinfrastructure - the successes and the problems.
17. Future Experiments with HADES at FAIR
SciTech Connect
Tlusty, P.
2010-12-28
The Dielectron Spectrometer HADES installed at GSI Darmstadt recently provided new intriguing results on production of electron pairs and strangeness from elementary and nucleus-nucleus collisions. The obtained data call for further systematic investigations of heavier systems and/or at higher energies.For this purpose, the HADES spectrometer has been upgraded with a high-granularity RPC time-of-flight wall. In addition, a completely new detector read-out and data-acquisition system has been implemented which will greatly improve our data-taking rates. We describe the current status of the HADES spectrometer and our plans for experiments on heavy system collisions at energies up to 10 A GeV on the upcoming FAIR facility.
18. Study the radiation damage effects in Si microstrip detectors for future HEP experiments
Lalwani, Kavita; Jain, Geetika; Dalal, Ranjeet; Ranjan, Kirti; Bhardwaj, Ashutosh
2016-07-01
Silicon (Si) detectors are playing a key role in High Energy Physics (HEP) experiments due to their superior tracking capabilities. In future HEP experiments, like the upgrade of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC), CERN, the silicon tracking detectors will be operated in a very intense radiation environment. This leads to both surface and bulk damage in Si detectors, which in turn affects their operating performance. It is important to complement measurements of irradiated Si strip detectors with device simulation, which helps in understanding the device behavior and optimizing the design parameters needed for the future Si tracking system. An important ingredient of the device simulation is to develop a radiation damage model incorporating both bulk and surface damage. In this work, a simplified two-trap model is incorporated in the device simulation to describe type inversion. Further, an extensive simulation of the effective doping density as well as the electric field profile is carried out at different temperatures for various fluences.
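As a reminder of why the effective doping density matters operationally, the full-depletion voltage of a planar sensor of thickness d follows the standard relation (generic formula, not specific to this work)

  V_dep ≈ q |N_eff| d² / (2 ε ε₀),

with q the elementary charge and ε ≈ 11.9 the relative permittivity of silicon, so radiation-induced changes of N_eff (including type inversion) translate directly into the bias voltage needed to operate the detector.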
19. Study of the performance of a compact sandwich calorimeter for the instrumentation of the very forward region of a future linear collider detector
Ghenescu, V.; Benhammou, Y.
2017-02-01
The FCAL collaboration is preparing large scale prototypes of special calorimeters to be used in the very forward region at a future linear electron positron collider for a precise and fast luminosity measurement and beam-tuning. These calorimeters are designed as sensor-tungsten calorimeters with very thin sensor planes to keep the Moliere radius small and dedicated FE electronics to match the timing and dynamic range requirements. A partially instrumented prototype was investigated in the CERN PS T9 beam in 2014 and at the DESY-II Synchrotron in 2015. It was operated in a mixed particle beam (electrons, muons and hadrons) of 5 GeV from PS facilities and with secondary electrons of 5 GeV energy from DESY-II. The results demonstrated a very good performance of the full readout chain. The high statistics data were used to study the response to different particles, perform sensor alignment and measure the longitudinal shower development in the sandwich. In addition, Geant4 MC simulations were done, and compared to the data.
20. Crab cavities: Past, present, and future of a challenging device
SciTech Connect
Wu, Q.
2015-05-03
In two-ring facilities operating with a crossing-angle collision scheme, luminosity can be limited by incomplete overlap of the colliding bunches. Crab cavities are then introduced to restore effectively head-on collisions by deflecting the head and tail of each bunch in opposite directions. An increase in luminosity was demonstrated at KEKB with global crab crossing, while the Large Hadron Collider (LHC) at CERN is currently designing local crab crossing for the Hi-Lumi upgrade. Future colliders may investigate both approaches. In this paper, we review the challenges in the technology and the implementation of crab cavities, while discussing experience in earlier colliders, ongoing R&D, and proposed implementations for future facilities, such as HiLumi-LHC, CERN's compact linear collider (CLIC), the international linear collider (ILC), and the electron-ion collider under design at BNL (eRHIC).
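As background on the loss that crab cavities recover, the geometric luminosity reduction from a full crossing angle θ_c is commonly written (standard expression, not specific to this paper)

  L/L₀ = 1 / √(1 + φ²),  with Piwinski angle φ = (σ_z/σ_x) tan(θ_c/2) ≈ σ_z θ_c / (2σ_x),

so long bunches and small transverse beam sizes, as foreseen for the HL-LHC, make the penalty large and the crab-crossing gain correspondingly significant.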
1. Stable massive particles at colliders
SciTech Connect
Fairbairn, M.; Kraan, A.C.; Milstead, D.A.; Sjostrand, T.; Skands, P.; Sloan, T.; /Lancaster U.
2006-11-01
We review the theoretical motivations and experimental status of searches for stable massive particles (SMPs) which could be sufficiently long-lived as to be directly detected at collider experiments. The discovery of such particles would address a number of important questions in modern physics including the origin and composition of dark matter in the universe and the unification of the fundamental forces. This review describes the techniques used in SMP-searches at collider experiments and the limits so far obtained on the production of SMPs which possess various colour, electric and magnetic charge quantum numbers. We also describe theoretical scenarios which predict SMPs, the phenomenology needed to model their production at colliders and interactions with matter. In addition, the interplay between collider searches and open questions in cosmology such as dark matter composition are addressed.
2. Children's Predictions of Future Perceptual Experiences: Temporal Reasoning and Phenomenology
ERIC Educational Resources Information Center
Burns, Patrick; Russell, James
2016-01-01
We investigated the development and cognitive correlates of envisioning future experiences in 3.5- to 6.5-year old children across 2 experiments, both of which involved toy trains traveling along a track. In the first, children were asked to predict the direction of train travel and color of train side, as it would be seen through an arch.…
4. Emerging Trends in Teacher Preparation: The Future of Field Experiences.
ERIC Educational Resources Information Center
Slick, Gloria Appelt, Ed.
This is the fourth in a series of four books presenting a variety of field experience program models and philosophies that drive the programs provided to preservice teachers during their undergraduate teacher preparation. This book focuses on critical issues facing teaching education in the future, in particular field experiences. Major themes…
5. Colliding pulse injection experiments in non-collinear geometry for controlled laser plasma wakefield acceleration of electrons
Toth, Csaba; Nakamura, K.; Geddes, C.; Michel, P.; Schroeder, C.; Esarey, E.; Leemans, W.
2006-10-01
A method for controlled injection of electrons into a plasma wakefield relying on colliding laser pulses [1] was proposed a decade ago to produce high quality relativistic electron beams with energy spread below 1% and normalized emittances < 1 micron from a laser wakefield accelerator (LWFA). The original idea uses three pulses in which one pulse excites the plasma wake and a trailing laser pulse collides with a counterpropagating one to form a beat pattern that boosts background electrons to catch the plasma wave. Another, two-beam off-axis injection method [2] with crossing angles varying from 180 to 90 degrees avoids having optical elements on the path of the electron beam and has been studied at the LOASIS facility of LBNL as a viable method for laser triggered injection. It allows low dark current operation with controllable final beam energy and low energy spread. Here, we report on progress in optical injection of electrons via the two-beam non-collinear colliding pulse scheme using multi-terawatt Ti:Sapphire laser beams (45 fs, 100s of mJ) focused onto a hydrogen gas plume. Experimental results indicate that electron beam properties are affected by the second beam. *This work is supported by DoE under contract DE-AC02-05CH11231. [1] E. Esarey, et al, Phys. Rev. Lett 79, 2682 (1997) [2] G. Fubiani, Phys. Rev. E 70, 016402 (2004)
6. Linear Collider Physics Resource Book Snowmass 2001
SciTech Connect
Ronan , M.T.
2001-06-01
The American particle physics community can look forward to a well-conceived and vital program of experimentation for the next ten years, using both colliders and fixed target beams to study a wide variety of pressing questions. Beyond 2010, these programs will be reaching the end of their expected lives. The CERN LHC will provide an experimental program of the first importance. But beyond the LHC, the American community needs a coherent plan. The Snowmass 2001 Workshop and the deliberations of the HEPAP subpanel offer a rare opportunity to engage the full community in planning our future for the next decade or more. A major accelerator project requires a decade from the beginning of an engineering design to the receipt of the first data. So it is now time to decide whether to begin a new accelerator project that will operate in the years soon after 2010. We believe that the world high-energy physics community needs such a project. With the great promise of discovery in physics at the next energy scale, and with the opportunity for the uncovering of profound insights, we cannot allow our field to contract to a single experimental program at a single laboratory in the world. We believe that an e{sup +}e{sup -} linear collider is an excellent choice for the next major project in high-energy physics. Applying experimental techniques very different from those used at hadron colliders, an e{sup +}e{sup -} linear collider will allow us to build on the discoveries made at the Tevatron and the LHC, and to add a level of precision and clarity that will be necessary to understand the physics of the next energy scale. It is not necessary to anticipate specific results from the hadron collider programs to argue for constructing an e{sup +}e{sup -} linear collider; in any scenario that is now discussed, physics will benefit from the new information that e{sup +}e{sup -} experiments can provide. This last point merits further emphasis. If a new accelerator could be designed and
7. Strong WW scattering physics: A comparative study for the LHC, NLC and a Muon Collider
SciTech Connect
Han, Tao
1997-04-01
We discuss the model independent parameterization for a strongly interacting electroweak sector. Phenomenological studies are made to probe such a sector for future colliders such as the LHC, e{sup +}e{sup -} Linear collider and a muon collider.
8. International Workshop on Linear Colliders 2010
SciTech Connect
2010-10-25
IWLC2010 International Workshop on Linear Colliders 2010. ECFA-CLIC-ILC joint meeting: Monday 18 October - Friday 22 October 2010. Venue: CERN and CICG (International Conference Centre Geneva, Switzerland). This year, the International Workshop on Linear Colliders organized by the European Committee for Future Accelerators (ECFA) will study the physics, detectors and accelerator complex of a linear collider, covering both CLIC and ILC options. IWLC2010 is hosted by CERN.
9. B physics at hadron colliders
SciTech Connect
Butler, J.N.; /Fermilab
2005-09-01
This paper discusses the physics opportunity and challenges for doing high precision B physics experiments at hadron colliders. It describes how these challenges have been addressed by the two currently operating experiments, CDF and D0, and how they are addressed by three experiments, ATLAS, CMS, and LHCb, at the LHC.
10. Scrutinizing the Higgs quartic coupling at a future 100 TeV proton-proton collider with taus and b-jets
Fuks, Benjamin; Kim, Jeong Han; Lee, Seung J.
2017-08-01
The Higgs potential is an unexplored territory in which electroweak symmetry breaking is triggered, and it is moreover directly related to the nature of the electroweak phase transition. Measuring the Higgs boson cubic and quartic couplings, or equivalently getting information on the exact shape of the Higgs potential, is therefore an essential task. However, direct measurements beyond the cubic self-interaction of the Higgs boson constitute a huge challenge, even for a future proton-proton collider expected to operate at a center-of-mass energy of 100 TeV. We present a novel approach to extract model-independent constraints on the triple and quartic Higgs self-couplings by investigating triple Higgs-boson hadroproduction at a center-of-mass energy of 100 TeV, focusing on the ττbb̄bb̄ channel that was previously overlooked due to a supposedly too large background. Transverse variables such as mT2 and a boosted configuration ensure a high signal sensitivity. We derive the luminosities that would be required to constrain given deviations from the Standard Model in the Higgs self-interactions, showing for instance that a 2σ sensitivity could be achieved for an integrated luminosity of 30 ab^{-1} when Standard Model properties are assumed. With the prospects of combining these findings with other triple-Higgs search channels, the Standard Model Higgs quartic coupling could in principle be reached with a significance beyond the 3σ level.
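For reference, the couplings in question parameterize the potential expanded around the vacuum expectation value v ≈ 246 GeV (standard notation, not specific to this paper):

  V(h) ⊃ (1/2) m_h² h² + λ₃ v h³ + (1/4) λ₄ h⁴,  with λ₃ = λ₄ = m_h²/(2v²) ≈ 0.13 in the Standard Model;

double-Higgs production is primarily sensitive to λ₃, while triple-Higgs production as studied here gives the most direct handle on λ₄.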
11. Relativistic klystron research for linear colliders
SciTech Connect
Allen, M.A.; Callin, R.S.; Deruyter, H.; Eppley, K.R.; Fant, K.S.; Fowkes, W.R.; Herrmannsfeldt, W.B.; Higo, T.; Hoag, H.A.; Koontz, R.F.
1988-09-01
Relativistic klystrons are being developed as a power source for high gradient accelerator applications which include large linear electron-positron colliders, compact accelerators, and FEL sources. We have attained 200 MW peak power at 11.4 GHz from a relativistic klystron, and 140 MV/m longitudinal gradient in a short 11.4 GHz accelerator section. We report here on the design of our relativistic klystrons, the results of our experiments so far, and some of our plans for the near future. 5 refs., 9 figs., 1 tab.
12. Electron Ion Collider transverse spin physics
SciTech Connect
Prokudin, Alexei
2011-07-01
The Electron Ion Collider is a future high energy facility for studies of the structure of the nucleon. Mapping the three-dimensional parton structure is one of the main goals of the EIC. In momentum space, Transverse Momentum Dependent distributions (TMDs) are the key ingredients to map such a structure. At leading twist, the spin structure of a spin-1/2 hadron can be described by 8 TMDs. Experimentally these functions can be studied in polarised SIDIS experiments. We discuss the Sivers distribution function, which describes the distribution of unpolarised quarks in a transversely polarised nucleon, and transversity, which measures the distribution of transversely polarised quarks in a transversely polarised nucleon.
13. Electron Ion Collider transverse spin physics
SciTech Connect
Prokudin, Alexei
2011-07-15
The Electron Ion Collider is a future high energy facility for studies of the structure of the nucleon. Mapping the three-dimensional parton structure is one of the main goals of the EIC. In momentum space, Transverse Momentum Dependent distributions (TMDs) are the key ingredients to map such a structure. At leading twist, the spin structure of a spin-1/2 hadron can be described by 8 TMDs. Experimentally these functions can be studied in polarised SIDIS experiments. We discuss the Sivers distribution function, which describes the distribution of unpolarised quarks in a transversely polarised nucleon, and transversity, which measures the distribution of transversely polarised quarks in a transversely polarised nucleon.
14. Multiple Parton Interactions in p$bar{p}$ Collisions in D0 Experiment at the Tevatron Collider
SciTech Connect
Golovanov, Georgy
2016-01-01
The thesis is devoted to the study of processes with multiple parton interactions (MPI) in ppbar collisions collected by the D0 detector at the Fermilab Tevatron collider at sqrt(s) = 1.96 TeV. The study includes measurements of the MPI event fraction and the effective cross section, a process-independent parameter related to the effective interaction region inside the nucleon. The measurements are done using events with a photon and three hadronic jets in the final state. The measured effective cross section is used to estimate the background from MPI for WH production at the Tevatron energy.
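For orientation, the effective cross section mentioned here is the normalization in the standard double-parton-scattering ansatz (generic formula, not specific to this thesis)

  σ_DPS^{(A,B)} = (m/2) σ_A σ_B / σ_eff,  with m = 1 for indistinguishable and m = 2 for distinguishable processes A and B,

so a smaller measured σ_eff implies a larger MPI rate for a given pair of hard scatters.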
15. The Stanford Linear Collider
SciTech Connect
Seeman, J.T.
1990-10-01
The Stanford Linear Collider (SLC) has been in operation for several years with the initial and accelerator physics experiments just completed. A synopsis of these results is included. The second round of experiments is now under preparation to install the new physics detector (SLD) in Fall 1990 and to increase the luminosity significantly by late 1991. Collisions at high intensity and with polarized electrons are planned. Many beam dynamics and technological advances are in progress to meet these goals. 10 refs., 15 figs., 1 tab.
16. Gaudi components for concurrency: Concurrency for existing and future experiments
Clemencic, M.; Funke, D.; Hegner, B.; Mato, P.; Piparo, D.; Shapoval, I.
2015-05-01
HEP experiments produce enormous data sets at an ever-growing rate. To cope with the challenge posed by these data sets, experiments' software needs to embrace all capabilities modern CPUs offer. With a decreasing memory/core ratio, the one-process-per-core approach of recent years becomes less feasible. Instead, multi-threading with fine-grained parallelism needs to be exploited to benefit from memory sharing among threads. Gaudi is an experiment-independent data processing framework, used for instance by the ATLAS and LHCb experiments at CERN's Large Hadron Collider. It was originally designed with only sequential processing in mind. In a recent effort, the framework has been extended to allow for multi-threaded processing. This includes components for concurrent scheduling of several algorithms - either processing the same or multiple events, thread-safe data store access and resource management. In the sequential case, the relationships between algorithms are encoded implicitly in their pre-determined execution order. For parallel processing, these relationships need to be expressed explicitly, in order for the scheduler to be able to exploit maximum parallelism while respecting dependencies between algorithms. Therefore, means to express and automatically track these dependencies need to be provided by the framework. In this paper, we present components introduced to express and track dependencies of algorithms to deduce a precedence-constrained directed acyclic graph, which serves as the basis for our algorithmically sophisticated scheduling approach for tasks with dynamic priorities. We introduce an incremental migration path for existing experiments towards parallel processing and highlight the benefits of explicit dependencies even in the sequential case, such as sanity checks and sequence optimization by graph analysis.
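To make the scheduling idea concrete, here is a minimal sketch (plain Python, not the Gaudi API; the algorithm and data-product names are invented for illustration) of deriving a precedence-constrained DAG from declared inputs and outputs and visiting it in topological order:

    # Minimal sketch: build a dependency DAG from declared data products
    # and schedule algorithms whose dependencies are satisfied (Kahn's algorithm).
    from collections import deque

    algorithms = {
        # name: (inputs, outputs) -- hypothetical example products
        "Unpack":    (set(),                   {"RawHits"}),
        "Tracking":  ({"RawHits"},             {"Tracks"}),
        "Calo":      ({"RawHits"},             {"Clusters"}),
        "Vertexing": ({"Tracks"},              {"Vertices"}),
        "Physics":   ({"Vertices", "Clusters"}, {"Candidates"}),
    }

    # Map each data product to its producer, then each algorithm to the producers it depends on.
    producers = {prod: name for name, (_, outs) in algorithms.items() for prod in outs}
    deps = {name: {producers[i] for i in ins} for name, (ins, _) in algorithms.items()}

    indegree = {n: len(d) for n, d in deps.items()}
    ready = deque(n for n, k in indegree.items() if k == 0)
    order = []
    while ready:
        n = ready.popleft()   # in a real scheduler, all currently 'ready' algorithms could run concurrently
        order.append(n)
        for m, d in deps.items():
            if n in d:
                indegree[m] -= 1
                if indegree[m] == 0:
                    ready.append(m)

    print(order)  # e.g. ['Unpack', 'Tracking', 'Calo', 'Vertexing', 'Physics']

The explicit dependency graph is also what makes the sanity checks and sequence optimization mentioned in the abstract possible even for purely sequential running.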
17. 3-flavor oscillations with current and future atmospheric experiments
Kearns, Ed
2017-01-01
Atmospheric neutrinos are comprised of both electron and muon neutrinos with a wide range of energies and baselines. In addition, those that pass through the earth are subject to substantial matter effects. Therefore, atmospheric neutrinos are a natural laboratory for exploring 3-flavor neutrino oscillation with sensitivity to the unknown mass ordering and CP violating phase. I will review the results from current experiments and the prospects for future experiments.
18. High Energy Colliders
Palmer, R. B.; Gallardo, J. C.
Contents: Introduction; Physics Considerations: General, Required Luminosity for Lepton Colliders, The Effective Physics Energies of Hadron Colliders; Hadron-Hadron Machines: Luminosity, Size and Cost; Circular e^{+}e^- Machines: Luminosity, Size and Cost; e^{+}e^- Linear Colliders: Luminosity, Conventional RF, Superconducting RF, At Higher Energies; γ-γ Colliders; μ^{+}μ^- Colliders: Advantages and Disadvantages, Design Studies, Status and Required R and D; Comparison of Machines; Conclusions; Discussion.
19. Prospects for future experiments to search for nucleon decay
SciTech Connect
Ayres, D.S.; Heller, K.; LoSecco, J.; Mann, A.K.; Marciano, W.; Shrock, R.E.; Thornton, R.K.
1982-01-01
We review the status of theoretical expectations and experimental searches for nucleon decay, and predict the sensitivities which could be reached by future experiments. For the immediate future, we concur with the conclusions of the 1982 Summer Workshop on Proton Decay Experiments: all detectors now in operation or construction will be relatively insensitive to some potentially important decay modes. Next-generation experiments must therefore be designed to search for these modes, and should be undertaken whether or not present experiments detect nucleon decay in other modes. These future experiments should be designed to push the lifetime limits on all decay modes to the levels at which irreducible cosmic-ray neutrino-induced backgrounds become important. Since the technology for these next-generation experiments is available now, the timetable for starting work on them will be determined by funding constraints and not by the need for extensive development of detectors. Efforts to develop advanced detector techniques should also be pursued, in order to mount more sensitive searches than can be envisioned using current technology, or to provide the most precise measurements possible of the properties of the nucleon decay interaction if it should occur at a detectable rate.
20. Achievements in Training of Future Technology Teachers: European Experience
ERIC Educational Resources Information Center
Sheludko, Inna
2015-01-01
The article discusses the possibilities and prospects of using the experience of training future technology teachers in European countries. Its structure and content in accordance with national traditions and European standards led to the success of the educational components of the European Higher Pedagogical School. This fact encourages local…
1. Professional Experience: Learning from the Past to Build the Future
ERIC Educational Resources Information Center
Le Cornu, Rosie
2016-01-01
The title of the 2014 Australian Teacher Education Association (ATEA) conference was "Teacher Education, An Audit: Building a platform for future engagement." One of the conference themes was "Professional Experience: What works? Why?" I seized upon this theme and the title of the conference as it afforded me an opportunity to…
2. The Future Problem Solving Experience Ten Years After.
ERIC Educational Resources Information Center
Flack, Jerry
1991-01-01
Four young men who had participated in the national competition of the Future Problem Solving (FPS) Program 10 years earlier offer reflections about their FPS experience. Their coach concludes that the program equips young people with the vision and skills needed to anticipate and solve problems and build better tomorrows. (JDD)
4. Neutrino physics at a muon collider
SciTech Connect
King, B.J.
1998-02-01
This paper gives an overview of the neutrino physics possibilities at a future muon storage ring, which can be either a muon collider ring or a ring dedicated to neutrino physics that uses muon collider technology to store large muon currents. After a general characterization of the neutrino beam and its interactions, some crude quantitative estimates are given for the physics performance of a muon ring neutrino experiment (MURINE) consisting of a high rate, high performance neutrino detector at a 250 GeV muon collider storage ring. The paper is organized as follows. The next section describes neutrino production from a muon storage ring and gives expressions for event rates in general purpose and long baseline detectors. This is followed by a section outlining a serious design constraint for muon storage rings: the need to limit the radiation levels produced by the neutrino beam. The following two sections describe a general purpose detector and the experimental reconstruction of interactions in the neutrino target; then, finally, the physics capabilities of a MURINE are surveyed.
5. Collider phenomenology of e-e-→W-W-
Wang, Kai; Xu, Tao; Zhang, Liangliang
2017-04-01
The Majorana nature of neutrinos is one of the most fundamental questions in particle physics. It is directly related to the violation of accidental lepton number symmetry. This motivated enormous efforts into the search for such processes; among them, one conventional experiment is the neutrinoless double-beta decay (0νββ). On the other hand, there have been proposals of future electron-positron colliders as a "Higgs factory" for the precise measurement of Higgs boson properties, and it has been proposed to convert such a machine into an electron-electron collider. This option enables a new way to probe TeV Majorana neutrinos via the inverse 0νββ decay process (e-e- → W-W-) as an alternative and complementary test to the conventional 0νββ decay experiments. In this paper, we investigate the collider search for e-e- → W-W- in different decay channels at future electron colliders. We find that the pure hadronic channel, the semileptonic channel with a muon, and the pure leptonic channel with a dimuon have the most discovery potential.
6. Effects of momentum conservation and flow on angular correlations observed in experiments at the BNL Relativistic Heavy Ion Collider
SciTech Connect
Pratt, Scott; Schlichting, Soeren; Gavin, Sean
2011-08-15
Correlations of azimuthal angles observed at the Relativistic Heavy Ion Collider have gained great attention due to the prospect of identifying fluctuations of parity-odd regions in the field sector of QCD. Whereas the observable of interest related to parity fluctuations involves subtracting opposite-sign from same-sign correlations, the STAR collaboration reported the same-sign and opposite-sign correlations separately. It is shown here how momentum conservation combined with collective elliptic flow contributes significantly to this class of correlations, although not to the difference between the opposite- and same-sign observables. The effects are modeled with a crude simulation of a pion gas. Although the simulation reproduces the scale of the correlation, the centrality dependence is found to be sufficiently different in character to suggest additional considerations beyond those present in the pion gas simulation presented here.
7. Results of Higgs boson searches in the ATLAS and CMS experiments at the Large Hadron Collider at energies 7 and 8 TeV
SciTech Connect
Artamonov, A. A.; Epshteyn, V. S.; Gavrilov, V. B.; Gavrilyuk, A. A.; Gorbounov, P. A.; Jokin, A. S.; Lychkovskaya, N. V.; Popov, V. P.; Safronov, G. B.; Shamanov, V. V.; Shatalov, P. B.; Spiridonov, A. A.; Tsukerman, I. I.
2016-05-15
Recent achievements of the ATLAS and CMS experiments at the Large Hadron Collider searching for a Higgs boson are summarized. A new particle with a mass of 125 GeV and properties expected for the Standard Model Higgs boson was discovered three years ago in these experiments in proton-proton collisions, in analyses of part of the data taken at centre-of-mass energies of 7 TeV and 8 TeV during the 2011 and 2012 runs. Today all the data are processed and fully analyzed. Experimental results of studies of individual Higgs boson decay channels as well as their combination to extract such properties as mass, signal strength, coupling constants, spin and parity are reviewed. All experimental results are found to be compatible with the Standard Model predictions.
8. SSC [Superconducting Super Collider] Project: Technical Training for the Future of Texas. Navarro College/Dallas Community College District. Final Report for Year One.
ERIC Educational Resources Information Center
Orsak, Charles; McGlohen, Patti J.
The Superconducting Super Collider Laboratory (SSCL) is a national lab for research on the fundamental forces and constituents of the universe. A major part of the research will involve an oval ring 54 miles in circumference through which superconducting magnets will steer two beams of protons in opposite directions. In response to the…
10. Future Facilities Summary
SciTech Connect
Albert De Roeck, Rolf Ent
2009-10-01
For the session on future facilities at DIS09, discussions were organized on DIS-related measurements that can be expected in the near and medium (or perhaps far) future, including plans from JLab, CERN and FNAL fixed target experiments, possible measurements and detector upgrades at RHIC, as well as the plans for possible future electron-proton/ion colliders such as the EIC and the LHeC project.
11. Recent results from hadron colliders
SciTech Connect
Frisch, H.J. )
1990-12-10
This is a summary of some of the many recent results from the CERN and Fermilab colliders, presented for an audience of nuclear, medium-energy, and elementary particle physicists. The topics are jets and QCD at very high energies, precision measurements of electroweak parameters, the remarkably heavy top quark, and new results on the detection of the large flux of B mesons produced at these machines. A summary and some comments on the bright prospects for the future of hadron colliders conclude the talk. 39 refs., 44 figs., 3 tabs.
12. A data handling system for modern and future Fermilab experiments
Illingworth, R. A.
2014-06-01
Current and future Fermilab experiments such as Minerva, NOνA, and MicroBoone are now using an improved version of the Fermilab SAM data handling system. SAM was originally used by the CDF and D0 experiments for Run II of the Fermilab Tevatron to provide file metadata and location cataloguing, uploading of new files to tape storage, dataset management, file transfers between global processing sites, and processing history tracking. However SAM was heavily tailored to the Run II environment and required complex and hard to deploy client software, which made it hard to adapt to new experiments. The Fermilab Computing Sector has progressively updated SAM to use modern, standardized, technologies in order to more easily deploy it for current and upcoming Fermilab experiments, and to support the data preservation efforts of the Run II experiments.
13. Imagining fictitious and future experiences: Evidence from developmental amnesia
PubMed Central
Maguire, Eleanor A.; Vargha-Khadem, Faraneh; Hassabis, Demis
2010-01-01
Patients with bilateral hippocampal damage acquired in adulthood who are amnesic for past events have also been reported to be impaired at imagining fictitious and future experiences. One such patient, P01, however, was found to be unimpaired on these tasks despite dense amnesia and 50% volume loss in both hippocampi. P01 might be an atypical case, and in order to investigate this we identified another patient with a similar neuropsychological profile. Jon is a well-characterised patient with developmental amnesia and 50% volume loss in his hippocampi. Interestingly both Jon and P01 retain some recognition memory ability, and show activation of residual hippocampal tissue during fMRI. Jon's ability to construct fictitious and future scenarios was compared with the adult-acquired cases previously reported on this task and control participants. In contrast to the adult-acquired cases, but similar to P01, Jon was able to richly imagine both fictitious and future experiences in a comparable manner to control participants. Moreover, his constructions were spatially coherent. We speculate that the hippocampal activation during fMRI noted previously in P01 and Jon might indicate some residual hippocampal function which is sufficient to support their preserved ability to imagine fictitious and future scenarios. PMID:20603137
14. Neutrino Oscillation Parameter Sensitivity in Future Long-Baseline Experiments
SciTech Connect
Bass, Matthew
2014-01-01
The study of neutrino interactions and propagation has produced evidence for physics beyond the standard model and promises to continue to shed light on rare phenomena. Since the discovery of neutrino oscillations in the late 1990s there have been rapid advances in establishing the three flavor paradigm of neutrino oscillations. The 2012 discovery of a large value for the last unmeasured mixing angle has opened the way for future experiments to search for charge-parity symmetry violation in the lepton sector. This thesis presents an analysis of the future sensitivity to neutrino oscillations in the three flavor paradigm for the T2K, NOνA, LBNE, and T2HK experiments. The theory of the three flavor paradigm is explained and the methods to use these theoretical predictions to design long baseline neutrino experiments are described. The sensitivity to the oscillation parameters for each experiment is presented with a particular focus on the search for CP violation and the measurement of the neutrino mass hierarchy. The variations of these sensitivities with statistical considerations and experimental design optimizations taken into account are explored. The effects of systematic uncertainties in the neutrino flux, interaction, and detection predictions are also considered by incorporating more advanced simulation inputs from the LBNE experiment.
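As a rough illustration of the kind of calculation underlying such sensitivity studies (not taken from the thesis or any experiment's analysis code), the leading two-flavor vacuum approximation to the oscillation probability can be written down directly; the parameter values below are placeholders.

    # Two-flavor vacuum oscillation probability, a standard textbook formula:
    # P = sin^2(2*theta) * sin^2(1.267 * dm2[eV^2] * L[km] / E[GeV])
    import math

    def osc_prob(sin2_2theta, dm2_ev2, L_km, E_GeV):
        return sin2_2theta * math.sin(1.267 * dm2_ev2 * L_km / E_GeV) ** 2

    # e.g. an atmospheric-scale splitting over a ~1300 km baseline at 2.5 GeV
    print(osc_prob(sin2_2theta=1.0, dm2_ev2=2.5e-3, L_km=1300.0, E_GeV=2.5))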
15. Advanced Test Reactor Testing Experience: Past, Present and Future
SciTech Connect
Frances M. Marshall
2005-04-01
The Advanced Test Reactor (ATR), at the Idaho National Laboratory (INL), is one of the world’s premier test reactors for providing the capability for studying the effects of intense neutron and gamma radiation on reactor materials and fuels. The physical configuration of the ATR, a 4-leaf clover shape, allows the reactor to be operated at different power levels in the corner “lobes” to allow for different testing conditions for multiple simultaneous experiments. The combination of high flux (maximum thermal neutron fluxes of 1E15 neutrons per square centimeter per second and maximum fast [E>1.0 MeV] neutron fluxes of 5E14 neutrons per square centimeter per second) and large test volumes (up to 48" long and 5.0" diameter) provide unique testing opportunities. The current experiments in the ATR are for a variety of test sponsors -- US government, foreign governments, private researchers, and commercial companies needing neutron irradiation services. There are three basic types of test configurations in the ATR. The simplest configuration is the sealed static capsule, wherein the target material is placed in a capsule, or plate form, and the capsule is in direct contact with the primary coolant. The next level of complexity of an experiment is an instrumented lead experiment, which allows for active monitoring and control of experiment conditions during the irradiation. The highest level of complexity of experiment is the pressurized water loop experiment, in which the test sample can be subjected to the exact environment of a pressurized water reactor. For future research, some ATR modifications and enhancements are currently planned. This paper provides more details on some of the ATR capabilities, key design features, experiments, and future plans.
16. Majorana Higgses at colliders
Nemevšek, Miha; Nesti, Fabrizio; Vasquez, Juan Carlos
2017-04-01
Collider signals of heavy Majorana neutrino mass origin are studied in the minimal Left-Right symmetric model, where their mass is generated spontaneously together with the breaking of lepton number. The right-handed triplet Higgs boson Δ, responsible for such breaking, can be copiously produced at the LHC through the Higgs portal in the gluon fusion and less so in gauge mediated channels. At Δ masses below the opening of the V V decay channel, the two observable modes are pair-production of heavy neutrinos via the triplet gluon fusion gg → Δ → NN and pair production of triplets from the Higgs h → ΔΔ → 4 N decay. The latter features tri- and quad same-sign lepton final states that break lepton number by four units and have no significant background. In both cases up to four displaced vertices may be present and their displacement may serve as a discriminating variable. The backgrounds at the LHC, including the jet fake rate, are estimated and the resulting sensitivity to the Left-Right breaking scale extends well beyond 10 TeV. In addition, sub-dominant radiative modes are surveyed: the γγ, Zγ and lepton flavour violating ones. Finally, prospects for Δ signals at future e + e - colliders are presented.
17. Neutrino mass spectrum and future beta decay experiments
Farzan, Y.; Peres, O. L. G.; Smirnov, A. Yu.
2001-09-01
We study the discovery potential of future beta decay experiments searching for the neutrino mass in the sub-eV range, and, in particular, of the KATRIN experiment with sensitivity m > 0.3 eV. Effects of neutrino mass and mixing on the beta decay spectrum in the neutrino schemes which explain the solar and atmospheric neutrino data are discussed. The schemes which lead to observable effects contain one or two sets of quasi-degenerate states. Future beta decay measurements will make it possible to check the three-neutrino scheme with mass degeneracy; moreover, it may become possible to measure the CP-violating Majorana phase. Effects in the four-neutrino schemes which can also explain the LSND data are strongly restricted by the results of the Bugey and CHOOZ oscillation experiments: apart from a bending of the spectrum and a shift of the end point, one expects the appearance of a small kink (of <2% size) or a suppressed tail after the bending of the spectrum, with a rate below 2% of the rate expected for zero neutrino mass. We consider possible implications of future beta decay experiments for the neutrino mass spectrum, for the determination of the absolute scale of neutrino mass and for establishing the nature of neutrinos. We show that beta decay measurements, in combination with data from the oscillation and double beta decay experiments, will make it possible to establish the structure of the scheme (hierarchical or non-hierarchical), the type of the hierarchy or ordering of states (normal or inverted) and to measure the relative CP-violating phase in the solar pair of states.
18. Update to Proposal for an Experiment to Measure Mixing, CP Violation and Rare Decays in Charm and Beauty Particle Decays at the Fermilab Collider - BTeV
SciTech Connect
Butler, Joel; Stone, Sheldon
2002-03-01
We have been requested to submit an update of the BTeV plan to the Fermilab Physics Advisory Committee, in which, to save money, the detector has only one arm and no new interaction region magnet construction is planned. These are to come from a currently running collider experiment at the appropriate time. The "Physics Case" section is complete and updated, with the section on the "New Physics" capabilities of BTeV greatly expanded. We show that precise measurements of rare flavor-changing neutral current processes and CP violation are and will be complementary to the Tevatron and LHC in unraveling the electroweak breaking puzzle. We include a revised summary of the physics sensitivities for the one-arm detector, which are not obtained simply by taking our proposal numbers and dividing by two, because of additional improvements. One important change resulted from an improved understanding of just how important the RICH detector is to muon and electron identification: we can indeed separate electrons from pions and muons from pions, especially at relatively large angles beyond the physical aperture of the EM calorimeter or the Muon Detector. This is documented in the "Physics Sensitivities" section. The section on the detector includes the motivation for doing b and c physics at a hadron collider, and shows the changes in the detector since the proposal, based on our ongoing R&D program. We do not here include a detailed description of the entire detector. That is available in the May, 2000 proposal. We include a summary of our R&D activities for the entire experiment. Finally, we also include a fully updated cost estimate for the one-arm system.
19. Scalar split WIMPs in future direct detection experiments
Ghorbani, Karim; Ghorbani, Hossein
2016-03-01
We consider a simple renormalizable dark matter model consisting of two real scalars with a mass splitting δ, interacting with the SM particles through the Higgs portal. We find a viable parameter space respecting all the bounds imposed by invisible Higgs decay experiments at the LHC, the direct detection experiments by XENON100 and LUX, and the dark matter relic abundance provided by WMAP and Planck. Unlike the singlet scalar dark matter model, which is fragile against future direct detection experiments, the scalar split model introduced here survives such forthcoming bounds. We emphasize the role of the coannihilation processes and the mixing effects in this feature. For mDM ~ 63 GeV in this model we can also explain the observed gamma-ray excess in the analyses of the Fermi-LAT data at Galactic latitudes 2° ≤ |b| ≤ 20° and Galactic longitudes |l| < 20°.
20. ALPs at colliders
Mimasu, Ken; Sanz, Verónica
2015-06-01
New pseudo-scalars, often called axion-like particles (ALPs), abound in model-building and are often associated with the breaking of a new symmetry. Traditional searches and indirect bounds are limited to light axions, typically in or below the KeV range for ALPs coupled to photons. We present collider bounds on ALPs from mono-γ, tri-γ and mono-jet searches in a model independent fashion, as well as the prospects for the LHC and future machines. We find that they are complementary to existing searches, as they are sensitive to heavier ALPs and have the capability to cover an otherwise inaccessible region of parameter space. We also show that, assuming certain model dependent correlations between the ALP coupling to photons and gluons as well as considering the validity of the effective description of ALP interactions, mono-jet searches are in fact more suitable and effective in indirectly constraining ALP scenarios.
1. Electronics Packaging Issues for Future Accelerators and Experiments
SciTech Connect
Larsen, R.
2004-11-11
Standard instrument modules for physics reached their zenith of industrial development from the early 1960s through late 1980s. Started by laboratory engineering groups in Europe and North America, modular electronic standards were successfully developed and commercialized. In the late 1980's a major shift in large detector design toward custom chips mounted directly on detectors started a decline in the use of standard modules for data acquisition. With the loss of the detector module business, commercial support declined. Today the engineering communities supporting future accelerators and experiments face a new set of challenges that demand much more reliable system design. The dominant system metric is Availability. We propose (1) that future accelerator and detector systems be evaluated against a Design for Availability (DFA) metric; (2) that modular design and standardization applied to all electronic and controls subsystems are key to high Availability; and (3) that renewed Laboratory-Industry collaboration(s) could make an invaluable contribution to design and implementation.
2. Future perspectives of the alphasat TDP#5 Telecommunication Experiment
De Sanctis, M.; Rossi, T.; Mukherjee, S.; Ruggieri, M.
Future High Throughput Satellite (HTS) systems, able to support hundreds of gigabit/s or terabit/s connectivity, will require a very large bandwidth availability; this pushes towards the exploitation of the so-called “beyond Ka-band” systems. In particular the use of the Q/V frequency band is foreseen. This paper presents the most important features of the TDP#5 (Technology Demonstration Payload 5) scientific mission that will provide us the opportunity to perform, for the first time, a communication scientific experiment over a Q/V band satellite link.
3. A prioritized set of physiological measurements for future spaceflight experiments
NASA Technical Reports Server (NTRS)
1978-01-01
A set of desired experimental measurements to be obtained in future spaceflights in four areas of physiological investigation are identified. The basis for identifying the measurements was the physiological systems analysis performed on Skylab data and related ground-based studies. An approach for prioritizing the measurement list is identified and discussed with the use of examples. A prioritized measurement list is presented for each of the following areas; cardiopulmonary, fluid-renal and electrolyte, hematology and immunology, and musculoskeletal. Also included is a list of interacting stresses and other factors present in spaceflight experiments whose effects may need to be quantified.
4. Probing strongly-interacting electroweak dynamics through W+W-/ZZ ratios at future e+e- colliders
SciTech Connect
Barger, V.; Cheung, K.; Han, T.; Phillips, R.J.N.
1995-01-01
The authors point out that the ratio of W+W- → W+W- and W+W- → ZZ cross sections is a sensitive probe of the dynamics of electroweak symmetry breaking, in the CM energy region √s_WW ≳ 1 TeV where vector boson scattering may well become strong. They suggest ways in which this ratio can be extracted at a 1.5 TeV e+e- linear collider, using W±, Z → jj hadronic decays and relying on dijet mass resolution to provide statistical discrimination between W± and Z. WW fusion processes studied here are unique for exploring scalar resonances of mass about 1 TeV and are complementary to studies via the direct channel e+e- → W+W- for the vector and non-resonant cases. With an integrated luminosity of 200 fb^-1, the signals obtained are statistically significant. Comparison with a study of the e-e- → ννW-W- process is made. Enhancements of the signal rate from using a polarized electron beam, or at a 2 TeV e+e- linear collider and possible higher energy μ+μ- colliders, are also presented.
5. The dark penguin shines light at colliders
Primulando, Reinard; Salvioni, Ennio; Tsai, Yuhsin
2015-07-01
Collider experiments are one of the most promising ways to constrain Dark Matter (DM) interactions. For several types of DM-Standard Model couplings, a meaningful interpretation of the results requires to go beyond effective field theory, considering simplified models with light mediators. This is especially important in the case of loop-mediated interactions. In this paper we perform the first simplified model study of the magnetic dipole interacting DM, by including the one-loop momentum-dependent form factors that mediate the coupling — given by the Dark Penguin — in collider processes. We compute bounds from the monojet, monophoton, and diphoton searches at the 8 and 14 TeV LHC, and compare the results to those of direct and indirect detection experiments. Future searches at the 100 TeV hadron collider and at the ILC are also addressed. We find that the optimal search strategy requires loose cuts on the missing transverse energy, to capture the enhancement of the form factors near the threshold for on-shell production of the mediators. We consider both minimal models and models where an additional state beyond the DM is accessible. In the latter case, under the assumption of anarchic flavor structure in the dark sector, the LHC monophoton and diphoton searches will be able to set much stronger bounds than in the minimal scenario. A determination of the mass of the heavier dark fermion might be feasible using the M T2 variable. In addition, if the Dark Penguin flavor structure is almost aligned with that of the DM mass, a displaced signal from the decay of the heavier dark fermion into the DM and photon can be observed. This allows us to set constraints on the mixings and couplings of the model from an existing search for non-pointing photons.
6. Report on seniors' dental care. Past experience and future challenges.
PubMed
Bowes, D
1994-01-01
After attending the CDHA North American Research Conference held in Niagara Falls last October, Ms. Bowes says she found a significant amount of interest in the area of geriatric care being expressed by those in attendance. Consequently, when she returned home, she decided to offer CDHA members an opportunity to gain some insight on this topic from her own experiences. Although no 'formal' report summarizing the findings noted here was ever forwarded to the facilities involved, Ms. Bowes says that "some of the information gathered has been used in documents and reports that were forwarded to the Ministry of Health and the local Board of Health for Simcoe County." The purpose of this report is to alert dental professionals to the future dental needs of our rapidly aging population and to perhaps assist those who are considering providing care to the elderly by outlining my personal experience.
7. Proton-antiproton collider physics
SciTech Connect
Shochet, M.J.
1995-07-01
The 9th p̄p Workshop was held in Tsukuba, Japan in October, 1993. A number of important issues remained after that meeting: Does QCD adequately describe the large cross section observed by CDF for γ production below 30 GeV? Do the CDF and D0 b-production cross sections agree? Will the Tevatron live up to its billing as a world-class b-physics facility? How small will the uncertainty in the W mass be? Is there anything beyond the Minimal Standard Model? And finally, where is the top quark? Presentations at this workshop addressed all of these issues. Most of them are now resolved, but new questions have arisen. This summary focuses on the experimental results presented at the meeting by CDF and D0 physicists. Reviews of LEP and HERA results, future plans for hadron colliders and their experiments, as well as important theoretical presentations are summarized elsewhere in this volume. Section 1 reviews physics beyond the Minimal Standard Model. Issues in b and c physics are addressed in section 3. Section 4 focuses on the top quark. Electroweak physics is reviewed in section 5, followed by QCD studies in section 6. Conclusions are drawn in section 7.
8. Experimental Study of W Z Intermediate Bosons Associated Production with the CDF Experiment at the Tevatron Collider
SciTech Connect
Pozzobon, Nicola; /Pisa U.
2007-09-01
Studying WZ associated production at the Fermilab Tevatron Collider is of great importance for two main reasons. On the one hand, this process would be sensitive to anomalies in the triple gauge couplings such that any deviation from the value predicted by the Standard Model would be indicative of new physics. In addition, by choosing to focus on the final state where the Z boson decays to bb̄ pairs, the event topology would be the same as expected for associated production of a W and a Standard Model light Higgs boson (m_H ≲ 135 GeV) which decays into bb̄ pairs most of the time. The process WH → Wbb̄ has an expected σ·B about five times lower than WZ → Wbb̄ for m_H ≈ 120 GeV. Therefore, observing this process would be a benchmark for an even more difficult search aiming at discovering the light Higgs in the WH → Wbb̄ process. After so many years of Tevatron operation only a weak WZ signal was recently observed in the full leptonic decay channel, which suffers from much less competition from background. Searching for the Z in the bb̄ decay channel in this process is clearly a very challenging endeavour. In the work described in this thesis, WZ production is searched for in a final state where the W decays leptonically to an electron-neutrino pair or a muon-neutrino pair, with associated production of a jet pair consistent with Z decays. A set of candidate events is obtained by applying appropriate cuts to the parameters of events collected by wide acceptance leptonic triggers. To improve the signal fraction of the selected events, an algorithm was used to tag b-flavored jets by means of their content of long lived b-hadrons and corrections were developed to the jet algorithm to improve the b-jet energy resolution for a better reconstruction of the Z mass. In order to sense the presence of a signal one needs to estimate the amount of background. The relative content of
9. Muon Collider Task Force Report
SciTech Connect
Ankenbrandt, C.; Alexahin, Y.; Balbekov, V.; Barzi, E.; Bhat, C.; Broemmelsiek, D.; Bross, A.; Burov, A.; Drozhdin, A.; Finley, D.; Geer, S.; /Fermilab /Argonne /Brookhaven /Jefferson Lab /LBL, Berkeley /MUONS Inc., Batavia /UCLA /UC, Riverside /Mississippi U.
2007-12-01
Muon Colliders offer a possible long term path to lepton-lepton collisions at center-of-mass energies √s ≥ 1 TeV. In October 2006 the Muon Collider Task Force (MCTF) proposed a program of advanced accelerator R&D aimed at developing the Muon Collider concept. The proposed R&D program was motivated by progress on Muon Collider design in general, and in particular, by new ideas that have emerged on muon cooling channel design. The scope of the proposed MCTF R&D program includes muon collider design studies, helical cooling channel design and simulation, high temperature superconducting solenoid studies, an experimental program using beams to test cooling channel RF cavities and a 6D cooling demonstration channel. The first year of MCTF activities is summarized in this report together with a brief description of the anticipated FY08 R&D activities. In its first year the MCTF has made progress on (1) Muon Collider ring studies, (2) 6D cooling channel design and simulation studies with an emphasis on the HCC scheme, (3) beam preparations for the first HPRF cavity beam test, (4) preparations for an HCC four-coil test, (5) further development of the MANX experiment ideas and studies of the muon beam possibilities at Fermilab, (6) studies of how to integrate RF into an HCC in preparation for a component development program, and (7) HTS conductor and magnet studies to prepare for an evaluation of the prospects of an HTS high-field solenoid built for a muon cooling channel.
10. International Workshop on Linear Colliders 2010
ScienceCinema
None
2016-07-12
IWLC2010 International Workshop on Linear Colliders 2010. ECFA-CLIC-ILC joint meeting: Monday 18 October - Friday 22 October 2010. Venue: CERN and CICG (International Conference Centre Geneva, Switzerland). This year, the International Workshop on Linear Colliders organized by the European Committee for Future Accelerators (ECFA) will study the physics, detectors and accelerator complex of a linear collider covering both CLIC and ILC options. Contact: Workshop Secretariat. IWLC2010 is hosted by CERN.
12. Collider Signal I: Resonance
Tait, Tim M. P.
2010-08-01
These TASI lectures were part of the summer school in 2008 and cover the collider signal associated with resonances in models of physics beyond the Standard Model. I begin with a review of the Z boson, one of the best-studied resonances in particle physics, and review how the Breit-Wigner form of the propagator emerges in perturbation theory and discuss the narrow width approximation. I review how the LEP and SLAC experiments could use the kinematics of Z events to learn about fermion couplings to the Z. I then make a brief survey of models of physics beyond the Standard Model which predict resonances, and discuss some of the LHC observables which we can use to discover and identify the nature of the BSM physics. I finish up with a discussion of the linear moose that one can use for an effective theory description of a massive color octet vector particle.
13. Accelerator Considerations of Large Circular Colliders
Chao, Alex
2016-07-01
As we consider the tremendous physics reaches of the big future circular electron-positron and proton-proton colliders, it might be advisable to keep close track of the accelerator challenges they face. Good progress is being made, and yet it is reported here that substantial investments in funding and manpower, as well as a long, sustained R&D effort, will be required in preparation to realize these dream colliders.
15. NASA Astronauts on Soyuz: Experience and Lessons for the Future
NASA Technical Reports Server (NTRS)
2010-01-01
The U.S., Russia, and China have each addressed the question of human-rating spacecraft. NASA's operational experience with human-rating primarily resides with Mercury, Gemini, Apollo, Space Shuttle, and International Space Station. NASA's latest developmental experience includes Constellation, X38, X33, and the Orbital Space Plane. If domestic commercial crew vehicles are used to transport astronauts to and from space, Soyuz is another example of methods that could be used to human-rate a spacecraft and to work with commercial spacecraft providers. For Soyuz, NASA's normal assurance practices were adapted. Building on NASA's Soyuz experience, this report contends all past, present, and future vehicles rely on a range of methods and techniques for human-rating assurance, the components of which include: requirements, conceptual development, prototype evaluations, configuration management, formal development reviews (safety, design, operations), component/system ground-testing, integrated flight tests, independent assessments, and launch readiness reviews. When constraints (cost, schedule, international) limit the depth/breadth of one or more preferred assurance means, ways are found to bolster the remaining areas. This report provides information exemplifying the above safety assurance model for consideration with commercial or foreign-government-designed spacecraft. Topics addressed include U.S./Soviet-Russian government/agency agreements and engineering/safety assessments performed, with lessons learned, in historic U.S./Russian joint space ventures.
16. Physics goals of the next linear collider
SciTech Connect
Kuhlman, S.; Marciano, W.J.; Gunion, J. F.; NLC ZDR Design Group; NLC Physics Working Group
1996-05-01
We present the prospects for the next generation of high-energy physics experiments with electron-positron colliding beams. This report summarizes the current status of the design and technological basis of a linear collider of center of mass energy 500 GeV-1.5 TeV, and the opportunities for high-energy physics experiments that this machine is expected to open. 132 refs., 54 figs., 14 tabs.
17. Muon collider design
Palmer, R.; Sessler, A.; Skrinsky, A.; Tollestrup, A.; Baltz, A.; Caspi, S.; P., Chen; W-H., Cheng; Y., Cho; Cline, D.; Courant, E.; Fernow, R.; Gallardo, J.; Garren, A.; Gordon, H.; Green, M.; Gupta, R.; Hershcovitch, A.; Johnstone, C.; Kahn, S.; Kirk, H.; Kycia, T.; Y., Lee; Lissauer, D.; Luccio, A.; McInturff, A.; Mills, F.; Mokhov, N.; Morgan, G.; Neuffer, D.; K-Y., Ng; Noble, R.; Norem, J.; Norum, B.; Oide, K.; Parsa, Z.; Polychronakos, V.; Popovic, M.; Rehak, P.; Roser, T.; Rossmanith, R.; Scanlan, R.; Schachinger, L.; Silvestrov, G.; Stumer, I.; Summers, D.; Syphers, M.; Takahashi, H.; Torun, Y.; Trbojevic, D.; Turner, W.; van Ginneken, A.; Vsevolozhskaya, T.; Weggel, R.; Willen, E.; Willis, W.; Winn, D.; Wurtele, J.; Zhao, Y.
1996-11-01
Muon Colliders have unique technical and physics advantages and disadvantages when compared with both hadron and electron machines. They should thus be regarded as complementary. Parameters are given of 4 TeV and 0.5 TeV high luminosity μ+μ- colliders, and of a 0.5 TeV lower luminosity demonstration machine. We discuss the various systems in such muon colliders, starting from the proton accelerator needed to generate the muons and proceeding through muon cooling, acceleration and storage in a collider ring. Detector background, polarization, and nonstandard operating conditions are discussed.
18. CHARM 2010: Experiment summary and future charm facilities
SciTech Connect
Appel, Jeffrey A.; /Fermilab
2010-12-01
The CHARM 2010 meeting had over 30 presentations of experimental results, plus additional future facilities talks just before this summary talk. There is not enough time to even summarize all that has been shown from experiments and to recognize all the memorable plots and results - tempting as it is to reproduce the many clean signals and data vs theory figures, the quantum correlations plots, and the D-mixing plots before and after the latest CLEO-c data is added. So, this review will give only my personal observations, exposing my prejudices and my areas of ignorance, no doubt. This overview will be at a fairly high level of abstraction - no re-showing of individual plots or results. I ask the forgiveness of those who will have been slighted in this way - meaning all the presenters.
19. A Guide to Designing Future Ground-based CMB Experiments
SciTech Connect
Wu, W. L.K.; Errard, J.; Dvorkin, C.; Kuo, C. L.; Lee, A. T.; McDonald, P.; Slosar, A.; Zahn, O.
2014-02-18
In this follow-up work to the High Energy Physics Community Summer Study 2013 (HEP CSS 2013, a.k.a. Snowmass), we explore the scientific capabilities of a future Stage-IV Cosmic Microwave Background polarization experiment (CMB-S4) under various assumptions on detector count, resolution, and sky coverage. We use the Fisher matrix technique to calculate the expected uncertainties in cosmological parameters in νΛCDM that are especially relevant to the physics of fundamental interactions, including neutrino masses, effective number of relativistic species, dark-energy equation of state, dark-matter annihilation, and inflationary parameters. To further chart the landscape of future cosmology probes, we include forecasted results from the Baryon Acoustic Oscillation (BAO) signal as measured by DESI to constrain parameters that would benefit from low redshift information. We find the following best 1-σ constraints: σ(Mν) = 15 meV, σ(N_eff) = 0.0156, Dark energy Figure of Merit = 303, σ(p_ann) = 0.00588 × 3 × 10^-26 cm^3/s/GeV, σ(Ω_K) = 0.00074, σ(n_s) = 0.00110, σ(α_s) = 0.00145, and σ(r) = 0.00009. We also detail the dependences of the parameter constraints on detector count, resolution, and sky coverage.
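A minimal sketch of the Fisher-matrix forecasting technique referred to above (illustrative only, not the authors' code): for Gaussian, uncorrelated data bins the Fisher matrix is assembled from the model derivatives with respect to the parameters, and the marginalized 1-sigma forecast on parameter i is sqrt((F^-1)_ii).

    # Numerical Fisher forecast for a toy model; all numbers are placeholders.
    import numpy as np

    def fisher_matrix(model, params, x, sigma, eps=1e-6):
        # F_ij = sum over bins of (dmu/dp_i)(dmu/dp_j) / sigma^2,
        # with derivatives taken by central finite differences.
        p = np.asarray(params, dtype=float)
        derivs = []
        for i in range(len(p)):
            dp = np.zeros_like(p)
            dp[i] = eps * max(abs(p[i]), 1.0)
            derivs.append((model(x, p + dp) - model(x, p - dp)) / (2 * dp[i]))
        D = np.array(derivs)                    # shape (n_params, n_bins)
        return (D / sigma) @ (D / sigma).T

    # Toy example: forecast errors on the amplitude and tilt of a power law.
    def power_law(x, p):
        amp, tilt = p
        return amp * x ** tilt

    x = np.linspace(1.0, 10.0, 50)
    F = fisher_matrix(power_law, [2.0, -1.0], x, sigma=0.05)
    print(np.sqrt(np.diag(np.linalg.inv(F))))   # marginalized 1-sigma forecasts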
20. The future of mammography: radiology residents' experiences, attitudes, and opinions.
PubMed
Baxi, Shrujal S; Snow, Jacqueline G; Liberman, Laura; Elkin, Elena B
2010-06-01
The objective of our study was to assess the experiences and preferences of radiology residents with respect to breast imaging. We surveyed radiology residents at 26 programs in New York and New Jersey. Survey topics included plans for subspecialty training, beliefs, and attitudes toward breast imaging and breast cancer screening and the likelihood of interpreting mammography in the future. Three hundred forty-four residents completed the survey (response rate, 62%). The length of time spent training in breast imaging varied from no dedicated time (37%) to 1-8 weeks (40%) to more than 9 weeks (23%). Most respondents (97%) agreed that mammography is important to women's health. More than 85% of residents believed that mammography should be interpreted by breast imaging specialists. Respondents shared negative views about mammography, agreeing with statements that the field was associated with a high risk of malpractice (99%), stress (94%), and low reimbursement (68%). Respondents endorsed several positive attributes of mammography, including job availability (97%), flexible work schedules (94%), and few calls or emergencies (93%). Most radiology residents (93%) said that they were likely to pursue subspecialty training, and 7% expressed interest in breast imaging fellowships. Radiology residents' negative and positive views about mammography seem to be independent of time spent training in mammography and of future plans to pursue fellowship training in breast imaging. Systematic assessment of the plans and preferences of radiology residents can facilitate the development of strategies to attract trainees to careers in breast imaging.
1. Traverse Planning Experiments for Future Planetary Surface Exploration
NASA Technical Reports Server (NTRS)
Hoffman, S. J.; Voels, S. A.; Mueller, R. P.; Lee, P. C.
2011-01-01
This paper describes the results of a recent (July-August 2010 and July 2011) planetary surface traverse planning experiment. The purpose of this experiment was to gather data relevant to robotically repositioning surface assets used for planetary surface exploration. This is a scenario currently being considered for future human exploration missions to the Moon and Mars. The specific scenario selected was a robotic traverse on the lunar surface from an outpost at Shackleton Crater to the Malapert Massif. As these are exploration scenarios, the route will not have been previously traversed and the only pre-traverse data sets available will be remote (orbital) observations. Devon Island was selected as an analog location where a traverse route of significant length could be planned and then traveled. During the first half of 2010, a team of engineers and scientists who had never been to Devon Island used remote sensing data comparable to that which is likely to be available for the Malapert region (e.g., 2-meter/pixel imagery, 10-meter interval topographic maps and associated digital elevation models, etc.) to plan a 17-kilometer (km) traverse. Surface-level imagery data was then gathered on-site that was provided to the planning team. This team then assessed whether the route was actually traversable or not. Lessons learned during the 2010 experiment were then used in a second experiment in 2011 for which a much longer traverse (85 km) was planned and additional surface-level imagery different from that gathered in 2010 was obtained for a comparative analysis. This paper will describe the route planning techniques used, the data sets available to the route planners and the lessons learned from the two traverses planned and carried out on Devon Island.
2. When hope and fear collide: Expectations and experiences of first-year doctoral students in the natural sciences
Robinson, C. Sean
Although there is a significant body of research on the process of undergraduate education and retention, much less research exists as it relates to the doctoral experience, which is intended to be transformational in nature. At each stage of the process students are presented with a unique set of challenges and experiences that must be negotiated and mastered. However, we know very little about entering students' expectations, beliefs, goals, and identities, and how these may or may not change over time within a doctoral program. Utilizing a framework built upon socialization theory and cognitive-ecological theory, this dissertation examines the expectations that incoming doctoral students have about their programs as well as the actual experiences that these students have during their first year. Interviews were conducted with twelve students from the departments of Botany, Chemistry, and Physics prior to matriculation into their respective doctoral programs. These initial interviews provided information about students' expectations. Interviews were then conducted approximately every six to eight weeks to assess students' perceptions about their actual experiences throughout their first year. The findings of this study showed that new doctoral students tend to have uninformed and naive expectations about their programs. In addition, many of the specific policies or procedures necessary for navigation through a doctoral program were unknown to the students. While few differences existed in terms of students' expectations based on gender or discipline, there were significant differences in how international students described their expectations compared to American students. The two primary differences between American and international students revolved around the role of faculty members and the language barrier. It is clear that the first year of doctoral study is indeed a year of transition. The nature and clarity of the expectations associated with the role of
Kotchetkov, Dmitri
2017-01-01
Rapid growth of the high energy physics program in the USSR during the 1960s-1970s culminated with a decision to build the Accelerating and Storage Complex (UNK) to carry out fixed target and colliding beam experiments. The UNK was to have three rings. One ring was to be built with conventional magnets to accelerate protons up to the energy of 600 GeV. The other two rings were to be made from superconducting magnets; each ring was supposed to accelerate protons up to the energy of 3 TeV. The accelerating rings were to be placed in an underground tunnel with a circumference of 21 km. As a 3 × 3 TeV collider, the UNK would make proton-proton collisions with a luminosity of 4 × 10^34 cm^-2 s^-1. The Institute for High Energy Physics in Protvino was the project's leading institution and the site of the UNK. Accelerator and detector research and development studies were commenced in the second half of the 1970s. The State Committee for Utilization of Atomic Energy of the USSR approved the project in 1980, and the construction of the UNK started in 1983. Political turmoil in the Soviet Union during the late 1980s and early 1990s resulted in the disintegration of the USSR and the subsequent collapse of the Russian economy. As a result of the drastic reduction of funding for the UNK, in 1993 the project was restructured to be a 600 GeV fixed target accelerator only. While the ring tunnel and proton injection line were completed by 1995, and 70% of all magnets and associated accelerator equipment were fabricated, lack of Russian federal funding for high energy physics halted the project at the end of the 1990s.
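For scale, the quoted design luminosity translates into an interaction rate once a total cross section is assumed; the back-of-the-envelope sketch below takes a total pp cross section of roughly 100 mb at multi-TeV energies, which is an assumption and not a number from the record above.

    # Back-of-the-envelope interaction rate implied by the UNK design luminosity.
    luminosity = 4e34                      # cm^-2 s^-1, quoted design value
    sigma_tot_mb = 100.0                   # mb, assumed total pp cross section
    sigma_tot_cm2 = sigma_tot_mb * 1e-27   # 1 mb = 1e-27 cm^2
    rate = luminosity * sigma_tot_cm2      # interactions per second
    print(f"{rate:.1e} interactions per second")   # ~4e9 per second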
4. Miniaturized Lab System for Future Cold Atom Experiments in Microgravity
Kulas, Sascha; Vogt, Christian; Resch, Andreas; Hartwig, Jonas; Ganske, Sven; Matthias, Jonas; Schlippert, Dennis; Wendrich, Thijs; Ertmer, Wolfgang; Maria Rasel, Ernst; Damjanic, Marcin; Weßels, Peter; Kohfeldt, Anja; Luvsandamdin, Erdenetsetseg; Schiemangk, Max; Grzeschik, Christoph; Krutzik, Markus; Wicht, Andreas; Peters, Achim; Herrmann, Sven; Lämmerzahl, Claus
2017-02-01
We present the technical realization of a compact system for performing experiments with cold 87Rb and 39K atoms in microgravity in the future. The whole system fits into a capsule to be used in the drop tower Bremen. One of the advantages of a microgravity environment is long time evolution of atomic clouds which yields higher sensitivities in atom interferometer measurements. We give a full description of the system containing an experimental chamber with ultra-high vacuum conditions, miniaturized laser systems, a high-power thulium-doped fiber laser, the electronics and the power management. In a two-stage magneto-optical trap atoms should be cooled to the low μK regime. The thulium-doped fiber laser will create an optical dipole trap which will allow further cooling to sub- μK temperatures. The presented system fulfills the demanding requirements on size and power management for cold atom experiments on a microgravity platform, especially with respect to the use of an optical dipole trap. A first test in microgravity, including the creation of a cold Rb ensemble, shows the functionality of the system.
5. THERMAL SHOCK INDUCED BY A 24 GEV PROTON BEAM IN THE TEST WINDOWS OF THE MUON COLLIDER EXPERIMENT E951 - TEST RESULTS AND THEORETICAL PREDICTIONS.
SciTech Connect
SIMOS,N.; KIRK,H.; FINFROCK,C.; PRIGL,R.; BROWN,K.; KAHN,S.; LUDEWIG,H.; MCDONALD,K.; CATES,M.; TSAI,J.; BESHEARS,D.; RIEMER,B.
2001-11-11
The need for intense muon beams for muon colliders and neutrino factories has led to a concept of a high performance target station in which a 1-4 MW proton beam of 6-24 GeV impinges on a target inside a high field solenoid channel. While novel technical issues exist regarding the survivability of the target itself, the need to pass the tightly focused proton beam through beam windows poses additional concerns. In this paper, issues associated with the interaction of a proton beam with window structures designed for the muon targetry experiment E951 at BNL are explored. Specifically, a 24 GeV proton beam of up to 16 × 10^12 protons per pulse and a pulse length of approximately 100 ns is expected to be tightly focused (to 0.5 mm rms one-sigma radius) on an experimental target. Such a beam will induce very high thermal, quasi-static and shock stresses in the window structure that exceed the strength of most common materials. In this effort, a detailed assessment of the thermal/shock response of beam windows is attempted with a goal of identifying the best window material candidate. Further, experimental strain results and comparison with the predicted values are presented and discussed.
6. State of hadron collider physics
SciTech Connect
Grannis, P.D. |
1993-12-01
The 9th Topical Workshop on Proton-Antiproton Collider Physics in Tsukuba, Japan demonstrated clearly the enormous breadth of physics accessible in hadron colliders. Although no significant chinks were reported in the armor of the Standard Model, new results presented in this meeting have expanded our knowledge of the electroweak and strong interactions and have extended the searches for non-standard phenomena significantly. Much of the new data reported came from the CDF and D0 experiments at the Fermilab collider. Superb operation of the Tevatron during the 1992-1993 Run and significant advances on the detector fronts -- in particular, the emergence of the new D0 detector as a productive physics instrument in its first outing and the addition of the CDF silicon vertex detector -- enabled much of this advance. It is noteworthy however that physics from the CERN collider experiments UA1 and UA4 continued to make a large impact at this meeting. In addition, very interesting summary talks were given on new results from HERA, cosmic ray experiments, on super-hadron collider physics, and on e+e- experiments at LEP and TRISTAN. These summaries are reported elsewhere in this volume.
7. Conventional power sources for colliders
SciTech Connect
Allen, M.A.
1987-07-01
At SLAC we are developing high peak-power klystrons to explore the limits of use of conventional power sources in future linear colliders. In an experimental tube we have achieved 150 MW at 1 μs pulse width at 2856 MHz. In production tubes for SLAC Linear Collider (SLC) we routinely achieve 67 MW at 3.5 μs pulse width and 180 pps. Over 200 of the klystrons are in routine operation in SLC. An experimental klystron at 8.568 GHz is presently under construction with a design objective of 30 MW at 1 μs. A program is starting on the relativistic klystron whose performance will be analyzed in the exploration of the limits of klystrons at very short pulse widths.
8. Top quark physics: Future measurements
SciTech Connect
Frey, R.; Vejcik, S.; Berger, E.L.
1997-04-04
The authors discuss the study of the top quark at future experiments and machines. Top's large mass makes it a unique probe of physics at the natural electroweak scale. They emphasize measurements of the top quark's mass, width, and couplings, as well as searches for rare or nonstandard decays, and discuss the complementary roles played by hadron and lepton colliders.
9. Top quark physics: Future Measurements
SciTech Connect
Frey, Raymond; Gerdes, David; Jaros, John; Vejcik, Steve; Berger, Edmond L.; Chivukula, R. Sekhar; Cuypers, Frank; Drell, Persis S.; Fero, Michael; Hadley, Nicholas; Han, Tao; Heinson, Ann P.; Knuteson, Bruce; Larios, Francisco; Miettinen, Hannu; Orr, Lynne H.; Peskin, Michael E.; Rizzo, Thomas; Sarid, Uri; Schmidt, Carl; Stelzer, Tim; Sullivan, Zack
1996-12-31
We discuss the study of the top quark at future experiments and machines. Top's large mass makes it a unique probe of physics at the natural electroweak scale. We emphasize measurements of the top quark's mass, width, and couplings, as well as searches for rare or nonstandard decays, and discuss the complementary roles played by hadron and lepton colliders.
10. Considerations on Energy Frontier Colliders after LHC
SciTech Connect
2016-11-15
Since the 1960's, particle colliders have been in the forefront of particle physics; 29 in total have been built and operated, and 7 are in operation now. At present the near term US, European and international strategies of the particle physics community are centered on full exploitation of the physics potential of the Large Hadron Collider (LHC) through its high-luminosity upgrade (HL-LHC). The future of the world-wide HEP community critically depends on the feasibility of possible post-LHC colliders. The concept of feasibility is complex and includes at least three factors: feasibility of energy, feasibility of luminosity and feasibility of cost. Here we overview all current options for post-LHC colliders from this perspective (ILC, CLIC, Muon Collider, plasma colliders, CEPC, FCC, HE-LHC) and discuss major challenges and accelerator R&D required to demonstrate feasibility of an energy frontier accelerator facility following the LHC. We conclude by taking a look into ultimate energy reach accelerators based on plasmas and crystals, and a discussion of the perspectives for the far future of accelerator-based particle physics. This paper largely follows a previous study [1] and the presentation given at the ICHEP'2016 conference in Chicago [2].
11. PHENIX CDR update: An experiment to be performed at the Brookhaven National Laboratory relativistic heavy ion collider. Revision
SciTech Connect
Not Available
1994-11-01
The PHENIX Conceptual Design Report Update (CDR Update) is intended for use together with the Conceptual Design Report (CDR). The CDR Update is a companion document to the CDR, and it describes the collaboration's progress since the CDR was submitted in January 1993. Therefore, this document concentrates on changes, refinements, and decisions that have been made over the past year. These documents together define the baseline PHENIX detector that the collaboration intends to build for operation at RHIC startup. In this chapter the current status of the detector and its motivation are briefly described. In Chapters 2 and 3 the detector and the physics performance are more fully developed. In Chapters 4 through 13 the details of the present design status, the technology choices, and the construction costs and schedules are presented. The physics goals of the PHENIX collaboration have remained exactly as they were described in the CDR. Primary among these is the detection of a new phase of matter, the quark-gluon plasma (QGP), and the measurement of its properties. The PHENIX experiment will measure many of the best potential QGP signatures to see if any or all of these physics variables show anomalies simultaneously due to the formation of the QGP.
12. PROSPECTS FOR COLLIDERS AND COLLIDER PHYSICS TO THE 1 PEV ENERGY SCALE
SciTech Connect
KING,B.J.
2000-05-05
A review is given of the prospects for future colliders and collider physics at the energy frontier. A proof-of-plausibility scenario is presented for maximizing progress in elementary particle physics by extending the energy reach of hadron and lepton colliders as quickly and economically as might be technically and financially feasible. The scenario comprises 5 colliders beyond the LHC--one each of e{sup +}e{sup {minus}} and hadron colliders and three {mu}{sup +}{mu}{sup {minus}} colliders--and is able to hold to the historical rate of progress in the log-energy reach of hadron and lepton colliders, reaching the 1 PeV constituent mass scale by the early 2040's. The technical and fiscal requirements for the feasibility of the scenario are assessed and relevant long-term R and D projects are identified. Considerations of both cost and logistics seem to strongly favor housing most or all of the colliders in the scenario in a new world high energy physics laboratory.
13. Misremembering Past Affect Predicts Adolescents' Future Affective Experience During Exercise.
PubMed
Karnaze, Melissa M; Levine, Linda J; Schneider, Margaret
2017-09-01
Increasing physical activity among adolescents is a public health priority. Because people are motivated to engage in activities that make them feel good, this study examined predictors of adolescents' feelings during exercise. During the 1st semester of the school year, we assessed 6th-grade students' (N = 136) cognitive appraisals of the importance of exercise. Participants also reported their affect during a cardiovascular fitness test and recalled their affect during the fitness test later that semester. During the 2nd semester, the same participants rated their affect during a moderate-intensity exercise task. Affect reported during the moderate-intensity exercise task was predicted by cognitive appraisals of the importance of exercise and by misremembering affect during the fitness test as more positive than it actually was. This memory bias mediated the association between appraising exercise as important and experiencing a positive change in affect during the moderate-intensity exercise task. These findings highlight the roles of both cognitive appraisals and memory as factors that may influence affect during exercise. Future work should explore whether affect during exercise can be modified by targeting appraisals and memories related to exercise experiences.
14. Solar cell experiments for space: past, present and future
Hoheisel, R.; Messenger, S. R.; Lumb, M. P.; Gonzalez, M.; Bailey, C. G.; Scheiman, D. A.; Maximenko, S.; Jenkins, P. P.; Walters, R. J.
2013-03-01
Since the early beginnings of the space age in the 1950s, solar cells have been considered as the primary choice for long term electrical power generation of satellites and space systems. This is mainly due to their high power/mass ratio and the good scalability of solar modules according to the power requirements of a space mission. During the last decades, detailed solar cell material studies including the non-trivial interaction with high-energy space particles have led to continuous and significant improvements in device efficiency. This allowed the powering of advanced space systems like the International Space Station, rovers on the Martian surface as well as satellites which have helped to understand the universe and our planet. It is noteworthy that in addition to their success in space, these photovoltaic technologies have also broken ground for the application of photovoltaic systems in terrestrial systems. This paper discusses the development of space solar cells, gives insight into related experiments like the analysis of the interaction with space particles and provides an overview on challenges and requirements for future space missions.
15. Solar model uncertainties, MSW analysis, and future solar neutrino experiments
Hata, Naoya; Langacker, Paul
1994-07-01
Various theoretical uncertainties in the standard solar model and in the Mikheyev-Smirnov-Wolfenstein (MSW) analysis are discussed. It is shown that two methods give consistent estimations of the solar neutrino flux uncertainties: (a) a simple parametrization of the uncertainties using the core temperature and the nuclear production cross sections; (b) the Monte Carlo method of Bahcall and Ulrich. In the MSW analysis, we emphasize proper treatments of correlations of theoretical uncertainties between flux components and between different detectors, the Earth effect, and multiple solutions in a combined χ2 procedure. In particular the large-angle solution of the combined observation is allowed at 95% C.L. only when the theoretical uncertainties are included. If their correlations were ignored, the region would be overestimated. The MSW solutions for various standard and nonstandard solar models are also shown. The MSW predictions of the global solutions for the future solar neutrino experiments are given, emphasizing the measurement of the energy spectrum and the day-night effect in the Sudbury Neutrino Observatory and Super-Kamiokande to distinguish the two solutions.
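For orientation, the combined χ2 with correlated theoretical uncertainties that such an analysis relies on can be written in the generic form below (an illustrative textbook expression in LaTeX, not the specific covariance matrix used by Hata and Langacker):

\[
\chi^2(\Delta m^2, \sin^2 2\theta) =
  \sum_{i,j}\Big[R_i^{\mathrm{exp}} - R_i^{\mathrm{MSW}}\Big]\,(V^{-1})_{ij}\,\Big[R_j^{\mathrm{exp}} - R_j^{\mathrm{MSW}}\Big],
\qquad
V_{ij} = \delta_{ij}\,\sigma_{i,\mathrm{exp}}^{2} + V_{ij}^{\mathrm{theo}}
\]

Here R_i is the measured or predicted rate in detector i, and V^theo carries the correlated flux and cross-section uncertainties between detectors; dropping the off-diagonal terms is what leads to the overestimated allowed regions mentioned above.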
16. Observational Definition of Future AGN Echo-Mapping Experiments
NASA Technical Reports Server (NTRS)
Collier, Stefan; Peterson, Bradley M.; Horne, Keith
2001-01-01
We describe numerical simulations we have begun in order to determine the observational requirements for future echo-mapping experiments. We focus on two particular problems: (1) determination of the structure and kinematics of the broad-line region through emission-line reverberation mapping, and (2) detection of interband continuum lags that may be used as a probe of the continuum source, presumably a temperature-stratified accretion disk. Our preliminary results suggest the broad-line region can be reverberation-mapped to good precision with spectra of signal-to-noise ratio per pixel S/N approx. = 30, time resolution (Delta)t approx. = 0.1 day, and duration of about 60 days (which is a factor of three larger than the longest time scale in the input models); data that meet these requirements do not yet exist. We also find that interband continuum lags of approx. greater than 0.5 days can be detected at approx. greater than 95% confidence with at least daily observations for about 6 weeks, or rather more easily and definitively with shorter programs undertaken with satellite-based observatories. The results of these simulations show that significant steps forward in multiwavelength monitoring will almost certainly require dedicated facilities.
17. Positive climate feedback under future climate implied by multifactor experiment
Beier, C.; van der Linden, L.; Ibrom, A.; Larsen, K. S.; Ambus, P.; Climaite Scientific Team
2011-12-01
Results after 2 years of a "full factor" climate change experiment in a semi-natural shrubland ecosystem within the CLIMAITE project suggest that all three climate change factors (warming, drought and elevated CO2) reduced the carbon sink strength of the ecosystem. In particular, elevated CO2 stimulated the carbon loss from the ecosystem, leading to a significant positive climate feedback. A fundamental question related to climate change concerns the overall biosphere-atmosphere feedback. Will terrestrial ecosystems mitigate climate change through increased plant-derived uptake of CO2, or will they accelerate climate change through increased emission of CO2 from decomposition of organic matter? This fundamental question is key to understanding and predicting future climate change and the consequences for the globe. However, our knowledge in this field is still limited and experimental data are generally missing. The CLIMAITE experiment exposed a semi-natural Danish heathland ecosystem to elevated atmospheric carbon dioxide (CO2 - 510 ppm), warming (+1 °C), and extended summer drought (4-6 week precipitation removal) in all combinations to simulate a realistic climate scenario in Denmark in 2075. In total, the experiment provides a full-factorial design with 6 replicates of all eight combinations of D, T and CO2 and an untreated control for reference (A), i.e. N = 48. Details on the experimental setup are given by Mikkelsen et al. (2008). Generally, single factor treatments (i.e. CO2, warming or drought treatments alone) showed effects often in accordance with previous single factor studies, while, more interestingly, multifactor treatments often interacted, generally leading to relatively small net effects of the full factor combined treatments relative to the control (Larsen et al., 2011). Warming and drought both reduced carbon uptake and stimulated carbon emissions slightly, leading to a small and additive reduction in the carbon sink strength by these factors
18. Design, fabrication and characterization of multi-guard-ring furnished p+n-n+ silicon strip detectors for future HEP experiments
Lalwani, Kavita; Jain, Geetika; Dalal, Ranjeet; Ranjan, Kirti; Bhardwaj, Ashutosh
2016-07-01
Si detectors, in various configurations (strips and pixels), have been playing a key role in High Energy Physics (HEP) experiments due to their excellent vertexing and high precision tracking information. In future HEP experiments like the upgrade of the Compact Muon Solenoid experiment (CMS) at the Large Hadron Collider (LHC), CERN and the proposed International Linear Collider (ILC), the Si tracking detectors will be operated in a very harsh radiation environment, which leads to both surface and bulk damage in Si detectors, which in turn changes their electrical properties, i.e. change in the full depletion voltage, increase in the leakage current and decrease in the charge collection efficiency. In order to achieve the long term durability of Si detectors in future HEP experiments, it is required to operate these detectors at very high reverse biases, beyond the full depletion voltage, thus requiring higher detector breakdown voltage. Delhi University (DU) is involved in the design, fabrication and characterization of multi-guard-ring furnished ac-coupled, single sided, p+n-n+ Si strip detectors for future HEP experiments. The design has been optimized using a two-dimensional numerical device simulation program (TCAD-Silvaco). The Si strip detectors are fabricated with an eight-layer mask process using the planar fabrication technology by Bharat Electronic Lab (BEL), India. Further, an electrical characterization set-up has been established at DU to ensure the quality performance of the fabricated Si strip detectors and test structures. In this work, measurement results on non-irradiated Si strip detectors and test structures with multi-guard-rings using Current-Voltage (IV) and Capacitance-Voltage (CV) characterization set-ups are discussed. The effects of various design parameters, for example guard-ring spacing, number of guard-rings and metal overhang, on the breakdown voltage of test structures have been studied.
19. Polarized proton collider at RHIC
Alekseev, I.; Allgower, C.; Bai, M.; Batygin, Y.; Bozano, L.; Brown, K.; Bunce, G.; Cameron, P.; Courant, E.; Erin, S.; Escallier, J.; Fischer, W.; Gupta, R.; Hatanaka, K.; Huang, H.; Imai, K.; Ishihara, M.; Jain, A.; Lehrach, A.; Kanavets, V.; Katayama, T.; Kawaguchi, T.; Kelly, E.; Kurita, K.; Lee, S. Y.; Luccio, A.; MacKay, W. W.; Mahler, G.; Makdisi, Y.; Mariam, F.; McGahern, W.; Morgan, G.; Muratore, J.; Okamura, M.; Peggs, S.; Pilat, F.; Ptitsin, V.; Ratner, L.; Roser, T.; Saito, N.; Satoh, H.; Shatunov, Y.; Spinka, H.; Syphers, M.; Tepikian, S.; Tominaka, T.; Tsoupas, N.; Underwood, D.; Vasiliev, A.; Wanderer, P.; Willen, E.; Wu, H.; Yokosawa, A.; Zelenski, A. N.
2003-03-01
In addition to heavy ion collisions (RHIC Design Manual, Brookhaven National Laboratory), RHIC will also collide intense beams of polarized protons (I. Alekseev, et al., Design Manual Polarized Proton Collider at RHIC, Brookhaven National Laboratory, 1998 [2]), reaching transverse energies where the protons scatter as beams of polarized quarks and gluons. The study of high energy polarized proton beams has been a long term part of the program at BNL with the development of polarized beams in the Booster and AGS rings for fixed target experiments. We have extended this capability to the RHIC machine. In this paper we describe the design and methods for achieving collisions of both longitudinal and transverse polarized protons in RHIC at energies up to √s = 500 GeV.
20. Experiment and Simulations with Nearly Equal Horizontal and Vertical Focusing Functions: Single and Colliding Beam Results from the Cornell Electron Storage Ring
Bagley, Peter Paul
1995-01-01
For colliding beam particle accelerators, the dynamics of the beam-beam interaction are one limit on the luminosity or event rate. Simulations of the beam-beam interaction have suggested that round beams (equal horizontal and vertical emittances and beta^{*}) could produce saturated tune shifts of about 0.100, much larger than those predicted for flat beams (horizontal emittance and beta^{*} much larger than the vertical). This experiment was designed to test round beams and had a single interaction point at the North Interaction Region or NIP, with nearly zero horizontal dispersion and beta^{*}'s of about 25 cm. In early 1990 we used about 140 hours of machine time. Beginning with flat beams (horizontal emittance much larger than the vertical emittance), we achieved saturated vertical tune shift parameters of about 0.045, very high for CESR at the time, but much smaller than the 0.080 predicted by the simulations for this case. During this flat beam work, we realized we had several experimental problems and halted the experiment without attempting the round beam work. Our separation scheme for the South Interaction Region or SIP produced different horizontal emittances and damping times for the electrons and positrons, and so we reduced the separation in the SIP until we were concerned about the near-miss beam crossing there. Later analysis of orbit measurements also showed small but important horizontal separations at the NIP. We've used a beam-beam simulation to understand the effects that each of these problems has on the beam-beam dynamics. Using both an analytic formalism for the effects of resonances on single particles and several diagnostics to look at the simulation results for single particles, we've developed some understanding of why the simulations give the results they do and which resonances are important. We believe "dirt" effects, rather than fundamental limitations, set our experimental tune shift limit and that the nearly equal beta
1. J. J. Sakurai Prize for Theoretical Particle Physics Talk: Collider Physics: Yesterday, Today and Tomorrow
Eichten, Estia
2011-04-01
More than a quarter century ago, theoretical issues with the Standard Model scalar boson sector inspired theorists to develop alternative models of electroweak symmetry breaking. The goal of the EHLQ study of hadron collider physics was to help determine the basic parameters of a supercollider that could distinguish these alternatives. Now we await data from the CMS and ATLAS experiments at CERN's Large Hadron Collider to solve this mystery. Does the Standard Model survive or, as theorists generally expect, does new physics appear (Strong Dynamics, SUSY, Extra Dimensions,...)? Even well into the LHC era it is likely that questions about the origin of fermion mass and mixings will remain and new physics will bring new puzzles. This time, the associated new scales are unknown. The opportunity to address new physics at a future multi-TeV lepton collider is briefly addressed.
2. Photon Collider Physics with Real Photon Beams
SciTech Connect
Gronberg, J; Asztalos, S
2005-11-03
Photon-photon interactions have been an important probe into fundamental particle physics. Until recently, the only way to produce photon-photon collisions was parasitically in the collision of charged particles. Recent advances in short-pulse laser technology have made it possible to consider producing high intensity, tightly focused beams of real photons through Compton scattering. A linear e{sup +}e{sup -} collider could thus be transformed into a photon-photon collider with the addition of high power lasers. In this paper they show that it is possible to make a competitive photon-photon collider experiment using the currently mothballed Stanford Linear Collider. This would produce photon-photon collisions in the GeV energy range which would allow the discovery and study of exotic heavy mesons with spin states of zero and two.
3. Linear collider: a preview
SciTech Connect
Wiedemann, H.
1981-11-01
Since no linear colliders have been built yet it is difficult to know at what energy the linear cost scaling of linear colliders drops below the quadratic scaling of storage rings. There is, however, no doubt that a linear collider facility for a center of mass energy above say 500 GeV is significantly cheaper than an equivalent storage ring. In order to make the linear collider principle feasible at very high energies a number of problems have to be solved. There are two kinds of problems: one which is related to the feasibility of the principle and the other kind of problems is associated with minimizing the cost of constructing and operating such a facility. This lecture series describes the problems and possible solutions. Since the real test of a principle requires the construction of a prototype I will in the last chapter describe the SLC project at the Stanford Linear Accelerator Center.
4. Colliding with a crunching bubble
SciTech Connect
Freivogel, Ben; Freivogel, Ben; Horowitz, Gary T.; Shenker, Stephen
2007-03-26
In the context of eternal inflation we discuss the fate of Lambda = 0 bubbles when they collide with Lambda < 0 crunching bubbles. When the Lambda = 0 bubble is supersymmetric, it is not completely destroyed by collisions. If the domain wall separating the bubbles has higher tension than the BPS bound, it is expelled from the Lambda = 0 bubble and does not alter its long time behavior. If the domain wall saturates the BPS bound, then it stays inside the Lambda = 0 bubble and removes a finite fraction of future infinity. In this case, the crunch singularity is hidden behind the horizon of a stable hyperbolic black hole.
5. Photon collider at TESLA
Telnov, Valery
2001-10-01
High energy photon colliders (γγ, γe) based on backward Compton scattering of laser light are a very natural addition to e+e- linear colliders. In this report, we consider this option for the TESLA project. A recent study has shown that the horizontal emittance in the TESLA damping ring can be further decreased by a factor of four. In this case, the γγ luminosity in the high energy part of the spectrum can reach about (1/3) L_{e+e-}. Typical cross-sections of interesting processes in γγ collisions are higher than those in e+e- collisions by about one order of magnitude, so the number of events in γγ collisions will be more than that in e+e- collisions. Photon colliders can, certainly, give additional information and they are the best for the study of many phenomena. The main question is now the technical feasibility. The key new element in photon colliders is a very powerful laser system. An external optical cavity is a promising approach for the TESLA project. A free electron laser is another option. However, a more straightforward solution is "an optical storage ring (optical trap)" with a diode pumped solid state laser injector which is today technically feasible. This paper briefly reviews the status of a photon collider based on the linear collider TESLA, its possible parameters and existing problems.
6. Crabbing System for an Electron-Ion Collider
Castilla, Alejandro
As high energy and nuclear physicists continue to push further the boundaries of knowledge using colliders, there is an imperative need, not only to increase the colliding beams' energies, but also to improve the accuracy of the experiments, and to collect a large quantity of events with good statistical sensitivity. To achieve the latter, it is necessary to collect more data by increasing the rate at which these processes are being produced and detected in the machine. This rate of events depends directly on the machine's luminosity. The luminosity itself is proportional to the frequency at which the beams are being delivered, the number of particles in each beam, and inversely proportional to the cross-sectional size of the colliding beams. There are several approaches that can be considered to increase the event statistics in a collider other than increasing the luminosity, such as running the experiments for a longer time. However, this also elevates the operation expenses, while increasing the frequency at which the beams are delivered implies strong physical changes along the accelerator and the detectors. Therefore, it is preferred to increase the beam intensities and reduce the beams' cross-sectional areas to achieve these higher luminosities. In the case where the goal is to push the limits, sometimes even beyond the machine's design parameters, one must develop a detailed High Luminosity Scheme. Any high luminosity scheme on a modern collider considers--in one of its versions--the use of crab cavities to correct the geometrical reduction of the luminosity due to the beams' crossing angle. In this dissertation, we present the design and testing of a proof-of-principle compact superconducting crab cavity, at 750 MHz, for the future electron-ion collider, currently under design at Jefferson Lab. In addition to the design and validation of the cavity prototype, we present the analysis of the first order beam dynamics and the integration of the crabbing
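As a hedged illustration of the proportionalities described above (generic accelerator-physics expressions, not parameters of the Jefferson Lab electron-ion collider design), the luminosity and the geometric reduction that crab cavities are meant to recover can be written as:

\[
\mathcal{L} = \frac{f_{\mathrm{coll}}\, N_1 N_2}{4\pi\,\sigma_x \sigma_y}\, R,
\qquad
R \simeq \left[\,1 + \left(\frac{\sigma_z}{\sigma_x}\tan\frac{\theta_c}{2}\right)^{2}\right]^{-1/2}
\]

Here f_coll is the bunch collision frequency, N_1 and N_2 are the bunch populations, sigma_x,y,z are the transverse and longitudinal beam sizes, and theta_c is the full crossing angle; crab cavities tilt the bunches so that they collide effectively head-on and R approaches 1.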
7. Tau anomalous magnetic moment in γγ colliders
Peressutti, Javier; Sampayo, Oscar A.
2012-08-01
We investigate the possibility of setting model independent limits for a nonstandard anomalous magnetic moment a_τ^NP of the tau lepton, in future γγ colliders based on Compton backscattering. For a hypothetical collider we find that, at various levels of confidence, the limits for a_τ^NP could be improved, compared to previous studies based on LEP1, LEP2 and SLD data. We show the results for a realistic range of the center of mass energy of the e+e- collider. As a more direct application, we also present the results of the simulation for the photon collider at the TESLA project.
8. When Rubble Piles Collide...
Leinhardt, Z. M.; Richardson, D. C.; Quinn, T.
1999-01-01
There is increasing evidence that many or most km-sized bodies in the Solar System may be rubble piles, that is, gravitationally bound aggregates of material susceptible to disruption or distortion by planetary tides (Richardson, Bottke, & Love 1998, Icarus 134, 47). If this is true, then collisions may occur in free space between rubble piles. Here we present preliminary results from a project to map the parameter space of rubble-pile collisions. The results will assist in parameterization of collision outcomes for simulations of Solar System formation and may give insight into scaling laws for catastrophic disruption. We use a direct numerical method (Richardson, Quinn, Stadel, & Lake 1998, submitted) to evolve the particle positions and velocities under the constraints of gravity and physical collisions. We test the dependence of the collision outcomes on the impact speed and impact parameter, as well as the spin and size of the colliding bodies. We use both spheroidal and ellipsoidal shapes, the former as a control and the latter as a more representative model of real bodies. Speeds are kept low so that the maximum strain on the component material does not exceed the crushing strength. This is appropriate for dynamically cool systems, such as in the primordial disk during the early stage of planet formation or possibly in the present-day classical Kuiper Belt. We compare our results to analytic estimates and to stellar system collision models. Other parameters, such as the coefficient of restitution (dissipation), bulk density, and particle resolution will be investigated systematically in future work.
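As a minimal sketch of the kind of direct integration described above (hypothetical Python, not the actual code of Richardson, Quinn, Stadel, & Lake; the gravitational constant, softening length and time step are placeholders, and the physical-collision handling with a coefficient of restitution is omitted), a kick-drift-kick leapfrog step for a small self-gravitating particle set could look like:

import numpy as np

G = 6.674e-11  # gravitational constant in SI units (rubble-pile codes often use scaled units instead)

def accelerations(pos, mass, soft=1.0):
    # Pairwise softened gravitational accelerations for N particles (O(N^2) direct sum, no tree).
    acc = np.zeros_like(pos)
    for i in range(len(mass)):
        d = pos - pos[i]                       # vectors from particle i to every particle
        r2 = np.sum(d * d, axis=1) + soft**2   # softened squared distances
        r2[i] = np.inf                         # exclude self-interaction
        acc[i] = np.sum((G * mass / r2**1.5)[:, None] * d, axis=0)
    return acc

def leapfrog_step(pos, vel, mass, dt):
    # One second-order kick-drift-kick step; a real rubble-pile code would also detect
    # particle overlaps here and resolve them with a coefficient of restitution.
    acc = accelerations(pos, mass)
    vel_half = vel + 0.5 * dt * acc
    pos_new = pos + dt * vel_half
    vel_new = vel_half + 0.5 * dt * accelerations(pos_new, mass)
    return pos_new, vel_new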
9. Experimental Approaches at Linear Colliders
SciTech Connect
Jaros, John A
2002-02-13
Precision measurements have played a vital role in our understanding of elementary particle physics. Experiments performed using e{sup +}e{sup -} collisions have contributed an essential part. Recently, the precision measurements at LEP and SLC have probed the standard model at the quantum level and severely constrained the mass of the Higgs boson [1]. Coupled with the limits on the Higgs mass from direct searches [2], this enables the mass to be constrained to be in the range 115-205 GeV. Developments in accelerator R and D have matured to the point where one could contemplate construction of a linear collider with initial energy in the 500 GeV range and a credible upgrade path to {approx} 1 TeV. Now is therefore the correct time to critically evaluate the case for such a facility. The Working Group E3, Experimental Approaches at Linear Colliders, was encouraged to make this evaluation. The group was charged with examining critically the physics case for a Linear Collider (LC) of energy of order 1 TeV as well as the cases for higher energy machines, assessing the performance requirements and exploring the viability of several special options. In addition it was asked to identify the critical areas where R and D is required (the complete text of the charge can be found in the Appendix). In order to address this, the group was organized into subgroups, each of which was given a specific task. Three main groups were assigned to the TeV-class Machines, Multi-TeV Machines and Detector Issues. The central activity of our working group was the exploration of TeV class machines, since they are being considered as the next major initiative in high energy physics. We have considered the physics potential of these machines, the special options that could be added to the collider after its initial running, and addressed a number of important questions. Several physics scenarios were suggested in order to benchmark the physics reach of the linear collider and persons were
10. Comparing Tsallis and Boltzmann temperatures from relativistic heavy ion collider and large hadron collider heavy-ion data
Gao, Y.-Q.; Liu, F.-H.
2016-03-01
The transverse momentum spectra of charged particles produced in Au + Au collisions at the relativistic heavy ion collider and in Pb + Pb collisions at the large hadron collider with different centrality intervals are described by the multisource thermal model, which is based on different statistical distributions for a single source. Each source in the present work is described by the Tsallis distribution and the Boltzmann distribution, respectively. Then, the interacting system is described by the (two-component) Tsallis distribution and the (two-component) Boltzmann distribution, respectively. The results calculated by the two distributions are in agreement with the experimental data of the Solenoidal Tracker At Relativistic heavy ion collider, Pioneering High Energy Nuclear Interaction eXperiment, and A Large Ion Collider Experiment Collaborations. The effective temperature parameters extracted from the two distributions in describing the heavy-ion data at the relativistic heavy ion collider and the large hadron collider show a linear correlation.
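For orientation, typical single-source spectral forms used in such analyses are sketched below in LaTeX; normalizations and exact conventions vary between papers, so these are illustrative rather than the authors' exact expressions:

\[
\frac{dN}{dp_T} \propto p_T\, m_T\, e^{-m_T/T}
\quad \text{(Boltzmann)},
\qquad
\frac{dN}{dp_T} \propto p_T\, m_T \left[\,1 + (q-1)\,\frac{m_T}{T}\right]^{-q/(q-1)}
\quad \text{(Tsallis)},
\qquad
m_T = \sqrt{p_T^{2} + m_0^{2}}
\]

The Tsallis form reduces to the Boltzmann form as q → 1, and the fitted T is the effective temperature whose linear correlation between the two descriptions is reported above.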
11. Intercultural Preparation for Future Mobile Students: A Pedagogical Experience
ERIC Educational Resources Information Center
Beaven, Ana; Golubeva, Irina
2016-01-01
Higher education (HE) student mobility offers the opportunity to acquire, among other things, intercultural experience. Nevertheless, it is crucial to prepare students and give them the tools to reflect on their experiences before, during and after study abroad. In this pedagogical paper, we present and discuss "Perceptions of self and…
13. LINEAR COLLIDER PHYSICS RESOURCE BOOK FOR SNOWMASS 2001.
SciTech Connect
ABE,T.; DAWSON,S.; HEINEMEYER,S.; MARCIANO,W.; PAIGE,F.; TURCOT,A.S.; ET AL
2001-05-03
The American particle physics community can look forward to a well-conceived and vital program of experimentation for the next ten years, using both colliders and fixed target beams to study a wide variety of pressing questions. Beyond 2010, these programs will be reaching the end of their expected lives. The CERN LHC will provide an experimental program of the first importance. But beyond the LHC, the American community needs a coherent plan. The Snowmass 2001 Workshop and the deliberations of the HEPAP subpanel offer a rare opportunity to engage the full community in planning our future for the next decade or more. A major accelerator project requires a decade from the beginning of an engineering design to the receipt of the first data. So it is now time to decide whether to begin a new accelerator project that will operate in the years soon after 2010. We believe that the world high-energy physics community needs such a project. With the great promise of discovery in physics at the next energy scale, and with the opportunity for the uncovering of profound insights, we cannot allow our field to contract to a single experimental program at a single laboratory in the world. We believe that an e{sup +}e{sup {minus}} linear collider is an excellent choice for the next major project in high-energy physics. Applying experimental techniques very different from those used at hadron colliders, an e{sup +}e{sup {minus}} linear collider will allow us to build on the discoveries made at the Tevatron and the LHC, and to add a level of precision and clarity that will be necessary to understand the physics of the next energy scale. It is not necessary to anticipate specific results from the hadron collider programs to argue for constructing an e{sup +}e{sup {minus}} linear collider; in any scenario that is now discussed, physics will benefit from the new information that e{sup +}e{sup {minus}} experiments can provide.
14. Linear Collider Physics Resource Book for Snowmass 2001
SciTech Connect
Peskin, Michael E
2001-06-05
The American particle physics community can look forward to a well-conceived and vital program of experimentation for the next ten years, using both colliders and fixed target beams to study a wide variety of pressing questions. Beyond 2010, these programs will be reaching the end of their expected lives. The CERN LHC will provide an experimental program of the first importance. But beyond the LHC, the American community needs a coherent plan. The Snowmass 2001 Workshop and the deliberations of the HEPAP subpanel offer a rare opportunity to engage the full community in planning our future for the next decade or more. A major accelerator project requires a decade from the beginning of an engineering design to the receipt of the first data. So it is now time to decide whether to begin a new accelerator project that will operate in the years soon after 2010. We believe that the world high-energy physics community needs such a project. With the great promise of discovery in physics at the next energy scale, and with the opportunity for the uncovering of profound insights, we cannot allow our field to contract to a single experimental program at a single laboratory in the world. We believe that an e{sup +}e{sup -} linear collider is an excellent choice for the next major project in high-energy physics. Applying experimental techniques very different from those used at hadron colliders, an e{sup +}e{sup -} linear collider will allow us to build on the discoveries made at the Tevatron and the LHC, and to add a level of precision and clarity that will be necessary to understand the physics of the next energy scale. It is not necessary to anticipate specific results from the hadron collider programs to argue for constructing an e{sup +}e{sup -} linear collider; in any scenario that is now discussed, physics will benefit from the new information that e{sup +}e{sup -} experiments can provide.
15. Top quark studies at hadron colliders
SciTech Connect
Sinervo, P.K.
1997-01-01
The techniques used to study top quarks at hadron colliders are presented. The analyses that discovered the top quark are described, with emphasis on the techniques used to tag b quark jets in candidate events. The most recent measurements of top quark properties by the CDF and D0 Collaborations are reviewed, including the top quark cross section, mass, branching fractions, and production properties. Future top quark studies at hadron colliders are discussed, and predictions for event yields and uncertainties in the measurements of top quark properties are presented.
16. Beam instrumentation for the Tevatron Collider
SciTech Connect
Moore, Ronald S.; Jansson, Andreas; Shiltsev, Vladimir; /Fermilab
2009-10-01
The Tevatron in Collider Run II (2001-present) is operating with six times more bunches and many times higher beam intensities and luminosities than in Run I (1992-1995). Beam diagnostics were crucial for the machine start-up and the never-ending luminosity upgrade campaign. We present the overall picture of the Tevatron diagnostics development for Run II, outline machine needs for new instrumentation, present several notable examples that led to Tevatron performance improvements, and discuss the lessons for future colliders.
17. Precision measurements of W and Z boson production and their decays to electrons at hadron colliders
SciTech Connect
Ehlers, Jans Hermann
2006-01-01
For many measurements at hadron colliders, such as cross sections and branching ratios, the uncertainty of the integrated luminosity is an important contribution to the error of the final result. In 1997, the ETH Zurich group proposed a new approach to determine the integrated luminosity via a counting measurement of the W and Z bosons through their decays to leptons. In this thesis, this proposal has been applied to real data as well as to simulation for a future experiment.
18. Robots in PSE&G's nuclear plants - experience and future projections
SciTech Connect
Roman, H.T.
1992-01-01
Since the cleanup at Three Mile Island Unit 2, utilities have used robots, specifically teleoperated devices, to save significant human exposure, reduce plant downtime, and improve plant operations. Early work has centered on plant inspection, surveillance, and monitoring tasks, with future efforts likely to be directed toward operation and maintenance tasks. Public Service Electric & Gas (PSE&G) Company has been a pioneer in the application of this technology, gaining worldwide recognition for its work. PSE&G's leadership role in this technology and its nationally recognized Applied Robotics Technology (ART) Facility have served as a model for the national and international utility industries. This paper very briefly explores the growth in utility robotic applications; discusses in detail PSE&G's use of robotic devices; examines the role of the ART Facility in PSE&G's success; and projects the potential role of robots in the power plant of the future.
19. Development work for a superconducting linear collider
NASA Technical Reports Server (NTRS)
Matheisen, Axel
1995-01-01
For future linear e(+)e(-) colliders in the TeV range several alternatives are under discussion. The TESLA approach is based on the advantages of superconductivity. High Q values of the accelerator structures give high efficiency for converting RF power into beam power. A low resonance frequency for the RF structures can be chosen to obtain a large number of electrons (positrons) per bunch. For a given luminosity the beam dimensions can be chosen conservatively, which leads to relaxed beam emittance and tolerances at the final focus. Each individual superconducting accelerator component (resonator cavity) of this linear collider has to deliver an energy gain of 25 MeV/m to the beam. Today s.c. resonators are in use at CEBAF/USA, at DESY/Germany, Darmstadt/Germany, KEK/Japan and CERN/Geneva. They show acceleration gradients between 5 MV/m and 10 MV/m. Encouraging experiments at CEA Saclay and Cornell University showed acceleration gradients of 20 MV/m and 25 MV/m in single and multicell structures. In an activity centered at DESY in Hamburg/Germany the TESLA collaboration is constructing a 500 MeV superconducting accelerator test facility (TTF) to demonstrate that a linear collider based on this technique can be built in a cost effective manner and that the necessary acceleration gradients of more than 15 MeV/m can be reached reproducibly. The test facility built at DESY covers an area of 3000 m2 and is divided into 3 major activity areas: (1) The test linac, where the performance of the modular components with an electron beam passing the 40 m long acceleration section can be demonstrated. (2) The test area, where all individual resonators are tested before installation into a module. (3) The preparation and assembly area, where assembly of cavities and modules takes place. We report here on the design work to reach a reduction of costs compared to existing superconducting accelerator structures and on the facility set up to reach high acceleration gradients in
20. Development work for a superconducting linear collider
Matheisen, Axel
1995-04-01
For future linear e(+)e(-) colliders in the TeV range several alternatives are under discussion. The TESLA approach is based on the advantages of superconductivity. High Q values of the accelerator structures give high efficiency for converting RF power into beam power. A low resonance frequency for the RF structures can be chosen to obtain a large number of electrons (positrons) per bunch. For a given luminosity the beam dimensions can be chosen conservatively, which leads to relaxed beam emittance and tolerances at the final focus. Each individual superconducting accelerator component (resonator cavity) of this linear collider has to deliver an energy gain of 25 MeV/m to the beam. Today s.c. resonators are in use at CEBAF/USA, at DESY/Germany, Darmstadt/Germany, KEK/Japan and CERN/Geneva. They show acceleration gradients between 5 MV/m and 10 MV/m. Encouraging experiments at CEA Saclay and Cornell University showed acceleration gradients of 20 MV/m and 25 MV/m in single and multicell structures. In an activity centered at DESY in Hamburg/Germany the TESLA collaboration is constructing a 500 MeV superconducting accelerator test facility (TTF) to demonstrate that a linear collider based on this technique can be built in a cost effective manner and that the necessary acceleration gradients of more than 15 MeV/m can be reached reproducibly. The test facility built at DESY covers an area of 3000 m2 and is divided into 3 major activity areas: (1) The test linac, where the performance of the modular components with an electron beam passing the 40 m long acceleration section can be demonstrated. (2) The test area, where all individual resonators are tested before installation into a module. (3) The preparation and assembly area, where assembly of cavities and modules takes place. We report here on the design work to reach a reduction of costs compared to existing superconducting accelerator structures and on the facility set up to reach high acceleration gradients in
1. The Muon Collider
SciTech Connect
Zisman, Michael S.
2011-01-05
We describe the scientific motivation for a new type of accelerator, the muon collider. This accelerator would permit an energy-frontier scientific program and yet would fit on the site of an existing laboratory. Such a device is quite challenging, and requires a substantial R&D program. After describing the ingredients of the facility, the ongoing R&D activities of the Muon Accelerator Program are discussed. A possible U.S. scenario that could lead to a muon collider at Fermilab is briefly mentioned.
2. The Muon Collider
SciTech Connect
Zisman, Michael S
2010-05-17
We describe the scientific motivation for a new type of accelerator, the muon collider. This accelerator would permit an energy-frontier scientific program and yet would fit on the site of an existing laboratory. Such a device is quite challenging, and requires a substantial R&D program. After describing the ingredients of the facility, the ongoing R&D activities of the Muon Accelerator Program are discussed. A possible U.S. scenario that could lead to a muon collider at Fermilab is briefly mentioned.
3. Black Holes Collide
NASA Image and Video Library
2017-09-28
When two black holes collide, they release massive amounts of energy in the form of gravitational waves that last a fraction of a second and can be "heard" throughout the universe - if you have the right instruments. Today we learned that the #LIGO project heard the telltale chirp of black holes colliding, confirming a prediction of Einstein's General Theory of Relativity. NASA's LISA mission will look for direct evidence of gravitational waves. go.nasa.gov/23ZbqoE This video illustrates what that collision might look like.
4. Chronovisor - A Dream of the Future or Real Experiments?
Teodorani, M.
2006-10-01
This book, entirely dedicated to the legends concerning "chronovision", is divided into three main parts: a) discussion and criticism of the alleged experiments carried out by father Pellegrino Ernetti; b) in depth study of the "neutrino space theory" by father and physicist Luigi Borello; c) discussion and criticism concerning alleged experiments carried out in the field of chronovision in the past and in recent years, using several methods.
5. Get Real: Effects of Repeated Simulation and Emotion on the Perceived Plausibility of Future Experiences
ERIC Educational Resources Information Center
Szpunar, Karl K.; Schacter, Daniel L.
2013-01-01
People frequently imagine specific interpersonal experiences that might occur in their futures. The present study used a novel experimental paradigm to examine the influence of repeated simulation of future interpersonal experiences on subjective assessments of plausibility for positive, negative, and neutral events. The results demonstrate that…
6. Precipitation manipulation experiments--challenges and recommendations for the future.
PubMed
Beier, Claus; Beierkuhnlein, Carl; Wohlgemuth, Thomas; Penuelas, Josep; Emmett, Bridget; Körner, Christian; de Boeck, Hans; Christensen, Jens Hesselbjerg; Leuzinger, Sebastian; Janssens, Ivan A; Hansen, Karin
2012-08-01
Climatic changes, including altered precipitation regimes, will affect key ecosystem processes, such as plant productivity and biodiversity for many terrestrial ecosystems. Past and ongoing precipitation experiments have been conducted to quantify these potential changes. An analysis of these experiments indicates that they have provided important information on how water regulates ecosystem processes. However, they do not adequately represent global biomes nor forecasted precipitation scenarios and their potential contribution to advance our understanding of ecosystem responses to precipitation changes is therefore limited, as is their potential value for the development and testing of ecosystem models. This highlights the need for new precipitation experiments in biomes and ambient climatic conditions hitherto poorly studied applying relevant complex scenarios including changes in precipitation frequency and amplitude, seasonality, extremity and interactions with other global change drivers. A systematic and holistic approach to investigate how soil and plant community characteristics change with altered precipitation regimes and the consequent effects on ecosystem processes and functioning within these experiments will greatly increase their value to the climate change and ecosystem research communities. Experiments should specifically test how changes in precipitation leading to exceedance of biological thresholds affect ecosystem resilience and acclimation.
7. Designing a future Conditions Database based on LHC experience
Barberis, D.; Formica, A.; Gallas, E. J.; Govi, G.; Lehman Miotto, G.; Pfeiffer, A.
2015-12-01
Starting from the experience collected by the ATLAS and CMS experiments in handling condition data during the first LHC run, we present a proposal for a new generation of condition databases, which could be implemented by 2020. We will present the identified relevant data flows for condition data and underline the common use cases that lead to a joint effort for the development of a new system. Condition data is needed in any scientific experiment. It includes any ancillary data associated with primary data taking such as detector configuration, state or calibration or the environment in which the detector is operating. Condition data typically reside outside the primary data store for various reasons (size, complexity or availability) and are best accessed at the point of processing or analysis (including for Monte Carlo simulations). The ability of any experiment to produce correct and timely results depends on the complete and efficient availability of needed conditions for each stage of data handling. Therefore, any experiment needs a condition data architecture which can not only store conditions, but deliver the data efficiently, on demand, to a potentially diverse and geographically distributed set of clients. The architecture design should consider facilities to ease condition management and the monitoring of condition entry, access and usage.
8. Status and future of the tritium plasma experiment
SciTech Connect
Causey, R.A.; Buchenauer, D.; Taylor, D.; Harbin, W.; Anderl, B.
1995-10-01
The Tritium Plasma Experiment (TPE) has been recently upgraded and relocated at the Tritium System Test Assembly (TSTA) at Los Alamos National Laboratory. The first tritium plasma in the upgraded system was achieved on May 11, 1995. TPE is a unique facility devoted to experiments on the migration and retention of tritium in fusion reactor materials. This facility is now capable of delivering 100 to 200 eV tritons at a level of 1 A/cm{sup 2} to a 5 mm diameter sample, similar to that expected for the divertor of the International Thermonuclear Experimental Reactor (ITER). An aggressive research plan has been established, and experiments are expected to begin in June of 1995. 4 figs.
9. Traverse Planning Experiments for Future Planetary Surface Exploration
NASA Technical Reports Server (NTRS)
Hoffman, Stephen J.; Voels, Stephen A.; Mueller, Robert P.; Lee, Pascal C.
2012-01-01
The purpose of the investigation is to evaluate methodology and data requirements for a remotely-assisted robotic traverse of an extraterrestrial planetary surface to support a human exploration program, assess opportunities for in-transit science operations, and validate landing site survey and selection techniques during a planetary surface exploration mission analog demonstration at Haughton Crater on Devon Island, Nunavut, Canada. Additional objectives: 1) identify the quality of remote observation data sets (i.e., surface imagery from orbit) required for effective pre-traverse route planning and determine whether surface-level data (i.e., onboard robotic imagery or other sensor data) are required for a successful traverse, and whether additional surface-level data can improve traverse efficiency or probability of success (TRPF Experiment). 2) Evaluate the feasibility and techniques for conducting opportunistic science investigations during this type of traverse (OSP Experiment). 3) Assess the utility of a remotely-assisted robotic vehicle for landing site validation surveys (LSV Experiment).
10. The Next Linear Collider: NLC2001
SciTech Connect
D. Burke et al.
2002-01-14
Recent studies in elementary particle physics have made the need for an e{sup +}e{sup -} linear collider able to reach energies of 500 GeV and above with high luminosity more compelling than ever [1]. Observations and measurements completed in the last five years at the SLC (SLAC), LEP (CERN), and the Tevatron (FNAL) can be explained only by the existence of at least one particle or interaction that has not yet been directly observed in experiment. The Higgs boson of the Standard Model could be that particle. The data point strongly to a mass for the Higgs boson that is just beyond the reach of existing colliders. This brings great urgency and excitement to the potential for discovery at the upgraded Tevatron early in this decade, and almost assures that later experiments at the LHC will find new physics. But the next generation of experiments to be mounted by the world-wide particle physics community must not only find this new physics, they must find out what it is. These experiments must also define the next important threshold in energy. The need is to understand physics at the TeV energy scale as well as the physics at the 100-GeV energy scale is now understood. This will require both the LHC and a companion linear electron-positron collider. A first Zeroth-Order Design Report (ZDR) [2] for a second-generation electron-positron linear collider, the Next Linear Collider (NLC), was published five years ago. The NLC design is based on a high-frequency room-temperature rf accelerator. Its goal is exploration of elementary particle physics at the TeV center-of-mass energy, while learning how to design and build colliders at still higher energies. Many advances in accelerator technologies and improvements in the design of the NLC have been made since 1996. This Report is a brief update of the ZDR.
11. Present Status and Future Perspectives of the NEXT Experiment
DOE PAGES
Gómez Cadenas, J. J.; Álvarez, V.; Borges, F. I. G.; ...
2014-01-01
NEXT is an experiment dedicated to neutrinoless double beta decay searches in xenon. The detector is a TPC, holding 100 kg of high-pressure xenon enriched in the 136Xe isotope. It is under construction in the Laboratorio Subterráneo de Canfranc in Spain, and it will begin operations in 2015. The NEXT detector concept provides an energy resolution better than 1% FWHM and a topological signal that can be used to reduce the background. Furthermore, the NEXT technology can be extrapolated to a 1 ton-scale experiment.
12. EURECA mission control experience and messages for the future
NASA Technical Reports Server (NTRS)
Huebner, H.; Ferri, P.; Wimmer, W.
1994-01-01
EURECA is a retrievable space platform which can perform multi-disciplinary scientific and technological experiments in a Low Earth Orbit for a typical mission duration of six to twelve months. It is deployed and retrieved by the NASA Space Shuttle and is designed to support up to five flights. The first mission started at the end of July 1992 and was successfully completed with the retrieval in June 1993. The operations concept and the ground segment for the first EURECA mission are briefly introduced. The experiences in the preparation and the conduction of the mission from the flight control team point of view are described.
13. Future high precision experiments and new physics beyond Standard Model
SciTech Connect
Luo, Mingxing
1993-04-01
High precision (< 1%) electroweak experiments that have been done or are likely to be done in this decade are examined on the basis of Standard Model (SM) predictions of fourteen weak neutral current observables and fifteen W and Z properties to the one-loop level; the implications of the corresponding experimental measurements for various types of possible new physics that enter at the tree or loop level are investigated. Certain experiments appear to have special promise as probes of the new physics considered here.
14. Future high precision experiments and new physics beyond Standard Model
SciTech Connect
Luo, Mingxing.
1993-01-01
High precision (< 1%) electroweak experiments that have been done or are likely to be done in this decade are examined on the basis of Standard Model (SM) predictions of fourteen weak neutral current observables and fifteen W and Z properties to the one-loop level; the implications of the corresponding experimental measurements for various types of possible new physics that enter at the tree or loop level are investigated. Certain experiments appear to have special promise as probes of the new physics considered here.
15. Linear Collider Diagnostics
SciTech Connect
Ross, Marc
2000-05-17
Each major step toward higher energy particle accelerators relies on new technology. Linear colliders require beams of unprecedented brightness and stability. Instrumentation and control technology is the single most critical tool that enables linear colliders to extend the energy reach. In this paper the authors focus on the most challenging aspects of linear collider instrumentation systems. In the Next Linear Collider (NLC), high brightness multibunch e{sup +}/e{sup {minus}} beams, with I{sub {+-}} = 10{sup 12} particles/pulse and sigma{sub x,y} {approximately} 50 x 5 mu-m, originate in damping rings and are subsequently accelerated to several hundred GeV in 2 X-band 11,424 MHz linacs from which they emerge with typical sigma{sub x,y} {approximately} 7 x 1 mu-m. Following a high power collimation section the e{sup +}/e{sup {minus}} beams are focused to sigma{sub x,y} {approximately} 300 x 5 nm at the interaction point. In this paper they review the beam intensity, position and profile monitors (x,y,z), mechanical vibration sensing and stabilization systems, long baseline RF distribution systems and beam collimation hardware.
SciTech Connect
Pondrom, L.
1991-10-03
An introduction to the techniques of analysis of hadron collider events is presented in the context of the quark-parton model. Production and decay of W and Z intermediate vector bosons are used as examples. The structure of the Electroweak theory is outlined. Three simple FORTRAN programs are introduced, to illustrate Monte Carlo calculation techniques. 25 refs.
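The FORTRAN programs themselves are not reproduced here; as a generic illustration of the kind of Monte Carlo calculation technique referred to, the short sketch below estimates a toy "cross-section" by averaging a differential distribution over randomly sampled phase-space points. It is a minimal sketch under stated assumptions, not the paper's code.

```python
# Toy Monte Carlo integration: estimate sigma = integral of dsigma/dcos(theta)
# over cos(theta) in [-1, 1] for a (1 + cos^2 theta) shape, as in e+e- -> mu+mu-.
# Purely illustrative; the normalisation is arbitrary.
import random

def dsigma_dcostheta(c):
    return 1.0 + c * c   # arbitrary units

n = 100_000
total = 0.0
for _ in range(n):
    c = random.uniform(-1.0, 1.0)
    total += dsigma_dcostheta(c)

volume = 2.0                      # length of the cos(theta) interval
sigma_est = volume * total / n    # MC estimate of the integral
print(f"MC estimate: {sigma_est:.3f}  (exact: {2 + 2/3:.3f})")
```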
17. High energy colliders
SciTech Connect
Palmer, R.B.; Gallardo, J.C.
1997-02-01
The authors consider the high energy physics advantages, disadvantages and luminosity requirements of hadron (pp, pp̄), lepton (e⁺e⁻, μ⁺μ⁻) and photon-photon colliders. Technical problems in obtaining increased energy in each type of machine are presented. The machines' relative sizes are also discussed.
18. High luminosity particle colliders
SciTech Connect
Palmer, R.B.; Gallardo, J.C.
1997-03-01
The authors consider the high energy physics advantages, disadvantages and luminosity requirements of hadron (pp, pp̄), lepton (e⁺e⁻, μ⁺μ⁻) and photon-photon colliders. Technical problems in obtaining increased energy in each type of machine are presented. The machines' relative sizes are also discussed.
19. Developing Future Professionals: Influences of Literacy Coursework and Field Experiences.
ERIC Educational Resources Information Center
Harlin, Rebecca P.
1999-01-01
Describes changes in preservice teachers' images and perceptions of teaching and literacy as they engaged in collaborative learning experiences within the college classrooms and in the elementary classrooms working with cooperating teachers and young children. Finds changes of preservice teachers in language use, perceptions of children's…
20. Professional Learning between Past Experience and Future Work
ERIC Educational Resources Information Center
Weber, Kirsten
2010-01-01
This paper deals with the professionalization of human service work. It analyses learning processes and identity development in the emerging profession of child care with concrete examples from empirical research, based on a life history approach. It discusses examples of careers mainly based on students' life experience, pointing out that their…
1. Perceptions Regarding Supervised Experience Programs: Past Research and Future Direction.
ERIC Educational Resources Information Center
Barrick, R. Kirby; And Others
1991-01-01
A literature review found that (1) most school administrators, agricultural employers, and teachers value supervised occupational experience (SOE) programs; (2) agricultural teachers have primary responsibility for SOE but would like more released time; and (3) most teachers and administrators saw a need to expand the SOE concept and clientele.…
2. Tight aspect ratio tokamak experiments and prospects for the future
SciTech Connect
Sykes, A; Peng, Yueng Kay Martin
1995-01-01
The present status of experimental results from low aspect ratio tokamaks is described, together with plans for physics experiments at the mega-amp level. Further development of the concept, and its potential for a materials/component test facility or ultimately a fusion power plant, are indicated.
3. Ethno-Experiments: Creating Robust Inquiry and Futures
ERIC Educational Resources Information Center
Ramsey, Caroline
2007-01-01
This article introduces a practice-centred inquiry method called an "ethno-experiment". The method is built on a social constructionist understanding of practice as a social performance rather than as an individual's act. Additionally, it draws on Garfinkel's early ethnomethodological work and Marshall's self-reflective inquiry to construct a…
4. Accelerators, Colliders and Their Application
Wilson, E.
This document is part of Subvolume C 'Accelerators and Colliders' of Volume 21 'Elementary Particles' of Landolt-Börnstein - Group I 'Elementary Particles, Nuclei and Atoms'. It contains the Chapter '1 Accelerators, Colliders and Their Application' with the content:
5. Introductory Lectures on Collider Physics
Tait, Tim M. P.; Wang, Lian-Tao
2013-12-01
These are elementary lectures about collider physics. They are aimed at graduate students who have some background in computing Feynman diagrams and the Standard Model, but assume no particular sophistication with the physics of high energy colliders.
6. New technology for linear colliders
SciTech Connect
McIntyre, P.M.
1991-08-01
The purpose of this contract is to develop and evaluate new technology for future e⁺e⁻ linac colliders. TeV linac colliders will require major improvements in the performance of microwave power tubes: >100 MW/m peak power, ≈20 GHz frequency, and high efficiency. For the past three years we have been developing gigatron, a new design concept for microwave power tubes. It incorporates three key innovations: a gated field-emitter cathode, which produces a fully modulated electron beam directly into the vacuum; a ribbon beam geometry, which eliminates space charge and phase dispersion; and a traveling wave coupler, which provides optimum output coupling even over a wide ribbon beam. During the past year we have built prototypes of two cathode designs: a stripline edge-emitter array and a porous silicon dioxide cathode. A highlight of our results is the development and testing of the porous SiO₂ cathode. It delivers exceptional performance as a modulated electron source in general and for gigatron in particular. Its high emitter density and low work function accommodate higher tube gain, simpler cathode coupling, and higher peak power than any other technology. The protection of the active emitting surface by ≈2 μm of porous SiO₂ should provide for rugged operation in a tube environment.
7. Very large hadron collider (VLHC)
SciTech Connect
1998-09-01
A VLHC informal study group started to come together at Fermilab in the fall of 1995, and at the 1996 Snowmass Study the parameters of this machine took form. The VLHC as now conceived would be a 100 TeV hadron collider. It would use the Fermilab Main Injector (now nearing completion) to inject protons at 150 GeV into a new 3 TeV Booster and then into a superconducting pp collider ring producing 100 TeV c.m. interactions. A luminosity of ≈10³⁴ cm⁻²s⁻¹ is planned. Our plans were presented to the Subpanel on the Planning for the Future of US High-Energy Physics (the successor to the Drell committee), and in February 1998 their report stated: "The Subpanel recommends an expanded program of R&D on cost reduction strategies, enabling technologies, and accelerator physics issues for a VLHC. These efforts should be coordinated across laboratory and university groups with the aim of identifying design concepts for an economically and technically viable facility." The coordination has been started with the inclusion of physicists from Brookhaven National Laboratory (BNL), Lawrence Berkeley National Laboratory (LBNL), and Cornell University. Clearly, this collaboration must be expanded internationally as well as nationally. The phrase "economically and technically viable facility" presents the real challenge.
8. Shuttle flight pressure instrumentation: Experience and lessons for the future
NASA Technical Reports Server (NTRS)
Siemers, P. M., III; Bradley, P. F.; Wolf, H.; Flanagan, P. F.; Weilmuenster, K. J.; Kern, F. A.
1983-01-01
Flight data obtained from the Space Transportation System orbiter entries are processed and analyzed to assess the accuracy and performance of the Development Flight Instrumentation (DFI) pressure measurement system. Selected pressure measurements are compared with available wind tunnel and computational data and are further used to perform air data analyses using the Shuttle Entry Air Data System (SEADS) computation technique. The results are compared to air data from other sources. These comparisons isolate and demonstrate the effects of the various limitations of the DFI pressure measurement system. The effects of these limitations on orbiter performance analyses are addressed, and instrumentation modifications are recommended to improve the accuracy of similar flight data systems in the future.
9. Renovation of HEPnet-J for near-future experiments
Suzuki, Soh Y.; Yuasa, Fukuko; Nakamura, Tomoaki; Hara, Takanori
2015-12-01
Originally HEPnet-J had only one instance connected to the Internet, as the network connectivity provided by the campus networks of institutes in Japan was very limited, so the main purpose of HEPnet-J was providing enough connectivity for interactive use on domestic and international links funded by KEK. In the last 10 years, the domestic and international connectivity provided by NRENs has been dramatically improved and is now sufficient for manual transfer of typical skimmed data files. Therefore, HEPnet-J has many closed networks that connect domestic sites related to specific projects, in order to access them from computer farms in private networks at their home institutes. The rapid growth of data volume makes it impossible to apply the same model to new-generation experiments. As the tier structure for LHC computing sites has proved that a distributed computing model over collaboration sites is applicable to a huge-scale experiment, the external connectivity for international collaboration sites should be faster and secure. For example, the Belle II experiment at KEK will have many repositories in the U.S. and the EU. The expected throughput from KEK to the U.S. is about 20 Gbps, so it needs a bypass of slow security devices such as firewalls. Bypass lines for Belle II are now prepared and under testing. This article reports the brief history of HEPnet-J and recent changes for project-specific networks.
10. Initial performance studies of a general-purpose detector for multi-TeV physics at a 100 TeV pp collider
DOE PAGES
Chekanov, S. V.; Beydler, M.; Kotwal, A. V.; ...
2017-06-13
This paper describes simulations of detector response to multi-TeV physics at the Future Circular Collider (FCC-hh) or Super proton-proton Collider (SppC) which aim to collide proton beams with a centre-of-mass energy of 100 TeV. The unprecedented energy regime of these future experiments imposes new requirements on detector technologies which can be studied using the detailed geant4 simulations presented in this paper. The initial performance of a detector designed for physics studies at the FCC-hh or SppC experiments is described with an emphasis on measurements of single particles up to 33 TeV in transverse momentum. Furthermore, the granularity requirements for calorimetry are investigated using the two-particle spatial resolution achieved for hadron showers.
11. Initial Performance Studies of a General-Purpose Detector for Multi-TeV Physics at a 100 TeV pp Collider
SciTech Connect
Chekanov, S. V.; Beydler, M.; Kotwal, A. V.; Gray, L.; Sen, S.; Tran, N. V.; Yu, S. -S.; Zuzelski, J.
2016-12-21
This paper describes simulations of detector response to multi-TeV physics at the Future Circular Collider (FCC-hh) or Super proton-proton Collider (SppC) which aim to collide proton beams with a centre-of-mass energy of 100 TeV. The unprecedented energy regime of these future experiments imposes new requirements on detector technologies which can be studied using the detailed GEANT4 simulations presented in this paper. The initial performance of a detector designed for physics studies at the FCC-hh or SppC experiments is described with an emphasis on measurements of single particles up to 33 TeV in transverse momentum. The reconstruction of hadronic jets has also been studied in the transverse momentum range from 50 GeV to 26 TeV. The granularity requirements for calorimetry are investigated using the two-particle spatial resolution achieved for hadron showers.
12. Initial performance studies of a general-purpose detector for multi-TeV physics at a 100 TeV pp collider
Chekanov, S. V.; Beydler, M.; Kotwal, A. V.; Gray, L.; Sen, S.; Tran, N. V.; Yu, S.-S.; Zuzelski, J.
2017-06-01
This paper describes simulations of detector response to multi-TeV particles and jets at the Future Circular Collider (FCC-hh) or Super proton-proton Collider (SppC) which aim to collide proton beams with a centre-of-mass energy of 100 TeV. The unprecedented energy regime of these future experiments imposes new requirements on detector technologies which can be studied using the detailed GEANT4 simulations presented in this paper. The initial performance of a detector designed for physics studies at the FCC-hh or SppC experiments is described with an emphasis on measurements of single particles up to 33 TeV in transverse momentum. The reconstruction of hadronic jets has also been studied in the transverse momentum range from 50 GeV to 26 TeV. The granularity requirements for calorimetry are investigated using the two-particle spatial resolution achieved for hadron showers.
13. The Birth of Lepton Colliders in Italy and the United States
Paris, Elizabeth
2003-04-01
In 1960 the highest center-of-mass energies in particle physics were being achieved via proton synchrotrons utilizing stationary targets. However, efforts were already underway to challenge this hegemony. In addition to Soviet work in Novosibirsk, groups at Stanford University in California and at the Frascati National Laboratories near Rome each had begun original investigation towards one particular type of challenger: colliding beam storage rings. For the group in California, the accomplishment involved creating the potential for feasible experiments. The energetic advantages of the colliding beam configuration had long been accepted - together with its impossibility for realization. The builders of the Princeton-Stanford machine feel that creating usable beams and a reasonable reaction rate is what stood between this concept and its glorious future. For the European builders of AdA, however, the beauty emerges from recognizing the enormous potential inherent in electron-positron annihilations. At least as important for the rise of electron-positron colliders, though, is the role of both of these projects as cultural firsts -- as places where particular sets of physicists got their feet wet associating with beams and beam problems and with the many individuals who were addressing beam problems. The Princeton-Stanford Collider provided experience which its builders would use to move on, functioning as both a technological and political platform for creating what would eventually become SPEAR. For the Roman group, the pursuit of AdA encouraged investigation which applied equally well to their next machine, Adone.
14. Physics Case for the International Linear Collider
SciTech Connect
Fujii, Keisuke; Grojean, Christophe; Peskin, Michael E.; Barklow, Tim; Gao, Yuanning; Kanemura, Shinya; Kim, Hyungdo; List, Jenny; Nojiri, Mihoko; Perelstein, Maxim; Poeschl, Roman; Reuter, Juergen; Simon, Frank; Tanabe, Tomohiko; Yu, Jaehoon; Wells, James D.; Murayama, Hitoshi; Yamamoto, Hitoshi; /Tohoku U.
2015-06-23
We summarize the physics case for the International Linear Collider (ILC). We review the key motivations for the ILC presented in the literature, updating the projected measurement uncertainties for the ILC experiments in accord with the expected schedule of operation of the accelerator and the results of the most recent simulation studies.
15. GMOs: building the future on the basis of past experience.
PubMed
Reis, Luiz F L; Van Sluys, Marie-Anne; Garratt, Richard C; Pereira, Humberto M; Teixeira, Mauro M
2006-12-01
Biosafety of genetically modified organisms (GMOs) and their derivatives is still a major topic on the agenda of governments and societies worldwide. The aim of this review is to bring to light the data that supported the decision taken back in 1998, as an exercise to stimulate criticism from the scientific community for upcoming discussions and to avoid emotional and senseless arguments that could jeopardize future development in the field. It must be emphasized that Roundup Ready soybean is just one example of how biotechnology can bring in significant advances for society, not only through increased productivity, but also with beneficial environmental impact, thereby allowing more rational use of agricultural pesticides and improvement of soil conditions. The adoption of agricultural practices with higher yield will also allow better distribution of income among small farmers. New species of genetically modified plants will soon be available, and society should be capable of making decisions in an objective and well-informed manner, through collegiate bodies that are qualified in all aspects of biosafety and environmental impact.
16. GMO quantification: valuable experience and insights for the future.
PubMed
Milavec, Mojca; Dobnik, David; Yang, Litao; Zhang, Dabing; Gruden, Kristina; Zel, Jana
2014-10-01
Cultivation and marketing of genetically modified organisms (GMOs) have been unevenly adopted worldwide. To facilitate international trade and to provide information to consumers, labelling requirements have been set up in many countries. Quantitative real-time polymerase chain reaction (qPCR) is currently the method of choice for detection, identification and quantification of GMOs. This has been critically assessed and the requirements for the method performance have been set. Nevertheless, there are challenges that should still be highlighted, such as measuring the quantity and quality of DNA, and determining the qPCR efficiency, possible sequence mismatches, characteristics of taxon-specific genes and appropriate units of measurement, as these remain potential sources of measurement uncertainty. To overcome these problems and to cope with the continuous increase in the number and variety of GMOs, new approaches are needed. Statistical strategies of quantification have already been proposed and expanded with the development of digital PCR. The first attempts have been made to use new generation sequencing also for quantitative purposes, although accurate quantification of the contents of GMOs using this technology is still a challenge for the future, and especially for mixed samples. New approaches are needed also for the quantification of stacks, and for potential quantification of organisms produced by new plant breeding techniques.
17. Contracting for nurse education: nurse leader experiences and future visions.
PubMed
Moule, P
1999-02-01
The integration of nurse education into higher education establishments following Working for Patients, Working Paper 10 (DOH 1989a) has seen changes to the funding and delivery of nurse education. The introduction of contracting for education initiated a business culture which subsumed previous relationships, affecting collaborative partnerships and shared understanding. Discourse between the providers and purchasers of nurse education is vital to achieve proactive curriculum planning, which supports the development of nursing practitioners who are fit for award and fit for purpose. Research employed philosophical hermeneutics to guide the interviewing of seven nurse leaders within one region. Data analysis occurred within a hermeneutic circle and was refined using NUDIST. Two key themes were seen as impacting on the development of an effective educational strategy. Firstly, the development of collaborative working was thought to have been impeded by communication difficulties between the Trusts and higher education provider. Secondly, there was concern that curriculum developments would support the future evolution of nursing, acknowledging the professional issues impacting on nursing roles. The research findings suggest purchasers and providers of nurse education must move towards achieving mutual understanding and collaborate in developing a curriculum which will prepare nurses for practice and for award.
18. ALMA test interferometer control system: past experiences and future developments
Marson, Ralph G.; Pokorny, Martin; Kern, Jeff; Stauffer, Fritz; Perrigouard, Alain; Gustafsson, Birger; Ramey, Ken
2004-09-01
The Atacama Large Millimeter Array (ALMA) will, when it is completed in 2012, be the world's largest millimeter & sub-millimeter radio telescope. It will consist of 64 antennas, each one 12 meters in diameter, connected as an interferometer. The ALMA Test Interferometer Control System (TICS) was developed as a prototype for the ALMA control system. Its initial task was to provide sufficient functionality for the evaluation of the prototype antennas. The main antenna evaluation tasks include surface measurements via holography and pointing accuracy, measured at both optical and millimeter wavelengths. In this paper we will present the design of TICS, which is a distributed computing environment. In the test facility there are four computers: three real-time computers running VxWorks (one on each antenna and a central one) and a master computer running Linux. These computers communicate via Ethernet, and each of the real-time computers is connected to the hardware devices via an extension of the CAN bus. We will also discuss our experience with this system and outline changes we are making in light of our experiences.
19. Collider signatures of Higgs-portal scalar dark matter
Han, Huayong; Yang, Jin Min; Zhang, Yang; Zheng, Sibo
2016-05-01
In the simplest Higgs-portal scalar dark matter model, the dark matter mass has been restricted to be either near the resonant mass (mh / 2) or in a large-mass region by the direct detection at LHC Run 1 and LUX. While the large-mass region below roughly 3 TeV can be probed by the future Xenon1T experiment, most of the resonant mass region is beyond the scope of Xenon1T. In this paper, we study the direct detection of such scalar dark matter in the narrow resonant mass region at the 14 TeV LHC and the future 100 TeV hadron collider. We show the luminosities required for the 2σ exclusion and 5σ discovery.
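As a reminder of how exclusion and discovery luminosities scale in a simple cut-and-count picture, the toy sketch below uses Z ≈ S/√B. The cross-sections are illustrative placeholders, not values from the paper, which applies its own statistical treatment.

```python
# Toy estimate of the integrated luminosity needed for 2-sigma exclusion and
# 5-sigma discovery in a cut-and-count search, using Z ~ S / sqrt(B).
# sigma_sig and sigma_bkg are placeholder values, NOT numbers from the paper.

sigma_sig = 0.5    # signal cross-section after cuts [fb] (assumed)
sigma_bkg = 50.0   # background cross-section after cuts [fb] (assumed)

def required_lumi(z_target):
    """Integrated luminosity [fb^-1] giving significance z_target."""
    # Z = sigma_sig*L / sqrt(sigma_bkg*L)  =>  L = Z^2 * sigma_bkg / sigma_sig^2
    return z_target**2 * sigma_bkg / sigma_sig**2

for z in (2.0, 5.0):
    print(f"Z = {z}: L ~ {required_lumi(z):.0f} fb^-1")
```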
20. Automated management of life cycle for future network experiment based on description language
Niu, Hongxia; Liang, Junxue; Lin, Zhaowen; Ma, Yan
2016-12-01
A future network is a complex resource pool comprising multiple physical and virtual resources. Establishing an experiment on a future network is complicated and tedious, so achieving automated management of future network experiments is important. This paper presents an approach for researching and managing the experiment life cycle based on a description language. The description language uses a framework with a shallow hierarchical structure and a complete description of the network experiment. In this way, an experiment description template can be generated accurately and completely by the description framework. Network experiments can also be customized and reused by modifying the description template. The results show that this method can manage the life cycle of a network experiment effectively and automatically, which greatly saves time, reduces difficulty, and enables reuse of services.
1. Lattice of the NICA Collider Rings
SciTech Connect
Sidorin, Anatoly; Kozlov, Oleg; Meshkov, Igor; Mikhaylov, Vladimir; Trubnikov, Grigoriy; Lebedev, Valeri Nagaitsev, Sergei; Senichev, Yurij; /Julich, Forschungszentrum
2010-05-01
The Nuclotron-based Ion Collider fAcility (NICA) is a new accelerator complex being constructed at JINR. It is designed for collider experiments with ions and protons and has to provide ion-ion (Au⁷⁹⁺) and ion-proton collisions in the energy range 1–4.5 GeV/n, as well as collisions of polarized proton-proton and deuteron-deuteron beams. Collider concepts with constant γ_tr and with the possibility of its variation are considered. The ring has a racetrack shape with two arcs and two long straight sections. Its circumference is about 450 m. The straight sections are optimized to have β* ≈ 35 cm in the two IPs and a possibility of final betatron tune adjustment.
2. Marine TAIGER OBS Experiment and its future prospects
Lee, C.; Wang, T.; van Avendonk, H. J.; Huang, Y.; Lin, J.; Lallemand, S.; Klingelhoeher, F.
2009-12-01
A total of 260 OBSs were deployed in the marine TAIGER program from late March to late July 2009. The data were collected with Columbia University's R/V Langseth as the high-power seismic shooting ship and 10 Taiwanese ships taking turns to support the OBS experiment throughout the seismic cruises. The OBSs were provided by the National Taiwan Ocean University, the French IFREMER and the Scripps Institution of Oceanography. During these 4 months, we worked around Taiwan in the South China Sea, Luzon Arc, East Taiwan and West Philippine Basin. All efforts were brought together by many earth scientists from Taiwan, the USA and France for one major purpose: to get a better understanding of the Taiwan mountain-building processes. As a result, these new data will serve as a basis for combination with many other disciplinary studies, such as multi-channel seismic data, land-recorded seismometer data, gravity and magnetic data, as well as the natural earthquake data recorded by the OBSs during the experiment. Four very preliminary OBS data analyses will be presented in the same T25 poster section. Besides the research, we also taught our students on board a Taiwanese student training ship, Yu-Yin No.2; an educational poster is therefore also shown in the ED01 section. Even though the data analyses are at an early stage, we are excited about them. For example, 3 OBS profiles (T4, T5 and T6) in East Taiwan were shot twice, in normal and reversed directions, with different shot intervals (30 and 60 seconds per shot). This exercise will be important for interpreting the complicated collision/subduction structures in East Taiwan. Two OBS profiles (T1 and T2) in the Luzon Arc were shot 5 times in separate R/V Langseth cruises (due to typhoon effects), again with different shot intervals (20 and 60 seconds per shot). These will provide us more opportunities to examine the collisional features between Taiwan and Luzon. One OBS long profile (550 km) was
3. Exploring Astrobiology: Future and In-Service Teacher Research Experiences
Cola, J.; Williams, L. D.; Snell, T.; Gaucher, E.; Harris, B.; Usselman, M. C.; Millman, R. S.
2009-12-01
The Georgia Tech Center for Ribosome Adaptation and Evolution, a center funded by the NASA Astrobiology Institute, developed an educational Astrobiology program titled, “Life on the Edge: Astrobiology.” The purpose of the program was to provide educators with the materials, exposure, and skills necessary to prepare our future workforce and to foster student interest in scientific discovery on Earth and throughout the universe. A one-week, non-residential summer enrichment program for high school students was conducted and tested by two high school educators, an undergraduate student, and faculty in the Schools of Biology, and Chemistry and Biochemistry at Georgia Tech. In an effort to promote and encourage entry into teaching careers, Georgia Tech paired in-service teachers in the Georgia Intern-Fellowship for Teachers (GIFT) program with an undergraduate student interested in becoming a teacher through the Tech to Teaching program. The GIFT and Tech to Teaching fellows investigated extremophiles which have adapted to life under extreme environmental conditions. As a result, extremophiles became the focus of a week-long, “Life on the Edge: Astrobiology” curriculum aligned with the Georgia Performance Standards in Biology. Twenty-five high school students explored the adaptation and survival rates for various types of extremophiles exposed to UV radiation and desiccation; students were also introduced to hands-on activities and techniques such as genomic DNA purification, gel electrophoresis, and Polymerase Chain Reaction (PCR). The impact on everyone invested and involved in the Astrobiology program including the GIFT and Tech to Teaching fellows, high school students, and faculty are discussed.
4. Designing for our future selves: the Swedish experience.
PubMed
Benktzon, M
1993-02-01
The social context of Sweden provides a good environment for research and development of products and technical aids for the disabled and elderly. However, the model used by Swedish ergonomists and designers in Ergonomi Design Gruppen emphasizes how the application of experience gained from designing such aids can lead to better products for everyone. Three main examples are given to demonstrate how ergonomics studies and prototype/model evaluation by the target users have led to new designs for familiar objects: eating implements, walking sticks and coffee pots. Addressing particular aspects of design for people with specific difficulties, and problems associated with the use of everyday items, has led to designs which are acceptable to a broader range of users.
5. A Future Polarized Drell-Yan Experiment at Fermilab
SciTech Connect
Kleinjan, David William
2015-06-04
The topic is treated in a series of slides under the following headings: Motivation (Nucleon Spin Puzzle, Quark Orbital Momentum and the Sivers Function, Accessing Sivers via Polarized Drell-Yan (p+p↑ → μ+μ-)); Transition of SeaQuest (E906 → E1039) (Building a Polarized Proton Target, Status of the Polarized Target); and Outlook. The nucleon spin puzzle: when the quark and gluon contributions to the proton spin are evaluated, nearly 50% of the measured spin is missing; lattice QCD calculations indicate as much as 50% may come from quark orbital angular momentum. Sea quarks should carry orbital angular momentum (O.A.M.). The E1039 Polarized Target Drell-Yan Experiment provides an opportunity to study possible sea quark O.A.M. Data taking is expected to begin in the spring of 2017.
6. Mentoring future dental educators through an apprentice teaching experience.
PubMed
Bibb, Carol A; Lefever, Karen H
2002-06-01
To address concerns about the growing shortage of dental educators, the UCLA School of Dentistry initiated an elective course to introduce fourth-year students to issues in academic dentistry and to provide an apprentice teaching experience. Participants in the elective (referred to as student teachers) developed a microcourse entitled "Welcome to Dental Anatomy," presented to incoming first-year students during orientation week. Under the guidance of faculty mentors, the student teachers were responsible for development of course content, teaching aids, and evaluation methodology. Two cycles of the elective have been completed reaching a total of twenty-one fourth-year students to date. The positive impact on student teachers and incoming first-year students indicates that this approach has great potential for encouraging more graduates to pursue careers in academic dentistry. In addition, the program has the potential to be expanded by adaptation to other foundational courses in the dental and dental hygiene curricula.
Bauer, Gerry; Beccati, Barbara; Behrens, Ulf; Biery, Kurt; Bouffet, Olivier; Branson, James; Bukowiec, Sebastian; Cano, Eric; Cheung, Harry; Ciganek, Marek; Cittolin, Sergio; Coarasa, Jose Antonio; Deldicque, Christian; Dupont, Aymeric; Erhan, Samim; Gigi, Dominique; Glege, Frank; Gomez-Reino, Robert; Hatton, Derek; Holzner, Andre; Hwong, Yi Ling; Masetti, Lorenzo; Meijers, Frans; Meschi, Emilio; Mommsen, Remigius K.; O'Dell, Vivian; Orsini, Luciano; Paus, Christoph; Petrucci, Andrea; Pieri, Marco; Racz, Attila; Raginel, Olivier; Sakulin, Hannes; Sani, Matteo; Schieferdecker, Philipp; Schwick, Christoph; Shpakov, Dennis; Simon, Michal; Sumorok, Konstanty
2011-12-01
The Compact Muon Solenoid (CMS) experiment has developed an electrical implementation of the S-LINK64 extension (Simple Link Interface 64 bit) operating at 400 MB/s in order to read out the detector. This paper studies a possible replacement of the existing S-LINK64 implementation by an optical link based on 10 Gigabit Ethernet, in order to provide larger throughput, replace aging hardware and simplify the architecture. A prototype transmitter unit has been developed based on the FPGA Altera PCI Express Development Kit with custom firmware. A standard PC acted as the receiving unit. The data transfer has been implemented on a stack of protocols: RDP over IP over Ethernet. This allows the data to be received by standard hardware components such as PCs, network switches and NICs. The first test proved that basic exchange of packets between the transmitter and the receiving unit works. The paper summarizes the status of these studies.
8. International linear collider reference design report
SciTech Connect
Aarons, G.
2007-06-22
The International Linear Collider will give physicists a new cosmic doorway to explore energy regimes beyond the reach of today's accelerators. A proposed electron-positron collider, the ILC will complement the Large Hadron Collider, a proton-proton collider at the European Center for Nuclear Research (CERN) in Geneva, Switzerland, together unlocking some of the deepest mysteries in the universe. With LHC discoveries pointing the way, the ILC -- a true precision machine -- will provide the missing pieces of the puzzle. Consisting of two linear accelerators that face each other, the ILC will hurl some 10 billion electrons and their anti-particles, positrons, toward each other at nearly the speed of light. Superconducting accelerator cavities operating at temperatures near absolute zero give the particles more and more energy until they smash in a blazing crossfire at the centre of the machine. Stretching approximately 35 kilometres in length, the beams collide 14,000 times every second at extremely high energies -- 500 billion-electron-volts (GeV). Each spectacular collision creates an array of new particles that could answer some of the most fundamental questions of all time. The current baseline design allows for an upgrade to a 50-kilometre, 1 trillion-electron-volt (TeV) machine during the second stage of the project. This reference design provides the first detailed technical snapshot of the proposed future electron-positron collider, defining in detail the technical parameters and components that make up each section of the 31-kilometer long accelerator. The report will guide the development of the worldwide R&D program, motivate international industrial studies and serve as the basis for the final engineering design needed to make an official project proposal later this decade.
9. Future Dust Detection Experiments on Japanese Space Missions
Sasaki, S.; Ohashi, H.; Nogami, K.; Iglseder, H.; Fujiwara, A.; Yano, H.; Mukai, T.; Ishimoto, H.; Yamamoto, S.; Kobayashi, K.; Shibata, H.
1996-09-01
Direct measurements of dust particles in space have unveiled characteristics of interplanetary and interstellar dust particles. In addition to the approved PLANET-B MDC (Mars Dust Counter) in 1998, two dust detectors with mass spectrometry are proposed for future Japanese space missions: a lunar and an asteroid mission. A lunar orbiter mission is planned by NASDA (National Space Development Agency of Japan) and ISAS (The Institute of Space and Astronautical Science). The mission, which also has a relay satellite and a test lander, will be launched in 2003 by an H-II vehicle. The orbiter is a three-axis stabilized satellite that has both lunar and anti-lunar direction platforms. We propose two dust analyzers involving mass spectrometry using impact ionization: one on the lunar side will measure dust flux from the Moon and the other on the anti-lunar side will measure interplanetary and interstellar particles. Each dust analyzer, which has an axisymmetric ion mirror, is 4.3 kg in weight and 240×275×357 mm³ in size, with a 38,000 mm² aperture area. We aim at chemical analysis (with mass resolution m/dm ≥ 300) of dust particles as small as 1 micron. MUSES-C is an asteroid sample return mission managed by ISAS. It will be launched in late 2001 by M-V, will arrive at asteroid Nereus in 2004 and will return to Earth with surface samples in 2006. In addition to the sampler, several scientific instruments are proposed to be on board MUSES-C. We propose an impact ionization dust detector with a simple ion mirror for mass spectrometry. The detector is 1.0 kg in weight and 180×125×275 mm³ in size. The main purpose is to measure temporal and spatial variations of interplanetary and interstellar dust particles and to investigate whether dust particles are enhanced in the orbit of Nereus. To detect relatively low velocity particles around Nereus, we also propose to have a piezo film detector on the satellite surface.
10. Results from p p colliders
SciTech Connect
Huth, J.
1991-08-01
Recent results from p̄p colliders are presented. From elastic scattering experiments at the Tevatron, an average value of σ_tot = 72.1 ± 2 mb is reported, along with a new measurement of ρ = 0.13 ± 0.7. New measurements of jet, direct photon, and high-p_T W and Z production are compared to more precise, higher-order predictions from perturbative QCD. Recently available data on the W mass and width give combined values of M_W = 80.14 ± 0.27 GeV/c² and Γ(W) = 2.14 ± 0.08 GeV. From electroweak radiative corrections and M_W, one finds M_top = 130 ± 40 GeV/c², with a 95% C.L. upper limit at 210 GeV/c². Current limits on M_top are presented, along with a review of the prospects for top discovery. From jet data there is no evidence of quark substructure down to a distance scale of 1.4 × 10⁻¹⁷ cm, nor is there evidence for supersymmetry or heavy gauge bosons at p̄p colliders, allowing lower limits of M_W′ > 520 GeV/c² and M_Z′ > 412 GeV/c². 66 refs., 26 figs.
11. Optimal scan strategies for future CMB satellite experiments
Wallis, Christopher G. R.; Brown, Michael L.; Battye, Richard A.; Delabrouille, Jacques
2017-04-01
The B-mode polarization power spectrum in the cosmic microwave background (CMB) is about four orders of magnitude fainter than the CMB temperature power spectrum. Any instrumental imperfections that couple temperature fluctuations to B-mode polarization must therefore be carefully controlled and/or removed. We investigate the role that a scan strategy can have in mitigating certain common systematics by averaging systematic errors down with many crossing angles. We present approximate analytic forms for the error on the recovered B-mode power spectrum that would result from differential gain, differential pointing and differential ellipticity for the case where two detector pairs are used in a polarization experiment. We use these analytic predictions to search the parameter space of common satellite scan strategies in order to identify those features of a scan strategy that have most impact in mitigating systematic effects. As an example, we go on to identify a scan strategy suitable for the CMB satellite proposed for the European Space Agency M5 call, taking into account the practical constraints of fuel requirement, data rate and the relative orientation of the telescope to the Earth. Having chosen a scan strategy, we then go on to investigate its suitability.
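A minimal sketch of the crossing-angle averaging idea, not taken from the paper: systematics that transform like spin-2 quantities (such as differential gain leakage) couple to the average of exp(2iψ) over a pixel's crossing angles ψ, so a scan providing many well-spread angles drives that average, and hence the leakage, toward zero. The quantity h_n below is a common way of characterising this; the specific angle sets are illustrative.

```python
# Illustrative only: h_n = (1/N) * sum_j exp(i*n*psi_j) over crossing angles of a
# pixel. Small |h_2| means a spin-2 systematic averages down well in that pixel.
import numpy as np

def h_n(angles_rad, n):
    """Average of exp(i*n*psi) over the crossing angles of one pixel."""
    return np.mean(np.exp(1j * n * np.asarray(angles_rad)))

few_angles  = np.deg2rad([0.0, 10.0, 20.0])            # poorly spread angles
many_angles = np.deg2rad(np.arange(0.0, 180.0, 5.0))   # well-spread angles

for name, ang in [("few", few_angles), ("many", many_angles)]:
    print(f"{name:>4} crossings: |h_2| = {abs(h_n(ang, 2)):.3f}")
```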
12. 3-flavor oscillations with current and future reactor experiments
Dwyer, Dan
2017-01-01
Nuclear reactors have been a crucial tool for our understanding of neutrinos. The disappearance of electron antineutrinos emitted by nuclear reactors has firmly established that neutrino flavor oscillates, and that neutrinos consequently have mass. The current generation of precision measurements relies on some of the world's most intense reactor facilities to demonstrate that the electron antineutrino mixes with the third antineutrino mass eigenstate (ν̄3). Accurate measurements of antineutrino energies robustly determine the tiny difference between the masses-squared of the ν̄3 state and the two more closely spaced ν̄1 and ν̄2 states. These results have given us a much clearer picture of neutrino mass and mixing, yet at the same time open major questions about how to account for these small but non-zero masses in or beyond the Standard Model. These observations have also opened the door for a new generation of experiments which aim to measure the ordering of neutrino masses and search for potential violation of CP symmetry by neutrinos. I will provide a brief overview of this exciting field. Work supported under DOE OHEP DE-AC02-05CH11231.
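For reference, the reactor disappearance measurements described here are usually interpreted with the standard three-flavor electron-antineutrino survival probability. The expression below is the textbook short-baseline form (with Δm²_ee denoting an effective mass-squared splitting), included as a hedged reminder rather than a formula quoted from the talk.

```latex
% Standard three-flavor \bar{\nu}_e survival probability (short-baseline form)
P(\bar{\nu}_e \to \bar{\nu}_e) \simeq
  1 - \sin^2 2\theta_{13}\,\sin^2\!\left(\frac{\Delta m^2_{ee}\,L}{4E}\right)
    - \cos^4\theta_{13}\,\sin^2 2\theta_{12}\,\sin^2\!\left(\frac{\Delta m^2_{21}\,L}{4E}\right)
```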
13. Bouncing and Colliding Branes
SciTech Connect
Lehners, Jean-Luc
2007-11-20
In a braneworld description of our universe, we must allow for the possibility of having dynamical branes around the time of the big bang. Some properties of such domain walls in motion are discussed here, for example the ability of negative-tension domain walls to bounce off spacetime singularities and the consequences for cosmological perturbations. In this context, we will also review a colliding branes solution of heterotic M-theory that has been proposed as a model for early universe cosmology.
14. Accelerators, Colliders, and Snakes
Courant, Ernest D.
2003-12-01
The author traces his involvement in the evolution of particle accelerators over the past 50 years. He participated in building the first billion-volt accelerator, the Brookhaven Cosmotron, which led to the introduction of the "strong-focusing" method that has in turn led to the very large accelerators and colliders of the present day. The problems of acceleration of spin-polarized protons are also addressed, with discussions of depolarizing resonances and "Siberian snakes" as a technique for mitigating these resonances.
SciTech Connect
P. Bauer, C. Darve and I. Terechkine
2002-11-21
Hadron machines mostly use high-field superconducting magnets operating at low temperatures. Therefore the issue of extracting the synchrotron radiation (SR) power heat load becomes more critical and costly. Conceptual solutions to the problem exist in the form of beam screens and photon stops. Cooled beam screens are more expensive in production and operation than photon stops, but they are, unlike photon stops, routinely used in existing machines. Photon stops are the most economical solution because the heat load is extracted at room temperature. The authors presently consider it most prudent to work with a combined beam screen and photon stop approach, in which the photon stop absorbs most of the SR power and the beam screen serves only the vacuum purpose. Provided that the recently launched photon stop R&D [10] supports it, they would like to explore solutions with photon stops only. This would allow the magnet apertures to be reduced to a certain extent with respect to those required to accommodate beam screens compliant with high SR power, and would reduce cost. The possibility of magnet designs that have larger vertical apertures, where large cooling capillaries can be housed at no additional cost, would soften this statement somewhat and should therefore be pursued as well.
16. Colliding crystalline beams
SciTech Connect
Wei, J.; Sessler, A.M.
1998-08-01
The understanding of crystalline beams has advanced to the point where one can now, with reasonable confidence, undertake an analysis of the luminosity of colliding crystalline beams. Such a study is reported here. It is necessary to observe the criteria, previously stated, for the creation and stability of crystalline beams. This requires, firstly, the proper design of a lattice. Secondly, a crystal must be formed, and this can usually be done at various densities. Thirdly, the crystals in a colliding-beam machine are brought into collision. The authors study all of these processes using the molecular dynamics (MD) method. The work parallels what was done previously, but the new part is to study the crystal-crystal interaction in collision. They initially study the zero-temperature situation. If the beam-beam force (or equivalent tune shift) is too large then overlapping crystals can not be created (rather two spatially separated crystals are formed). However, if the beam-beam force is less than but comparable to that of the space-charge forces between the particles, they find that overlapping crystals can be formed and the beam-beam tune shift can be of the order of unity. Operating at low but non-zero temperature can increase the luminosity by several orders of magnitude over that of a usual collider. The construction of an appropriate lattice, and the development of adequately strong cooling, although theoretically achievable, is a challenge in practice.
17. Recent experiences and future expectations in data storage technology
SciTech Connect
Pfister, J.
1990-04-01
For more than 10 years the conventional medium for High Energy Physics has been 9-track magnetic tape in various densities. More recently, especially in Europe, the IBM 3480 technology has been adopted, while in the United States, especially at Fermilab, 8 mm tape is being used by the largest experiments as a primary recording medium, and where possible they are using 8 mm for the production, analysis and distribution of data summary tapes. VHS and Digital Audio tape have recurrently appeared but seem to serve primarily as back-up storage media. The reasons for what appears to be a radical departure are many. Economics, form factor, and convenience are dominant among the reasons. The traditional data media suppliers seem to have been content to evolve the traditional media at their own pace, with only modest enhancements, primarily in "value engineering" of extant products. Meanwhile, start-up companies providing small systems and workstations sought other media, both to reduce the price of their offerings and to respond to the real need for lower-cost back-up for lower-cost systems. This is happening in a market context where traditional computer systems vendors were leaving the tape market altogether or shifting to "3480" technology, which has certainly created a climate for reconsideration and change. The newest data storage products, in most cases, are coming not from technologies developed by the computing industry but from the audio and video industry. Just where these flopticals, opticals, 19 mm tape and the new underlying technologies, such as "digital paper", may fit in the HEP computing requirement picture will be reviewed. What these technologies do for and to HEP will be discussed, along with some suggestions for a methodology for tracking and evaluating extant and emerging technologies.
18. Recent experiences and future expectations in data storage technology
Pfister, Jack
1990-08-01
For more than 10 years the conventional medium for High Energy Physics has been 9-track magnetic tape in various densities. More recently, especially in Europe, the IBM 3480 technology has been adopted, while in the United States, especially at Fermilab, 8 mm tape is being used by the largest experiments as a primary recording medium, and where possible they are using 8 mm for the production, analysis and distribution of data summary tapes. VHS and Digital Audio tape have recurrently appeared but seem to serve primarily as a back-up storage medium. The reasons for what appears to be a radical departure are many. Economics (media and controllers are inexpensive), form factor (two gigabytes per shirt pocket), and convenience (fewer mounts/dismounts per minute) are dominant among the reasons. The traditional data media suppliers seem to have been content to evolve the traditional media at their own pace, with only modest enhancements, primarily in "value engineering" of extant products. Meanwhile, start-up companies providing small systems and workstations sought other media, both to reduce the price of their offerings and to respond to the real need for lower-cost back-up for lower-cost systems. This is happening in a market context where traditional computer systems vendors were leaving the tape market altogether or shifting to "3480" technology, which has certainly created a climate for reconsideration and change. The newest data storage products, in most cases, are coming not from technologies developed by the computing industry but from the audio and video industry. Just where these flopticals, opticals, 19 mm tape and the new underlying technologies, such as "digital paper", may fit in the HEP computing requirement picture will be reviewed. What these technologies do for and to HEP will be discussed, along with some suggestions for a methodology for tracking and evaluating extant and emerging technologies.
19. Physics and Analysis at a Hadron Collider - An Introduction (1/3)
ScienceCinema
None
2016-07-12
This is the first lecture of three which together discuss the physics of hadron colliders with an emphasis on experimental techniques used for data analysis. This first lecture provides a brief introduction to hadron collider physics and collider detector experiments as well as offers some analysis guidelines. The lectures are aimed at graduate students.
20. Gravitational wave and collider implications of electroweak baryogenesis aided by non-standard cosmology
Artymowski, Michal; Lewicki, Marek; Wells, James D.
2017-03-01
We consider various models realizing baryogenesis during the electroweak phase transition (EWBG). Our focus is their possible detection in future collider experiments and possible observation of gravitational waves emitted during the phase transition. We also discuss the possibility of a non-standard cosmological history which can facilitate EWBG. We show how acceptable parameter space can be extended due to such a modification and conclude that next generation precision experiments such as the ILC will be able to confirm or falsify many models realizing EWBG. We also show that, in general, collider searches are a more powerful probe than gravitational wave searches. However, observation of a deviation from the SM without any hints of gravitational waves can point to models with modified cosmological history that generically enable EWBG with weaker phase transition and thus, smaller GW signals.
1. Supervised Occupational Experience Programs: History, Philosophy, Current Status, and Future Implications.
ERIC Educational Resources Information Center
Boone, Harry N.; And Others
1987-01-01
The authors examine the evolution of supervised occupational experience programs in agricultural education, provide an overview of their current status, and suggest the direction they will take in the future. Information was collected from a review of the literature. (CH)
2. Mexican American Seventh Graders' Future Work and Family Plans: Associations with Cultural Experiences and Adjustment
ERIC Educational Resources Information Center
Cansler, Emily; Updegraff, Kimberly A.; Simpkins, Sandra D.
2012-01-01
We describe Mexican American seventh graders' expectations for future work and family roles and investigate links between patterns of future expectations and adolescents' cultural experiences and adjustment. Adolescents participated in home interviews and a series of seven nightly phone calls. Five unique patterns of adolescents' future…
3. What Affects Willingness to Mentor in the Future? An Investigation of Attachment Styles and Mentoring Experiences
ERIC Educational Resources Information Center
Wang, Sheng; Noe, Raymond A.; Wang, Zhong-Ming; Greenberger, David B.
2009-01-01
This study examined the influence of attachment styles and mentoring experiences on willingness to mentor in the future in a formal mentoring program in China. For both mentors and proteges, avoidance and anxiety dimensions of attachment styles and their interaction had a significant influence on willingness to mentor in the future. Mentoring…
4. Really large hadron collider working group summary
SciTech Connect
Dugan, G.; Limon, P.; Syphers, M.
1996-12-01
A summary is presented of preliminary studies of three 100 TeV center-of-mass hadron colliders made with magnets of different field strengths: 1.8 T, 9.5 T and 12.6 T. Descriptions of the machines, and some of the major and most challenging subsystems, are presented, along with parameter lists and the major issues for future study.
5. Perturbative QCD tests from the LEP, HERA, and TEVATRON colliders
SciTech Connect
Kuhlmann, S.
1994-09-01
A review of QCD tests from LEP, HERA and the TEVATRON colliders is presented. This includes jet production, quark/gluon jet separation, quark/gluon propagator spin, α_s updates, photon production, and rapidity gap experiments.
6. Theoretical perspective on RHIC (relativistic heavy ion collider) physics
SciTech Connect
Dover, C.B.
1990-10-01
We discuss the status of the relativistic heavy ion collider (RHIC) project at Brookhaven, and assess some key experiments which propose to detect the signatures of a transient quark-gluon plasma (QGP) phase in such collisions. 24 refs.
7. Collider searches for extra dimensions
SciTech Connect
Landsberg, Greg; /Brown U.
2004-12-01
Searches for extra spatial dimensions remain among the most popular new directions in our quest for physics beyond the Standard Model. High-energy collider experiments of the current decade should be able to find an ultimate answer to the question of their existence in a variety of models. Until the start of the LHC in a few years, the Tevatron will remain the key player in this quest. In this paper, we review the most recent results from the Tevatron on searches for large, TeV⁻¹-size, and Randall-Sundrum extra spatial dimensions, which have reached a new level of sensitivity and currently probe the parameter space beyond the existing constraints. While no evidence for the existence of extra dimensions has been found so far, an exciting discovery might be just steps away.
8. Futurism.
ERIC Educational Resources Information Center
Foy, Jane Loring
The objectives of this research report are to gain insight into the main problems of the future and to ascertain the attitudes that the general population has toward the treatment of these problems. In the first section of this report the future is explored socially, psychologically, and environmentally. The second section describes the techniques…
9. Stuck in the here and now: Construction of fictitious and future experiences following ventromedial prefrontal damage.
PubMed
Bertossi, Elena; Aleo, Fabio; Braghittoni, Davide; Ciaramelli, Elisa
2016-01-29
There is increasing interest in uncovering the cognitive and neural bases of episodic future thinking (EFT), the ability to imagine events relevant to one's own future. Recent functional neuroimaging evidence shows that the ventromedial prefrontal cortex (vmPFC) is engaged during EFT. However, vmPFC is also activated during imagination of fictitious, atemporal experiences. Therefore, its role in EFT is currently unclear. To test (1) whether vmPFC is critical for EFT, and (2) whether it supports EFT specifically, or, rather, construction of any complex experience, patients with focal lesions to vmPFC (vmPFC patients), control patients with lesions not involving vmPFC, and healthy controls were asked to imagine personal future experiences and fictitious experiences. Compared to the control groups, vmPFC patients were impaired at imagining both future and fictitious experiences, indicating a general deficit in constructing novel experiences. Unlike the control groups, however, vmPFC patients had more difficulties in imagining future compared to fictitious experiences. Exploratory correlation analyses showed that general construction deficits correlated with lesion volume in BA 11, whereas specific EFT deficits correlated with lesion volume in BA 32 and BA 10. Together, these findings indicate that vmPFC is crucial for EFT. We propose, however, that different vmPFC subregions may support different component processes of EFT: the most ventral part, BA 11, may underlie core constructive processes needed to imagine any complex experience (e.g., scene construction), whereas BA 10 and BA 32 may mediate simulation of those specific experiences that likely await us in the future.
10. Innovative Approach to the Organization of Future Social Workers' Practical Training: Foreign Experience
ERIC Educational Resources Information Center
Polishchuk, Vira; Slozanska, Hanna
2014-01-01
Innovative approaches to practical training of future social workers in higher educational establishments have been defined. Peculiarities of foreign experience of social workers' practical training in higher educational establishments have been analyzed. Experience of organizing practice for bachelor students studying at "Social Work"…
11. Experience as a Basis for the Professional Development of Future Teacher of Music
ERIC Educational Resources Information Center
Popovych, Natalia
2014-01-01
This paper investigates the problem of forming the professional and personal experience of the future music teacher as the basis for improving its professional excellence. The aim of the study was the theoretical justification and experimental verification of the contents of the experience gained and pedagogical technology of development of the…
12. New DIS and collider results on PDFs
SciTech Connect
Rizvi, E.
2015-05-15
The HERA ep collider experiments have measured the proton structure functions over a wide kinematic range. New data from the H1 experiment now extend the range to higher 4-momentum transfer (√(Q{sup 2})) over which a precision of ∼ 2% is achieved in the neutral current channel. A factor of two reduction in the systematic uncertainties over previous measurement is attained. The charged current structure function measurements are also significantly improved in precision. These data, when used in QCD analyses of the parton density functions (PDFs) reduce the PDF uncertainties particularly at high momentum fractions x which is relevant to low energy neutrino scattering cross sections. New data from the LHC pp collider experiments may also offer significant high x PDF improvements as the experimental uncertainties improve.
13. Muon colliders and neutrino factories
SciTech Connect
Geer, S.; /Fermilab
2010-09-01
Over the last decade there has been significant progress in developing the concepts and technologies needed to produce, capture and accelerate {Omicron}(10{sup 21}) muons/year. This development prepares the way for a new type of neutrino source (Neutrino Factory) and a new type of very high energy lepton-antilepton collider (Muon Collider). This article reviews the motivation, design and R&D for Neutrino Factories and Muon Colliders.
14. Physics at a photon collider
SciTech Connect
Stefan Soldner-Rembold
2002-09-30
A Photon Collider will provide unique opportunities to study the SM Higgs boson and to determine its properties. MSSM Higgs bosons can be discovered at the Photon Collider for scenarios where they might escape detection at the LHC. As an example for the many other physics topics which can be studied at a Photon Collider, recent results on Non-Commutative Field Theories are also discussed.
15. Progress report on future accelerators
SciTech Connect
Panofsky, W.K.H.
1984-02-01
SLAC intends to pursue high energy physics work in the future along three lines: (1) continued exploration of electron and photon physics on stationary targets; (2) colliding beam physics using electron-positron storage rings; (3) single-pass collider physics with electrons using first the Stanford Linear Collider (SLC) and eventually a single-pass collider operating near the highest practical upper limit for such devices. These long-range plans are discussed.
16. Governance of the International Linear Collider Project
SciTech Connect
Foster, B.; Barish, B.; Delahaye, J.P.; Dosselli, U.; Elsen, E.; Harrison, M.; Mnich, J.; Paterson, J.M.; Richard, F.; Stapnes, S.; Suzuki, A.; Wormser, G.; Yamada, S.; /KEK, Tsukuba
2012-05-31
Governance models for the International Linear Collider Project are examined in the light of experience from similar international projects around the world. Recommendations for one path which could be followed to realize the ILC successfully are outlined. The International Linear Collider (ILC) is a unique endeavour in particle physics; fully international from the outset, it has no 'host laboratory' to provide infrastructure and support. The realization of this project therefore presents unique challenges, in scientific, technical and political arenas. This document outlines the main questions that need to be answered if the ILC is to become a reality. It describes the methodology used to harness the wisdom displayed and lessons learned from current and previous large international projects. From this basis, it suggests both general principles and outlines a specific model to realize the ILC. It recognizes that there is no unique model for such a laboratory and that there are often several solutions to a particular problem. Nevertheless it proposes concrete solutions that the authors believe are currently the best choices in order to stimulate discussion and catalyze proposals as to how to bring the ILC project to fruition. The ILC Laboratory would be set up by international treaty and be governed by a strong Council to whom a Director General and an associated Directorate would report. Council would empower the Director General to give strong management to the project. It would take its decisions in a timely manner, giving appropriate weight to the financial contributions of the member states. The ILC Laboratory would be set up for a fixed term, capable of extension by agreement of all the partners. The construction of the machine would be based on a Work Breakdown Structure and value engineering and would have a common cash fund sufficiently large to allow the management flexibility to optimize the project's construction. Appropriate contingency, clearly
17. A new boson with a mass of 125 GeV observed with the CMS experiment at the Large Hadron Collider.
PubMed
2012-12-21
The Higgs boson was postulated nearly five decades ago within the framework of the standard model of particle physics and has been the subject of numerous searches at accelerators around the world. Its discovery would verify the existence of a complex scalar field thought to give mass to three of the carriers of the electroweak force-the W(+), W(-), and Z(0) bosons-as well as to the fundamental quarks and leptons. The CMS Collaboration has observed, with a statistical significance of five standard deviations, a new particle produced in proton-proton collisions at the Large Hadron Collider at CERN. The evidence is strongest in the diphoton and four-lepton (electrons and/or muons) final states, which provide the best mass resolution in the CMS detector. The probability of the observed signal being due to a random fluctuation of the background is about 1 in 3 × 10(6). The new particle is a boson with spin not equal to 1 and has a mass of about 125 [corrected] giga-electron volts. Although its measured properties are, within the uncertainties of the present data, consistent with those expected of the Higgs boson, more data are needed to elucidate the precise nature of the new particle.
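As an illustrative aside (not part of the abstract above), the correspondence between the quoted five-standard-deviation significance and the roughly 1-in-3-million background-fluctuation probability can be checked with the one-sided Gaussian tail integral; a minimal Python sketch:

```python
# Illustrative check, assuming the quoted significance is a one-sided
# Gaussian z-score: convert 5 sigma into a background-fluctuation probability.
from scipy.stats import norm

significance = 5.0                    # standard deviations
p_value = norm.sf(significance)       # one-sided tail probability
print(f"p-value for 5 sigma: {p_value:.2e}")   # ~2.9e-07
print(f"roughly 1 in {1.0 / p_value:,.0f}")    # ~1 in 3.5 million
```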
18. A New Boson with a Mass of 125 GeV Observed with the CMS Experiment at the Large Hadron Collider
CMS Collabortion; Abbaneo, D.; Abbiendi, G.; Abbrescia, M.; Abdullin, S.; Abdulsalam, A.; Acharya, B. S.; Acosta, D.; Acosta, J. G.; Adair, A.; Adam, W.; Adam, N.; Adamczyk, D.; Adams, T.; Adams, M. R.; Adiguzel, A.; Adler, V.; Adolphi, R.; Adzic, P.; Afanasiev, S.; Agostino, L.; Agram, J.-L.; Aguilar-Benitez, M.; Aguilo, E.; Ahmad, M.; Ahmad, M. K. H.; Ahuja, S.; Akchurin, N.; Akgun, U.; Akgun, B.; Akin, I. V.; Alagoz, E.; Albajar, C.; Albayrak, E. A.; Albergo, S.; Albert, M.; Albrow, M.; Alcaraz Maestre, J.; Aldá Júnior, W. L.; Aldaya Martin, M.; Alemany-Fernandez, R.; Alexander, J.; Aliev, T.; Alimena, J.; Allfrey, P.; Almeida, N.; Alverson, G.; Alves, G. A.; Aly, A.; Amaglobeli, N.; Amapane, N.; Ambroglini, F.; Amsler, C.; Anagnostou, G.; Anastassov, A.; Andelin, D.; Anderson, J.; Anderson, M.; Andrea, J.; Andreev, Yu.; Andreev, V.; Andreev, V.; Andrews, W.; Anfreville, M.; Angelini, F.; Anghel, I. M.; Anisimov, A.; Anjos, T. S.; Ansari, M. H.; Antonelli, L.; Anttila, E.; Antunovic, Z.; Apanasevich, L.; Apollinari, G.; Appelt, E.; Apresyan, A.; Apyan, A.; Arce, P.; Arcidiacono, R.; Ardalan, F.; Arenton, M. W.; Arezzini, S.; Arfaei, H.; Argiro, S.; Arisaka, K.; Arndt, K.; Arneodo, M.; Arora, S.; Asavapibhop, B.; Asawatangtrakuldee, C.; Asghar, M. I.; Askew, A.; Aspell, P.; Assran, Y.; Ata, M.; Atac, M.; Attebury, G.; Attikis, A.; Auffray, E.; Autermann, C.; Auzinger, G.; Avdeeva, E.; Avery, P.; Avetisyan, A.; Avila, C.; Awad, A.; Ayan, A. S.; Azarkin, M.; Azhgirey, I.; Aziz, T.; Azzi, P.; Azzolini, V.; Azzurri, P.; Baarmand, M. M.; Babb, J.; Baccaro, S.; Bacchetta, N.; Bachtis, M.; Baden, A.; Badgett, W.; Badier, J.; Baechler, J.; Baffioni, S.; Bagaturia, I.; Bagliesi, G.; Bai, Y.; Bailleux, D.; Baillon, P.; Bainbridge, R.; Bakhshiansohi, H.; Bakirci, M. N.; Bakken, J. A.; Balazs, M.; Baldin, B.; Ball, A. H.; Ball, G.; Ballin, J.; Ban, Y.; Banerjee, S.; Banerjee, S.; Bäni, L.; Banicz, K.; Bansal, M.; Bansal, S.; Banzuzi, K.; Barashko, V.; Barbagli, G.; Barberis, E.; Barbone, L.; Barczyk, A.; Bard, R.; Barfuss, A. F.; Bargassa, P.; Barge, D.; Baringer, P.; Barker, A.; Barnes, V. E.; Barnett, B. A.; Barney, D.; Barone, L.; Barrass, T.; Bartalini, P.; Barth, C.; Bartoloni, A.; Basegmez, S.; Basso, L.; Basti, A.; Bateman, E.; Battilana, C.; Bauer, J.; Bauer, D.; Bauer, G.; Bauerdick, L. A. T.; Baulieu, G.; Baumbaugh, B.; Baumgartel, D.; Baur, U.; Bayshev, I.; Bazterra, V. E.; Bean, A.; Beauceron, S.; Beaudette, F.; Beaumont, W.; Beaupere, N.; Becheva, E.; Bedjidian, M.; Beernaert, K.; Behner, F.; Behr, J.; Behrenhoff, W.; Behrens, U.; Belforte, S.; Beliy, N.; Belknap, D.; Bell, A. J.; Bell, K. W.; Bellan, R.; Bellato, M.; Bellazzini, R.; Bellinger, J. N.; Belotelov, I.; Belyaev, A.; Belyaev, A.; Benaglia, A.; Bencze, G.; Bendavid, J.; Benedetti, D.; Benelli, G.; Benettoni, M.; Benhabib, L.; Beni, N.; Benitez, J. F.; Benussi, L.; Benvenuti, A. C.; Beranek, S.; Beretvas, A.; Bergauer, T.; Berger, J.; Bergholz, M.; Beri, S. B.; Bernardes, C. A.; Bernardini, J.; Bernardino Rodrigues, N.; Bernet, C.; Berry, D.; Berry, E.; Berryhill, J.; Bertl, W.; Bertoldi, M.; Berzano, U.; Besancon, M.; Besson, A.; Betchart, B.; Betev, B.; Bethani, A.; Betts, R. R.; Beuselinck, R.; Bhandari, V.; Bhardwaj, A.; Bhat, P. C.; Bhatnagar, V.; Bhattacharya, S.; Bhattacharya, S.; Bhatti, A.; Bheesette, S.; Bialas, W.; Bialkowska, H.; Biallass, P.; Bian, J. G.; Bianchi, G.; Bianchini, L.; Bianco, S.; Biasini, M.; Biasotto, M.; Biino, C.; Bilei, G. 
M.; Bilin, B.; Bilki, B.; Binkley, M.; Bisello, D.; Bitioukov, S.; Blau, B.; Blekman, F.; Blobel, V.; Bloch, D.; Bloch, P.; Bloom, K.; Bluj, M.; Blüm, P.; Blumenfeld, B.; Blyweert, S.; Boccali, T.; Bocci, A.; Bochenek, J.; Bockelman, B.; Bodek, A.; Bodin, D.; Boimska, B.; Bolla, G.; Bolognesi, S.; Bolton, T.; Bonacorsi, D.; Bonato, A.; Bondu, O.; Bonnett Del Alamo, M.; Bontenackels, M.; Boos, E.; Borcherding, F.; Bornheim, A.; Borras, K.; Borrello, L.; Bortignon, P.; Bortoletto, D.; Bose, T.; Bose, S.; Böser, C.; Bosi, F.; Bostock, F.; Botta, C.; Boudoul, G.; Bouhali, O.; Boulahouache, C.; Bourilkov, D.; Boutemeur, M.; Boutigny, D.; Boutle, S.; Bradley, D.; Braibant-Giacomelli, S.; Branca, A.; Branson, A.; Branson, J. G.; Brauer, R.; Braunschweig, W.; Breedon, R.; Breto, G.; Breuker, H.; Brew, C.; Brez, A.; Brigliadori, L.; Brigljevic, V.; Brinkerhoff, A.; Brito, L.; Broccolo, G.; Brochero Cifuentes, J. A.; Brochet, S.; Brom, J.-M.; Brona, G.; Brooke, J. J.; Broutin, C.; Brown, R. M.; Brownson, E.; Brun, H.; Bruno, G.; Buchmann, M. A.; Buchmuller, O.; Bucinskaite, I.; Budd, H.; Buege, V.; Bujak, A.; Bunichev, V.; Bunin, P.; Bunkowski, K.; Bunn, J.; Buontempo, S.; Burgmeier, A.; Burkett, K.; Busson, P.; Busza, W.; Butler, A. P. H.; Butler, P. H.; Butler, J. N.; Butt, J.; Butz, E.; Bylsma, B.
2012-12-01
The Higgs boson was postulated nearly five decades ago within the framework of the standard model of particle physics and has been the subject of numerous searches at accelerators around the world. Its discovery would verify the existence of a complex scalar field thought to give mass to three of the carriers of the electroweak force—the W+, W-, and Z0 bosons—as well as to the fundamental quarks and leptons. The CMS Collaboration has observed, with a statistical significance of five standard deviations, a new particle produced in proton-proton collisions at the Large Hadron Collider at CERN. The evidence is strongest in the diphoton and four-lepton (electrons and/or muons) final states, which provide the best mass resolution in the CMS detector. The probability of the observed signal being due to a random fluctuation of the background is about 1 in 3 × 10⁶. The new particle is a boson with spin not equal to 1 and has a mass of about 125 giga-electron volts. Although its measured properties are, within the uncertainties of the present data, consistent with those expected of the Higgs boson, more data are needed to elucidate the precise nature of the new particle.
19. Heavy flavour physics at colliders with silicon strip vertex detectors
Schwarz, Andreas S.
1994-03-01
The physics of heavy flavours has played a dominant role in high energy physics research ever since the discovery of charm in 1974, followed by the τ lepton in 1975 and bottom in 1977. With the startup of the large experiments at the e+e- colliders LEP and the SLC a new type of detector system has now come into operation which has a major impact on the studies of heavy flavours: the silicon strip vertex detector. The basic design principles of these novel detector systems are outlined and three representative experimental realizations are discussed. The impact of these detectors on the studies of the properties of heavy flavours is just emerging and focuses on the measurement of lifetimes and the tagging of the presence of heavy flavour hadrons in hadronic events. The tools that are being developed for these studies are described as well as details of representative analyses. The potential of these devices and the associated technological developments that were necessary for their application in the colliding-beam environment is reflected in a plethora of new proposals to build sophisticated silicon detector systems for a large variety of future high energy physics applications. Two examples will be briefly sketched, a vertex detector for an asymmetric e+e- bottom factory and a large scale tracking system for a multipurpose detector at one of the new large hadron colliders.
20. 2001 Report on the Next Linear Collider
SciTech Connect
Gronnberg, J; Breidenbach; Burke, D; Corlett, J; Dombeck, T; Markiewicz, T
2001-08-28
Recent studies in elementary particle physics have made the need for an e{sup +}e{sup -} linear collider able to reach energies of 500 GeV and above with high luminosity more compelling than ever [1]. Observations and measurements completed in the last five years at the SLC (SLAC), LEP (CERN), and the Tevatron (FNAL) can be explained only by the existence of at least one particle or interaction that has not yet been directly observed in experiment. The Higgs boson of the Standard Model could be that particle. The data point strongly to a mass for the Higgs boson that is just beyond the reach of existing colliders. This brings great urgency and excitement to the potential for discovery at the upgraded Tevatron early in this decade, and almost assures that later experiments at the LHC will find new physics. But the next generation of experiments to be mounted by the world-wide particle physics community must not only find this new physics, they must find out what it is. These experiments must also define the next important threshold in energy. The need is to understand physics at the TeV energy scale as well as the physics at the 100-GeV energy scale is now understood. This will require both the LHC and a companion linear electron-positron collider.
1. The Next Linear Collider Design: NLC 2001
SciTech Connect
Larsen, Alberta
2001-08-21
Recent studies in elementary particle physics have made the need for an e{sup +}e{sup -} linear collider able to reach energies of 500 GeV and above with high luminosity more compelling than ever. Observations and measurements completed in the last five years at the SLC (SLAC), LEP (CERN), and the Tevatron (FNAL) can be explained only by the existence of at least one particle or interaction that has not yet been directly observed in experiment. The Higgs boson of the Standard Model could be that particle. The data point strongly to a mass for the Higgs boson that is just beyond the reach of existing colliders. This brings great urgency and excitement to the potential for discovery at the upgraded Tevatron early in this decade, and almost assures that later experiments at the LHC will find new physics. But the next generation of experiments to be mounted by the world-wide particle physics community must not only find this new physics, they must find out what it is. These experiments must also define the next important threshold in energy. The need is to understand physics at the TeV energy scale as well as the physics at the 100-GeV energy scale is now understood. This will require both the LHC and a companion linear electron-positron collider.
2. Mexican American 7th Graders’ Future Work and Family Plans: Associations with Cultural Experiences and Adjustment
PubMed Central
Cansler, Emily; Updegraff, Kimberly A.; Simpkins, Sandra D.
2011-01-01
We describe Mexican American 7th graders’ expectations for future work and family roles and investigate links between patterns of future expectations and adolescents’ cultural experiences and adjustment. Adolescents participated in home interviews and a series of seven nightly phone calls. Five unique patterns of adolescents’ future expectations were identified (N = 246): Career Oriented, Independent, Family Oriented, Early, and Inconsistent. Career Oriented adolescents had the highest socioeconomic status and contact with the U.S. (e.g., generation status) whereas Family Oriented adolescents had the lowest. Cultural orientations, values, and involvement also varied across groups. For example, Career Oriented adolescents reported significantly higher familism values compared to Inconsistent adolescents. Clusters also differed on adjustment: Career Oriented and Family Oriented adolescents reported higher parental warmth and less risky behavior compared to Independent and Inconsistent adolescents. Findings underscore the multi-faceted nature of adolescents’ future expectations and the diversity in cultural experiences among Mexican origin youth. PMID:23338812
3. Alignment and vibration issues in TeV linear collider design
SciTech Connect
Fischer, G.E.
1989-07-01
The next generation of linear colliders will require alignment accuracies and stabilities of component placement at least one, perhaps two, orders of magnitude better than can be achieved by the conventional methods and procedures in practice today. The magnitudes of these component-placement tolerances for current designs of various linear collider subsystems are tabulated. In the micron range, long-term ground motion is sufficiently rapid that on-line reference and mechanical correction systems are called for. Some recent experiences with the upgraded SLAC laser alignment systems and examples of some conceivable solutions for the future are described. The so called ''girder'' problem is discussed in the light of ambient and vibratory disturbances. The importance of the quality of the underlying geology is stressed. The necessity and limitations of particle-beam-derived placement information are mentioned. 40 refs., 4 figs., 1 tab.
4. Alignment and Vibration Issues in TeV Linear Collider Design
SciTech Connect
Fischer, G.E.; /SLAC
2005-08-12
The next generation of linear colliders will require alignment accuracies and stabilities of component placement at least one, perhaps two, orders of magnitude better than can be achieved by the conventional methods and procedures in practice today. The magnitudes of these component-placement tolerances for current designs of various linear collider subsystems are tabulated. In the micron range, long-term ground motion is sufficiently rapid that on-line reference and mechanical correction systems are called for. Some recent experiences with the upgraded SLAC laser alignment systems and examples of some conceivable solutions for the future are described. The so called ''girder'' problem is discussed in the light of ambient and vibratory disturbances. The importance of the quality of the underlying geology is stressed. The necessity and limitations of particle-beam-derived placement information are mentioned.
5. TARGETRY FOR A MU+MU- COLLIDER.
SciTech Connect
KIRK,H.G.
1999-03-29
The requirement for high luminosity in a {mu}{sup +}{mu}{sup -} collider leads one to conclude that a prodigious source of pions is needed followed by an efficient capture/decay channel. Significant targetry issues are raised by these demands. Among these are (1) the best target configuration to tolerate a high-rep rate, high-power proton beam ({approx} 10{sup 14} ppp at 15 Hz), (2) the pion spectra of the produced pions and (3) the best configuration for maximizing the quantity of captured pions. In this paper, the current thinking of the {mu}{sup +}{mu}{sup -} collider collaboration for solutions to these issues is discussed. In addition, we give a description of the R&D program designed to provide a proof-of-principle for a muon capture system capable of meeting the demands of a future high-luminosity machine.
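For orientation (not stated in the abstract), a beam delivering ~10¹⁴ protons per pulse at 15 Hz corresponds to a multi-megawatt beam power once a proton energy is assumed; the energy used in the sketch below is a hypothetical placeholder:

```python
# Rough beam-power estimate: power = protons per pulse * repetition rate *
# kinetic energy per proton.  The proton energy is an assumed placeholder,
# not a number given in the abstract.
E_PROTON_GEV = 8.0            # assumed kinetic energy per proton [GeV]
GEV_TO_JOULE = 1.602e-10      # conversion factor, 1 GeV in joules

protons_per_pulse = 1.0e14
rep_rate_hz = 15.0

power_w = protons_per_pulse * rep_rate_hz * E_PROTON_GEV * GEV_TO_JOULE
print(f"beam power ~ {power_w / 1e6:.1f} MW at {E_PROTON_GEV} GeV per proton")
```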
6. Mass storage system experiences and future needs at the National Center for Atmospheric Research
NASA Technical Reports Server (NTRS)
Olear, Bernard T.
1992-01-01
This presentation is designed to relate some of the experiences of the Scientific Computing Division at NCAR dealing with the 'data problem'. A brief history and a development of some basic Mass Storage System (MSS) principles are given. An attempt is made to show how these principles apply to the integration of various components into NCAR's MSS. There is discussion of future MSS needs for future computing environments.
7. Optimization of a closed-loop gas system for the operation of Resistive Plate Chambers at the Large Hadron Collider experiments
Capeans, M.; Glushkov, I.; Guida, R.; Hahn, F.; Haider, S.
2012-01-01
Resistive Plate Chambers (RPCs), thanks to their fast time resolution (˜1 ns), suitable space resolution (˜1 cm) and low production cost (˜50 €/m2), are widely employed for the muon trigger systems at the Large Hadron Collider (LHC). Their large detector volume (they cover a surface of about 4000 m2 equivalent to 16 m3 of gas volume both in ATLAS and CMS) and the use of a relatively expensive Freon-based gas mixture make a closed-loop gas circulation unavoidable. It has been observed that the return gas of RPCs operated in conditions similar to the difficult experimental background foreseen at LHC contains a large amount of impurities potentially dangerous for long-term operation. Several gas-cleaning agents are currently in use in order to avoid accumulation of impurities in the closed-loop circuits. We present the results of a systematic study characterizing each of these cleaning agents. During the test, several RPCs were operated at the CERN Gamma Irradiation Facility (GIF) in a high radiation environment in order to observe the production of typical impurities: mainly fluoride ions, molecules of the Freon group and hydrocarbons. The polluted return gas was sent to several cartridges, each containing a different cleaning agent. The effectiveness of each material was studied using gas chromatography and mass-spectrometry techniques. Results of this test have revealed an optimized configuration of filters that is now under long-term validation.Gas optimization studies are complemented with a finite element simulation of gas flow distribution in the RPCs, aiming at its eventual optimization in terms of distribution and flow rate.
8. Measurement of σ(pp → tt¯) in the τ + jets channel using 4.7 fb⁻¹ of data from the ATLAS experiment at the Large Hadron Collider
Sytsma, Michael J.
The top quark is the heaviest of the known elementary particles in the Standard Model. Top quark decay can result in various final states; therefore, careful study of its production rate and other properties is very important for particle physics. With the shutdown of the Tevatron, the Large Hadron Collider (LHC) is the only facility currently capable of studying top quark properties. The data obtained from proton-proton collisions at the LHC are recorded by two general purpose detectors, ATLAS and CMS. The results in the dissertation are from the ATLAS detector. A new measurement is reported of σ(pp → tt¯) at √s = 7 TeV using 4.7 fb⁻¹ of data collected during 2011. In this analysis, the final state of the top quark decay is a hadronically decaying tau lepton and a pair of light quark jets. Only those events in which the tau lepton subsequently decays to one or three charged hadrons, zero or more neutral hadrons and a tau neutrino, are selected. Boosted Decision Trees are used for hadronic tau identification. The signature thus consists of one hadronically decaying tau lepton and four or more jets, of which at least one is initiated by a b quark accompanying the W in the top quark decays, and a large net missing momentum in the transverse plane due to the energetic neutrino-antineutrino pair. This momentum is not detected by the ATLAS detector. For multi-jet background estimation, a template fitting method is used. The template is fitted to the data to obtain the fractions for the signal and its various backgrounds. The measured cross section, along with the statistical, systematic and luminosity uncertainties, is: σ(tt¯) = 170.6 ± 12 (stat.) +19/−20 (syst.) ± 3 (lumi.) pb.
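As a hedged illustration (this is standard error propagation, not a procedure described in the dissertation abstract), the quoted statistical, systematic and luminosity uncertainties can be combined in quadrature, symmetrizing the asymmetric systematic term:

```python
# Illustrative combination of the quoted uncertainties in quadrature.
# Symmetrizing the +19/-20 pb systematic is a simplifying assumption.
import math

stat = 12.0                       # pb, statistical uncertainty
syst = 0.5 * (19.0 + 20.0)        # pb, naive symmetrization of +19/-20
lumi = 3.0                        # pb, luminosity uncertainty

total = math.sqrt(stat**2 + syst**2 + lumi**2)
print(f"sigma(ttbar) = 170.6 +/- {total:.1f} pb (total, quadrature)")
```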
9. Prospects for heavy flavor physics at hadron colliders
SciTech Connect
Butler, J.N.
1997-09-01
The role of hadron colliders in the observation and study of CP violation in B decays is discussed. We show that hadron collider experiments can play a significant role in the early studies of these phenomena and will play an increasingly dominant role as the effort turns towards difficult to measure decays, especially those of the B{sub s} meson, and sensitive searches for rare decays and subtle deviations from Standard Model predictions. We conclude with a discussion of the relative merits of hadron collider detectors with forward vs central rapidity coverage.
10. TOP AND HIGGS PHYSICS AT THE HADRON COLLIDERS
SciTech Connect
Jabeen, Shabnam
2013-10-20
This review summarizes the recent results for top quark and Higgs boson measurements from experiments at Tevatron, a proton–antiproton collider at a center-of-mass energy of √s = 1.96 TeV, and the Large Hadron Collider, a proton–proton collider at a center-of-mass energy of √s = 7 TeV. These results include the discovery of a Higgs-like boson and measurement of its various properties, and measurements in the top quark sector, e.g. top quark mass, spin, charge asymmetry and production of single top quark.
11. Positrons for linear colliders
SciTech Connect
Ecklund, S.
1987-11-01
The requirements of a positron source for a linear collider are briefly reviewed, followed by methods of positron production and production of photons by electromagnetic cascade showers. Cross sections for the electromagnetic cascade shower processes of positron-electron pair production and Compton scattering are compared. A program used for Monte Carlo analysis of electromagnetic cascades is briefly discussed, and positron distributions obtained from several runs of the program are discussed. Photons from synchrotron radiation and from channeling are also mentioned briefly, as well as positron collection, transverse focusing techniques, and longitudinal capture. Computer ray tracing is then briefly discussed, followed by space-charge effects and thermal heating and stress due to showers. (LEW)
12. Top physics at the Tevatron Collider
SciTech Connect
Margaroli, Fabrizio; /Purdue U.
2007-10-01
The top quark has been discovered in 1995 at the CDF and DO experiments located in the Tevatron ring at the Fermilab laboratory. After more than a decade the Tevatron collider, with its center-of-mass energy collisions of 1.96 TeV, is still the only machine capable of producing such exceptionally heavy particle. Here I present a selection of the most recent CDF and DO measurements performed analyzing {approx} 1 fb{sup -1} of integrated luminosity.
13. MUON COLLIDERS: THE ULTIMATE NEUTRINO BEAMLINES.
SciTech Connect
KING,B.J.
1999-03-29
It is shown that muon decays in straight sections of muon collider rings will naturally produce highly collimated neutrino beams that can be several orders of magnitude stronger than the beams at existing accelerators. We discuss possible experimental setups and give a very brief overview of the physics potential from such beamlines. Formulae are given for the neutrino event rates at both short and long baseline neutrino experiments in these beams.
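The collimation claim follows from relativistic kinematics: the decay neutrinos emerge within an angle of order 1/γ of the parent muon direction. A minimal sketch (the muon beam energies below are arbitrary examples, not values taken from the paper):

```python
# Opening angle ~ 1/gamma for neutrinos from relativistic muon decay.
# The beam energies are arbitrary illustrative values.
M_MU_GEV = 0.1057                       # muon mass [GeV/c^2]

for e_mu_gev in (5.0, 50.0, 500.0):     # example muon beam energies [GeV]
    gamma = e_mu_gev / M_MU_GEV
    angle_mrad = 1e3 / gamma            # characteristic opening angle [mrad]
    print(f"E_mu = {e_mu_gev:6.1f} GeV  ->  opening angle ~ {angle_mrad:.2f} mrad")
```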
14. Longitudinal damping in the Tevatron collider
SciTech Connect
Kerns, Q.A.; Jackson, G.; Kerns, C.R.; Miller, H.; Reid, J.; Siemann, R.; Wildman, D.
1989-03-01
This paper describes the damper design for 6 proton on 6 pbar bunches in the Tevatron collider. Signal pickup, transient phase detection, derivative networks, and phase correction via the high-level rf are covered. Each rf station is controlled by a slow feedback loop. In addition, global feedback loops control each set of four cavities, one set for protons and one set for antiprotons. Operational experience with these systems is discussed. 7 refs., 9 figs.
15. Space-charge limitations in a collider
SciTech Connect
Fedotov, A.; Heimerle, M.
2010-08-03
Design of several projects which envision hadron colliders operating at low energies such as NICA at JINR [1] and Electron-Nucleon Collider at FAIR [2] is under way. In Brookhaven National Laboratory (BNL), a new physics program requires operation of Relativistic Heavy Ion Collider (RHIC) with heavy ions at low energies at γ = 2.7–10 [3]. In a collider, maximum achievable luminosity is typically limited by beam-beam effects. For heavy ions significant luminosity degradation, driving bunch length and transverse emittance growth, comes from Intrabeam Scattering (IBS). At these low energies, IBS growth can be effectively counteracted, for example, with cooling techniques. If IBS were the only limitation, one could achieve small hadron beam emittance and bunch length with the help of cooling, resulting in a dramatic luminosity increase. However, as a result of low energies, direct space-charge force from the beam itself is expected to become the dominant limitation. Also, the interplay of both beam-beam and space-charge effects may impose an additional limitation on achievable maximum luminosity. Thus, understanding at what values of space-charge tune shift one can operate in the presence of beam-beam effects in a collider is of great interest for all of the above projects. Operation of RHIC for Low-Energy physics program started in 2010 which allowed us to have a look at combined impact of beam-beam and space-charge effects on beam lifetime experimentally. Here we briefly discuss expected limitation due to these effects with reference to recent RHIC experience.
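To see why the direct space-charge force dominates at γ ≈ 2.7–10, one can look at a commonly quoted approximate form of the incoherent (Laslett) tune shift for a round Gaussian bunch, ΔQ ≈ −N r_p / (4π ε_n β γ² B_f). Normalization conventions vary between references, and all parameter values below are invented for illustration, not the actual RHIC settings; the point is only the steep 1/(βγ²) growth at low energy:

```python
# Hedged sketch of the 1/(beta*gamma^2) scaling of the incoherent
# space-charge tune shift for a round Gaussian proton bunch.  Conventions
# for N and the bunching factor differ between references; all parameters
# here are invented placeholders.
import math

R_P = 1.535e-18      # classical proton radius [m]
N = 1.0e11           # protons per bunch (illustrative)
EPS_N = 2.5e-6       # normalized rms emittance [m rad] (illustrative)
B_F = 0.3            # bunching factor, average/peak line density (illustrative)

for gamma in (2.7, 5.0, 10.0, 100.0):
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    dq = -N * R_P / (4.0 * math.pi * EPS_N * beta * gamma**2 * B_F)
    print(f"gamma = {gamma:6.1f}  ->  tune shift ~ {dq:+.4f}")
```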
16. Will there be energy frontier colliders after LHC?
SciTech Connect
2016-09-15
High energy particle colliders have been in the forefront of particle physics for more than three decades. At present the near term US, European and international strategies of the particle physics community are centered on full exploitation of the physics potential of the Large Hadron Collider (LHC) through its high-luminosity upgrade (HL-LHC). The future of the world-wide HEP community critically depends on the feasibility of possible post-LHC colliders. The concept of the feasibility is complex and includes at least three factors: feasibility of energy, feasibility of luminosity and feasibility of cost. Here we overview all current options for post-LHC colliders from such perspective (ILC, CLIC, Muon Collider, plasma colliders, CEPC, FCC, HE-LHC) and discuss major challenges and accelerator R&D required to demonstrate feasibility of an energy frontier accelerator facility following the LHC. We conclude by taking a look into ultimate energy reach accelerators based on plasmas and crystals, and discussion on the perspectives for the far future of the accelerator-based particle physics.
17. The future of reactor neutrino experiments: A novel approach to measuring theta{sub 13}
SciTech Connect
Heeger, Karsten M.; Freedman, Stuart J.; Luk, Kam-Biu
2003-08-24
Results from non-accelerator neutrino oscillation experiments have provided evidence for the oscillation of massive neutrinos. The subdominant oscillation, the coupling of the electron neutrino flavor to the third mass eigenstate, has not been measured yet. The size of this coupling U{sub e3} and its corresponding mixing angle theta{sub 13} are critical for CP violation searches in the lepton sector and will define the future of accelerator neutrino physics. The current best limit on U{sub e3} comes from the CHOOZ reactor neutrino disappearance experiment. In this talk we review proposals for future measurements of theta-13 with reactor antineutrinos.
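The reactor measurement rests on the two-flavour electron-antineutrino survival probability, P ≈ 1 − sin²(2θ₁₃) sin²(1.267 Δm²₃₁ L/E), with Δm² in eV², L in metres and E in MeV. The parameter values in the sketch below are typical illustrative numbers, not values quoted in the talk:

```python
# Two-flavour reactor antineutrino survival probability (illustrative values).
import math

def survival_probability(sin2_2theta13, dm2_ev2, baseline_m, energy_mev):
    """P(anti-nu_e -> anti-nu_e) in the two-flavour approximation."""
    phase = 1.267 * dm2_ev2 * baseline_m / energy_mev
    return 1.0 - sin2_2theta13 * math.sin(phase) ** 2

for baseline_m in (400.0, 1000.0, 1800.0):
    p = survival_probability(sin2_2theta13=0.09, dm2_ev2=2.5e-3,
                             baseline_m=baseline_m, energy_mev=4.0)
    print(f"L = {baseline_m:6.0f} m  ->  survival probability = {p:.3f}")
```

For few-MeV antineutrinos the deficit is largest near a baseline of roughly 2 km, which is why such proposals typically pair a near detector with a far detector at about that distance.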
18. Status of the MEIC ion collider ring design
SciTech Connect
Morozov, Vasiliy; Derbenev, Yaroslav; Harwood, Leigh; Hutton, Andrew; Lin, Fanglei; Pilat, Fulvia; Zhang, Yuhong; Cai, Yunhai; Nosochkov, Y. M.; Sullivan, Michael; Wang, M.-H.; Wienands, Uli; Gerity, James; Mann, Thomas; McIntyre, Peter; Pogue, Nathaniel; Sattarov, Akhdiyor
2015-09-01
We present an update on the design of the ion collider ring of the Medium-energy Electron-Ion Collider (MEIC) proposed by Jefferson Lab. The design is based on the use of super-ferric magnets. It provides the necessary momentum range of 8 to 100 GeV/c for protons and ions, matches the electron collider ring design using PEP-II components, fits readily on the JLab site, offers a straightforward path for a future full-energy upgrade by replacing the magnets with higher-field ones in the same tunnel, and is more cost effective than using presently available current-dominated super-conducting magnets. We describe complete ion collider optics including an independently-designed modular detector region.
19. Status of the MEIC ion collider ring design
SciTech Connect
Morozov, V. S.; Derbenev, Ya. S.; Harwood, L.; Hutton, A.; Lin, F.; Pilat, F.; Zhang, Y.; Cai, Y.; Nosochkov, Y. M.; Sullivan, M.; Wang, M-H; Wienands, U.; Gerity, J.; Mann, T.; McIntyre, P.; Pogue, N. J.; Satttarov, A.
2015-07-14
We present an update on the design of the ion collider ring of the Medium-energy Electron-Ion Collider (MEIC) proposed by Jefferson Lab. The design is based on the use of super-ferric magnets. It provides the necessary momentum range of 8 to 100 GeV/c for protons and ions, matches the electron collider ring design using PEP-II components, fits readily on the JLab site, offers a straightforward path for a future full-energy upgrade by replacing the magnets with higher-field ones in the same tunnel, and is more cost effective than using presently available current-dominated superconducting magnets. We describe complete ion collider optics including an independently-designed modular detector region.
20. Assessment of CORDEX-South Asia experiments for monsoonal precipitation over Himalayan region for future climate
Choudhary, A.; Dimri, A. P.
2017-07-01
Precipitation is one of the important climatic indicators in the global climate system. Probable changes in monsoonal (June, July, August and September; hereafter JJAS) mean precipitation in the Himalayan region for three different greenhouse gas emission scenarios (i.e. representative concentration pathways or RCPs) and two future time slices (near and far) are estimated from a set of regional climate simulations performed under Coordinated Regional Climate Downscaling Experiment-South Asia (CORDEX-SA) project. For each of the CORDEX-SA simulations and their ensemble, projections of near future (2020-2049) and far future (2070-2099) precipitation climatology with respect to corresponding present climate (1970-2005) over Himalayan region are presented. The variability existing over each of the future time slices is compared with the present climate variability to determine the future changes in inter annual fluctuations of monsoonal mean precipitation. The long-term (1970-2099) trend (mm/day/year) of monsoonal mean precipitation spatially distributed as well as averaged over Himalayan region is analyzed to detect any change across twenty-first century as well as to assess model uncertainty in simulating the precipitation changes over this period. The altitudinal distribution of difference in trend of future precipitation from present climate existing over each of the time slices is also studied to understand any elevation dependency of change in precipitation pattern. Except for a part of the Hindu-Kush area in western Himalayan region which shows drier condition, the CORDEX-SA experiments project in general wetter/drier conditions in near future for western/eastern Himalayan region, a scenario which gets further intensified in far future. Although, a gradually increasing precipitation trend is seen throughout the twenty-first century in carbon intensive scenarios, the distribution of trend with elevation presents a very complex picture with lower elevations
1. A Tevatron collider beauty factory
This document, which is labeled a final report, consists of several different items. The first is a proposal for a detector to be developed for beauty physics. The detector is proposed for the Fermilab Tevatron and would be designed to measure mixing reactions, rare decay modes, and even CP violation in hadron collider beauty production. The general outline of the work proposed is given, and an estimate of the time to actually design the detector is presented, along with proposed changes to the Tevatron to accommodate the system. A preliminary report on an experiment to verify a reported observation of a 17 keV neutrino in tritium decay is presented. The present results show a depression in the decay spectra below expected levels, which is not consistent with a massive neutrino. Additional interest has been shown in finishing an electrostatic beta spectrometer which was started several years previously. The instrument uses hemispherical electrostatic electric fields to retard electrons emitted in tritium decay, allowing measurement of integral spectra. The design goal is a 5 eV energy resolution, which may be achievable. A new PhD student is pursuing this experiment. Also the report contains a proposal for additional work in the field of non-perturbative quantum field theory by the theoretical group at OU. The work which is proposed will be applied to electroweak and strong interactions, as well as to quantum gravitational phenomena.
2. From the past to the future: Integrating work experience into the design process.
PubMed
Bittencourt, João Marcos; Duarte, Francisco; Béguin, Pascal
2017-01-01
Integrating work activity issues into design process is a broadly discussed theme in ergonomics. Participation is presented as the main means for such integration. However, a late participation can limit the development of both project solutions and future work activity. This article presents the concept of construction of experience aiming at the articulated development of future activities and project solutions. It is a non-teleological approach where the initial concepts will be transformed by the experience built up throughout the design process. The method applied was a case study of an ergonomic participation during the design of a new laboratory complex for biotechnology research. Data was obtained through analysis of records in a simulation process using a Lego scale model and interviews with project participants. The simulation process allowed for developing new ways of working and generating changes in the initial design solutions, which enable workers to adopt their own developed strategies for conducting work more safely and efficiently in the future work system. Each project decision either opens or closes a window of opportunities for developing a future activity. Construction of experience in a non-teleological design process allows for understanding the consequences of project solutions for future work.
3. The role of personal goals in autonoetic experience when imagining future events.
PubMed
Lehner, Edith; D'Argembeau, Arnaud
2016-05-01
Although autonoetic experience-a sense of mental time travel-has been considered as the hallmark of episodic future thinking, what determines this subjective feeling is not yet fully understood. Here, we investigated the role of autobiographical knowledge by manipulating the relevance of imagined events for personal goals. Participants were asked to imagine three types of events (goal-related future events, experimenter-provided future events, and atemporal events) and to assess various characteristics of their mental representations. The results showed that the three types of events were represented with similar levels of detail and vividness. Importantly, however, goal-related future events were associated with a stronger autonoetic experience. Furthermore, autonoetic experience was significantly predicted by the importance of imagined events for personal goals. These findings suggest that the subjective feeling of pre-experiencing one's personal future in part depends on the extent to which imagined events can be placed in an autobiographical context. Copyright © 2016 Elsevier Inc. All rights reserved.
4. Future changes in daily snowfall intensity projected by large ensemble regional climate experiments
Kawase, H.
2015-12-01
We investigate the future changes in daily snowfall intensity in Japan by analyzing large-ensemble regional climate experiments. Dynamical downscalings are conducted with the Non-Hydrostatic Regional Climate Model (NHRCM) at 20 km resolution, driven by global climate projections from the Meteorological Research Institute-Atmospheric General Circulation Model (MRI-AGCM). Fifty ensemble experiments are performed for the present climate. For the future climate projections, 90 ensemble experiments are performed based on six patterns of SST changes in the periods when a 4 K rise in global-mean surface air temperature is projected. The accumulated snowfall in winter decreases in Japan except for the northern parts of Japan. In particular, the inland areas on the Sea of Japan side, famous as the heaviest-snowfall region in the world, show a remarkable decrease in snowfall in the future climate. The experiments also show an increased number of days without snowfall and a decreased number of days with weak snowfall due to significant warming in most parts of Japan. On the other hand, the extreme daily snowfall, which occurs once in ten years, would increase at higher elevations on the Sea of Japan side. This means that extreme daily snowfall in the present climate would occur more frequently in the future climate. A warmer atmosphere can contain more water vapor, and a warmer ocean can supply more water vapor to the lower atmosphere. The surface air temperature at higher elevations is still lower than 0 degrees Celsius, which could result in the increased extreme daily snowfall.
5. Measure of the impact of future dark energy experiments based on discriminating power among quintessence models
Barnard, Michael; Abrahamse, Augusta; Albrecht, Andreas; Bozek, Brandon; Yashar, Mark
2008-08-01
We evaluate the ability of future data sets to discriminate among different quintessence dark energy models. This approach gives an alternative (and complementary) measure for assessing the impact of future experiments, as compared with the large body of literature that compares experiments in abstract parameter spaces (such as the well-known w0-wa parameters) and more recent work that evaluates the constraining power of experiments on individual parameter spaces of specific quintessence models. We use the Dark Energy Task Force (DETF) models of future data sets and compare the discriminative power of experiments designated by the DETF as stages 2, 3, and 4 (denoting increasing capabilities). Our work reveals a minimal increase in discriminating power when comparing stage 3 to stage 2, but a very striking increase in discriminating power when going to stage 4 (including the possibility of completely eliminating some quintessence models). We also see evidence that even modest improvements over DETF stage 4 (which many believe are realistic) could result in even more dramatic discriminating power among quintessence dark energy models. We develop and demonstrate the technique of using the independently measured modes of the equation of state (derived from principal component analysis) as a common parameter space in which to compare the different quintessence models, and we argue that this technique is a powerful one. We use the PNGB, Exponential, Albrecht-Skordis, and Inverse Tracker (or inverse power law) quintessence models for this work. One of our main results is that the goal of discriminating among these models sets a concrete measure on the capabilities of future dark energy experiments. Experiments have to be somewhat better than DETF stage 4 simulated experiments to fully meet this goal.
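A minimal sketch of the principal-component idea (not the authors' code): given a Fisher matrix for a binned equation of state w(z), its eigenvectors are the independently measured modes and the inverse square roots of its eigenvalues are their 1σ errors. The Fisher matrix below is a made-up toy example:

```python
# Toy principal-component analysis of a binned w(z) Fisher matrix.
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=(6, 6))
fisher = a @ a.T + 6.0 * np.eye(6)      # invented symmetric positive-definite Fisher matrix

eigvals, eigvecs = np.linalg.eigh(fisher)
order = np.argsort(eigvals)[::-1]       # best-constrained modes first
for rank, i in enumerate(order, start=1):
    sigma = 1.0 / np.sqrt(eigvals[i])   # 1-sigma error on the mode amplitude
    print(f"mode {rank}: sigma = {sigma:.3f}, shape = {np.round(eigvecs[:, i], 2)}")
```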
6. Searching for dark matter at colliders
Richard, Francois; Arcadi, Giorgio; Mambrini, Yann
2015-04-01
Dark Matter (DM) detection prospects at future colliders are reviewed under the assumption that DM particles are fermions of the Majorana or Dirac type. Although the discussion is quite general, one will keep in mind the recently proposed candidate based on an excess of energetic photons observed in the center of our Galaxy with the Fermi-LAT satellite. In the first part we will assume that DM interactions are mediated by vector bosons, or . In the case of -boson Direct Detection limits force only axial couplings with the DM. This solution can be naturally accommodated by Majorana DM but is disfavored by the GC excess. Viable scenarios can be instead found in the case of mediator. These scenarios can be tested at colliders through ISR events, . A sensitive background reduction can be achieved by using highly polarized beams. In the second part scalar particles, in particular Higgs particles, have been considered as mediators. The case of the SM Higgs mediator is excluded by limits on the invisible branching ratio of the Higgs. On the contrary particularly interesting is the case in which the DM interactions are mediated by the pseudoscalar state in two Higgs-doublet model scenarios. In this last case the main collider signature is.
7. Hadron collider physics at UCR
SciTech Connect
Kernan, A.; Shen, B.C.
1997-07-01
This paper describes the research work in high energy physics by the group at the University of California, Riverside. Work has been divided between hadron collider physics and e{sup +}-e{sup {minus}} collider physics, and theoretical work. The hadron effort has been heavily involved in the startup activities of the D-Zero detector, commissioning and ongoing redesign. The lepton collider work has included work on TPC/2{gamma} at PEP and the OPAL detector at LEP, as well as efforts on hadron machines.
8. Muon collider interaction region design
DOE PAGES
Alexahin, Y. I.; Gianfelice-Wendt, E.; Kashikhin, V. V.; ...
2011-06-02
Design of a muon collider interaction region (IR) presents a number of challenges arising from low β* < 1 cm, correspondingly large beta-function values and beam sizes at IR magnets, as well as the necessity to protect superconducting magnets and collider detectors from muon decay products. As a consequence, the designs of the IR optics, magnets and machine-detector interface are strongly interlaced and iterative. A consistent solution for the 1.5 TeV center-of-mass muon collider IR is presented. It can provide an average luminosity of 10³⁴ cm⁻²s⁻¹ with an adequate protection of magnet and detector components.
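The role of the small β* can be illustrated with the textbook round-beam luminosity estimate L = f N₁N₂ / (4π σ²) with σ = √(β* ε); the numbers below are invented placeholders rather than the actual design parameters, chosen only to show the 1/β* scaling:

```python
# Textbook round-beam luminosity estimate; all parameters are invented
# placeholders used only to show the 1/beta* scaling.
import math

def luminosity(f_coll_hz, n1, n2, beta_star_m, eps_geo_m):
    sigma = math.sqrt(beta_star_m * eps_geo_m)       # rms spot size [m]
    return f_coll_hz * n1 * n2 / (4.0 * math.pi * sigma**2)

for beta_star_cm in (10.0, 1.0, 0.5):
    lumi_cm2 = luminosity(f_coll_hz=1.0e5, n1=2.0e12, n2=2.0e12,
                          beta_star_m=beta_star_cm * 1e-2,
                          eps_geo_m=5.0e-9) * 1e-4    # m^-2 s^-1 -> cm^-2 s^-1
    print(f"beta* = {beta_star_cm:5.1f} cm  ->  L ~ {lumi_cm2:.1e} cm^-2 s^-1")
```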
9. A model for computing at the SSC (Superconducting Super Collider)
SciTech Connect
Baden, D. . Dept. of Physics); Grossman, R. . Lab. for Advanced Computing)
1990-06-01
High energy physics experiments at the Superconducting Super Collider (SSC) will show a substantial increase in complexity and cost over existing forefront experiments, and computing needs may no longer be met via simple extrapolations from the previous experiments. We propose a model for computing at the SSC based on technologies common in private industry involving both hardware and software. 11 refs., 1 fig.
10. Radiation hardness of semiconductor avalanche detectors for calorimeters in future HEP experiments
Kushpil, V.; Mikhaylov, V.; Kugler, A.; Kushpil, S.; Ladygin, V. P.; Svoboda, O.; Tlustý, P.
2016-02-01
During the last years, semiconductor avalanche detectors are being widely used as the replacement of classical PMTs in calorimeters for many HEP experiments. In this report, basic selection criteria for replacement of PMTs by solid state devices and specific problems in the investigation of detectors radiation hardness are discussed. The design and performance of the hadron calorimeters developed for the future high energy nuclear physics experiments at FAIR, NICA, and CERN are discussed. The Projectile Spectator Detector (PSD) for the CBM experiment at the future FAIR facility, the Forward Calorimeter for the NA61 experiment at CERN and the Multi Purpose Detector at the future NICA facility are reviewed. Moreover, new methods of data analysis and results interpretation for radiation experiments are described. Specific problems of development of detectors control systems and possibilities of reliability improvement of multi-channel detectors systems are shortly overviewed. All experimental material is based on the investigation of SiPM and MPPC at the neutron source in NPI Rez.
11. Lessons from the GP-B Experience for Future Fundamental Physics Missions in Space
NASA Technical Reports Server (NTRS)
Kolodziejczak, Jeffery
2006-01-01
Gravity Probe B launched in April 2004 and completed its science data collection in September 2005, with the objective of sub-milliarcsec measurement of two General Relativistic effects on the spin axis orientation of orbiting gyroscopes. Much of the technology required by GP-B has potential application in future missions intended to make precision measurements. The philosophical approach and experiment design principles developed for GP-B are equally adaptable to these mission concepts. This talk will discuss GP-B's experimental approach and the technological and philosophical lessons learned that apply to future experiments in fundamental physics. Measurement of fundamental constants to high precision, probes of short-range forces, searches for equivalence principle violations, and detection of gravitational waves are examples of concepts and missions that will benefit from GP-B's experience.
13. QCD at collider energies
Nicolaidis, A.; Bordes, G.
1986-05-01
We examine available experimental distributions of transverse energy and transverse momentum, obtained at the CERN pp¯ collider, in the context of quantum chromodynamics. We consider the following. (i) The hadronic transverse energy released during W+/- production. This hadronic transverse energy is made out of two components: a soft component which we parametrize using minimum-bias events and a semihard component which we calculate from QCD. (ii) The transverse momentum of the produced W+/-. If the transverse momentum (or the transverse energy) results from a single gluon jet we use the formalism of Dokshitzer, Dyakonov, and Troyan, while if it results from multiple-gluon emission we use the formalism of Parisi and Petronzio. (iii) The relative transverse momentum of jets. While for W+/- production quarks play an essential role, jet production at moderate pT and present energies is dominated by gluon-gluon scattering and therefore we can study the Sudakov form factor of the gluon. We suggest also how through a Hankel transform of experimental data we can have direct access to the Sudakov form factors of quarks and gluons.
14. When Black Holes Collide
NASA Technical Reports Server (NTRS)
Baker, John
2010-01-01
Among the fascinating phenomena predicted by General Relativity, Einstein's theory of gravity, black holes and gravitational waves are particularly important in astronomy. Though once viewed as a mathematical oddity, black holes are now recognized as the central engines of many of astronomy's most energetic cataclysms. Gravitational waves, though weakly interacting with ordinary matter, may be observed with new gravitational wave telescopes, opening a new window to the universe. These observations promise a direct view of the strong gravitational dynamics involving dense, often dark objects, such as black holes. The most powerful of these events may be the merger of two colliding black holes. Though dark, these mergers may briefly release more energy than all the stars in the visible universe, in gravitational waves. General relativity makes precise predictions for the gravitational-wave signatures of these events, predictions which we can now calculate with the aid of supercomputer simulations. These results provide a foundation for interpreting expected observations in the emerging field of gravitational wave astronomy.
16. Cultural immersion through international experiences among Japanese nurses: Present status, future intentions, and perceived barriers.
PubMed
Chiba, Yoko; Nakayama, Takeo
2016-07-01
Given limited exposure to various ethnicities, languages, and cultures, providing health care to an increasing foreign population in Japan will likely be challenging for Japanese nurses. This study aimed to examine past and intended future international experiences of Japanese nurses to assess their cultural immersion level. A cross-sectional electronic survey was conducted among 2029 nurses in 2010. Participants were categorized by travel purpose, and the frequency of non-holiday travel was analyzed. To examine participants' desire for and perceived feasibility of future non-holiday international experiences by background characteristics, logistic regression analyses were performed. Of 1039 participants, 10.1% had past non-holiday international experiences, with 80% having traveled to high-income, English-speaking countries. The median value for travel frequency was once, and the median duration of travel was less than 1 month. The most common purpose of travel was participation in short-term programs (e.g. professional training, language study). Fifty-one percent of female nurses reported a desire for future non-holiday international experiences. Of these, 37.2% considered such experiences feasible. Age of the youngest child, having nursing specialization, English proficiency, and past international experience were significant predictors for feasibility. Japanese nurses with foreign experience were considered valuable human resources for culturally appropriate care. Efforts should be made to integrate them into the Japanese healthcare setting. The present study revealed room for improvement in foreign language proficiency and cross-cultural training with a focus on non-English-speaking and developing countries. A supportive workplace environment should be created that allows nurses to pursue the international experiences they desire. © 2016 Japan Academy of Nursing Science.
17. Technology for the Future: In-Space Technology Experiments Program, part 1
NASA Technical Reports Server (NTRS)
Breckenridge, Roger A. (Compiler); Clark, Lenwood G. (Compiler); Willshire, Kelli F. (Compiler); Beck, Sherwin M. (Compiler); Collier, Lisa D. (Compiler)
1991-01-01
The purpose of the Office of Aeronautics and Space Technology (OAST) In-Space Technology Experiment Program (In-STEP) 1988 Workshop was to identify and prioritize technologies that are critical for future national space programs and require validation in the space environment, and review current NASA (In-Reach) and industry/university (Out-Reach) experiments. A prioritized list of the critical technology needs was developed for the following eight disciplines: structures; environmental effects; power systems and thermal management; fluid management and propulsion systems; automation and robotics; sensors and information systems; in-space systems; and humans in space. This is part one of two parts and is the executive summary and experiment description. The executive summary portion contains keynote addresses, strategic planning information, and the critical technology needs summaries for each theme. The experiment description portion contains brief overviews of the objectives, technology needs and backgrounds, descriptions, and development schedules for current industry, university, and NASA space flight technology experiments.
18. Materials Science Experiments Under Microgravity - A Review of History, Facilities, and Future Opportunities
NASA Technical Reports Server (NTRS)
Stenzel, Ch.
2012-01-01
Materials science experiments have been a key issue already since the early days of research under microgravity conditions. A microgravity environment facilitates processing of metallic and semiconductor melts without buoyancy driven convection and sedimentation. Hence, crystal growth of semiconductors, solidification of metallic alloys, and the measurement of thermo-physical parameters are the major applications in the field of materials science making use of these dedicated conditions in space. In the last three decades a large number of successful experiments have been performed, mainly in international collaborations. In parallel, the development of high-performance research facilities and the technological upgrade of diagnostic and stimuli elements have also contributed to providing optimum conditions to perform such experiments. A review of the history of materials science experiments in space focussing on the development of research facilities is given. Furthermore, current opportunities to perform such experiments onboard ISS are described and potential future options are outlined.
19. Beam Rounders for Circular Colliders
SciTech Connect
A. Burov; S. Nagaitsev; Ya. Derbenev
2001-07-01
By means of linear optics, an arbitrary uncoupled beam can be locally transformed into a round (rotation-invariant) state and then back. This provides an efficient way to round beams in the interaction region of circular colliders.
20. Physicists dream of supersized collider
https://www.physicsforums.com/threads/thermal-conduction.688376/ | # Thermal Conduction
1. Apr 27, 2013
### Joshb60796
1. The problem statement, all variables and given/known data
A rod, with sides insulated to prevent heat loss, has one end immersed in boiling water at 100C and the other end immersed in a water/ice mixture at 0C. The rod has uniform cross-sectional area of 4.04 cm^2 and length 91cm. Under steady state conditions, heat conducted by the rod melts the ice at a rate of 1.0g every 34 seconds. What is the thermal conductivity of the rod?
2. Relevant equations
H = dQ/dt = k*A*ΔT/L
Heat of Fusion of water is 3.34*10^5 J/kg
3. The attempt at a solution
(3.34*10^5*91)/(34seconds*100C*4.04cm^2) = 2200
My answer key says 220 W/m*K. I've tried converting 91 cm to 0.91 m and 4.04 cm^2 to 0.000404 m^2 and I get the same 2200 answer. I think I'm making a conversion error but I'm not sure, please advise, thank you.
2. Apr 27, 2013
### technician
have you realised that it is 1x10^-3 kg melted in 34secs
3. Apr 27, 2013
### cepheid
Staff Emeritus
Your work is off by a factor of 1000 because you forgot to multiply the heat of fusion of water (J/kg) by the rate at which mass is melting (0.001 kg). Only 334 J of heat is being transferred in 34 sec, NOT 334,000 J.
4. Apr 27, 2013
### Joshb60796
hmm I believe I follow what you are saying. I recalculated with 334J and my answer comes to 2.20 but that's still not what the answer key says. Am I just dense or is maybe the key incorrect?
5. Apr 27, 2013
### Joshb60796
wait, maybe the 2.20 is in W/cm*K and the 220 on the key is W/m*K? ....nevermind, that would be backwards
6. Apr 27, 2013
### cepheid
Staff Emeritus
I don't know what to tell you, because it's just arithmetic at this point. You are messing it up somewhere and just need to be meticulous and get it right.
7. Apr 27, 2013
### Staff: Mentor
(334 J × 0.91 m)/(34 s × 100 K × 0.000404 m²) ≈ 221 W/(m·K)
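A quick way to sanity-check this arithmetic is to redo it with every quantity in SI units; a minimal Python sketch (variable names are ad hoc, not from the thread):

```python
# Sanity check of P = k*A*dT/L with all quantities converted to SI first.
L_f = 3.34e5    # latent heat of fusion of water, J/kg
m   = 1.0e-3    # ice melted, kg (1.0 g)
t   = 34.0      # time over which it melts, s
L   = 0.91      # rod length, m (91 cm)
A   = 4.04e-4   # cross-sectional area, m^2 (4.04 cm^2)
dT  = 100.0     # temperature difference, K

P = m * L_f / t        # heat current through the rod, ~9.82 W
k = P * L / (A * dT)   # thermal conductivity
print(round(k, 1))     # ~221.3 W/(m*K), consistent with the answer key's 220
```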
8. Apr 27, 2013
### Joshb60796
Thank you Chester, I'd been messing with this over and over and apparently it's like trying to grammar check your own novel, I never tried both converting to meters, and fixing my gram and kilogram mistake. Thank you so much :) I don't know how I missed it now.
https://cstheory.stackexchange.com/questions/41165/nondeterminism-is-on-average-useless-for-circuits | # Nondeterminism is on average useless for circuits?
Savický and Woods (The Number of Boolean Functions Computed by Formulas of a Given Size) prove the following result.
Theorem[SW98]: For every constant $k>1$, almost all boolean functions with formula complexity at most $n^k$ have circuit complexity at least $n^k/k$.
The proof consists of deriving a lower bound on $B(n,n^k)$, the number of boolean functions on $n$ inputs computed by formulas of size $n^k$. By comparing $B(n,n^k)$ to the number of circuits of size $C = n^k/k$, which is at most $C^{C}e^{C+4n}$, it can be realized that for large $n$, $C^{C}e^{C+4n} << B(n,n^k)$, and the result follows.
It looks to me that the result could be strengthened by noting that the number of nondeterministic circuits of size $n^k$ with $m$ nondeterministic inputs is not much larger than the number of deterministic circuits of size $n^k$ (for $m$ not too large, say $m=n$). Hence, I think the following corollary holds:
Corollary: For every constant $k>1$, almost all boolean functions with formula complexity at most $n^k$ have nondeterministic circuit complexity at least $n^k/k$ (for nondeterministic circuits with $n$ nondeterministic inputs).
(Recall that a nondeterministic circuit has, in addition to the ordinary inputs $x = (x_1,\dots,x_n)$, a set of "nondeterministic" inputs $y=(y_1,\dots,y_m)$. A nondeterministic circuit $C$ accepts input $x$ if there exists $y$ such that the circuit output $1$ on $(x,y)$).
Obviously, the lower bound on $B(n,n^k)$ is also a lower bound on the number of boolean functions on $n$ inputs computed by circuits of size $n^k$, hence "formula complexity at most $n^k$" can be replaced by "circuit complexity at most $n^k$" in the corollary. The corollary can also be stated as: for functions with polynomial circuit complexity, switching to nondeterministic circuits cannot, on average, decrease the complexity by more than a constant factor.
Questions:
(1) Are there any interesting implications/consequences of the corollary above?
(2) Are there any other results in the same direction? For example, what is known about the following proposition? For problems in P, switching from TMs to NTMs cannot, on average, decrease the complexity by more than a constant factor.
(Gil Kalai also has a question somewhat related to this one.)
1) Realize that nondeterminism is a red herring here. You could have used alternation or circuits that have gates that solve the halting problem. It boils down to a simple counting argument that once you fix the model, you can only compute $2^k$ functions that have a $k$ bit description.
• @LanceFortnow Do we know $P\not\subseteq NTIME(n^\alpha)$ at any $\alpha\in(0,1)$? What is the smallest $\alpha$ that this is true? – T.... Jul 7 '18 at 15:23
• I believe the parity function cannot be computed in NTIME($n^\alpha$) time for any $\alpha<1$. Otherwise you'd have $2^{o(n)}$ size depth-2 circuits for parity which cannot happen. – Lance Fortnow Jul 7 '18 at 16:25
• @LanceFortnow It seems then $P\not\subseteq \exists(\log n)TIME[polylog(n)]$? – T.... Jul 7 '18 at 22:42
https://www.cuemath.com/algebra/cubing-a-binomial/ | # Cubing a Binomial
What will we obtain if we cube a general binomial?
$${\left( {x + y} \right)^3} = ?$$
We have
$(x + y)^3 = (x + y) \times (x + y)^2 = (x + y)(x^2 + 2xy + y^2)$
Now, we multiply these two brackets term-by-term:
$\begin{aligned}(x + y)^3 &= (x + y)(x^2 + 2xy + y^2)\\ &= x(x^2 + 2xy + y^2) + y(x^2 + 2xy + y^2)\\ &= x^3 + 2x^2y + xy^2 + x^2y + 2xy^2 + y^3\end{aligned}$
Thus,
$${\left( {x + y} \right)^3} = {x^3} + 3{x^2}y + 3x{y^2} + {y^3}$$
This is an identity – it holds true for every value of x and y. If we replace $$y \to - y,$$ we have:
$${\left( {x - y} \right)^3} = {x^3} - 3{x^2}y + 3x{y^2} - {y^3}$$
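Both identities are easy to verify with a computer algebra system; here is a minimal sympy sketch (sympy is not part of the lesson, just a convenient way to expand the cubes):

```python
from sympy import symbols, expand

x, y = symbols('x y')

print(expand((x + y)**3))  # x**3 + 3*x**2*y + 3*x*y**2 + y**3
print(expand((x - y)**3))  # x**3 - 3*x**2*y + 3*x*y**2 - y**3
```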
https://en-academic.com/dic.nsf/enwiki/60154 | # Algebraic integer
Algebraic integer
"This article deals with the ring of complex numbers integral over" Z. "For the general notion of algebraic integer, see Integrality".
In number theory, an algebraic integer is a complex number which is a root of some monic polynomial (leading coefficient 1) with integer coefficients. The set of all algebraic integers is closed under addition and multiplication so it forms a subring of complex numbers denoted by A. The ring A is the integral closure of regular integers in complex numbers.
The ring of integers of a number field "K", denoted by "OK", is the intersection of "K" and A: it may also be characterised as the maximal order of the field "K". Each algebraic integer belongs to the ring of integers of some number field. A number "x" is an algebraic integer if and only if the ring Z["x"] is finitely generated as an abelian group, which is to say, a Z-module.
Examples
* The only algebraic integers in rational numbers are the ordinary integers. In other words, the intersection of Q and A is exactly Z. The rational number "a"/"b" is not an algebraic integer unless "b" divides "a". Note that the leading coefficient of the polynomial "bx" − "a" is the integer "b". As another special case, the square root √"n" of a non-negative integer "n" is an algebraic integer, and so is irrational unless "n" is a perfect square.
*If "d" is a square free integer then the extension "K" = Q(√"d") is a quadratic field extension of rational numbers. The ring of algebraic integers "OK" contains √"d" since this is a root of the monic polynomial "x"² − "d". Moreover, if "d" ≡ 1 (mod 4) the element (1 + √"d")/2 is also an algebraic integer. It satisfies the polynomial "x"² − "x" + (1 − "d")/4 where the constant term (1 − "d")/4 is an integer. The full ring of integers is generated by √"d" or (1 + √"d")/2 respectively.
* If $\zeta_n$ is a primitive "n"-th root of unity, then the ring of integers of the cyclotomic field $\mathbf{Q}(\zeta_n)$ is precisely $\mathbf{Z}[\zeta_n]$.
* If "α" is an algebraic integer then is another algebraic integer. A polynomial for "β" is obtained by substituting "x""n" in the polynomial for "α".
Non-example
* If "P"("x") is a primitive polynomial which has integer coefficients but is not monic, and "P" is irreducible over Q, then none of the roots of "P" are algebraic integers. (Here "primitive" is used in the sense that the highest common factor of the set of coefficients of "P" is 1; this is weaker than requiring the coefficients to be pairwise relatively prime.)
Facts
* The sum, difference and product of two algebraic integers are again algebraic integers. In general their quotient is not. The monic polynomial involved is generally of higher degree than those of the original algebraic integers, and can be found by taking resultants and factoring. For example, if "x"² − "x" − 1 = 0, "y"³ − "y" − 1 = 0 and "z" = "xy", then eliminating "x" and "y" from "z" − "xy" and the polynomials satisfied by "x" and "y" using the resultant gives "z"⁶ − 3"z"⁴ − 4"z"³ + "z"² + "z" − 1, which is irreducible, and is the monic polynomial satisfied by the product. (A computer-algebra sketch of this elimination appears after this list.) (To see that "xy" is a root of the x-resultant of "z" − "xy" and "x"² − "x" − 1, one might use the fact that the resultant is contained in the ideal generated by its two input polynomials.)
* Any number constructible out of the integers with roots, addition, and multiplication is therefore an algebraic integer; but not all algebraic integers are so constructible: most roots of irreducible quintics are not.
* Every root of a monic polynomial whose coefficients are algebraic integers is itself an algebraic integer. In other words, the algebraic integers form a ring which is integrally closed in any of its extensions.
* The ring of algebraic integers A is a Bézout domain.
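The elimination in the product example above can be reproduced with sympy resultants; a minimal sketch (the computed polynomial may differ from the one quoted by a sign):

```python
from sympy import symbols, resultant, expand

x, y, z = symbols('x y z')

r1 = resultant(z - x*y, x**2 - x - 1, x)   # eliminate x: a polynomial in y and z
r2 = resultant(r1, y**3 - y - 1, y)        # eliminate y: a polynomial in z only
print(expand(r2))  # expected, up to sign: z**6 - 3*z**4 - 4*z**3 + z**2 + z - 1
```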
References
* Daniel A. Marcus, "Number Fields", third edition, Springer-Verlag, 1977
See also
*Integrality
*Gaussian integer
*Eisenstein integer
*Root of unity
*Dirichlet's unit theorem
*Fundamental units
Wikimedia Foundation. 2010.
https://brilliant.org/problems/easier-than-a-pentakill-2/ | # Easier than a pentakill
Number Theory Level 3
Let $$K$$ be a two-digit positive integer. How many $$K$$ are there such that $$K^{2}$$ ends in $$K$$?
For example, $$6^{2}$$ ends in 6.
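Reading "ends in $$K$$" as "the last two digits of $$K^{2}$$ are the digits of $$K$$", a brute-force check is a one-liner (a sketch of how one could verify an answer, not an official solution):

```python
# K^2 ends in K  <=>  K^2 ≡ K (mod 100) for a two-digit K
hits = [K for K in range(10, 100) if (K * K) % 100 == K]
print(len(hits), hits)
```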
https://cp4space.wordpress.com/2013/05/04/words-in-summation-convention/ | ## Words in summation convention
In Einstein summation convention, repeated indices are summed over. For example, $\delta_{ii} = \delta_{11} + \delta_{22} + \delta_{33} = 3$. If a Kronecker delta has two different suffices, we can ‘contract’ them as follows: $\delta_{ij} \delta_{jk} \delta_{ki} = \delta_{ij} \delta_{ji} = \delta_{ii} = 3$, whereas $\delta_{ii} \delta_{jj} \delta_{kk} = 27$.
Vishal Patil considered contracting English words in this manner. For example, ‘intestines’ gives us the expression $\delta_{in} \delta_{te} \delta_{st} \delta_{in} \delta_{es} = 9$. In general, suppose we have a word W of 2n letters (n pairs of distinct letters in some permutation), and wish to evaluate it in summation convention. Then the product of the deltas evaluates to $3^c$, where c is the number of cycles in the n-vertex (multi-)graph produced by joining two vertices with an edge if the corresponding indices share a delta. The graph corresponding to ‘intestines’ is shown below:
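(The graph figure is not reproduced here.) The evaluation is easy to automate; below is a rough Python sketch of the cycle-counting rule just described — the `word_value` helper and its details are an illustration, not code from the post:

```python
from collections import defaultdict

def word_value(word, dim=3):
    """Split the word into consecutive letter pairs (the Kronecker deltas),
    build the multigraph on the distinct letters, and return dim**cycles.
    Assumes every letter occurs exactly twice, so each vertex has degree 2
    and the multigraph is a disjoint union of cycles."""
    word = word.lower()
    assert len(word) % 2 == 0, "need an even number of letters"
    deltas = [(word[i], word[i + 1]) for i in range(0, len(word), 2)]

    adj = defaultdict(list)
    for a, b in deltas:
        adj[a].append(b)
        adj[b].append(a)

    seen, cycles = set(), 0
    for v in adj:                      # count connected components = cycles
        if v not in seen:
            cycles += 1
            stack = [v]
            while stack:
                u = stack.pop()
                if u in seen:
                    continue
                seen.add(u)
                stack.extend(adj[u])
    return dim ** cycles

print(word_value("intestines"))   # 9, matching the evaluation above
```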
### English words consisting entirely of repeated letters
I wanted to find other examples (other than ‘intestines’) of words in the English language which feature each letter precisely twice. To do so, I downloaded the official Scrabble dictionary SOWPODS (verifying that it was comprehensive by checking that it contains the word ‘boustrophedonic’) and wrote the following simple Mathematica program:
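The Mathematica code itself is not reproduced in this copy of the post; a rough Python equivalent (the filename "sowpods.txt" and the filtering details are assumptions) is:

```python
from collections import Counter

# Keep only words in which every letter occurs exactly twice.
# Assumes a plain-text SOWPODS word list, one word per line.
with open("sowpods.txt") as f:
    words = [w.strip().lower() for w in f if w.strip()]

doubled = [w for w in words if all(v == 2 for v in Counter(w).values())]
doubled.sort(key=len, reverse=True)
print(doubled[:10])   # the longest hits include 'trisectrices' and 'happenchance'
```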
As you can see, there are a couple of 12-letter words in there, namely ‘trisectrices’ and ‘happenchance’. Conveniently, 12 is a multiple of 3, so we can evaluate the product of Levi-Civita symbols with these subscripts:
$\epsilon_{hap} \epsilon_{pen} \epsilon_{cha} \epsilon_{nce} = \epsilon_{pha} \epsilon_{pen} \epsilon_{cha} \epsilon_{cen} = (\delta_{he} \delta_{an} - \delta_{hn} \delta_{ae})(\delta_{he} \delta_{an} - \delta_{hn} \delta_{ae}) = 9 + 9 - 3 - 3 = 12$
Here I’ve used the fact that the Levi-Civita symbol is invariant under cyclic permutations, and then simplified the expression using epsilon-delta contractions. We can also evaluate ‘trisectrices’:
$\epsilon_{tri} \epsilon_{sec} \epsilon_{tri} \epsilon_{ces} = -36$
You can have a go yourself at evaluating expressions corresponding to other English words.
### Sum of all words
Suppose we evaluate each of the $(2n)! 2^{-n}$ permutations of aabbcc…dd (with 2n letters) using Kronecker deltas (as we did with intestines) and wish to sum them together. For $n = 1$, we just have one word with a value of 3. For $n = 2$, we have six words, two of which have a value of 9 and four of which have a value of 3, so the sum is 30. We can continue this sequence.
If we choose a random word on n letters, the probability generating function for the number of cycles is given below:
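(The formula image is missing here; judging from the $n = 1$ and $n = 2$ values above, it is presumably $E[s^c] = \frac{s(s+2)(s+4)\cdots(s+2n-2)}{1\cdot 3\cdot 5\cdots(2n-1)}$, the cycle-count generating function of a random pairing of the $2n$ indices.)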
Setting s = 3 gives us the expected value of a random word. Multiplying this by the number of words will thus give us the total:
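(Again the image is missing; with the reconstruction above, setting $s = 3$ gives an expected value of $2n+1$, and multiplying by the $(2n)!\,2^{-n}$ words gives a total of presumably $\frac{(2n+1)!}{2^n}$, i.e. 3, 30, 630, 22680, …, matching the $n = 1$ and $n = 2$ sums computed above.)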
This sequence is already in the OEIS as A007019, but there is no mention of it being equal to the ‘sum of all words’ in Einstein summation convention.