url (stringlengths 14-2.42k) | text (stringlengths 100-1.02M) | date (stringlengths 19-19) | metadata (stringlengths 1.06k-1.1k)
---|---|---|---|
http://popflock.com/learn?s=Intersection_(set_theory) | Intersection (set Theory)
The intersection of two sets ${\displaystyle A}$ and ${\displaystyle B,}$ represented by circles. ${\displaystyle A\cap B}$ is in red.
In mathematics, the intersection of two sets ${\displaystyle A}$ and ${\displaystyle B,}$ denoted by ${\displaystyle A\cap B,}$[1] is the set containing all elements of ${\displaystyle A}$ that also belong to ${\displaystyle B}$ or equivalently, all elements of ${\displaystyle B}$ that also belong to ${\displaystyle A.}$[2]
## Notation and terminology
Intersection is written using the symbol "${\displaystyle \cap }$" between the terms; that is, in infix notation. For example:
${\displaystyle \{1,2,3\}\cap \{2,3,4\}=\{2,3\}}$
${\displaystyle \{1,2,3\}\cap \{4,5,6\}=\varnothing }$
${\displaystyle \mathbb {Z} \cap \mathbb {N} =\mathbb {N} }$
${\displaystyle \{x\in \mathbb {R} :x^{2}=1\}\cap \mathbb {N} =\{1\}}$
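These finite examples can be checked directly with Python's built-in `set` type; the snippet below is a minimal illustration (the variable names and the slice of the integers are arbitrary choices), not part of the original article.

```python
# Finite intersections with Python's built-in set type (illustrative only).
A = {1, 2, 3}
B = {2, 3, 4}

print(A & B)                    # {2, 3}  -- same as A.intersection(B)
print({1, 2, 3} & {4, 5, 6})    # set()   -- the empty set

# Z ∩ N = N, illustrated on a finite slice of the integers.
Z = set(range(-10, 11))
N = {n for n in Z if n >= 1}
print((Z & N) == N)             # True
```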
The intersection of more than two sets (generalized intersection) can be written as:
${\displaystyle \bigcap _{i=1}^{n}A_{i}}$
which is similar to capital-sigma notation.
For an explanation of the symbols used in this article, refer to the table of mathematical symbols.
## Definition
Intersection of three sets:
${\displaystyle ~A\cap B\cap C}$
Intersections of the unaccented modern Greek, Latin, and Cyrillic scripts, considering only the shapes of the letters and ignoring their pronunciation
Example of an intersection with sets
The intersection of two sets ${\displaystyle A}$ and ${\displaystyle B,}$ denoted by ${\displaystyle A\cap B}$,[3] is the set of all objects that are members of both the sets ${\displaystyle A}$ and ${\displaystyle B.}$ In symbols:
${\displaystyle A\cap B=\{x:x\in A{\text{ and }}x\in B\}.}$
That is, ${\displaystyle x}$ is an element of the intersection ${\displaystyle A\cap B}$ if and only if ${\displaystyle x}$ is both an element of ${\displaystyle A}$ and an element of ${\displaystyle B.}$[3]
For example:
• The intersection of the sets {1, 2, 3} and {2, 3, 4} is {2, 3}.
• The number 9 is not in the intersection of the set of prime numbers {2, 3, 5, 7, 11, ...} and the set of odd numbers {1, 3, 5, 7, 9, 11, ...}, because 9 is not prime.
### Intersecting and disjoint sets
We say that ${\displaystyle A}$ intersects (meets) ${\displaystyle B}$ if there exists some ${\displaystyle x}$ that is an element of both ${\displaystyle A}$ and ${\displaystyle B,}$ in which case we also say that ${\displaystyle A}$ intersects (meets) ${\displaystyle B}$ at ${\displaystyle x}$. Equivalently, ${\displaystyle A}$ intersects ${\displaystyle B}$ if their intersection ${\displaystyle A\cap B}$ is an inhabited set, meaning that there exists some ${\displaystyle x}$ such that ${\displaystyle x\in A\cap B.}$
We say that ${\displaystyle A}$ and ${\displaystyle B}$ are disjoint if ${\displaystyle A}$ does not intersect ${\displaystyle B.}$ In plain language, they have no elements in common. ${\displaystyle A}$ and ${\displaystyle B}$ are disjoint if their intersection is empty, denoted ${\displaystyle A\cap B=\varnothing .}$
For example, the sets ${\displaystyle \{1,2\}}$ and ${\displaystyle \{3,4\}}$ are disjoint, while the set of even numbers intersects the set of multiples of 3 at the multiples of 6.
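A small sketch of the same two examples in Python (the ranges are arbitrary, chosen only for illustration):

```python
# Disjointness and intersection checks (illustrative finite ranges).
print({1, 2}.isdisjoint({3, 4}))        # True  -- no common elements

evens = {n for n in range(0, 100) if n % 2 == 0}
mult3 = {n for n in range(0, 100) if n % 3 == 0}
mult6 = {n for n in range(0, 100) if n % 6 == 0}

print(not evens.isdisjoint(mult3))      # True  -- they intersect
print((evens & mult3) == mult6)         # True  -- precisely the multiples of 6
```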
## Algebraic properties
Binary intersection is an associative operation; that is, for any sets ${\displaystyle A,B,}$ and ${\displaystyle C,}$ one has
${\displaystyle A\cap (B\cap C)=(A\cap B)\cap C.}$
Thus the parentheses may be omitted without ambiguity: either of the above can be written as ${\displaystyle A\cap B\cap C}$. Intersection is also commutative. That is, for any ${\displaystyle A}$ and ${\displaystyle B,}$ one has
${\displaystyle A\cap B=B\cap A.}$
The intersection of any set with the empty set results in the empty set; that is, for any set ${\displaystyle A}$,
${\displaystyle A\cap \varnothing =\varnothing }$
Also, the intersection operation is idempotent; that is, any set ${\displaystyle A}$ satisfies that ${\displaystyle A\cap A=A}$. All these properties follow from analogous facts about logical conjunction.
Intersection distributes over union and union distributes over intersection. That is, for any sets ${\displaystyle A,B,}$ and ${\displaystyle C,}$ one has
{\displaystyle {\begin{aligned}A\cap (B\cup C)=(A\cap B)\cup (A\cap C)\\A\cup (B\cap C)=(A\cup B)\cap (A\cup C)\end{aligned}}}
Inside a universe ${\displaystyle U,}$ one may define the complement ${\displaystyle A^{c}}$ of ${\displaystyle A}$ to be the set of all elements of ${\displaystyle U}$ not in ${\displaystyle A.}$ Furthermore, the intersection of ${\displaystyle A}$ and ${\displaystyle B}$ may be written as the complement of the union of their complements, derived easily from De Morgan's laws:
${\displaystyle A\cap B=\left(A^{c}\cup B^{c}\right)^{c}}$
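Inside a small finite universe this identity is easy to verify mechanically; a sketch (the universe `U` and the sets are arbitrary choices):

```python
# Verify A ∩ B = (A^c ∪ B^c)^c inside a finite universe U.
U = set(range(1, 21))
A = {n for n in U if n % 2 == 0}
B = {n for n in U if n % 3 == 0}

Ac = U - A                         # complement of A relative to U
Bc = U - B
print((A & B) == U - (Ac | Bc))    # True
```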
## Arbitrary intersections
The most general notion is the intersection of an arbitrary nonempty collection of sets. If ${\displaystyle M}$ is a nonempty set whose elements are themselves sets, then ${\displaystyle x}$ is an element of the intersection of ${\displaystyle M}$ if and only if for every element ${\displaystyle A}$ of ${\displaystyle M,}$ ${\displaystyle x}$ is an element of ${\displaystyle A.}$ In symbols:
${\displaystyle \left(x\in \bigcap _{A\in M}A\right)\Leftrightarrow \left(\forall A\in M,\ x\in A\right).}$
The notation for this last concept can vary considerably. Set theorists will sometimes write "${\displaystyle \cap M}$", while others will instead write "${\displaystyle \cap _{A\in M}A}$". The latter notation can be generalized to "${\displaystyle \cap _{i\in I}A_{i}}$", which refers to the intersection of the collection ${\displaystyle \left\{A_{i}:i\in I\right\}.}$ Here ${\displaystyle I}$ is a nonempty set, and ${\displaystyle A_{i}}$ is a set for every ${\displaystyle i\in I.}$
In the case that the index set ${\displaystyle I}$ is the set of natural numbers, notation analogous to that of an infinite product may be seen:
${\displaystyle \bigcap _{i=1}^{\infty }A_{i}.}$
When formatting is difficult, this can also be written "${\displaystyle A_{1}\cap A_{2}\cap A_{3}\cap \cdots }$". This last example, an intersection of countably many sets, is actually very common; for an example, see the article on σ-algebras.
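For a finite, nonempty collection of sets the generalized intersection can be computed in one call; a minimal sketch (the collection `M` is arbitrary):

```python
# Intersection of a nonempty collection M of sets.
M = [{1, 2, 3, 4}, {2, 3, 4, 5}, {0, 2, 4, 6}]
common = set.intersection(*M)
print(common)    # {2, 4}

# The nullary case is undefined without a universe, mirroring the next section:
# set.intersection() with no arguments raises a TypeError.
```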
## Nullary intersection
Conjunctions of the arguments in parentheses
The conjunction of no argument is the tautology (compare: empty product); accordingly the intersection of no set is the universe.
Note that in the previous section, we excluded the case where ${\displaystyle M}$ was the empty set (${\displaystyle \varnothing }$). The reason is as follows: The intersection of the collection ${\displaystyle M}$ is defined as the set (see set-builder notation)
${\displaystyle \bigcap _{A\in M}A=\{x:{\text{ for all }}A\in M,x\in A\}.}$
If ${\displaystyle M}$ is empty, there are no sets ${\displaystyle A}$ in ${\displaystyle M,}$ so the question becomes "which ${\displaystyle x}$'s satisfy the stated condition?" The answer seems to be every possible ${\displaystyle x}$. When ${\displaystyle M}$ is empty, the condition given above is an example of a vacuous truth. So the intersection of the empty family should be the universal set (the identity element for the operation of intersection),[4] but in standard (ZF) set theory, the universal set does not exist.
In type theory however, ${\displaystyle x}$ is of a prescribed type ${\displaystyle \tau ,}$ so the intersection is understood to be of type ${\displaystyle \mathrm {set} \ \tau }$ (the type of sets whose elements are in ${\displaystyle \tau }$), and we can define ${\displaystyle \bigcap _{A\in \emptyset }A}$ to be the universal set of ${\displaystyle \mathrm {set} \ \tau }$ (the set whose elements are exactly all terms of type ${\displaystyle \tau }$).
## References
1. ^ "Intersection of Sets". web.mnstate.edu. Retrieved .
2. ^ "Stats: Probability Rules". People.richland.edu. Retrieved .
3. ^ a b "Set Operations | Union | Intersection | Complement | Difference | Mutually Exclusive | Partitions | De Morgan's Law | Distributive Law | Cartesian Product". www.probabilitycourse.com. Retrieved .
4. ^ Megginson, Robert E. (1998), "Chapter 1", An introduction to Banach space theory, Graduate Texts in Mathematics, 183, New York: Springer-Verlag, pp. xx+596, ISBN 0-387-98431-3 | 2022-01-18 11:15:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 107, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9184143543243408, "perplexity": 253.83674028379536}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300810.66/warc/CC-MAIN-20220118092443-20220118122443-00309.warc.gz"} |
https://www.wikidata.org/wiki/Q619985 | # multinomial theorem (Q619985)
describes how to expand a power of a sum in terms of powers of the terms in that sum. It is the generalization of the binomial theorem to polynomials
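For reference, the identity being described is the standard multinomial theorem (stated here for convenience; it is not part of the Wikidata record itself):

$(x_1 + x_2 + \cdots + x_m)^n = \sum_{k_1+k_2+\cdots+k_m=n} \frac{n!}{k_1!\,k_2!\cdots k_m!}\; x_1^{k_1} x_2^{k_2} \cdots x_m^{k_m},$

where the sum runs over all tuples of nonnegative integers $(k_1, \ldots, k_m)$ with $k_1+\cdots+k_m = n$.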
Language: English | Label: multinomial theorem
• cawiki
• cswiki
• dewiki
• enwiki
• eswiki
• frwiki
• hewiki
• huwiki
• itwiki
• jawiki
• kowiki
• ptwiki
• rowiki
• ruwiki
• skwiki
• svwiki
• ukwiki
• urwiki
• zhwiki | 2018-03-19 21:30:15 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9154630303382874, "perplexity": 7149.644879629365}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647146.41/warc/CC-MAIN-20180319194922-20180319214922-00450.warc.gz"} |
https://www.graduate.technion.ac.il/Theses/Abstracts.asp?Id=30107 | Ph.D Thesis
Ph.D Student: Abd El Majid Suzan | Title: Microstructures, Mechanical Properties and Castability of Aluminum Alloys (A201) with Additions of Si, Ti, and B | Department of Materials Science and Engineering | Professor Emeritus Menachem Bamberger
Abstract
The alloys investigated in this study are A201 aluminum alloys: Al-4.97 wt. % Cu-0.56 wt. % Ag based alloys. These alloys are commercially important because of their high mechanical properties, excellent machinability and good formability, and have been shown to be suitable for aerospace and automobile parts. They have a particularly high response to age hardening (especially in the T6 heat-treated condition) and as a result offer good mechanical properties.
Additions of silicon, titanium and boron to the A201 alloy (Al - 4.97 wt. % Cu - 0.56 wt. % Ag) were used to modify the microstructure and to improve the castability and mechanical properties of the alloy. Four alloys, A201, A201-1 wt. % Si, A201-1.33 wt. % B-3.17 wt. % Ti and A201-1 wt. % Si-1.33 wt. % B-3.17 wt. % Ti, were investigated in the as-cast condition, after solution treatment at 550 °C for ~20 hours, and after aging at 170 °C for up to 32 days. The effects of precipitation on the alloy properties were investigated by a combination of HRSEM, TEM and microhardness testing. The as-cast alloys contained α-Al as a matrix and a eutectic structure (α-Al/θ-Al2Cu) along the grain boundaries.
Addition of 1 wt. % Si improved the microhardness; additions of 2, 4 and 6 wt. % Si improved the fluidity and prevented hot tears, but generated many pores along the grain boundaries and thus decreased the mechanical properties of the alloys. Additions of Ti and B caused grain refinement and prevented hot tears, but produced a large amount of Al3Ti phase, which makes the melt more viscous, less fluid and less castable. After solution treatment the eutectic structure at the grain boundaries dissolved and an intermetallic phase, α-Al15(Fe,Mn)3Si2 with faceted {112} planes, appeared.
The precipitation sequence during aging was the following: supersaturated solid solution (SSSS) → GP zones → θ'' + Ω → θ' + Ω → θ. At the maximum hardness, there are two kinds of semi-coherent precipitates: θ' (Al2Cu, tetragonal) and Ω (Al2Cu, orthorhombic).
The Si addition induced a large amount of GP zones at the initial stage of aging and thereby increased the microhardness. The additions of Ti and B (without Si) caused a large amount of semi-coherent Ω precipitates at the expense of the θ'' and θ' phases. Additions of 1 wt. % Si, or of Ti and B (without Si), improved the corrosion resistance of A201 alloys after ST and aging by adding a passivation plateau.
The A201 wt. % Si showed the optimal combination of mechanical properties and corrosion resistance | 2020-05-31 20:59:20 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8126400113105774, "perplexity": 13949.162258726637}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347413624.48/warc/CC-MAIN-20200531182830-20200531212830-00004.warc.gz"} |
https://www.physicsforums.com/threads/f-g-m1-m2-r-2-related-question.1006930/ | # F=g*m1*m2/r^2 related question
Mikasun1108
Homework Statement:
The weight of a man on the moon is 1/6 of his weight on Earth. If a man is at the surface of the moon whose diameter of its cross-section is approximately 4.47x10^6, what is the mass of the moon?
Relevant Equations:
F=gm1m2/r^2
Thanks for the help! :)
Edit: My answer is 1.25 x 10^7 ( I do not think it is correct, I'll try to think some more and update my answer)
Do we need to get the mass of the man? or is this problem actually solvable?
-sun1108
Last edited:
Gold Member
2r=4.47 10^6 meter
Homework Helper
Gold Member
2022 Award
Homework Statement:: The weight of a man on the moon is 1/6 of his weight on Earth. If a man is at the surface of the moon whose diameter of its cross-section is approximately 4.47x10^6, what is the mass of the moon?
Relevant Equations:: F=gm1m2/r^2
Thanks for the help! :)
-sun1108
We need to see your best attempt to solve this yourself.
Mikasun1108
Mikasun1108
We need to see your best attempt to solve this yourself.
Ook, sorry I didn't put my attempt I wasn't sure about my answer. Ill make sure to write it.
Homework Helper
Gold Member
2022 Award
Edit: My answer is 1.25 x 10^7 ( I do not think it is correct, I'll try to think some more and update my answer)
Looks like an error with the orders of magnitude. Maybe a conversion problem.
Mikasun1108
Looks like an error with the orders of magnitude. Maybe a conversion problem.
Oh right i think i might have place a 667 instead of 6.67. Thanks for the feedback.
I tried doing it again this time I got 1.213 X 10^23 kg
Working: m2= Fr^2/Gm1
Assuming mass is 60kg weight in moon will be 97.2 N
So m2= 97.2N(4.985225 x 10^12m)/6.67 x 10^-11 Nm^2kg^-2(60kg)
m2= 1.213 x 10^23kg
Homework Helper
Gold Member
2022 Award
Oh right i think i might have place a 667 instead of 6.67. Thanks for the feedback.
I tried doing it again this time I got 1.213 X 10^23 kg
Working: m2= Fr^2/Gm1
Assuming mass is 60kg weight in moon will be 97.2 N
So m2= 97.2N(4.985225 x 10^12m)/6.67 x 10^-11 Nm^2kg^-2(60kg)
m2= 1.213 x 10^23kg
When I search online, the mass of the Moon is given as ##7.35 \times 10^{22} kg##.
Homework Helper
Gold Member
2022 Award
PS that said, the diameter of the Moon is given as ##3.5 \times 10^6 m##.
Homework Helper
Gold Member
2022 Award
Using the data you were given, your answer is approximately correct. If I use ##g = 9.81 m/s^2## for Earth, I get ##M = 1.25 \times 10^{23}kg##
Gold Member
PS that said, the diameter of the Moon is given as ##3.5 \times 10^6 m##.
Oh! so re #2, it was not meter but perhaps milli mile ?
[EDIT]
Wrong comment. I confused diameter with radius.
Last edited:
Homework Helper
Gold Member
2022 Award
Oh! so re #2, not meter but mile ?
##m## is metres.
Gold Member
Relevant Equations::[/B] F=gm1m2/r^2
$$GmM_E/6R_E^2=GmM_M/R_M^2$$
$$\frac{\rho_M}{\rho_E} =\frac{1}{6}\frac{ R_E}{R_M}$$
where ##\rho_M,\rho_E## are average density of Moon and the Earth with assumption that both have sphere shape. In observation
$$\frac{\rho_M}{\rho_E} =0.6$$ and $$\frac{ R_E}{R_M} =3.6$$
Last edited:
Mikasun1108
When I search online, the mass of the Moon is given as ##7.35 \times 10^{22} kg##.
So sorry for the very late reply, I haven't been opening my account, thank you so much for your feedback.
I apologize for the very late update: I tried doing the question again later, but then I still arrived at the same answer. Here is my working.
#### Attachments
• IMG-3093.jpg
39.4 KB · Views: 24
Homework Helper
Gold Member
2022 Award
I just put the numbers in a spreadsheet:
| G | D | R | g | M |
| --- | --- | --- | --- | --- |
| 6.67E-11 | 4.47E+06 | 2.24E+06 | 1.635 | 1.22E+23 |
The first column is the gravitational constant. The second is the given diameter of the Moon. The third is the radius of the Moon (##R = D/2##). The fourth is the Moon's surface gravity, which I calculated as ##\frac{ 9.81}{6} \ m/s^2##.
The final column, the estimated mass of the Moon, I calculated using the formula for surface gravity ##g = \frac{GM}{R^2}##.
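The same spreadsheet arithmetic in a few lines of Python, using the values given in the thread (this is just a cross-check, not part of the original posts):

```python
# Estimate the Moon's mass from g_moon = G*M / R^2, using the thread's numbers.
G = 6.67e-11          # gravitational constant, m^3 kg^-1 s^-2
D = 4.47e6            # given diameter, m
R = D / 2             # radius, m
g_moon = 9.81 / 6     # surface gravity, m/s^2

M = g_moon * R**2 / G
print(f"M = {M:.3e} kg")   # ~1.22e23 kg
```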
Mikasun1108
Mikasun1108
I just put the numbers in a spreadsheet:
| G | D | R | g | M |
| --- | --- | --- | --- | --- |
| 6.67E-11 | 4.47E+06 | 2.24E+06 | 1.635 | 1.22E+23 |
The first column is the gravitational constant. The second is the given diameter of the Moon. The third is the radius of the Moon (##R = D/2##). The fourth is the Moon's surface gravity, which I calculated as ##\frac{ 9.81}{6} \ m/s^2##.
The final column, the estimated mass of the Moon, I calculated using the formula for surface gravity ##g = \frac{GM}{R^2}##.
Yeyy, so my answer is correct :). Thank you so much for your help and prompt reply, I truly appreciate it :) | 2023-03-20 16:19:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.933867335319519, "perplexity": 1728.4074934627886}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943484.34/warc/CC-MAIN-20230320144934-20230320174934-00530.warc.gz"} |
http://www.maa.org/david-p-robbins-prize?device=mobile | # David P. Robbins Prize
Approved by the Board of Governors, April 2005
In 2005, the family of David P. Robbins gave the Mathematical Association of America funds sufficient to support a prize honoring the author or authors of a paper reporting on novel research in algebra, combinatorics, or discrete mathematics. The prize of $5000 is awarded every third year.

David Robbins spent most of his career on the research staff at the Institute for Defense Analyses Center for Communications Research (IDA-CCR) in Princeton. He exhibited extraordinary creativity and brilliance in his classified work, while also finding time to make major contributions in combinatorics, notably to the proof of the MacDonald Conjecture and to the discovery of conjectural relationships between plane partitions and alternating sign matrices.

1. The David P. Robbins Prize in Algebra, Combinatorics, and Discrete Mathematics shall be given to the author or authors of an outstanding paper in algebra, combinatorics, or discrete mathematics. Papers will be judged on quality of research, clarity of exposition, and accessibility to undergraduates. The paper must have been published within six years of the presentation of the prize, and must be written in English.
2. The prize is to be $5000, together with a certificate and a citation prepared by the Selection Committee. In the event of joint authors, the prize shall be divided equally.
3. The prize shall be given every third year at a national meeting of the Association.
4. The recipient need not be a member of the Association.
5. A standing committee on the David P. Robbins Prize shall recommend the recipient of the prize. The recommendation must be confirmed by the Board of Governors.
6. The Committee on the David P. Robbins Prize shall be appointed by the President of the Association. The Committee shall consist of four members, including academic and non-academic mathematicians. The term of appointment is six years and is non-renewable. Former members of the committee are eligible for reappointment after an interim of six years, except that members appointed to fulfill an unexpired term of less than three years may be reappointed, immediately thereafter, for a full term.
## List of Recipients
#### 2011
Mike Paterson, Yuval Peres, Mikkel Thorup, Peter Winkler, and Uri Zwick, "Overhang," American Mathematical Monthly 116, January 2009; and "Maximum Overhang," American Mathematical Monthly 116, December 2009.
#### 2008
Neil J.A. Sloane, "The on-line encyclopedia of integer sequences," Notices of the American Mathematical Society, Vol. 50, 2003, pp. 912-915.
Yuval Peres received the David P. Robbins Prize from the Mathematical Association of America at the 2011 Joint Mathematics Meetings in New Orleans. | 2013-12-08 22:22:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4179303050041199, "perplexity": 2443.014317320889}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163824647/warc/CC-MAIN-20131204133024-00050-ip-10-33-133-15.ec2.internal.warc.gz"} |
http://nrich.maths.org/10186 | ### Consecutive Numbers
An investigation involving adding and subtracting sets of consecutive numbers. Lots to find out, lots to explore.
### Calendar Capers
Choose any three by three square of dates on a calendar page...
### Latin Numbers
Can you create a Latin Square from multiples of a six digit number?
# Pile Driver
##### Stage: 3 Short Challenge Level:
The diagram shows a figure made from six equal, touching squares arranged with a vertical line of symmetry. A straight line is drawn through the bottom corner $P$ in such a way that the area of the figure is halved.
Where will the cut cross the edge $XY$?
If you liked this problem, here is an NRICH task that challenges you to use similar mathematical ideas.
This problem is taken from the UKMT Mathematical Challenges.
View the current weekly problem | 2016-10-24 07:07:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32471519708633423, "perplexity": 2319.647333074871}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719542.42/warc/CC-MAIN-20161020183839-00298-ip-10-171-6-4.ec2.internal.warc.gz"} |
https://math.stackexchange.com/questions/3072350/what-is-the-probability-that-a-random-function-from-mathbbn-to-mathbbn | # What is the probability that a random function from $\mathbb{N} \to \mathbb{N}$ is surjective?
Here $$[n]$$ denotes the first $$n$$ positive integers $$\{1,2,...,n\}$$ and $$\mathbb{N}$$ is the set of natural numbers.
Let $$f:[n] \to [n]$$ be a random function chosen uniformly from all possible functions from $$[n] \to [n]$$. The probability that $$f$$ is surjective is $$\frac{n!}{n^n}$$ which tends to zero as $$n \to \infty$$. This makes me want to believe that a function from $$\mathbb{N} \to \mathbb{N}$$ has probability zero of being surjective.
On the other hand, let $$g:\mathbb{N} \to [n]$$ be a random function chosen uniformly from all possible functions from $$\mathbb{N} \to [n]$$ . For each $$m \in [n]$$, the probability that $$g^{-1}(m) = \emptyset$$ is equal to the limit as $$k \to \infty$$ of $$(\frac{n-1}{n})^k$$ which equals zero. This makes me think that the desired probability is 1.
• What is the probability space under discussion here? What measure are you defining on this [rather large] space? – ncmathsadist Jan 13 at 18:21
• What kind of probability measure do you want to put on the uncountable set of functions from $\mathbb{N}$ to itself? – Hans Engler Jan 13 at 18:22
• I'm pretty sure that it is not possible to choose a function uniformly at random from all possible functions from $\mathbb{N}$ to $[n]$. Your first approach seems like a reasonable way to define this kind of thing: it's essentially analogous to natural density. – user3482749 Jan 13 at 18:23
• @user3482749: The natural "uniform" measure seems to be the one where the values of $f(k)$ are iid uniform on $[n]$. – Nate Eldredge Jan 13 at 18:55
• @NateEldredge Sure, but there doesn't seem to be a nice way to pass that to a distribution on the functions $\mathbb{N}\to\mathbb{N}$. Sure, you could instead take the same approach as for natural density, but it seems a bit odd to do that in one argument, and do something entirely different in the other. – user3482749 Jan 13 at 19:02
It is possible to define a uniform distribution on the functions from $$\mathbb{N}$$ to $$[n]$$ - we just choose each value $$f(k)$$ independently from the discrete uniform distribution.
Now, the probability we're looking for, of the function being surjective? That's $$1$$. Why the difference from the $$[n]\to[n]$$ case? Because it's really a limit of functions from $$[m]$$ to $$[n]$$ as $$m\to\infty$$. The likelihood of those being surjective? See the coupon collector problem. While the specific probability at any $$m$$ is very complicated, the time needed to get all of them has expected value $$n(1+\frac12+\frac13+\cdots+\frac1n)$$. The very fact that there's an expected value for the time tells us that the probability of infinite time is zero, and the probability our random function includes all values is $$1$$.
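A quick simulation sketch of the coupon-collector behaviour mentioned above (illustrative only; the choice of $$n$$ and the number of trials is arbitrary, and this is not part of the original answer):

```python
import random

# Average time to see every value in {0, ..., n-1} when sampling uniformly,
# compared with the coupon-collector prediction n * H_n.
n, trials = 20, 2000
total = 0
for _ in range(trials):
    seen, steps = set(), 0
    while len(seen) < n:
        seen.add(random.randrange(n))
        steps += 1
    total += steps

H_n = sum(1 / k for k in range(1, n + 1))
print(total / trials, n * H_n)   # both close to ~72 for n = 20
```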
The title question, about functions from $$\mathbb{N}$$ to $$\mathbb{N}$$? That's something there's no canonical probability distribution for. Still, we can do something. Choose some distribution $$u$$ such that every natural number has positive probability under $$u$$, and choose each value of the function $$f$$ independently from $$u$$. The probability that a given value $$k$$ is absent from the image of $$f$$ is zero for each $$k$$. By countable additivity of measure, the probability that at least one $$k$$ is absent from the image is thus $$\le \sum_{k=1}^{\infty} 0 = 0$$, and $$f$$ is surjective with probability $$1$$.
• If I choose $u$ to be a Poisson distribution (perhaps with a very small value of the parameter $\lambda$) then are you saying that a random function created as you describe above is surjective with probability 1. Moreover, the preimage of any element in the codomain is infinite? – Geoffrey Critzer Jan 13 at 19:17
• With probability 1, yes. Countable additivity can be nonintuitive sometimes. – jmerry Jan 13 at 19:19
Consider the problem of looking for the value of $$0^0$$. If you were to argue that $$0^0$$ should be $$\lim\limits_{y\to 0} 0^y$$ then you might think that $$0^0$$ should be zero since $$0$$ to any other power is again $$0$$. On the other hand if you were to argue that $$0^0$$ should be $$\lim\limits_{x\to 0} x^0$$ you might say that $$0^0$$ should be $$1$$ since any other number raised to the zeroth power is $$1$$. In the end, we say that $$\lim\limits_{(x,y)\to (0,0)} x^y$$ does not exist since we arrive at different answers depending on choices made. (In the context of combinatorics and set theory we do commonly make the arbitrary decision to define $$0^0$$ to be equal to $$1$$ for a number of reasons. See elsewhere on this site for a discussion of that topic in greater detail)
In effect, it matters "how quickly" the base and the exponent each travel to zero in relation to one another. In related problems with limits you might want to talk about a limit which appears to be in the form of $$\frac{\infty}{\infty}$$ such as $$\lim\limits_{n\to\infty}\frac{n!}{n^2}$$ or $$\lim\limits_{n\to\infty}\frac{\sqrt{n}}{\log(n)}$$. In this problem too, the relative speed at which each of the top or bottom approaches infinity will influence the final result.
In your specific problem, since you have not adequately specified a probability distribution (uniform distributions are tricky and do not make sense in a number of exotic scenarios and cannot work in countably infinite scenarios), we are forced to try to make sense of it using limits of the probability that a function from $$[m]\to[n]$$ is a surjection, letting both $$m$$ and $$n$$ approach infinity. As in the limit problems mentioned earlier, however, it depends on the speed at which each of the terms travels toward infinity. Indeed, as you showed, under one choice of relative speeds you would have expected a probability of $$1$$. Under a different choice of relative speeds you would have expected a probability of $$0$$.
That is not to say that the final answer is necessarily that there is no answer, just that given the way you have so far presented the problem it is impossible to answer. | 2019-07-19 02:08:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 62, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9363895058631897, "perplexity": 100.57793109277947}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525973.56/warc/CC-MAIN-20190719012046-20190719034046-00033.warc.gz"} |
http://planetmath.org/ProperMap | # proper map
Definition. Suppose $X$ and $Y$ are topological spaces, and $f$ is a map $f:X\to Y$. Then $f$ is a proper map if the inverse image of every compact subset of $Y$ is a compact set in $X$.
Title proper map ProperMap 2013-03-22 13:59:49 2013-03-22 13:59:49 matte (1858) matte (1858) 6 matte (1858) Definition msc 54C10 msc 54-00 PolynomialFunctionIsAProperMap | 2018-03-25 03:33:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 7, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9827776551246643, "perplexity": 1336.6356137366733}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257651780.99/warc/CC-MAIN-20180325025050-20180325045050-00676.warc.gz"} |
https://physicslens.com/category/a-level-topics/06-motion-in-a-circle/ | ## Cardboard Boomerang
An indoor boomerang can be constructed using 3 strips of cardboard put together. Throwing it may require some practice, but when you get the hang of it, it can inject great fun into your lesson. You can explore using different types of material to get the best boomerang.
Materials
1. Cardboard about 1 mm thick, of suitable rigidity
2. Staples
3. Scissors
4. Rubber band or tape for added weight
Procedure
1. Cut 3 equal rectangular strips of cardboard measuring 12 cm x 2.5 cm. You may like to trim the sharp corners on one of the ends of each strip.
2. Cut a slit of 1.5 cm along the middle of each strip, on the untrimmed end.
3. Join the strips together at the slits, the angle between two adjacent strips being 120 degrees.
4. One side of the slit should overlap another so that it looks like the above:
5. Staple the overlapping centre together.
6. The boomerang is ready for use! Throwing the boomerang is done by holding onto one of the wings. The boomerang should be almost vertical, at an angle of about 10°. With a flick of the wrist, spin the boomerang as it leaves the hand. The direction of spin should be toward the side that is tilted up.
Science Explained
A boomerang requires a centripetal force to cause it to fly in a circular path back to the thrower. This centripetal force comes from the lift that the wings generate as they cut through the air.
## Angular Displacement - 2011 A-level question
A disc rotates clockwise about its centre O until point P has moved to point Q, such that OP equals the length of the straight line PQ. What is the angular displacement of OQ relative to OP? A. $\frac{\pi}{3} rad$ B. $\frac{2\pi}{3} rad$ C. $\frac{4\pi}{3} rad$ D. $\frac{5\pi}{3} rad$ | 2019-11-23 00:05:31 | {"extraction_info": {"found_math": true, "script_math_tex": 4, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 8, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35922491550445557, "perplexity": 1267.745667142187}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496672170.93/warc/CC-MAIN-20191122222322-20191123011322-00007.warc.gz"} |
https://www.neetprep.com/questions/55-Physics/688-Kinetic-Theory-Gases?courseId=8&testId=1138155-NCERT-Solved-Examples-Based-MCQs | The density of water is 1000 kg m–3. The density of water vapour at 100 °C and 1 atm pressure is 0.6 kg m–3. The volume of a molecule multiplied by the total number gives, what is called, molecular volume. The ratio (or fraction) of the molecular volume to the total volume occupied by the water vapour under the above conditions of temperature and pressure is:
1. $5×{10}^{-4}$
2. $60×{10}^{-4}$
3. $50×{10}^{-4}$
4. $6×{10}^{-4}$
Subtopic: Ideal Gas Equation |
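A hedged sketch of the usual reasoning (the official explanation is behind the course login): in the liquid the molecules are essentially in contact, so the volume actually occupied by the molecules of a given mass of water is approximately the liquid volume, and the requested fraction reduces to the ratio of vapour density to liquid density.

```python
# Fraction = (molecular volume) / (vapour volume) ≈ rho_vapour / rho_liquid
rho_liquid = 1000.0   # kg m^-3
rho_vapour = 0.6      # kg m^-3
print(rho_vapour / rho_liquid)   # 6e-4, i.e. 6 x 10^-4
```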
The density of water is 1000 kg m⁻³. The volume of a water molecule is:
1.
2.
3.
4.
Subtopic: Ideal Gas Equation |
The density of water is 1000 kg m⁻³. The density of water vapour at 100 °C and 1 atm pressure is 0.6 kg m⁻³. What is the average distance between molecules (intermolecular distance) in water? (Given, the diameter of a water molecule in the liquid state = 4 Å)
1.
2.
3.
4.
Subtopic: Ideal Gas Equation |
A vessel contains two nonreactive gases: neon (monatomic) and oxygen (diatomic). The ratio of their partial pressures is 3:2. The ratio of the number of molecules is:
(Atomic mass of Ne = 20.2 u, molecular mass of O2 = 32.0 u)
1. 2:3
2. 3:2
3. 1:3
4. 3:1
Subtopic: Ideal Gas Equation |
A vessel contains two nonreactive gases: neon (monatomic) and oxygen (diatomic). The ratio of their partial pressures is 3:2. The ratio of mass density of neon and oxygen in the vessel is: (Atomic mass of Ne = 20.2 u, molecular mass of O2 = 32.0 u).
1. 0.397
2. 0.937
3. 0.947
4. 1
Subtopic: Ideal Gas Equation |
A flask contains argon and chlorine in the ratio of 2:1 by mass. The temperature of the mixture is 27 °C. The ratio of average kinetic energy per molecule of the molecules of the two gases is:
(Atomic mass of argon = 39.9 u; Molecular mass of chlorine = 70.9 u)
1. 1:2
2. 2:1
3. 1:1
4. 1:2
Subtopic: Types of Velocities |
A flask contains argon and chlorine in the ratio of 2:1 by mass. The temperature of the mixture is 27 °C. The ratio of root mean square speed vrms of the molecules of the two gases is:
(Atomic mass of argon = 39.9 u; Molecular mass of chlorine = 70.9 u)
1. 2.33
2. 1.33
3. 0.5
4. 2
Subtopic: Types of Velocities |
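A quick numerical check, assuming $v_{rms} = \sqrt{3RT/M}$ so that at a common temperature the ratio depends only on the molar masses (not from the course explanation):

```python
import math

M_Ar, M_Cl2 = 39.9, 70.9          # g/mol, as given
print(math.sqrt(M_Cl2 / M_Ar))    # ≈ 1.33 = v_rms(Ar) / v_rms(Cl2)
```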
Uranium has two isotopes of masses 235 and 238 units. If both are present in Uranium hexafluoride gas, which would have the larger average speed?
1. ${}_{235}\mathrm{UF}_{6}$
2. ${}_{238}{\mathrm{UF}}_{6}$
3. Both will have the same average speed.
4. Data insufficient
Subtopic: Types of Velocities |
Uranium has two isotopes of masses 235 and 238 units. If both are present in Uranium hexafluoride gas. If the atomic mass of fluorine is 19 units, what is the percentage difference in speeds of isotopes of Uranium at any temperature?
1. 0.43%
2. 0.34%
3. 0.55%
4. Data insufficient
Subtopic: Types of Velocities |
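A short check, assuming the average speed scales as $1/\sqrt{M}$ and using molar masses $235 + 6\times 19 = 349$ and $238 + 6\times 19 = 352$ (again not the official explanation):

```python
import math

M_light, M_heavy = 235 + 6 * 19, 238 + 6 * 19   # 349 and 352
ratio = math.sqrt(M_heavy / M_light)            # v(235-UF6) / v(238-UF6)
print((ratio - 1) * 100)                        # ≈ 0.43 (percent)
```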
When a molecule (or an elastic ball) hits a (massive) wall, it rebounds with the same speed. When a ball hits a massive bat held firmly, the same thing happens. However, when the bat is moving towards the ball, the ball rebounds at a different speed. Does the ball move faster or slower?
1. Faster
2. Slower
3. The speed of the ball does not change
4. None of these
Subtopic: Ideal Gas Equation |
Prefer Books for Question Practice? Get NEETprep's Unique MCQ Books with Online Video/Text Solutions via Telegram Bot | 2022-09-26 18:11:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 7, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5604809522628784, "perplexity": 11207.888500236853}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00617.warc.gz"} |
https://chem.libretexts.org/Courses/University_of_California_Davis/UCD_Chem_110A%3A_Physical_Chemistry__I/UCD_Chem_110A%3A_Physical_Chemistry_I_(Larsen)/Lectures/Lecture_16%3A_Linear_Momentum_and_Electronic_Spectroscopy | # 16: Linear Momentum and Electronic Spectroscopy
Recap of Lecture 15
Last lecture continued the discussion of the 3D rigid rotor. We discussed three aspects of the solutions to this system: the wavefunctions (the spherical harmonics), the energies (and degeneracies), and the TWO quantum numbers ($$J$$ and $$m_J$$) and their ranges. We discussed that the components of the angular momentum operator are subject to the Heisenberg uncertainty principle and cannot be known to infinite precision simultaneously; however, the magnitude of the angular momentum and any one component can be. This results in the vectorial representation of angular momentum
These are the cyclic permutations of the fundamental commutation relations satisfied by the components of an orbital angular momentum:
$[L_x, L_y] = {\rm i}\,\hbar\, L_z \label{6.3.17a}$
$[L_y, L_z] = {\rm i}\,\hbar\, L_x \label{6.3.17b}$
$[L_z, L_x] = {\rm i}\,\hbar\, L_y \label{6.3.17c}$
Therefore, two orthogonal components of angular momentum (for example $$L_x$$ and $$L_y$$) are complementary and cannot be simultaneously known or measured, except in special cases such as $$\displaystyle L_{x}=L_{y}=L_{z}=0$$.
We can introduce a new operator $$\hat{L}^2$$:
$\hat{L}^2 = L_x^{\,2}+L_y^{\,2}+L_z^{\,2} \label{6.3.5}$
That is the magnitude of the Angular momentum squared.
It is possible to simultaneously measure or specify $$L^2$$ and any one component of $$L$$; for example, $$L^2$$ and $$L_z$$. This is often useful, and the values are characterized by ($$J$$) and ($$m_J$$). In this case the quantum state of the system is a simultaneous eigenstate of the operators $$L^2$$ and $$L_z$$, but not of $$L_x$$ or $$L_y$$.
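These statements are easy to check numerically in a finite basis. The sketch below builds the standard $$3\times 3$$ matrices for $$\ell = 1$$ (in units of $$\hbar$$) and verifies $$[L_x, L_y] = i\hbar L_z$$ and $$\hat{L}^2 = \ell(\ell+1)\hbar^2$$; it is an illustration added here, not part of the original lecture notes.

```python
import numpy as np

# Angular momentum matrices for l = 1 in the |1,1>, |1,0>, |1,-1> basis (hbar = 1).
s = 1 / np.sqrt(2)
Lx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Ly = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex)
Lz = np.diag([1, 0, -1]).astype(complex)

# Cyclic commutation relation [Lx, Ly] = i Lz
print(np.allclose(Lx @ Ly - Ly @ Lx, 1j * Lz))   # True

# L^2 = Lx^2 + Ly^2 + Lz^2 = l(l+1) * identity = 2 * I for l = 1
L2 = Lx @ Lx + Ly @ Ly + Lz @ Lz
print(np.allclose(L2, 2 * np.eye(3)))            # True

# L^2 commutes with Lz, so both can be specified simultaneously.
print(np.allclose(L2 @ Lz - Lz @ L2, 0))         # True
```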
Illustration of the vector model of orbital angular momentum. Image used with permission (Public domain; Maschen).
Since the angular momenta are quantum operators, they cannot be drawn as vectors like in classical mechanics. Nevertheless, it is common to depict them heuristically in this way. Depicted above is a set of states with quantum numbers $${\displaystyle J =2}$$, and $${\displaystyle m_{J}=-2,-1,0,1,2}$$ for the five cones from bottom to top.
Since $${\displaystyle |L|={\sqrt {L^{2}}}=\hbar {\sqrt {6}}}$$, the vectors are all shown with length $${\displaystyle \hbar {\sqrt {6}}}$$. The rings represent the fact that $${\displaystyle L_{z}}$$ is known with certainty, but $${\displaystyle L_{x}}$$ and $${\displaystyle L_{y}}$$ are unknown; therefore every classical vector with the appropriate length and z-component is drawn, forming a cone.
The expected value of the angular momentum for a given ensemble of systems in the quantum state characterized by $${\displaystyle J}$$ and $${\displaystyle m_{J}}$$ could be somewhere on this cone, while it cannot be defined for a single system (since the components of $${\displaystyle L}$$ do not commute with each other).
## Spherical Harmonics in Rigid Rotors
The wavefunction of a rigid rotor can be separated (via the separation of variables approach) into a product of two functions:

$\color{red} | Y(\theta, \phi) \rangle = \Theta(\theta) \cdot \Phi(\phi)$
The $$\Phi$$ function is found to have quantum number $$m$$. $$\Phi_m (\phi) = A_m e^{im\phi}$$, where $$A_m$$ is the normalization constant and $$m = 0, \pm1, \pm2 ... \pm\infty$$. The $$\Theta$$ function was solved and is known as Legendre polynomials, which have quantum numbers $$m$$ and $$\ell$$. When $$\Theta$$ and $$\Phi$$ are multiplied together, the product is known as spherical harmonics with labeling $$Y_{J}^{m} (\theta, \phi)$$.
| $$m_J$$ | $$J$$ | $$\Theta ^{m_J}_J (\theta)$$ | $$\Phi (\varphi)$$ | $$Y^{m_J}_J (\theta , \varphi)$$ |
| --- | --- | --- | --- | --- |
| 0 | 0 | $$\dfrac {1}{\sqrt {2}}$$ | $$\dfrac {1}{\sqrt {2 \pi}}$$ | $$\dfrac {1}{\sqrt {4 \pi}}$$ |
| 0 | 1 | $$\sqrt {\dfrac {3}{2}}\cos \theta$$ | $$\dfrac {1}{\sqrt {2 \pi}}$$ | $$\sqrt {\dfrac {3}{4 \pi}}\cos \theta$$ |
| 1 | 1 | $$\sqrt {\dfrac {3}{4}}\sin \theta$$ | $$\dfrac {1}{\sqrt {2 \pi}}e^{i \varphi}$$ | $$\sqrt {\dfrac {3}{8 \pi}}\sin \theta \, e^{i \varphi}$$ |
| -1 | 1 | $$\sqrt {\dfrac {3}{4}}\sin \theta$$ | $$\dfrac {1}{\sqrt {2 \pi}}e^{-i\varphi}$$ | $$\sqrt {\dfrac {3}{8 \pi}}\sin \theta \, e^{-i \varphi}$$ |
| 0 | 2 | $$\sqrt {\dfrac {5}{8}}(3\cos ^2 \theta - 1)$$ | $$\dfrac {1}{\sqrt {2 \pi}}$$ | $$\sqrt {\dfrac {5}{16\pi}}(3\cos ^2 \theta - 1)$$ |
| 1 | 2 | $$\sqrt {\dfrac {15}{4}} \sin \theta \cos \theta$$ | $$\dfrac {1}{\sqrt {2 \pi}}e^{i \varphi}$$ | $$\sqrt {\dfrac {15}{8\pi}} \sin \theta \cos \theta \, e^{i\varphi}$$ |
| -1 | 2 | $$\sqrt {\dfrac {15}{4}} \sin \theta \cos \theta$$ | $$\dfrac {1}{\sqrt {2 \pi}}e^{-i\varphi}$$ | $$\sqrt {\dfrac {15}{8\pi}} \sin \theta \cos \theta \, e^{-i\varphi}$$ |
| 2 | 2 | $$\sqrt {\dfrac {15}{16}} \sin ^2 \theta$$ | $$\dfrac {1}{\sqrt {2 \pi}}e^{2i\varphi}$$ | $$\sqrt {\dfrac {15}{32\pi}} \sin ^2 \theta \, e^{2i\varphi}$$ |
| -2 | 2 | $$\sqrt {\dfrac {15}{16}} \sin ^2 \theta$$ | $$\dfrac {1}{\sqrt {2 \pi}}e^{-2i\varphi}$$ | $$\sqrt {\dfrac {15}{32\pi}} \sin ^2 \theta \, e^{-2i\varphi}$$ |
The table above lists the spherical harmonics $$Y_J^{m_J}$$, which are the solutions of the angular Schrödinger equation for a 3D rigid rotor.
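As a sanity check, the entries of the table can be integrated numerically over the sphere; the sketch below (grid size chosen arbitrarily, not part of the original notes) confirms that $$Y_1^1$$ is normalized and orthogonal to $$Y_2^1$$.

```python
import numpy as np

# Numerical check of normalization/orthogonality for two table entries.
theta = np.linspace(0, np.pi, 400)
phi = np.linspace(0, 2 * np.pi, 400, endpoint=False)
T, P = np.meshgrid(theta, phi, indexing="ij")
dtheta, dphi = theta[1] - theta[0], phi[1] - phi[0]

Y11 = np.sqrt(3 / (8 * np.pi)) * np.sin(T) * np.exp(1j * P)
Y21 = np.sqrt(15 / (8 * np.pi)) * np.sin(T) * np.cos(T) * np.exp(1j * P)

def inner(f, g):
    # <f|g> = ∫ f* g sin(theta) dtheta dphi over the sphere (simple Riemann sum)
    return np.sum(np.conj(f) * g * np.sin(T)) * dtheta * dphi

print(abs(inner(Y11, Y11)))   # ≈ 1.0 (normalized)
print(abs(inner(Y11, Y21)))   # ≈ 0.0 (orthogonal)
```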
## Microwave Spectroscopy Probes Rotations
I will skip over this topic in lecture, but you have a worksheet on it in your discussion and I expect you to master this concept. It is not that hard actually: once you have the eigenstates for a system, you can couple in a spectroscopy; in this case, microwave spectroscopy.
The permanent electric dipole moments of polar molecules can couple to the electric field of electromagnetic radiation. This coupling induces transitions between the rotational states of the molecules. The energies that are associated with these transitions are detected in the far infrared and microwave regions of the spectrum. For example, the microwave spectrum for carbon monoxide spans a frequency range of 100 to 1200 GHz, which corresponds to 3 - 40 $$cm^{-1}$$.
The selection rules for the rotational transitions are derived from the transition moment integral by using the spherical harmonic functions and the appropriate dipole moment operator, $$\hat {\mu}$$.
$\mu _T = \int Y_{J_f}^{m_f*} \hat {\mu} Y_{J_i}^{m_i} \sin \theta \,d \theta \,d \varphi \label {5.9.1}$
Evaluating the transition moment integral involves a bit of mathematical effort. This evaluation reveals that the transition moment depends on the square of the dipole moment of the molecule, $$\mu ^2$$ and the rotational quantum number, $$J$$, of the initial state in the transition,
$\mu _T = \mu ^2 \dfrac {J + 1}{2J + 1} \label {5.9.2}$
and that the selection rules for rotational transitions are
$\color{red} \Delta J = \pm 1 \label {5.9.3}$
$\color{red} \Delta m_J = 0, \pm 1 \label {5.9.4}$
A photon is absorbed for $$\Delta J = +1$$ and a photon is emitted for $$\Delta J = -1$$.
The energies of the rotational levels are given by
$E = J(J + 1) \dfrac {\hbar ^2}{2I}$
and each energy level has a degeneracy of $$2J+1$$ due to the different $$m_J$$ values.
Each energy level of a rigid rotor has a degeneracy of $$2J+1$$ due to the different $$m_J$$ values.
The transition energies for absorption of radiation are given by
$\Delta E_{states} = E_f - E_i = E_{photon} = h \nu = hc \bar {\nu} \label {5.9.5}$
$h \nu =hc \bar {\nu} = J_f (J_f +1) \dfrac {\hbar ^2}{2I} - J_i (J_i +1) \dfrac {\hbar ^2}{2I} \label {5.9.6}$
Since microwave spectroscopists use frequency, and infrared spectroscopists use wavenumber units when describing rotational spectra and energy levels, both $$\nu$$ and $$\bar {\nu}$$ are included in Equation $$\ref{5.9.6}$$, and $$J_i$$ and $$J_f$$ are the rotational quantum numbers of the initial (lower) and final (upper) levels involved in the absorption transition. When we add in the constraints imposed by the selection rules, $$J_f$$ is replaced by $$J_i + 1$$, because the selection rule requires $$J_f – J_i = 1$$ for absorption. The equation for absorption transitions then can be written in terms of the quantum number $$J_i$$ of the initial level alone.
$h \nu = hc \bar {\nu} = 2 (J_i + 1) \dfrac {\hbar ^2}{2I} \label {5.9.7}$
Divide Equation $$\ref{5.9.7}$$ by $$h$$ to obtain the frequency of the allowed transitions,
$\nu = 2B (J_i + 1) \label {5.9.8}$
where $$B$$, the rotational constant for the molecule (here in frequency units), is defined as

$B = \dfrac {\hbar ^2}{2Ih} = \dfrac {h}{8\pi^2 I} \label {5.9.9}$
The rotation spectrum of $$^{12}C^{16}O$$ at 40 K.
This microwave spectrum can be decomposed into eigenstates thusly
Energy levels and line positions calculated in the rigid rotor approximation. Image used with permission from Wikipedia
Example $$\PageIndex{1}$$: Carbon Monoxide
Calculate the bond length of carbon monoxide if the $$J = 0$$ to $$J = 1$$ transition for $$^{12}C^{16}O$$ in its microwave spectrum is $$1.153 \times 10^{5} MHz$$.
Solution:
Assuming that $$^{12}C^{16}O$$ can be treated as a rigid rotor,

$\nu = 2B(J + 1)$

with $$J = 0, 1, 2, . . .$$ and $$B = \dfrac{h}{8\pi^2 I}$$
For the $$J = 0$$ to $$J = 1$$ transition,
$\dfrac {1}{2}\nu = B = \dfrac{h}{8\pi^2 I}$

$\dfrac {1}{2}(1.153 \times 10^{11}\ s^{-1}) = B = \dfrac{6.626 \times 10^{-34}\ J\,s}{8\pi^2\mu r^2}$

We can find $$\mu$$ and then solve for $$r$$ using $$I = \mu r^2$$.

$\mu = \dfrac{(12.00)(15.99)}{27.99} (1.661 \times 10^{-27}\ kg) = 1.139 \times 10^{-26}\ kg$

$r^2 = \dfrac{6.626 \times 10^{-34}\ J\,s}{4\pi^2 (1.139 \times 10^{-26}\ kg)(1.153 \times 10^{11}\ s^{-1})} = 1.28 \times 10^{-20}\ m^2$

$r = 1.13 \times 10^{-10}\ m = 113\ pm$

Thus the bond length in carbon monoxide is 113 pm.
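The arithmetic of this example is easy to reproduce; a short sketch using the same input numbers:

```python
import math

h = 6.626e-34            # Planck constant, J s
nu = 1.153e11            # Hz, J = 0 -> 1 transition of 12C16O
amu = 1.661e-27          # kg

mu = (12.00 * 15.99) / (12.00 + 15.99) * amu   # reduced mass, kg
B = nu / 2                                     # rotational constant, Hz
I = h / (8 * math.pi**2 * B)                   # moment of inertia, kg m^2
r = math.sqrt(I / mu)                          # bond length, m

print(f"r = {r * 1e12:.0f} pm")                # ≈ 113 pm
```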
## The Eigenvalue Problem for the Hydrogen Atom
Step 1: Define the potential for the problem
For the Hydrogen atom, the potential energy is given by the Coulombic potential, which is
$\color{red}V(r) = -\dfrac {e^2}{4\pi \epsilon_0 r}$
Step 2: Define the Schrödinger Equation for the problem
As with every quantum eigenvalue problem, the Hamiltonian is the sum of the kinetic and potential energy operators:
$\hat {H} = T + V$
The potential is defined above and the kinetic energy operator is given by
$T = -\dfrac {\hbar^2}{2m_e} \nabla^2$
The Hamiltonian for the Hydrogen atom becomes
$\hat {H} = -\dfrac {\hbar^2}{2m_e}\nabla^2 - \dfrac {e^2}{4\pi \epsilon_0 r}\label {1}$
and since the potential has no time dependence, we can use the time-independent Schrödinger equation
$\hat {H} | \psi (x,y,z) \rangle = E | \psi (x,y,z) \rangle$
Step 3: Solve the Schrödinger Equation for the problem
The potential has spherical symmetry (i.e., it depends only on $$r$$, not on $$x$$, $$y$$, and $$z$$ separately), so switching to spherical coordinates is useful. The new eigenvalue problem is
$\hat {H}\psi(r,\theta,\phi)$
$= -\dfrac {\hbar^2}{2m_e} \left [\dfrac {1}{r^2} \dfrac {d}{d r} \left(r^2 \dfrac {d \psi(r,\theta,\phi)}{d r}\right) + \dfrac {1}{r^2 \sin(\theta)} \dfrac {d}{d \theta} \left(\sin(\theta) \dfrac {d \psi(r,\theta,\phi)}{d \theta}\right) + \dfrac {1}{r^2 \sin^2(\theta)} \dfrac {d^2 \psi(r,\theta,\phi)}{d \phi^2} \right] - \dfrac {e^2}{4\pi\epsilon_0 r} \psi(r,\theta,\phi)$
$= E\psi (r,\theta,\phi) \label{2}$
Multiplying Equation $$\ref{2}$$ by $$2m_e r^2$$ and moving $$E$$ to the left side gives
$-\hbar^2 \left(\dfrac {d}{d r} r^2 \dfrac {d \psi(r,\theta,\phi) }{d r}\right) - \hbar^2 \left[\dfrac {1}{\sin (\theta)} \left(\dfrac {d}{d \theta} \sin (\theta) \dfrac {d \psi(r,\theta,\phi) }{d \theta}\right) + \dfrac {1}{\sin^2 (\theta)} \dfrac {d^2 \psi(r,\theta,\phi) }{d \phi^2} \right] - 2m_e r^2 \left [\dfrac {e^2}{4\pi\epsilon_0 r} + E \right] \psi (r,\theta,\phi) = 0 \label {3}$
Although Equation $$\ref{3}$$ is a complex equation, it can be simplified by using three formulas,
$\hat{L}^2 = - \hbar^2 \left [\dfrac {1}{\sin (\theta)} \dfrac {d}{d \theta} \left(\sin (\theta) \dfrac {d}{d \theta}\right) + \dfrac {1}{\sin^2 (\theta)} \dfrac {d^2}{d \phi^2}\right] \label{4a}$
$\color{red} \psi(r,\theta,\phi) = R(r)Y_{\ell}^{m}(\theta,\phi) \label{4b}$
where
$\hat {L}^2 Y_{\ell}^{m} = \hbar^2 \ell(\ell + 1) Y_{\ell}^{m} \label{4c}$
Using these three equations for Equation $$\ref{3}$$ gives
$-\dfrac {\hbar^2}{2m_e r^2} \dfrac {d}{d r} (r^2 \dfrac {d R(r)}{d r}) + \left[\dfrac {\hbar^2 \ell(\ell+1)}{2m_e r^2} - \dfrac{e^2}{4\pi\epsilon_0 r} - E \right] R(r) = 0 \label {5}$
Solving this radial equation for $$R(r)$$ introduces a new quantum number, $$n$$, and the allowed energies are found to be
$\color{red} E_n = -\dfrac {m_e e^4}{8\epsilon_0^2 h^2 n^2}\label{6}$
with $$n=1,2,3 ...\infty$$
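To get a feel for these energies, the expression for $$E_n$$ can be evaluated numerically. The short Python sketch below is added here for illustration only, using rounded values of the physical constants.

```python
# Sketch: evaluating E_n = -m_e e^4 / (8 eps0^2 h^2 n^2) for the first few levels.
m_e  = 9.109e-31      # electron mass, kg
e    = 1.602e-19      # elementary charge, C
eps0 = 8.854e-12      # vacuum permittivity, F/m
h    = 6.626e-34      # Planck constant, J s
eV   = 1.602e-19      # joules per electron volt

for n in range(1, 5):
    E = -m_e * e**4 / (8 * eps0**2 * h**2 * n**2)
    print(f"n = {n}:  E = {E/eV:6.2f} eV")
# n = 1 gives about -13.6 eV, the familiar hydrogen ground-state energy.
```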
Step 4: Do something with the Eigenstates and associated energies
Let's talk about the solutions first.
## Ranges of the Three Quantum Numbers
In solving these differential equations, constraints arise that restrict the allowed values of $$\ell$$ and $$m_{\ell}$$, while $$n$$ can be any positive integer; the allowed ranges are listed below, and a short enumeration sketch follows the list.
• $$n = 1, 2, 3, ...\infty$$
• $$\ell = 0, 1, 2, ... (n-1)$$
• $$m_{\ell} = 0, \pm 1, \pm 2, ... \pm \ell$$
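A minimal Python sketch of this bookkeeping (my addition, for illustration) enumerates the allowed combinations and confirms the $$n^2$$ count of $$(n, \ell, m_{\ell})$$ states in each shell.

```python
# Sketch: enumerating the allowed (n, l, m_l) combinations for the first few shells.
def quantum_numbers(n_max):
    for n in range(1, n_max + 1):
        for l in range(0, n):              # l = 0, 1, ..., n-1
            for m in range(-l, l + 1):     # m_l = -l, ..., +l
                yield n, l, m

for n, l, m in quantum_numbers(3):
    print(n, l, m)
# Each shell n contains n**2 combinations (1, 4, 9, ...), matching the
# degeneracy of the hydrogen energy levels (ignoring spin).
```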
## Orthonormality
The orthonormality condition for the hydrogen atom wavefunctions is given by
$\int_0^{\infty} r^2\, dr \int_0^{\pi} \sin(\theta)\, d\theta \int_0^{2\pi} \psi^*_{n' \ell' m'}\, \psi_{n \ell m}\, d\phi = \delta_{nn'}\, \delta_{\ell\ell'}\, \delta_{mm'}\label{7a}$
Each set of $$n$$, $$\ell$$, $$m$$ values corresponds to a specific orbital, which can be represented by a wavefunction. For example, $$\psi_{100}$$ corresponds to a wavefunction with $$n=1, \ell=0, m_{\ell} = 0$$. This is the 1s orbital. Thus $$\psi_{100}$$ can be referred to as $$\psi_{1s}$$. By the same reasoning, $$\psi_{210}$$ refers to $$\psi_{2p_z}$$.
$\color{red} | \psi(r,\theta,\phi) \rangle = |R(r) \rangle |Y_{\ell}^{m}(\theta,\phi) \rangle \label{7b}$
## Angular Part
From our work on the rigid rotor, we know that the eigenfunctions of the angular momentum operator are the Spherical Harmonic functions (Table M4), $$Y (\theta ,\phi )$$,
Figure: the spherical harmonics as commonly displayed, sorted by increasing energy and aligned for symmetry.
The Spherical Harmonic functions provide information about where the electron is around the proton, and the radial function $$R(r)$$ describes how far the electron is from the proton. The $$R(r)$$ functions that solve the radial differential equation are products of the associated Laguerre polynomials and an exponential factor, multiplied by a normalization factor $$N_{n,l}$$ and by $$\left (\dfrac {r}{a_0} \right)^l$$.
$\color{red} R (r) = \underbrace{N_{n,l} \left ( \dfrac {r}{a_0} \right ) ^l}_{\text{Normalization}} \times \overbrace{L_{n,l} (r)}^{\text{Laguerre Poly}} \times \underbrace{e^{-\frac {r}{n {a_0}}}}_{\text{exponential}} \label {6.1.17}$
The decreasing exponential term overpowers the increasing polynomial term so that the overall wavefunction exhibits the desired approach to zero at large values of $$r$$. The first six radial functions are provided in Table below. Note that the functions in the table exhibit a dependence on $$Z$$, the atomic number of the nucleus.
Radial wavefunctions for the first three shells. The Bohr radius $$a_o$$ is approximately 52.9 pm (about half an angstrom).
| $$n$$ | $$l$$ | $$R(r)$$ |
| --- | --- | --- |
| 1 | 0 | $$2 \left( \dfrac{Z}{a_o} \right)^{3/2} e^{-Zr/a_o}$$ |
| 2 | 0 | $$\dfrac{1}{2\sqrt{2}} \left( \dfrac{Z}{a_o} \right)^{3/2}\left[2-\dfrac{Zr}{a_o} \right]e^{-Zr/2a_o}$$ |
| 2 | 1 | $$\dfrac{1}{2\sqrt{6}} \left( \dfrac{Z}{a_o} \right)^{3/2}\left[\dfrac{Zr}{a_o} \right] e^{-Zr/2a_o}$$ |
| 3 | 0 | $$\dfrac{2}{81 \sqrt{3}} \left( \dfrac{Z}{a_o} \right)^{3/2}\left[27 -18 \dfrac{Zr}{a_o} +2 \left( \dfrac{Zr}{a_o}\right)^2 \right] e^{-Zr/3a_o}$$ |
| 3 | 1 | $$\dfrac{4}{81 \sqrt{6}} \left( \dfrac{Z}{a_o} \right)^{3/2}\left[6 \dfrac{Zr}{a_o} - \left( \dfrac{Zr}{a_o}\right)^2 \right] e^{-Zr/3a_o}$$ |
| 3 | 2 | $$\dfrac{4}{81 \sqrt{30}} \left( \dfrac{Z}{a_o} \right)^{3/2} \left( \dfrac{Zr}{a_o} \right)^2 e^{-Zr/3a_o}$$ |
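As an illustrative check (my addition, not from the original text), the normalization of these radial functions can be verified numerically: the integral of $$R(r)^2 r^2$$ from 0 to infinity should equal 1. The sketch below does this for the 1s and 2s entries, assuming numpy and scipy are available and setting $$Z = 1$$, $$a_0 = 1$$.

```python
# Sketch: numerical normalization check of two tabulated radial functions.
import numpy as np
from scipy.integrate import quad

a0, Z = 1.0, 1.0

def R_10(r):
    return 2 * (Z / a0)**1.5 * np.exp(-Z * r / a0)

def R_20(r):
    return 1 / (2 * np.sqrt(2)) * (Z / a0)**1.5 * (2 - Z * r / a0) * np.exp(-Z * r / (2 * a0))

for name, R in [("R_10", R_10), ("R_20", R_20)]:
    val, _ = quad(lambda r: R(r)**2 * r**2, 0, np.inf)
    print(name, round(val, 6))   # both should print 1.0
```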
Java simulation of particles in boxes: https://phet.colorado.edu/en/simulation/bound-states
## Energy
The motion of the electron in the hydrogen atom is not free. The electron is bound to the atom by the attractive force of the nucleus and consequently quantum mechanics predicts that the total energy of the electron is quantized. The expression for the energy is:
$E_n =\dfrac{-2 \pi^2 m e^4Z^2}{n^2h^2} \label{17.1}$
with $$n = 1,2,3,4...$$
where $$m$$ is the mass of the electron, $$e$$ is the magnitude of the electronic charge, $$n$$ is a quantum number, $$h$$ is Planck's constant and $$Z$$ is the atomic number (the number of positive charges in the nucleus). Equation $$\ref{17.1}$$ applies to any one-electron atom or ion. For example, He+ is a one-electron system for which Z = 2. We can again construct an energy level diagram listing the allowed energy values (Figure $$\PageIndex{1}$$).
These are obtained by substituting all possible values of n into Equation $$\ref{17.1}$$. As in our previous example, we shall represent all the constants which appear in the expression for $$E_n$$ by the Rydberg constant $$R$$ and we shall set $$Z = 1$$, i.e., consider only the hydrogen atom.
$E_n = \dfrac{-R}{n^2} \label{17.2}$
with $$n = 1,2,3,4...$$
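A small Python sketch (added for illustration, taking the Rydberg energy R ≈ 13.6 eV as an assumed value) evaluates this expression for the first few levels and the photon energy of one transition.

```python
# Sketch: energy levels E_n = -R/n^2 and a transition energy, with R in eV.
R = 13.6  # Rydberg energy, eV

def E(n):
    return -R / n**2

for n in range(1, 6):
    print(f"E_{n} = {E(n):7.3f} eV")

# Photon energy for the n = 2 -> 1 (Lyman-alpha) transition:
print("Lyman-alpha:", E(2) - E(1), "eV")   # about 10.2 eV
```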
## Laguerre polynomials
The first six Laguerre polynomials. Image used with permission from Wikipedia.
The constraint that $$n$$ be greater than or equal to $$l +1$$ also turns out to quantize the energy, producing the same quantized expression for hydrogen atom energy levels that was obtained from the Bohr model of the hydrogen atom discussed in Chapter 2.
$E_n = - \dfrac {m_e e^4}{8 \epsilon ^2_0 h^2 n^2}$
## Comparison to Bohr Theory
It is interesting to compare the results obtained by solving the Schrödinger equation with Bohr’s model of the hydrogen atom. There are several ways in which the Schrödinger model and Bohr model differ.
1. First, and perhaps most strikingly, the Schrödinger model does not produce well-defined orbits for the electron. The wavefunctions only give us the probability for the electron to be at various directions and distances from the proton.
2. Second, the quantization of angular momentum is different from that proposed by Bohr. Bohr proposed that the angular momentum is quantized in integer units of $$\hbar$$, while the Schrödinger model leads to an angular momentum of $$\sqrt{l(l+1)}\,\hbar$$.
3. Third, the quantum numbers appear naturally during solution of the Schrödinger equation while Bohr had to postulate the existence of quantized energy states. Although more complex, the Schrödinger model leads to a better correspondence between theory and experiment over a range of applications that was not possible for the Bohr model.
Methods for separately examining the radial portions of atomic orbitals provide useful information about the distribution of charge density within the orbitals.
Figure 6.2.2: Radial function, R(r), for the 1s, 2s, and 2p orbitals.
We could also represent the distribution of negative charge in the hydrogen atom in the manner used previously for the electron confined to move on a plane (Figure $$\PageIndex{1}$$), by displaying the charge density in a plane by means of a contour map. Imagine a plane through the atom including the nucleus. The density is calculated at every point in this plane. All points having the same value for the electron density in this plane are joined by a contour line (Figure $$\PageIndex{3}$$). Since the electron density depends only on r, the distance from the nucleus, and not on the direction in space, the contours will be circular. A contour map is useful as it indicates the "shape" of the density distribution.
Hydrogen-like atomic wavefunctions for $$n$$ values $$1,2,3$$: $$Z$$ is the atomic number of the nucleus, $$\rho = \dfrac {Zr}{a_0}$$, where $$a_0$$ is the Bohr radius and $$r$$ is the radial variable.

| $$n$$ | $$\ell$$ | $$m$$ | Eigenstate |
| --- | --- | --- | --- |
| 1 | 0 | 0 | $$\psi_{100} = \dfrac {1}{\sqrt {\pi}} \left(\dfrac {Z}{a_0}\right)^{3/2} e^{-\rho}$$ |
| 2 | 0 | 0 | $$\psi_{200} = \dfrac {1}{\sqrt {32\pi}} \left(\dfrac {Z}{a_0}\right)^{3/2} (2-\rho)e^{-\rho/2}$$ |
| 2 | 1 | 0 | $$\psi_{210} = \dfrac {1}{\sqrt {32\pi}} \left(\dfrac {Z}{a_0}\right)^{3/2} \rho e^{-\rho/2} \cos(\theta)$$ |
| 2 | 1 | $$\pm 1$$ | $$\psi_{21\pm 1} = \dfrac {1}{\sqrt {64\pi}} \left(\dfrac {Z}{a_0}\right)^{3/2} \rho e^{-\rho/2} \sin(\theta)\, e^{\pm i\phi}$$ |
| 3 | 0 | 0 | $$\psi_{300} = \dfrac {1}{81\sqrt {3\pi}} \left(\dfrac {Z}{a_0}\right)^{3/2} (27-18\rho +2\rho^2)e^{-\rho/3}$$ |
| 3 | 1 | 0 | $$\psi_{310} = \dfrac{1}{81} \sqrt {\dfrac {2}{\pi}} \left(\dfrac {Z}{a_0}\right)^{3/2} (6\rho - \rho^2)e^{-\rho/3} \cos(\theta)$$ |
| 3 | 1 | $$\pm 1$$ | $$\psi_{31\pm 1} = \dfrac {1}{81\sqrt {\pi}} \left(\dfrac {Z}{a_0}\right)^{3/2} (6\rho - \rho^2)e^{-\rho/3} \sin(\theta)\, e^{\pm i \phi}$$ |
| 3 | 2 | 0 | $$\psi_{320} = \dfrac {1}{81\sqrt {6\pi}} \left(\dfrac {Z}{a_0}\right)^{3/2} \rho^2 e^{-\rho/3}(3\cos^2(\theta) -1)$$ |
| 3 | 2 | $$\pm 1$$ | $$\psi_{32\pm 1} = \dfrac {1}{81\sqrt {\pi}} \left(\dfrac {Z}{a_0}\right)^{3/2} \rho^2 e^{-\rho/3} \sin(\theta)\cos(\theta)\, e^{\pm i \phi}$$ |
| 3 | 2 | $$\pm 2$$ | $$\psi_{32\pm 2} = \dfrac {1}{162\sqrt {\pi}} \left(\dfrac {Z}{a_0}\right)^{3/2} \rho^2 e^{-\rho/3}\sin^2(\theta)\, e^{\pm 2i\phi}$$ |
http://mathhelpforum.com/calculus/67280-improper-double-integral.html | 1. improper double integral
Compute this integral.
I've tried using polar coordinates (it seems the obvious thing to do) but the domain pretty much messed everthing up. since D is the upper right square [0,1]X[0,1].
I got 2 possible boundaries to choose from:
1) 0<=R<=sqrt(2) , 0<=Theta<=sin(1/R)
2) 0<=R<=1/sin(Theta) , 0<=Theta<=pi/2
I couldn't integrate either of them.. your assistance is welcome :-)
2. Originally Posted by zokomoko
Compute this integral.
I've tried using polar coordinates (it seems the obvious thing to do) but the domain pretty much messed everthing up. since D is the upper right square [0,1]X[0,1].
I got 2 possible boundaries to choose from:
1) 0<=R<=sqrt(2) , 0<=Theta<=sin(1/R)
2) 0<=R<=1/sin(Theta) , 0<=Theta<=pi/2
I couldn't integrate either of them.. your assistance is welcome :-)
If you switch to polar coordinates won't you just have
$\int_{\theta = 0}^{\pi/4} \int_{r = 0}^{r = 1/\cos \theta} dr \, d \theta + \int^{\pi/2}_{\theta = \pi/4} \int_{r = 0}^{r = 1/\sin \theta} dr \, d \theta$
and the resulting integrals in $\theta$ are not nasty.
3. thanks
I got stuck with another problem, it says "compute the following integral. hint: use the rotational transformation to get new coordinates (u,v) so the u = ..."
I didn't understand how the substitution they suggested is a rotational transformation, and consequently what v(x,y) should be.
4. Originally Posted by zokomoko
I've tried using polar coordinates
To do that, split the original square into two triangles:
\begin{aligned}
\int_{0}^{1}\int_{0}^{1}\frac{dx\,dy}{\sqrt{x^{2}+y^{2}}} &= 2\int_{0}^{1}\int_{0}^{x}\frac{dy\,dx}{\sqrt{x^{2}+y^{2}}} \\
&= 2\int_{0}^{\frac{\pi}{4}}\int_{0}^{\sec\varphi} dr\,d\varphi = 2\int_{0}^{\frac{\pi}{4}}\sec\varphi\,d\varphi \\
&= 2\ln\left|\sec\varphi + \tan\varphi\right|\Big|_{0}^{\frac{\pi}{4}} = 2\ln\left(\sqrt{2}+1\right).
\end{aligned}
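For readers who want a sanity check, the closed form $2\ln(\sqrt{2}+1) \approx 1.7627$ can also be confirmed numerically. The Python sketch below is my addition (it assumes scipy is installed); the integrable singularity at the origin may trigger a harmless accuracy warning.

```python
# Sketch: numerical check of the double integral of 1/sqrt(x^2 + y^2) over [0,1]x[0,1].
import numpy as np
from scipy.integrate import dblquad

val, err = dblquad(lambda y, x: 1.0 / np.sqrt(x**2 + y**2), 0, 1, 0, 1)
print(val)                          # ≈ 1.762747
print(2 * np.log(1 + np.sqrt(2)))   # ≈ 1.762747
```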
https://ageconsearch.umn.edu/record/62294 | ### Abstract
A survey was used to gauge consumer preferences toward four fresh pork attributes: juiciness, tenderness, marbling, and leanness. The survey elicited consumer willingness-to-pay a premium for an improvement in these attributes. Approximately one-half of the respondents were willing to pay some premium for the attributes of juiciness, leanness, and tenderness. The average premium size ranged from $0.20/lb. for marbling to $0.37/lb. for tenderness. Neither the choice of a certifying agency nor the use of a cheap talk script influenced premium levels.
http://mathematica.stackexchange.com/questions/18876/listplot-with-labels-that-appear-when-the-pointer-goes-over-the-points?answertab=oldest | # ListPlot with labels that appear when the pointer goes over the points
I would like to generate a ListPlot with labels that appear when the pointer goes over the individual points. Specifically, I have a table of dimensions {100,3} and I want to use the first two columns for the plot and the third for the label.
Does anyone know how to do that?
-
Maybe something like: ListPlot[Tooltip@Prime[Range[25]], Filling -> Axis]. Is that what you try to do? – Pinguin Dirk Feb 1 '13 at 14:55
Thanks! It is almost what I want. Say that I have a table tab of dimensions {100,3} and I want to use the first two columns for the plot and the third for the label. Do you know how to do that? – Valerio Feb 1 '13 at 15:04
Please update your question with the additional information you supplied in your comment. It makes the question quite different than as originally posted. – m_goldberg Feb 1 '13 at 15:31
Also have a look at BubbleChart. – Jens Feb 1 '13 at 17:39
Based on update question: It seems that ListPlot cannot handle Tooltip "directly", so I used a Table to add a Tooltip to each point.
I use the following random data:
data = Append[#, RandomChoice[{"label1", "label2", "label3"}]] & /@
RandomInteger[100, {10, 2}]
(*{{80, 14, "label1"}, {98, 70, "label1"}, {66, 86, "label3"}, {43, 90,
"label2"}, {82, 29, "label2"}, {65, 91, "label1"}, {68, 59,
"label3"}, {9, 56, "label1"}, {17, 50, "label2"}, {79, 99,
"label3"}}*)
And then plot:
ListPlot[Table[Tooltip[data[[i, 1 ;; 2]], data[[i, 3]]], {i, Length@data}]]
Is this what you wanted?
EDIT
Based on Mr.Wizard's comment (see below), we can also concisely write:
ListPlot[Tooltip[{#, #2}, #3] & @@@ data]
-
added pictures... ;) – cormullion Feb 1 '13 at 16:35
@cormullion: thanks! that's something I never really understood on my mac... – Pinguin Dirk Feb 1 '13 at 16:38
You can use QuickTimePlayer to record the screen, then import the file and export it to animated GIF. Of course, embedded movies would be easier... – cormullion Feb 1 '13 at 16:46
ah thanks, I'll try that! (for next time) – Pinguin Dirk Feb 1 '13 at 16:54
Good answer. However, you should read this, and then understand: ListPlot[Tooltip[{#, #2}, #3] & @@@ data] – Mr.Wizard Mar 13 '13 at 16:07
https://runestone.academy/ns/books/published/httlacs/functions_unit-testing.html | # How to Think Like a Computer Scientist: The PreTeXt Interactive Edition
## Section 6.3 Unit Testing
A test case expresses requirements for a program, in a way that can be checked automatically. Specifically, a test asserts something about the state of the program at a particular point in its execution. A unit test is an automatic procedure used to validate that individual units of code are working properly. A function is one form of a unit. A collection of these unit tests is called a test suite.
We have previously suggested that it's a good idea to first write down comments about what your code is supposed to do, before actually writing the code. It is an even better idea to write down some test cases before writing a program.
There are several reasons why it's a good habit to write test cases.
• Before we write code, we have in mind what it should do, but those thoughts may be a little vague. Writing down test cases forces us to be more concrete about what should happen.
• As we write the code, the test cases can provide automated feedback. You've actually been the beneficiary of such automated feedback via test cases throughout this book in some of the activecode windows and almost all of the exercises. We wrote the code for those test cases but kept it hidden, so as not to confuse you and also to avoid giving away the answers. You can get some of the same benefit from writing your own test cases.
• In larger software projects, the set of test cases can be run every time a change is made to the code base. Unit tests check that small bits of code are correctly implemented.
One way to implement unit tests in Python is with assert.
• Following the word assert there will be a python expression.
• If that expression evaluates to the Boolean False, then the interpreter will raise a runtime error.
• If the expression evaluates to True, then nothing happens and the execution goes on to the next line of code.
Take a look at the way assert is used in the following code.
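The interactive code block referenced here did not survive extraction. A plausible reconstruction, consistent with the discussion that follows (two "natural" assumptions about truncated division, the second of which fails), is:

```python
assert 9 // 5 == 1               # passes: floor division of two ints gives the int 1
assert type(9.0 // 5) == int     # fails: 9.0 // 5 evaluates to 2.0, a float
```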
In the code above, we explicitly state some natural assumptions about how truncated division might work in Python. It turns out that the second assumption is wrong: 9.0//5 produces 2.0, a floating point value!
The Python interpreter does not enforce restrictions about the data types of objects that can be bound to particular variables; however, type checking could alert us that something has gone wrong in our program execution. If we are assuming that x is a list, but it's actually an integer, then at some point later in the program execution, there will probably be an error. We can add assert statements that will cause an error to be flagged sooner rather than later, which might make it a lot easier to debug.
(Checkpoint) Note that x==y is a Boolean expression, not an assignment statement; if it evaluates to True the assertion passes, and when an assertion test passes, no message is printed.
### Subsection 6.3.1 assert with for loops
Why would you ever want to write a line of code that can never compute anything useful for you, but sometimes causes a runtime error? For all the reasons we described above about the value of automated tests. You want a test that will alert that you that some condition you assumed was true is not in fact true. It's much better to be alerted to that fact right away than to have some unexpected result much later in your program execution, which you will have trouble tracing to the place where you had an error in your code.
Why doesn't assert print out something saying that the test passed? The reason is that you don't want to clutter up your output window with the results of automated tests that pass. You just want to know when one of your tests fails. In larger projects, other testing harnesses are used instead of assert, such as the python unittest module. Those provide some output summarizing tests that have passed as well as those that failed. In this textbook, we will just use simple assert statements for automated tests.
In the code below, lst is bound to a list object. In python, not all the elements of a list have to be of the same type. We can check that they all have the same type and get an error if they are not. Notice that with lst2, one of the assertions fails.
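The code block described here is also missing from the extracted text; a plausible sketch of what it showed is:

```python
# Asserting that all elements of a list have the same type as the first element.
lst  = [1, 2, 3, 4]        # all ints: every assert passes silently
lst2 = [1, 2, "3", 4]      # one string: the assert on "3" fails

for x in lst:
    assert type(x) == type(lst[0])

for x in lst2:
    assert type(x) == type(lst2[0])   # raises AssertionError at "3"
```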
### Subsection 6.3.2 Return Value Tests
Testing whether a function returns the correct value is the easiest test case to define. You simply check whether the result of invoking the function on a particular input produces the particular output that you expect. Take a look at the following code.
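The book's own example did not survive extraction, so here is a small stand-in (my sketch, with a hypothetical square function) illustrating return-value tests with assert:

```python
def square(x):
    return x * x

# Each assert checks the function on one specific input.
assert square(3) == 9
assert square(-4) == 16
assert square(0) == 0
```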
Because each test checks whether a function works properly on specific inputs, the test cases will never be complete: in principle, a function might work properly on all the inputs that are tested in the test cases, but still not work properly on some other inputs. That's where the art of defining test cases comes in: you try to find specific inputs that are representative of all the important kinds of inputs that might ever be passed to the function.
#### Checkpoint 6.3.1.
For the hangman game, this blanked function takes a word and some letters that have been guessed, and returns a version of the word with _ for all the letters that haven't been guessed (a sketch of one possible implementation follows the answer choices below). Which of the following is the correct way to write a test to check that 'under' will be blanked as 'u_d__' when the user has guessed letters d and u so far?
• assert blanked('under', 'du', 'u_d__') == True
• blanked only takes two inputs; this provides three inputs to the blanked function
• assert blanked('under', 'u_d__') == 'du'
• The second argument to the blanked function should be the letters that have been guessed, not the blanked version of the word
• assert blanked('under', 'du') == 'u_d__'
• This checks whether the value returned from the blanked function is 'u_d__'.
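For concreteness, here is one possible implementation of blanked (my sketch, not the book's hidden solution) that satisfies the correct test above:

```python
def blanked(word, revealed_letters):
    # Replace every letter that has not been guessed with an underscore.
    return "".join(c if c in revealed_letters else "_" for c in word)

assert blanked("under", "du") == "u_d__"
assert blanked("hello", "elj") == "_ell_"
```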
http://books.duhnnae.com/2017/jun4/149716059569-Well-posedness-of-a-Class-of-Non-homogeneous-Boundary-Value-Problems-of-the-Korteweg-de-Vries-Equation-on-a-Finite-Domain-Eugene-Kramer-Ivonne-Riva.php | # Well-posedness of a Class of Non-homogeneous Boundary Value Problems of the Korteweg-de Vries Equation on a Finite Domain
Download or read this book online for free in PDF: Well-posedness of a Class of Non-homogeneous Boundary Value Problems of the Korteweg-de Vries Equation on a Finite Domain
In this paper, we study a class of initial-boundary value problems for the Korteweg-de Vries equation posed on a bounded domain $(0,L)$. We show that the initial-boundary value problem is locally well-posed in the classical Sobolev space $H^s(0,L)$ for $s > -\frac34$, which provides a positive answer to one of the open questions of Colin and Ghidaglia.
Author: Eugene Kramer; Ivonne Rivas; Bing-Yu Zhang
Source: https://archive.org/
https://ncatlab.org/nlab/show/Alexandru+Dimca | # nLab Alexandru Dimca
## Selected writings
On (abelian) sheaf theory in topology, with a focus on constructible sheaves and perverse sheaves:
category: people
https://calconcalculator.com/finance/annuity-calculator/ | ## What is an annuity?
An annuity is a sequence of equal payments or disbursements. Individuals often find themselves in a situation where they have to pay or receive a certain amount of money at certain intervals (repayment of a loan or purchase of goods in installments, the partial return of investment amount funds at regular intervals). A life insurance contract is probably the most common and well-known type of business event involving a series of equal cash payments at equal intervals. Such a periodic savings process represents the accumulation of some amount of money through an annuity. The future amount of an annuity is the sum (future value) of all annuities that increase by the accrued interest. An annuity by definition requires:
1. periodic payments or receipts (annuities) are always of the same amount,
2. that the time interval between such annuities is always the same, and
3. that interest is calculated once in each period.
It should be noted that payments can be made either at the beginning or at the end of the period. To distinguish between annuities in these situations, annuities can be divided into:
1. regular (ordinary) annuities, if payments are made at the end of each period, and
2. annuities due, if payments are made at the beginning of each period.
## How to calculate annuity?
Calculating present value is a step toward estimating how much your annuity is worth, and whether you are getting a fair bargain when you sell your payments. To apply this method, you need particular information, such as the discount rate offered by a purchasing business. Given a fixed interest rate, the present value (PV) allows you to calculate the worth today of equally spaced payments in the future. To determine the present value of an ordinary annuity, use the following formula: PV = PMT × (1 − (1 + i)^(−n)) / i, where PMT is the payment per period, i is the interest (discount) rate per period, and n is the number of payments.
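A minimal Python sketch of this calculation (the function name and the example numbers are mine, for illustration only):

```python
# Present value of an ordinary annuity: PV = PMT * (1 - (1 + i)**-n) / i
def present_value(payment, rate_per_period, n_periods):
    return payment * (1 - (1 + rate_per_period) ** -n_periods) / rate_per_period

# Example: 1,000 per year for 10 years, discounted at 5% per year
print(round(present_value(1000, 0.05, 10), 2))   # ≈ 7721.73
```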
## General annuity information
Annuities operate by turning a lump-sum premium into a fixed-income stream that a customer cannot outlive. To meet their everyday necessities, many retirees require more than Social Security and investment assets. Annuities can provide this income through an accumulation and annuitization process. In the case of immediate annuities, lifelong payments guaranteed by the insurance company begin within a month of purchase, with no accumulation phase required.
Annuity arrangements shift to the insurance firm all of the risk of a falling market. This implies that you, as the annuity owner, are covered from market risk and longevity risk or the danger of outliving your money.
## Quick pros and cons of annuities
### Pros of annuities
• Lifetime income. Perhaps the most persuasive argument in favor of an annuity is that it typically offers income that cannot be outlived (though some only pay out for a certain period of time).
• Deferred distributions. Another attractive feature of annuities is the ability to make tax-deferred distributions. With annuities, you do not owe the government anything until you withdraw the cash.
• Guaranteed rate. A variable annuity's payout is based on how the market performs, whereas a fixed annuity pays a set rate of return over a set period of time.
### Cons of annuities
• Fees are exorbitant. The main issue with annuities is their high cost when compared to mutual funds and CDs. Many are sold by agents, whose commission you pay in the form of a significant upfront sales charge.
• Liquidity is scarce. Another source of concern is a lack of liquidity. Many annuities have a surrender charge that you must pay if you try to withdraw during the first few years of your contract.
• Increased tax rates. The tax-deferred status of your interest and investment profits is frequently cited as a key selling feature by issuers. When you do make withdrawals, however, any net returns you get are taxed as regular income. That may be far greater than the capital gains tax rate, depending on your tax status.
• Complexity. One of the most important investment principles is to never buy a product you don’t understand. Annuities are no different. Over the last several years, the insurance industry has erupted with a flood of new, often exotic variations on the annuity.
## Fixed vs. Variable Annuities
### Fixed annuities
A fixed annuity is a form of insurance contract that guarantees the customer a certain set interest rate on their contributions to the account. On the other hand, a variable annuity pays variable interest based on the success of an investment portfolio specified by the account’s owner.
### Variable annuities
A variable annuity is a sort of annuity contract in which the value varies depending on the performance of an underlying portfolio of sub-accounts. Sub-accounts and mutual funds are essentially equivalent. However, sub-accounts lack ticker symbols that investors may readily enter into a fund tracker for research reasons. Variable annuities vary from fixed annuities in that they give a specified and guaranteed return.
### Indexed annuities
An indexed annuity is a type of annuity contract that pays an interest rate based on the performance of a certain market index, such as the S&P 500. It differs from fixed annuities, which pay a fixed interest rate, and variable annuities, which base their interest rate on a portfolio of securities chosen by the annuity holder.
## Immediate vs. deferred annuities
### Immediate annuities
An immediate annuity is an agreement between an individual and an insurance company that pays the
owner a guaranteed income or annuity almost immediately. This is different from a deferred annuity, in which payments begin on a future date chosen by the annuity owner. An immediate annuity is also known as a single-premium immediate annuity (SPIA), an income annuity, or simply an immediate annuity.
### Deferred annuities
A deferred annuity is a contract with an insurer that promises to pay the owner a regular income, or a lump sum, at some future date. Investors often use deferred annuities to supplement their other retirement income, like Social Security. Deferred annuities differ from immediate annuities, which begin making payments straight away.
### Surrendering an annuity
The surrender period is the amount of time an investor must wait before they can withdraw funds from an annuity without facing a penalty. Surrender periods are often several years long, and withdrawing money before the end of the surrender period can result in a surrender charge, which is essentially a deferred sales fee. Generally, but not always, the longer the surrender period, the better the annuity's other terms.
## The difference between installments and an annuity
Both annuities and installments are monthly loan repayments, forming differently and significantly affecting the loan repayment amount.
Annuities are equal monthly repayments in which the split between principal and interest changes with each repayment as the remaining principal falls. The first annuity contains the lowest principal portion and the highest interest portion. Each subsequent month the principal portion grows and the interest portion decreases, while the total amount of the annuity remains the same.
Installments are unequal monthly repayments in which a fixed amount of principal is repaid each period, and interest is calculated on the remaining principal. The first installment is the largest; each subsequent installment is smaller than the previous one, and the last installment is the smallest. Although repayment in installments results in higher initial monthly payments, the borrower pays less total interest over the life of the loan, and in later periods the monthly repayments are relatively lower. Repayment in installments is therefore faster and cheaper:
• the principal is repaid more quickly, and
• the total interest paid is less when repaying a loan in installments than in annuities.
## What are loans?
A loan is a legal property relationship between a lender and a borrower. This relationship is regulated by a special loan agreement, which defines:
a) the amount of the loan approved
b) interest rate
c) the manner in which the amount of interest will be calculated
d) loan repayment time
e) method of loan repayment
The borrower repays the loan to the lender with annuities. Annuities are periodic amounts consisting of repayment quotas and interest. Namely, by the end of the repayment period, the borrower must repay the borrowed principal (loan) as well as the accrued compound interest. An overview of the loan repayment is given in the repayment table (plan), which consists of columns listing: loan repayment period, annuities, interest, repayment quotas, and the remaining debt. Loans can be non-purpose ("cash" loans) or special-purpose, such as consumer, housing, or working-capital loans. Only banks that have received an operating license can engage in lending operations, i.e., lending to legal entities and individuals.
## Loan repayment model with equal annuities
We assume that the loan is repaid with compound, decursive interest and equal annuities paid at the end of each period, and that the interest rate does not change throughout the repayment period. To determine the expression for the equal annuity with which the loan is repaid, we notice an analogy between equal periodic payments at the end of the year and equal annuities, and between the present value of those payments and the approved loan. The expression for the amount of the equal annuity at the end of the year is therefore analogous to the expression for the amount of periodic payments at the end of the year.
In practice, it is often easier to calculate the loan and provide an opportunity for the borrower to determine the amount of annuity that he assumes he will be able to repay, applying the loan repayment model to pre-agreed annuities.
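A small Python sketch of this repayment model (my addition; the loan amount and rate are made-up example values): it computes the constant annuity A = P·i/(1 − (1+i)^−n) and prints a plan in which the interest share falls and the principal share grows each period, as described above.

```python
# Sketch: loan repayment with equal annuities (compound, decursive interest).
def annuity_schedule(principal, rate, n):
    A = principal * rate / (1 - (1 + rate) ** -n)   # constant annuity per period
    balance = principal
    rows = []
    for t in range(1, n + 1):
        interest = balance * rate          # interest on the remaining debt
        repayment = A - interest           # repayment quota (principal part)
        balance -= repayment
        rows.append((t, round(A, 2), round(interest, 2),
                     round(repayment, 2), round(balance, 2)))
    return rows

# Example: 10,000 at 0.5% per month for 12 months
for row in annuity_schedule(10_000, 0.005, 12):
    print(row)
```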
## What is interest?
Interest is the price paid for the use of bank funds. If it is expressed as a percentage, we are talking about the interest rate. The amount of the interest rate depends on the type of loan, the term for which the funds are lent, the means of securing the collection of receivables, market conditions, competition, the inflation rate, and the country's credit rating.
### Nominal interest rate
The nominal interest rate is a relative number (a percentage) that determines how many monetary units are paid per unit of credit; we use it to calculate the regular interest on a given loan. It can be fixed or variable.
### Conformal interest rate
The conformal interest rate is the rate at which the same amount of interest is obtained regardless of whether interest is calculated once at the end of the repayment period or several times during the loan repayment. The annual conformal rate is converted to a monthly rate by taking the twelfth root of the annual growth factor. E.g., if the annual interest rate is 6%, the monthly conformal rate is (1.06)^(1/12) − 1 ≈ 0.487%.
### Proportional interest rate
A proportional interest rate is a rate at which a different amount of interest is obtained depending on whether the interest calculation is once at the end of the repayment period or more than once during the loan repayment itself. Namely, suppose the interest is calculated more than once during the repayment period. In that case, a higher amount of total interest is obtained than if it was calculated only once at the end of the loan repayment period.
To reduce an annual proportional interest rate to a monthly one, simply divide the annual rate by the number of accounting periods. E.g., if the annual interest rate is 6%, the monthly rate is 6% / 12 = 0.5%.
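The difference between the two conventions is easy to see numerically; the following short Python sketch (my addition) reproduces the 0.487% and 0.5% figures quoted above.

```python
# Monthly rates implied by a 6% annual rate under the two conventions.
annual = 0.06
conformal    = (1 + annual) ** (1 / 12) - 1   # ≈ 0.487%
proportional = annual / 12                    # = 0.5%
print(f"conformal:    {conformal:.5%}")
print(f"proportional: {proportional:.5%}")
```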
### Effective interest rate
Unlike the nominal interest rate, the effective interest rate (EIR) represents the actual price of the loan. It allows you to more easily see and compare the conditions under which different banks offer the same loans.
In addition to the nominal interest rate, the effective interest rate includes fees and commissions paid by the client to the bank for the approval of the loan. In the case of loans granted with a deposit, the EIR also includes income based on the bank’s interest on that deposit. The effective interest rate also includes the cost of processing the application, the cost of issuing the loan, the annual fee for the loan administration fee, the fee for the unused portion of the framework loan, the amount of insurance premium, if insurance is a condition for using the loan, the costs of opening and maintaining accounts.
Condition for approving the loan, as well as other costs related to ancillary services that are a condition for using the loan and borne by the user (e.g., fixed fee for processing insurance claims, costs of issuing excerpts from the real estate register, costs of real estate appraisal and movables, costs of verification of the pledge statement, costs of registration of the pledge right – mortgage, costs of insight into the database on the indebtedness of the user, etc.).
### Intercalary interest
Intercalary interest is the interest which calculation and charge are only from the moment you approve that loan until the moment you start repaying it, i.e., until the payment of the first installment. Depending on the business policy, the bank may accrue the accrued intercalary interest to the principal of the debt and collect it through an annuity/installment or at once – after the expiration of the loan. If the bank charges intercalary interest, you should check with the bank employees before concluding the contract when it would be best to pay off the loan so that the intercalary interest would be as low as possible.
### Default interest
Default interest is an interest whose calculation and the charge is if the client has not settled the obligations in time in accordance with the provisions of the concluded contract.
## FAQ
What is an annuity fund?
The investment portfolio that provides the return on your premium is known as an annuity fund. When the insurance company invests your money in the appropriate investment vehicles, it gets interesting. Annuity funds influence your rate of return and, eventually, the amount of your guaranteed income monthly.
How to calculate taxable income on an annuity?
The basis is calculated in the same way as for fixed annuities to determine your taxable vs. tax-free payments. Divide your base by the number of annuity installments you anticipate receiving.
How to calculate the future value of the annuity?
The future value of an ordinary annuity is calculated as F = P × ([1 + I]^N − 1)/I, where P is the payout amount per period, I is the interest (discount) rate, N is the number of payments, and F is the annuity's future value.
How to calculate annuity factor?
The annuity’s present value is computed using the Annuity Factor (AF): = AF x Time 1 cash flow.
How do annuities work?
An annuity works by shifting risk from the annuitant, or owner, to the insurance provider. You pay the annuity firm premiums to carry this risk, just like you would with any other kind of insurance.
Are annuities a good investment?
Annuities are a wonderful method to enhance your retirement income by providing a consistent income stream. Many consumers purchase an annuity after exhausting their other tax-advantaged savings accounts.
What is annuity income?
An income annuity is a financial instrument that allows you to exchange a large payment for guaranteed recurring cash flow. An income annuity, sometimes known as an instant annuity, typically begins payments one month after the premium is paid and may continue for as long as the buyer lives.
How much does a 100 000 annuity pay per month?
If you acquired a $100,000 annuity at the age of 65 and began receiving monthly payments within 30 days, you would get $521 each month for the rest of your life.
## Other Calculator
If you want to know more about salaries, how we calculate them, or what types of salaries there are, be sure to check out our Salary Calculator! We think that calculator is very useful: since we encounter the concept of salary on a daily basis, we need to know what it actually consists of. For more calculators in math, physics, finance, health, and more, visit our CalCon Calculator official page.
https://quant.stackexchange.com/questions/55788/volatility-of-multimodal-distribution-of-returns | # Volatility of multimodal distribution of returns
Take $$x_1, x_2, \ldots, x_T$$ to be the price of a stock, indexed by $$t=1, 2, \ldots, T$$. Define rate of return at time $$t>W$$ for a window size of $$W$$ to be $$r_t = \frac{x_t - x_{t-W}}{x_{t-W}}$$ Rolling returns for a shift $$\delta$$ are thus given by the series $$r_{t}, r_{t+\delta}, r_{t+2\delta}, \ldots, r_T$$ (for $$t > W$$). As shown below, the distribution of rolling returns could be multimodal, meaning that there may be more than one peak in the distribution.
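For concreteness, here is one way the quantities above could be computed in Python/numpy (my sketch, with a toy random price path standing in for real data):

```python
# Sketch: window returns r_t = (x_t - x_{t-W}) / x_{t-W}, sampled every delta steps.
import numpy as np

def rolling_returns(x, W, delta=1):
    x = np.asarray(x, dtype=float)
    r = (x[W:] - x[:-W]) / x[:-W]   # r_t for t = W+1, ..., T
    return r[::delta]               # keep every delta-th value

prices = 100 * np.exp(np.cumsum(np.random.normal(0, 0.01, 500)))  # toy price path
print(rolling_returns(prices, W=20, delta=5)[:5])
```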
What is the appropriate way to describe the volatility of the rate of return when determining risk? Is it:
1. a set of 2-tuples, where the first value of each 2-tuple is the index of the mode (e.g., 1, 2, 3, etc.) and the second value is the volatility of the mode,
2. the average of the volatilities of each mode, weighted by the probability of being in each mode,
3. the volatility of a single unimodal distribution fit over the entire dataset, or
4. something else?
• By rolling returns do you mean cumulative returns, $(\Pi (1+r)) -1$? – develarist Jul 20 '20 at 8:54
• What do you mean by multimodal? I understand a multimodal distribution to be one with two or more peaks. That seems to be a separate issue from heteroscedasticity (different values for the volatility). – Bob Jansen Jul 20 '20 at 14:50
• Thank you both for your comments. I have added some information to the body that hopefully clarifies your questions. Could you please let me know your thoughts? – Vivek Subramanian Jul 20 '20 at 21:02
• Maybe I am too narrow-minded but "volatility" in Finance is computed from non-overlapping returns. Volatility is the std deviation of independent increments. I am not sure what it would mean to compute it from rolling returns (which are not independent). – noob2 Jul 20 '20 at 21:34
• For what instrument are you observing this return distribution? Regarding your last comment: Maybe you can, I have never tried and I don't think many have. It seems more complicated with little benefit. Why would you? – Bob Jansen Jul 21 '20 at 7:11
https://scoste.fr/posts/kelly/ | # The Kelly criterion, crypto exchange drama, and your own utility function
November 2022
## Better is bigger
There's been a lot of fuss recently on the FTX collapse and the spiritual views of its charismatic (?) 30-year-old founder Sam Bankman-Fried (SBF). In a twitter thread, SBF mentioned his investment strategy and his own version of a plan due to the mathematician John L. Kelly. Further discussions (especially this one by Matt Hollerbach) pointed out that he missed Kelly's point. Sam's misunderstanding prompted him to go for super-high leverage, which resulted in very risky positions and, in the end, bankruptcy.
## Fixed-fraction strategies
Here's the setting: you are in a situation where, for $n$ epochs, you can gamble money. At each epoch you can win with probability $p$, and if you win you get $w$ times your bet. If you lose (with proba $q=1-p$) you lose $\ell$ times your bet. Therefore, if you bet $1€$, your expected gain is
$e = p w -q\ell.$
And if you bet $R$, your expected gain is $Re$.
John L. Kelly, in his 1956 paper, asked and solved the following question:
Your investment strategy is that, at each step, you bet a fraction $f$ of your current wealth – the fraction $f$ is constant over time.
What is the optimal $f$?
His solution is the famous Kelly criterion, otherwise dubbed Fortune's formula in the bestseller of the same name by W. Poundstone, and it became part of the history of the legendary team around Shannon who more or less disrupted some casinos in Las Vegas in the 60s. Here's a short presentation.
### Small formalization
We set $X_t = 0$ or $1$ according to the outcome of the $t$-th bet and we note $S_t = X_1+\dotsb+X_t$ the total number of wins before $t$. Starting with an initial wealth of $1$ (million), at each epoch we bet a constant fraction $0 \leqslant f \leqslant 1$ of our total wealth. Then, our wealth at epoch $t$ is
$R_t = (1+fw)^{S_t}(1-f\ell)^{t-S_t} = (1-f\ell)^t \gamma^{S_t}$
where we noted $\gamma = (1+fw)/(1-f\ell)$.
## Going full degenerate
A no-brain strategy[1] to maximize the final gain $R_n$ is to compute $\mathbb{E}[R_n]$ and then optimise in $f$. Using independence,
$\mathbb{E}[R_t] = (1-f\ell)^t\,(\mathbb{E}[\gamma^{X_1}])^t = (1-f\ell)^t (p\gamma + 1-p)^t = (1+f(pw - \ell q))^t = (1+fe)^t$
and now we seek the $f_\mathrm{degen}$ which maximizes this function. Clearly $(1 + fe)^n$ is increasing or decreasing in $f$ according to whether $e>0$ or $e<0$; so let us suppose that $e>0$ (the expected gain is positive). The no-brain strategy then consists in taking $f_{\mathrm{degen}}=1$: at each epoch, you bet all your money. The expected gain is
$R_{\mathrm{degen}} = (1+pw-q\ell)^n = (1+e)^n.$
It is exponentially large: even for a very small expected gain of $e \approx 0.01$ and $n=100$ epochs you get $2.7$: you nearly tripled your wealth! But suppose that $w=\ell=1$ (that is, you win or lose exactly what you bet); should you have only ONE loss during the $n$ epochs, you lose everything. The only outcome of this strategy where you don't finish broke is the one where all $n$ bets go in your favor, which happens with probability $p^n$. To fix ideas, if $n=10$ and $p = 0.7$, $p^n \approx 2.8\%$. For $n=100$ it drops to about $3\times 10^{-14}\,\%$.
That's literally the St. Petersburg paradox.
## The Kelly strategy
Now comes Kelly's analysis. He noticed that, if the number $n$ of epochs is large, the proportion of winning bets should be close to $p$; that is, we roughly have $S_n/n \approx p$ by the Law of Large Numbers. Consequently,
$R_n \approx ((1-f\ell) \gamma^{p})^n = [(1+fw)^p(1-f\ell)^q]^n.$
Now you want to maximize this to get the optimal $f_{\mathrm{kelly}}$. This is equivalent to finding the max of
$p\log(1+fw) + q\log(1-f\ell)$
and after elementary manipulations the optimal fraction $f_{\mathrm{kelly}}$ and maximal gain $R_{\mathrm{kelly}}$ are
\begin{aligned}f_{\mathrm{kelly}} = \frac{p}{\ell} - \frac{q}{w} &&\qquad && R_{\mathrm{kelly}} = \left[\left(p(1+w/\ell)\right)^p \left(q(1+\ell/w)\right)^q \right]^n \end{aligned}
Here it should be understood that if $f_{\mathrm{kelly}}$ is negative or greater than $1$, we clip it to 0 or 1. For most cases though, $f_{\mathrm{kelly}}$ will be between $0$ and $1$. For instance if $w=\ell=1$ it is equal to $p-q = 2p-1$.
We adopted two plans for finding the $f$ maximizing the final wealth, but they don't match. The no-brain approach seems legit. But in the Kelly approach, the approximation $R_n \approx [(1+fw)^p(1-f\ell)^q]^n$ above might seem suspicious. It could however be justified by arguing that Kelly does not optimize the same objective as the no-brainer; indeed, Kelly rigorously maximizes the logarithm of the gain:
\begin{aligned} \mathbb{E}[\log(R_n)] &= n\log(1-f\ell)+ \mathbb{E}\left[S_n\log\frac{1+fw}{1-f\ell}\right] \\ &=n\log(1-f\ell)+ np \log\frac{1+fw}{1-f\ell} \\ &= n [p\log(1+fw) + q\log(1-f\ell)] \end{aligned}
which is exactly $n$ times the quantity that the Kelly fraction maximizes.
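As a sanity check (my own, not in the original post), the identity $\mathbb{E}[S_n] = np$ used in the middle line can be verified by simulation: the empirical average of $\log(R_n)/n$ matches the per-bet expression for an interior $f$.

```python
import random
from math import log

p, w, l, f, n, trials = 0.7, 1.0, 1.0, 0.4, 200, 20_000
rng = random.Random(2)
total = 0.0
for _ in range(trials):
    wealth = 1.0
    for _ in range(n):
        wealth *= (1 + f * w) if rng.random() < p else (1 - f * l)
    total += log(wealth) / n

empirical = total / trials
closed_form = p * log(1 + f * w) + (1 - p) * log(1 - f * l)
print(empirical, closed_form)   # the two numbers agree to about three decimals
```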
## Utility functions, or: how to justify everything
How would one justify maximizing the logarithm of the wealth instead of the wealth itself? Well, one potential justification comes from utility functions, that notion from economics.
### Utility functions
If you win 1000€ when you have only, say, 1000€ in savings, it's a lot; but if you win 1000€ when you already have 1000000€, it means almost nothing to you. Each extra euro brings you less and less additional happiness; in other words, the utility (I hate the jargon of economists) you get is concave. Your utility function could very well be logarithmic, and the Kelly criterion would then tell you how to maximize that logarithmic utility.
This interpretation is the one put forward by SBF in his famous thread, and it is mostly irrelevant, as already noted by Kelly himself.
The twist is that, with utility functions, you can justify essentially any a priori strategy $f$. They are not a good tool for understanding people's behaviour or for designing investment strategies. You can even try it as an exercise: for any fixed $f \in [0,1]$, find a concave utility function $\varphi$ such that the expected utility $\mathbb{E}[\varphi(R_n)]$ is maximized at $f$.
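To see this concretely, here is a small sketch (my own illustration, not from the post) using the power utilities $\varphi_a(x) = x^a/a$ with $0 < a \leq 1$: these are concave, $a \to 0$ essentially recovers the logarithm, and $a = 1$ is the linear, no-brain objective. Sweeping $a$ sweeps the "optimal" fraction from $f_{\mathrm{kelly}}$ up to $1$.

```python
# Grid search for the f in [0, 1) maximizing the one-step expected power
# utility E[phi_a(R_1)] with phi_a(x) = x**a / a. Illustration only.
p, w, l = 0.7, 1.0, 1.0
q = 1.0 - p

def best_fraction(a, steps=10_000):
    best_f, best_val = 0.0, float("-inf")
    for k in range(steps):
        f = k / steps
        val = (p * (1 + f * w) ** a + q * (1 - f * l) ** a) / a
        if val > best_val:
            best_f, best_val = f, val
    return best_f

for a in (0.01, 0.25, 0.5, 0.75, 0.99):
    print(a, round(best_fraction(a), 3))
# a near 0 gives roughly the Kelly fraction 0.4; a near 1 pushes f towards 1.
```

Pushing the exponent towards $1$ is then just a way of dressing up the all-in bet as utility maximization.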
That's more or less how SBF justified his crazy over-leveraged strategy: by saying that his own utility function was closer to linear ($f=f_{\mathrm{degen}}$) than logarithmic ($f = f_{\mathrm{kelly}}$). In his paper, Kelly actually argues that rather than taking his criterion as the best possible choice, we should take it as an upper limit above which it would be completely irrational to go. SBF, on the other hand, used this analysis to justify all-or-nothing strategies, which resulted in, well, quite a bad outcome.
## Concluding remarks
1. Always look for geometric returns, not arithmetic.
2. The Kelly criterion is over-simplistic. Quantitative investment books are full of variants.
3. If you justify your actions by your utility function, chances are you're just out of control.
4. Don't invest in crypto now
## Notes
[1] the « full degen » strategy, as Eruine says
http://mathoverflow.net/feeds/user/4267 | # User angela - MathOverflow (recent answers, questions, and comments)

## Answer by angela for "When should a supervisor be a co-author?" (http://mathoverflow.net/questions/57337)

As a non-mathematician, I am somewhat mystified by the prevailing norms of the mathematics community as I understand them from this thread. Correct me if I'm wrong, but it sounds like:

- Supervisors make important intellectual contributions to the thesis work of their students.
- Typically, the name of the supervisor does not appear on the work.

For example, the most upvoted comment at the moment says "as a rule the supervisor should not be a co-author in the main paper taken from a student's thesis, **even if he has contributed substantially to it**" (emphasis is mine). Other comments echo the sentiment.

This seems problematic, both morally and practically. In other scientific communities, the author list is supposed to reflect the people who contributed intellectually to the paper. Manipulating it is an ethical offense. For example, **the practices described in this thread appear to violate IEEE policies on authorship**, which state (Section 8.2.1 of http://www.ieee.org/documents/opsmanual.pdf):

> Authorship and co-authorship should be based on a substantial intellectual contribution ... the list of authors on an article serves multiple purposes; it indicates who is responsible for the work and to whom questions regarding the work should be addressed.

Finally, I would just like to add that as a student, I would feel horrible submitting a paper authored only by me if the paper was based in large part on the insights of someone else.

## Question: Which doubly stochastic matrices can be written as products of pairwise averaging matrices? (http://mathoverflow.net/questions/40683)

A matrix $A$ is called doubly stochastic if its entries are nonnegative, and if all of its rows and columns add up to $1$. A subset of the doubly stochastic matrices is the set of pairwise averaging matrices, which move two components of a vector closer to their average. More precisely, a pairwise averaging matrix $P_{i,j,\alpha}$ is defined by stipulating that $y=P_{i,j,\alpha}x$ satisfies
$$y_i = (1-\alpha) x_i + \alpha x_j, \qquad y_j = \alpha x_i + (1-\alpha) x_j, \qquad y_k = x_k ~~\text{for all } k \neq i,j,$$
where $\alpha \in [0,1]$.

My question is: can every doubly stochastic matrix be written as a product of pairwise averaging matrices?

If the answer is no, I would like to know if it is possible to characterize the doubly stochastic matrices which can be written this way.

**Update:** I just realized that the answer is no. Here is a sketch of the proof. Pick any $3 \times 3$ doubly stochastic matrix $A$ with $A_{23}=A_{32}=0$. If $A$ can be written as a product of pairwise averages, the pairwise averaging matrices $P_{2,3,\alpha}$ never appear in the product, since they set the $(2,3)$ and $(3,2)$ entries to positive numbers, which remain positive after any further applications of pairwise averages. So the product must only use $P_{1,2,\alpha}$ or $P_{1,3,\alpha}$. But one can see that no matter in what order one applies these matrices, at least one of $A_{23}$ or $A_{32}$ will be set to a positive number. For example, if we average 1 and 2 first and then 1 and 3, then $A_{32}$ will be nonzero. (A small numerical check of this argument is sketched at the end of this section.)

My second question is still unanswered: is it possible to characterize the matrices which are products of pairwise averages?

## Question: Matrices self-adjoint with respect to some inner product (http://mathoverflow.net/questions/35618)

Is it possible to give a nice characterization of matrices $A \in \mathbb{R}^{n \times n}$ which are self-adjoint with respect to *some* inner product?

These matrices include all symmetric matrices (of course) and some nonsymmetric ones: for example, the transition matrix of any (irreducible) reversible Markov chain will have this property.

Naturally, all such matrices must have real eigenvalues, though I do not expect that this is a sufficient condition (is it?).

About the only observation I have is that since any inner product can be represented as $\langle x,y \rangle = x^T M y$ for some positive definite matrix $M$, we are looking for matrices $A$ which satisfy $A^T M = M A$, or $M^{-1} A^T M = A$. In other words, we are looking for real matrices similar to their transpose with a positive definite similarity matrix.

## Question: A geometric interpretation of independence? (http://mathoverflow.net/questions/16471)

Consider the set of random variables with zero mean and finite second moment. This is a vector space, and $\langle X, Y \rangle = E[XY]$ is a valid inner product on it. Uncorrelated random variables correspond to orthogonal vectors in this space.

Questions:

(i) Does there exist a similar geometric interpretation for independent random variables in terms of this vector space?

(ii) A collection of jointly Gaussian random variables are uncorrelated if and only if they are independent. Is it possible to give a geometric interpretation for this?

## Comments by angela

On "When should a supervisor be a co-author?":

- @Thierry Zell - Fair enough, but at least in other disciplines the published record is there to set things straight. Based on this discussion, it seems like in mathematics the published record sometimes omits key information.
- @Daniel Litt - I agree. But if standards for authorship in mathematics are so different than in other scientific disciplines, I wish mathematicians had disseminated this fact more widely. To give a concrete example: I have sometimes referred to "the proof of conjecture X by Y," because there is a paper by mathematician Y which proves conjecture X. I now see that my assumption - that the proof is due solely to the people whose names appear on the paper as authors - might be false. If I want to *justly* attribute the proof of conjecture X, it appears I need additional information.
- @Andre Henriques - perhaps if mathematicians ordered authors by contribution, supervisors would not feel like they have to remove their names from papers to which they contributed, which frankly sounds to me like it would be an improvement all around....
- @Neil Strickland - I don't think that's quite correct; authorship information is not presented merely by mentioning the advisor. This is because just mentioning the advisor leaves open the question of whether the advisor made a substantial contribution to the work, enough to be listed as co-author.
- @Mark Meckes - I disagree, I think there is a substantial difference between the two scenarios you compare. In the first scenario, the relative contributions of you and Prof. A are obvious to anyone who reads the literature. In the second scenario, they are hidden. In fact, future histories of the subject are likely to emphasize your work and de-emphasize Prof. A's, since his/her name is not on any of the published work leading to the proof of the conjecture.

On "Which doubly stochastic matrices can be written as products of pairwise averaging matrices?":

- Doesn't Hadamard's inequality ($|\det(A)| \leq \prod_i \|a_i\|$) imply that the determinant of every doubly stochastic matrix lies between $-1$ and $1$?

On "Widely accepted mathematical results that were later shown wrong?":

- en.wikipedia.org/wiki/Godel's_ontological_proof

On "Singular values of matrix sums":

- so if $A=\mathrm{diag}(1,0,0)$ and $B=\mathrm{diag}(0,1,0)$, then $s_2(A)=s_2(B)=0$, $s_2(A+B)=1$, so the inequality we are discussing gives $1 \leq 0+0$.
- I don't understand the justification for the inequality $s_k(A+B) \leq s_k(A)+s_k(B)$. The matrices $A=\mathrm{diag}(1,0)$, $B=\mathrm{diag}(0,1)$ appear to form a counterexample. If you want in addition $k\geq 1$, as implied by your first sentence, then $A=\mathrm{diag}(1,0,0)$, $B=\mathrm{diag}(0,1,0)$ is a counterexample.

On "A geometric interpretation of independence?":

- I am quite happy with answers which add more structure to the space or tinker with the setting in any way. My only motivation is to get some geometric intuition about random variables, so anything in that vein would make me very happy.
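As a quick numerical companion to the update in the doubly stochastic question above (my own sketch, not part of the original posts): random products of the averaging maps $P_{1,2,\alpha}$ and $P_{1,3,\beta}$, once both have been used, always leave at least one of the $(2,3)$ or $(3,2)$ entries strictly positive.

```python
import random

def pairwise_avg(n, i, j, alpha):
    """The n x n pairwise averaging matrix P_{i,j,alpha} (indices 0-based)."""
    m = [[1.0 if r == c else 0.0 for c in range(n)] for r in range(n)]
    m[i][i] = m[j][j] = 1.0 - alpha
    m[i][j] = m[j][i] = alpha
    return m

def matmul(a, b):
    n = len(a)
    return [[sum(a[r][k] * b[k][c] for k in range(n)) for c in range(n)] for r in range(n)]

rng = random.Random(0)
for _ in range(1000):
    prod = [[1.0 if r == c else 0.0 for c in range(3)] for r in range(3)]  # 3 x 3 identity
    used = set()
    for _ in range(rng.randint(2, 10)):
        j = rng.choice([1, 2])               # only ever average row 0 with row 1 or row 2
        used.add(j)
        prod = matmul(pairwise_avg(3, 0, j, rng.uniform(0.05, 0.95)), prod)
    if used == {1, 2}:
        assert prod[1][2] > 0 or prod[2][1] > 0   # the (2,3) or (3,2) entry is positive
```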
https://www.rocketryforum.com/threads/argh-again-sorta.78624/ | # ARGH ... again ... sorta ...
#### jflis
##### Well-Known Member
Once again, I find myself having computer problems. This time it was a hardware failure: the motor in my data drive is hosed. It was sporadic for a while and I suspected dirty contacts on the cable assembly, but it wound up being the drive motor. I didn't discover this till it *stopped* being sporadic and was just dead...
...now it is headed off to a data-recovery-center. I don't even want to talk about it...
so..... Release of our next kit is delayed, as all of the development drawings are on the data disk. Many other kit/accessory developments are also pushed back (like the 5/50 engine mount kit and the like...)
I will keep folks posted. Note, I am doing order admin by hand now so things may slow up a bit.
jim
#### kenobi65
##### Well-Known Member
I shall do the "Save The Data" dance for you, Jim!
#### OKTurbo
##### Well-Known Member
TRF Supporter
Bummer..
Been there....Done that.
Do you have a CDR/W drive? After the last "episode" I purchased a program called "Instant Recovery". It saves an image of your hard drive to a bunch of CDR disks. Yes...it takes a long time to archive your drive, BUT it's not lost forever....and CDR's are cheap.
#### slim_t
##### Well-Known Member
DVD burners are getting more affordable now and so is the media.
They'll hold about 5-6 times the data of a CDR.
And you could make your own rocketry DVD's and sell them on your site or give them away free with high dollar kit orders.
Just a thought.
Tim
#### jflis
##### Well-Known Member
whew, i finally got my data disk recovered onto a new drive. looks to be 100%
I'll go change me shorts now...
I just finished the instructions for the EMK-5-50 and have ordered documentation, so that will be coming out soon. Will now focus on the Freedom Forge Missile and prepping for NSL
whew
jim
#### Hospital_Rocket
##### Well-Known Member
And now, repeat after us...
Backups are good
Backups are good
Backups are good
#### jflis
##### Well-Known Member
backups are good *and* cheap! LOL
#### Neil
##### Well-Known Member
Backups are good
Backups are good
Backups are good
We just had to wipe our old hard drive and get a new one...
All my EXTREMELY accurate, customized Rsims for ALL my current rockets that I had JUST put on there: LOST. Nearly all of my pics: LOST. All my crazy designs that are too big and I wish I never posted: GOOD RIDDANCE. OK, mixed blessings... But all my favorites, which included 50-150 rocketry-related websites, carefully categorized into vendors, manufacturers, info sites, recreational sites (TRF etc.) and Misc: GONE.
#### lalligood
##### Well-Known Member
One of the most unheralded yet beautifully simplistic & best of all RELIABLE computer accessories to be released arguably forever is the USB flash drive (also known by several other names, not limited to but including: USB drive, Jump Drive, Thumb Drive).
About the size of a small pocketknife or an Estes 18mm motor, they come in 32MB, 64MB, 128MB, 256MB, 512MB, & 1GB capacities. IMHO, the 128MB version is about the best bang for the buck at ~$40. Shop carefully & you might find them for less... There's no moving parts or batteries to fail, they often come with clips to hook onto a keychain (makes for an expensive keychain but you better believe I have it on me at all times!), & they work as long as you have almost ANY modern operating system that has USB support--WinXP/2000/Me (native), Win98 (need to download & install drivers), Mac (native), & Linux (native). (Native means support without any special and/or additional drivers.) Just plug it in & it appears as your next available drive letter (or however Mac & Linux handle storage devices). They are fully rewritable but with none of the hassle of a CD-R/CD-RW/DVD-R drive. It works every bit like a tiny hard drive. So easy to use that even the most technically challenged people can use them (read: my wife). 128MB (or even 1GB) may not sound like much nowadays but for the most part, it's more than enough to store all of your most precious personal data. And no, I don't have anything to do with any company that makes these things... I do network & desktop support for a living and I spend far too much of my time recovering people's data. When all else fails, RTFM...
#### Hospital_Rocket
##### Well-Known Member
When all else fails, RTFM...
As the reigning alpha geek and Desktop Support Manager where I am forced to slave for APCP, you would be banished for uttering such blasphemy.
Actually I can't live without a jump drive. I carry 2: one on my ID badge, one built into a rather fat pen (this stops the users cold - when you install software from a pen you pull out of your pocket)
A
#### jflis
##### Well-Known Member
those jump drives are pretty cool, but i'd need about 10 of them just for FlisKits and another 20-30 for my photo albums, not to mention my websites, graphic artwork, pumkin stuff, etc, etc...
#### powderburner
##### Well-Known Member
So you say it's about $40 for 128 mb on a USB drive
And I have a whole stack of 700 mb storage devices that cost about 4 cents each
Tell me again why I need a USB flash drive?
#### edwardw
##### Well-Known Member
Powderburner - here is my reason for having a 256 MB jumpdrive. When I'm at work/school/home and need to transfer a document to/from work/school/home the jumpdrive makes a really easy transfer. Plug in - copy and paste..leave, arrive, plug in and open.
Also, at work I do a lot of work with Architectual Desktop (Mod for AutoCAD) and it makes it a blast when you have 3 or 4 guys designing a building. You all do your part, then bring it together on one computer
It's also a good excuse to load those rocket pics and show to the guys at work
Edward
#### hokkyokusei
##### Well-Known Member
Those little flash devices are really cool (recently I saw that you can even get USB flash devices built into watches & penknives!), but I like my USB2 disk. 19Gb & the physical size is only 1/2" x 2 1/2" x 5", which easily fits inside my jacket pocket.
As a software developer, I can carry around my entire development system, plus a large music collection, and all of my rocketry related files (pics, videos, rocksim, web site, etc...)
#### lalligood
##### Well-Known Member
Originally posted by jflis
those jump drives are pretty cool, but i'd need about 10 of them just for FlisKits and another 20-30 for my photo albums, not to mention my websites, graphic artwork, pumkin stuff, etc, etc...
I guess I should have been more clear that the USB flash drives are more for storage of dynamic data (that's constantly changing & being updated). Image files & other archival data would be most economically stored on CD-R. A second hard drive, like one of those external/portable firewire hard drives might be worth looking into if you have that much dynamic data.
There is no single "perfect solution" unfortunately & that's really a shame... Hard drive manufacturers are making quantum leaps in technology every 12 to 18 months, doubling capacities, yet we all remain just as lazy about deleting unused data; and with advances such as faster internet connections letting us access more data, we store more & more stuff while maintaining that dangerous "nothing will happen to my hard drive" attitude.
But it never hurts to investigate multiple technologies/methods to prevent you from finding yourself in that position again--because it indeed sucks rotten eggs when drives die
#### lalligood
##### Well-Known Member
Originally posted by Hospital_Rocket
As the reigning alpha geek and Desktop Support Manager where I am forced to slave for APCP, you would be banished for uttering such blasphemy.
Forgive me. It was indeed a bad attempt at humor... After all, I can't think of a single decent computer manual in the past several years that was worth saving from the trash can
#### Micromeister
##### Micro Craftman/ClusterNut
TRF Supporter
Jim:
I really feel your pain! McCoy's Micro Wonder Works has had identical crashlandings in the past couple months #\$&%##@# Dang new fangled things!!!
My wife has told me for years computers are a commie plot; I'm more inclined to believe it's the Adversary at his evil work.
I've started keeping NOTHING but the programs on the computer's hard drive, separating all data, photo and dwg files onto Zip, Jazz or CD storage discs and backups.
So far it's only cost a few days of initial breakout time. A recent computer failure cost me about 3 hours to be back up and running on a backup machine. Ya might want to look into those GB-size external storage drives, especially for your model design dwgs. Either way ya just gotta say "Well, they're great when they work". Good luck at the data recovery store!
#### BlueNinja
##### Well-Known Member
I think a relative had an external 1gb drive identical to a Zip or floppy disk... Wouldn't imagine they're too expensive. I think it was made by Imation.
https://codeclimate.com/github/etsy/phan/src/Phan/Language/Element/Comment/Builder.php | # etsy/phan
src/Phan/Language/Element/Comment/Builder.php
### Summary
F
1 wk
#### File Builder.php has 1164 lines of code (exceeds 250 allowed). Consider refactoring. Open
<?php
declare(strict_types=1);
namespace Phan\Language\Element\Comment;
Found in src/Phan/Language/Element/Comment/Builder.php - About 2 days to fix
#### Builder has 44 functions (exceeds 20 allowed). Consider refactoring. Open
final class Builder
{
/** @var string the original raw comment */
public $comment;
/** @var list<string> the list of lines of the doc comment */
Found in src/Phan/Language/Element/Comment/Builder.php - About 6 hrs to fix
#### Function parseCommentLine has a Cognitive Complexity of 39 (exceeds 5 allowed). Consider refactoring. Open
private function parseCommentLine(int $i, string $line): void
{
// https://secure.php.net/manual/en/regexp.reference.internal-options.php
// (?i) makes this case-sensitive, (?-1) makes it case-insensitive
// phpcs:ignore Generic.Files.LineLength.MaxExceeded
Found in src/Phan/Language/Element/Comment/Builder.php - About 5 hrs to fix
# Cognitive Complexity
Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.
### A method's cognitive complexity is based on a few simple rules:
• Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
• Code is considered more complex for each "break in the linear flow of the code"
• Code is considered more complex when "flow breaking structures are nested"
### Further reading
#### Method maybeParsePhanCustomAnnotation has 134 lines of code (exceeds 25 allowed). Consider refactoring. Open
private function maybeParsePhanCustomAnnotation(int $i, string $line, string $type, string $case_sensitive_type): void
{
switch ($type) {
case 'phan-forbid-undeclared-magic-properties':
if ($this->checkCompatible('@phan-forbid-undeclared-magic-properties', [Comment::ON_CLASS],$i)) {
Found in src/Phan/Language/Element/Comment/Builder.php - About 5 hrs to fix
#### Method parseCommentLine has 102 lines of code (exceeds 25 allowed). Consider refactoring. Open
private function parseCommentLine(int $i, string$line): void
{
// https://secure.php.net/manual/en/regexp.reference.internal-options.php
// (?i) makes this case-sensitive, (?-1) makes it case-insensitive
// phpcs:ignore Generic.Files.LineLength.MaxExceeded
Found in src/Phan/Language/Element/Comment/Builder.php - About 4 hrs to fix
#### The class Builder has 42 non-getter- and setter-methods. Consider refactoring Builder to keep number of methods under 25. Open
final class Builder
{
/** @var string the original raw comment */
public $comment;
/** @var list<string> the list of lines of the doc comment */
## TooManyMethods
### Since: 0.1
A class with too many methods is probably a good suspect for refactoring, in order to reduce its complexity and find a way to have more fine grained objects. By default it ignores methods starting with 'get' or 'set'. The default was changed from 10 to 25 in PHPMD 2.3.
## Example
### Source https://phpmd.org/rules/codesize.html#toomanymethods
#### The class Builder has an overall complexity of 256 which is very high. The configured complexity threshold is 50. Open
final class Builder
{
/** @var string the original raw comment */
public $comment;
/** @var list<string> the list of lines of the doc comment */
#### Function parameterFromCommentLine has a Cognitive Complexity of 22 (exceeds 5 allowed). Consider refactoring. Open
private function parameterFromCommentLine(
string $line, bool$is_var,
int $i
): Parameter {
Found in src/Phan/Language/Element/Comment/Builder.php - About 3 hrs to fix
#### Consider simplifying this complex logical expression. Open
if (!$this->comment_flags &&
!$this->return_comment && !$this->parameter_list &&
!$this->variable_list && !$this->template_type_list &&
Found in src/Phan/Language/Element/Comment/Builder.php - About 3 hrs to fix
#### Function magicMethodFromCommentLine has a Cognitive Complexity of 21 (exceeds 5 allowed). Consider refactoring. Open
private function magicMethodFromCommentLine(
string $line, int$comment_line_offset
): ?Method {
// https://phpdoc.org/docs/latest/references/phpdoc/tags/method.html
Found in src/Phan/Language/Element/Comment/Builder.php - About 2 hrs to fix
#### Method parameterFromCommentLine has 62 lines of code (exceeds 25 allowed). Consider refactoring. Open
private function parameterFromCommentLine(
string $line, bool$is_var,
int $i
): Parameter {
Found in src/Phan/Language/Element/Comment/Builder.php - About 2 hrs to fix
#### Function mergeMethodParts has a Cognitive Complexity of 18 (exceeds 5 allowed). Consider refactoring. Open
private static function mergeMethodParts(array $parts): array
{
$prev_parts = [];$delta = 0;
$results = [];
Found in src/Phan/Language/Element/Comment/Builder.php - About 2 hrs to fix
#### Function maybeParsePhanCustomAnnotation has a Cognitive Complexity of 18 (exceeds 5 allowed). Consider refactoring. Open
private function maybeParsePhanCustomAnnotation(int $i, string $line, string $type, string $case_sensitive_type): void
{
switch ($type) {
case 'phan-forbid-undeclared-magic-properties':
if ($this->checkCompatible('@phan-forbid-undeclared-magic-properties', [Comment::ON_CLASS],$i)) {
Found in src/Phan/Language/Element/Comment/Builder.php - About 2 hrs to fix
#### Method magicMethodFromCommentLine has 57 lines of code (exceeds 25 allowed). Consider refactoring. Open
private function magicMethodFromCommentLine(
string $line, int$comment_line_offset
): ?Method {
// https://phpdoc.org/docs/latest/references/phpdoc/tags/method.html
Found in src/Phan/Language/Element/Comment/Builder.php - About 2 hrs to fix
#### Method build has 47 lines of code (exceeds 25 allowed). Consider refactoring. Open
public function build(): Comment
{
foreach ($this->lines as$i => $line) { if (\strpos($line, '@') === false) {
continue;
Found in src/Phan/Language/Element/Comment/Builder.php - About 1 hr to fix
private function guessActualLineLocation(int $i): int
{
$path = Config::projectPath($this->context->getFile());
$entry = FileCache::getEntry($path);
$declaration_lineno = $this->lineno;
Found in src/Phan/Language/Element/Comment/Builder.php - About 1 hr to fix
#### The class Builder has 22 fields. Consider redesigning Builder to keep the number of fields under 15. Open
final class Builder
{
/** @var string the original raw comment */
public $comment;
/** @var list<string> the list of lines of the doc comment */
## TooManyFields
### Since: 0.1
Classes that have too many fields could be redesigned to have fewer fields, possibly through some nested object grouping of some of the information. For example, a class with city/state/zip fields could instead have one Address field.
## Example
class Person {
protected $one; private$two;
private $three; [... many more fields ...] } ### Source https://phpmd.org/rules/codesize.html#toomanyfields #### Function findLineNumberOfCommentForElement has a Cognitive Complexity of 12 (exceeds 5 allowed). Consider refactoring. Open public static function findLineNumberOfCommentForElement(AddressableElementInterface$element, array $lines, int$i): int
{
$context =$element->getContext();
$entry = FileCache::getOrReadEntry(Config::projectPath($context->getFile()));
Found in src/Phan/Language/Element/Comment/Builder.php - About 1 hr to fix
#### Method magicPropertyFromCommentLine has 35 lines of code (exceeds 25 allowed). Consider refactoring. Open
private function magicPropertyFromCommentLine(
string $line, int$i
): ?Property {
// Note that the type of a property can be left out (@property $myVar) - This is equivalent to @property mixed$myVar
Found in src/Phan/Language/Element/Comment/Builder.php - About 1 hr to fix
#### Method returnTypeFromCommentLine has 30 lines of code (exceeds 25 allowed). Consider refactoring. Open
private function returnTypeFromCommentLine(
string $line, int$i
): UnionType {
$return_union_type_string = '';
Found in src/Phan/Language/Element/Comment/Builder.php - About 1 hr to fix
#### Method magicParamFromMagicMethodParamString has 30 lines of code (exceeds 25 allowed). Consider refactoring. Open
private function magicParamFromMagicMethodParamString(
string $param_string,
int $param_index, int$comment_line_offset
): ?Parameter {
Found in src/Phan/Language/Element/Comment/Builder.php - About 1 hr to fix
#### Method mergeMethodParts has 30 lines of code (exceeds 25 allowed). Consider refactoring. Open
private static function mergeMethodParts(array $parts): array {$prev_parts = [];
$delta = 0;$results = [];
Found in src/Phan/Language/Element/Comment/Builder.php - About 1 hr to fix
#### Function build has a Cognitive Complexity of 10 (exceeds 5 allowed). Consider refactoring. Open
public function build(): Comment
{
foreach ($this->lines as$i => $line) { if (\strpos($line, '@') === false) {
continue;
Found in src/Phan/Language/Element/Comment/Builder.php - About 1 hr to fix
#### Function magicParamFromMagicMethodParamString has a Cognitive Complexity of 10 (exceeds 5 allowed). Consider refactoring. Open
private function magicParamFromMagicMethodParamString(
string $param_string, int$param_index,
int $comment_line_offset
): ?Parameter {
Found in src/Phan/Language/Element/Comment/Builder.php - About 1 hr to fix
#### Method parseMixin has 26 lines of code (exceeds 25 allowed). Consider refactoring. Open
private function parseMixin(int $i, string $line, string $annotation_name): void
{
return;
}
Found in src/Phan/Language/Element/Comment/Builder.php - About 1 hr to fix
#### Method __construct has 6 arguments (exceeds 4 allowed). Consider refactoring. Open
string $comment, CodeBase$code_base,
Context $context, int$lineno,
int $i
): ?Property {
// Note that the type of a property can be left out (@property $myVar) - This is equivalent to @property mixed $myVar
Found in src/Phan/Language/Element/Comment/Builder.php - About 45 mins to fix
#### Avoid too many return statements within this method. Open
return;
Found in src/Phan/Language/Element/Comment/Builder.php - About 30 mins to fix
[the same "Avoid too many return statements" issue is reported many more times in this file, once per additional return; statement]
#### Function maybeParseTemplateType has a Cognitive Complexity of 6 (exceeds 5 allowed). Consider refactoring. Open
private function maybeParseTemplateType(int $i, string $line): void
{
// Make sure support for generic types is enabled
if (Config::getValue('generic_types_enabled')) {
if ($this->checkCompatible('@template', Comment::HAS_TEMPLATE_ANNOTATION, $i)) {
Found in src/Phan/Language/Element/Comment/Builder.php - About 25 mins to fix
#### Function parseMixin has a Cognitive Complexity of 6 (exceeds 5 allowed). Consider refactoring. Open
private function parseMixin(int $i, string $line, string $annotation_name): void
{
return;
}
Found in src/Phan/Language/Element/Comment/Builder.php - About 25 mins to fix
#### Function maybeParseVarLine has a Cognitive Complexity of 6 (exceeds 5 allowed). Consider refactoring. Open
private function maybeParseVarLine(int $i, string$line): void
{
if (!$this->checkCompatible('@var', Comment::HAS_VAR_ANNOTATION,$i)) {
return;
}
Found in src/Phan/Language/Element/Comment/Builder.php - About 25 mins to fix
#### The class Builder has 1492 lines of code. Current threshold is 1000. Avoid really long classes. Open
final class Builder
{
/** @var string the original raw comment */
public $comment;
/** @var list<string> the list of lines of the doc comment */
#### The method parseCommentLine() has an NPath complexity of 360. The configured NPath complexity threshold is 200. Open
private function parseCommentLine(int $i, string $line): void
{
// https://secure.php.net/manual/en/regexp.reference.internal-options.php
// (?i) makes this case-sensitive, (?-1) makes it case-insensitive
// phpcs:ignore Generic.Files.LineLength.MaxExceeded
## NPathComplexity
### Since: 0.1
The NPath complexity of a method is the number of acyclic execution paths through that method. A threshold of 200 is generally considered the point where measures should be taken to reduce complexity.
## Example
class Foo {
function bar() {
// lots of complicated code
}
}
### Source https://phpmd.org/rules/codesize.html#npathcomplexity
#### The method maybeParsePhanCustomAnnotation() has 146 lines of code. Current threshold is set to 100. Avoid really long methods. Open
private function maybeParsePhanCustomAnnotation(int $i, string $line, string $type, string $case_sensitive_type): void
{
switch ($type) {
case 'phan-forbid-undeclared-magic-properties':
if ($this->checkCompatible('@phan-forbid-undeclared-magic-properties', [Comment::ON_CLASS],$i)) {
#### The method build() has an NPath complexity of 252. The configured NPath complexity threshold is 200. Open
public function build(): Comment
{
foreach ($this->lines as$i => $line) { if (\strpos($line, '@') === false) {
continue;
#### The method parseCommentLine() has 112 lines of code. Current threshold is set to 100. Avoid really long methods. Open
private function parseCommentLine(int $i, string$line): void
{
// https://secure.php.net/manual/en/regexp.reference.internal-options.php
// (?i) makes this case-sensitive, (?-1) makes it case-insensitive
// phpcs:ignore Generic.Files.LineLength.MaxExceeded
#### The method build() has a Cyclomatic Complexity of 19. The configured cyclomatic complexity threshold is 10. Open
public function build(): Comment
{
foreach ($this->lines as$i => $line) { if (\strpos($line, '@') === false) {
continue;
## CyclomaticComplexity
### Since: 0.1
Complexity is determined by the number of decision points in a method plus one for the method entry. The decision points are 'if', 'while', 'for', and 'case labels'. Generally, 1-4 is low complexity, 5-7 indicates moderate complexity, 8-10 is high complexity, and 11+ is very high complexity.
## Example
// Cyclomatic Complexity = 11
class Foo {
1 public function example() {
2 if ($a ==$b) {
3 if ($a1 ==$b1) {
fiddle();
4 } elseif ($a2 ==$b2) {
fiddle();
} else {
fiddle();
}
5 } elseif ($c ==$d) {
6 while ($c ==$d) {
fiddle();
}
7 } elseif ($e ==$f) {
8 for ($n = 0;$n < $h;$n++) {
fiddle();
}
} else {
switch ($z) {
9 case 1:
fiddle();
break;
10 case 2:
fiddle();
break;
11 case 3:
fiddle();
break;
default:
fiddle();
break;
}
}
}
}
### Source https://phpmd.org/rules/codesize.html#cyclomaticcomplexity
#### The method parameterFromCommentLine() has a Cyclomatic Complexity of 16. The configured cyclomatic complexity threshold is 10. Open
private function parameterFromCommentLine(
string $line,
bool $is_var,
int $i
): Parameter {
#### The method parseCommentLine() has a Cyclomatic Complexity of 35. The configured cyclomatic complexity threshold is 10. Open
private function parseCommentLine(int $i, string $line): void
{
// https://secure.php.net/manual/en/regexp.reference.internal-options.php
// (?i) makes this case-sensitive, (?-1) makes it case-insensitive
// phpcs:ignore Generic.Files.LineLength.MaxExceeded
#### The method maybeParsePhanCustomAnnotation() has a Cyclomatic Complexity of 49. The configured cyclomatic complexity threshold is 10. Open
private function maybeParsePhanCustomAnnotation(int $i, string$line, string $type, string$case_sensitive_type): void
{
switch ($type) {
case 'phan-forbid-undeclared-magic-properties':
if ($this->checkCompatible('@phan-forbid-undeclared-magic-properties', [Comment::ON_CLASS], $i)) {
#### The class Builder has a coupling between objects value of 25. Consider to reduce the number of dependencies under 13. Open
final class Builder
{
/** @var string the original raw comment */
public $comment;
/** @var list<string> the list of lines of the doc comment */
## CouplingBetweenObjects
### Since: 1.1.0
A class with too many dependencies has negative impacts on several quality aspects of a class. This includes quality criteria like stability, maintainability and understandability
## Example
class Foo {
/**
* @var \foo\bar\X
*/
private $x = null;
/**
* @var \foo\bar\Y
*/
private $y = null;
/**
* @var \foo\bar\Z
*/
private $z = null;
public function setFoo(\Foo $foo) {}
public function setBar(\Bar $bar) {}
public function setBaz(\Baz $baz) {}
/**
* @return \SplObjectStorage
* @throws \OutOfRangeException
* @throws \InvalidArgumentException
* @throws \ErrorException
*/
public function process(\Iterator $it) {}
// ...
}
### Source https://phpmd.org/rules/design.html#couplingbetweenobjects
#### Similar blocks of code found in 2 locations. Consider refactoring. Open
private static function mergeMethodParts(array $parts): array {$prev_parts = [];
$delta = 0;$results = [];
Found in src/Phan/Language/Element/Comment/Builder.php and 1 other location - About 1 day to fix
src/Phan/Language/UnionType.php on lines 530..562
## Duplicated Code
Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:
Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.
When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).
## Tuning
This issue has a mass of 355.
We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.
The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.
If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.
See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.
## Refactorings
#### Similar blocks of code found in 2 locations. Consider refactoring. Open
private function parsePhanProperty(int $i, string $line): void
{
    if (!$this->checkCompatible('@phan-property', [Comment::ON_CLASS], $i)) {
        return;
    }
Found in src/Phan/Language/Element/Comment/Builder.php and 1 other location - About 50 mins to fix
src/Phan/Language/Element/Comment/Builder.php on lines 960..973
This issue has a mass of 97.
#### Similar blocks of code found in 2 locations. Consider refactoring. Open
private function parsePhanMethod(int $i, string $line): void
{
    if (!$this->checkCompatible('@phan-method', [Comment::ON_CLASS], $i)) {
        return;
    }
Found in src/Phan/Language/Element/Comment/Builder.php and 1 other location - About 50 mins to fix
src/Phan/Language/Element/Comment/Builder.php on lines 945..958
This issue has a mass of 97.
#### Similar blocks of code found in 2 locations. Consider refactoring. Open
private static function extractMethodParts(string $type_string): array
{
    $parts = [];
    foreach (\explode(',', $type_string) as $part) {
        $parts[] = \trim($part);
Found in src/Phan/Language/Element/Comment/Builder.php and 1 other location - About 50 mins to fix
src/Phan/Language/UnionType.php on lines 490..504
This issue has a mass of 97.
## ShortVariable
### Since: 0.2
Detects when a field, local, or parameter has a very short name.
## Example
class Something {
    private $q = 15;                               // VIOLATION - Field
    public static function main( array $as ) {    // VIOLATION - Formal
        $r = 20 + $this->q;                        // VIOLATION - Local
        for (int $i = 0; $i < 10; $i++) {          // Not a Violation (inside FOR)
            $r += $this->q;
        }
    }
}
### Source https://phpmd.org/rules/naming.html#shortvariable
#### Avoid variables with short names like $i. Configured minimum length is 3. Open
Reported once for each of the following declarations:
private function parsePhanProperty(int $i, string $line): void
private function guessActualLineLocation(int $i): int
private function maybeParsePhanAssert(int $i, string $line): void
private function maybeParseReturn(int $i, string $line): void
private function parsePhanMethod(int $i, string $line): void
private function maybeParseTemplateType(int $i, string $line): void
private function parseUnusedParamLine(int $i, string $line): void
private function maybeParsePhanInherits(int $i, string $line, string $type): void
private function maybeParseVarLine(int $i, string $line): void
private function maybeParseMethod(int $i, string $line): void
public static function findLineNumberOfCommentForElement(AddressableElementInterface $element, array $lines, int $i): int
private function maybeParseInherits(int $i, string $line, string $type): void
private function maybeParsePhanClosureScope(int $i, string $line): void
private function maybeParseThrows(int $i, string $line): void
private function maybeParseSuppress(int $i, string $line): void
int $i
private function parseParamLine(int $i, string $line): void
private function maybeParsePhanCustomAnnotation(int $i, string $line, string $type, string $case_sensitive_type): void
private function parseCommentLine(int $i, string $line): void
private function maybeParseProperty(int $i, string $line): void
#### Avoid variables with short names like $j. Configured minimum length is 3. Open
Reported twice for the following line:
$j = $i - ($lineno_search - $check_lineno);
https://www.gradesaver.com/textbooks/math/algebra/algebra-and-trigonometry-10th-edition/chapter-6-p-s-problem-solving-page-506/9d | ## Algebra and Trigonometry 10th Edition
$80~beats~per~minute$
The pulse of the patient is the frequency of the given function. The frequency is the inverse of the period. According to item (b) the period is 0.75 seconds: $f=\frac{1}{0.75}=\frac{4}{3}~beats~per~second$, and $\frac{4}{3}\times60=80~beats~per~minute$
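The same computation with the unit conversion written out explicitly: $$f=\frac{1}{T}=\frac{1}{0.75~\text{s}}=\frac{4}{3}~\text{beats per second}=\frac{4}{3}\times 60~\text{beats per minute}=80~\text{beats per minute}.$$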
https://www.lmfdb.org/L/2/450/5.4/c3-0 | ## Results (1-50 of 82 matches)
Label | $\alpha$ | $A$ | $d$ | $N$ | $\chi$ | $\mu$ | $\nu$ | $w$ | prim | arith | $\mathbb{Q}$ | self-dual | $\operatorname{Arg}(\epsilon)$ | $r$ | First zero | Origin
All 50 entries listed here share $\alpha = 5.15$, $A = 26.5$, $d = 2$, $N = 2 \cdot 3^{2} \cdot 5^{2}$, $\nu = 3.0$ and $w = 3$; the varying columns are:
Label | $\chi$ | $\operatorname{Arg}(\epsilon)$ | $r$ | First zero | Origin
2-450-1.1-c3-0-0 | 1.1 | 0 | 0 | 0.570561 | Modular form 450.4.a.a, 450.4.a.a.1.1
2-450-1.1-c3-0-1 | 1.1 | 0 | 0 | 0.704569 | Modular form 450.4.a.c, 450.4.a.c.1.1
2-450-1.1-c3-0-10 | 1.1 | 0 | 0 | 1.19540 | Modular form 450.4.a.s, 450.4.a.s.1.1
2-450-1.1-c3-0-11 | 1.1 | 0 | 0 | 1.22000 | Modular form 450.4.a.v.1.2
2-450-1.1-c3-0-12 | 1.1 | 0 | 0 | 1.26850 | Modular form 450.4.a.t, 450.4.a.t.1.1
2-450-1.1-c3-0-13 | 1.1 | 0.5 | 1 | 1.48177 | Modular form 450.4.a.b, 450.4.a.b.1.1
2-450-1.1-c3-0-14 | 1.1 | 0.5 | 1 | 1.50802 | Modular form 450.4.a.u.1.1
2-450-1.1-c3-0-15 | 1.1 | 0.5 | 1 | 1.57819 | Modular form 450.4.a.f, 450.4.a.f.1.1
2-450-1.1-c3-0-16 | 1.1 | 0.5 | 1 | 1.61796 | Modular form 450.4.a.g, 450.4.a.g.1.1
2-450-1.1-c3-0-17 | 1.1 | 0.5 | 1 | 1.64382 | Modular form 450.4.a.h, 450.4.a.h.1.1
2-450-1.1-c3-0-18 | 1.1 | 0.5 | 1 | 1.66663 | Modular form 450.4.a.u.1.2
2-450-1.1-c3-0-19 | 1.1 | 0.5 | 1 | 2.02021 | Modular form 450.4.a.k, 450.4.a.k.1.1
2-450-1.1-c3-0-2 | 1.1 | 0 | 0 | 0.756904 | Modular form 450.4.a.e, 450.4.a.e.1.1
2-450-1.1-c3-0-20 | 1.1 | 0.5 | 1 | 2.06732 | Modular form 450.4.a.m, 450.4.a.m.1.1
2-450-1.1-c3-0-21 | 1.1 | 0.5 | 1 | 2.07118 | Modular form 450.4.a.n, 450.4.a.n.1.1
2-450-1.1-c3-0-22 | 1.1 | 0.5 | 1 | 2.12234 | Modular form 450.4.a.o, 450.4.a.o.1.1
2-450-1.1-c3-0-23 | 1.1 | 0.5 | 1 | 2.12825 | Modular form 450.4.a.p, 450.4.a.p.1.1
2-450-1.1-c3-0-3 | 1.1 | 0 | 0 | 0.776539 | Modular form 450.4.a.d, 450.4.a.d.1.1
2-450-1.1-c3-0-4 | 1.1 | 0 | 0 | 0.997962 | Modular form 450.4.a.i, 450.4.a.i.1.1
2-450-1.1-c3-0-5 | 1.1 | 0 | 0 | 1.01262 | Modular form 450.4.a.j, 450.4.a.j.1.1
2-450-1.1-c3-0-6 | 1.1 | 0 | 0 | 1.02322 | Modular form 450.4.a.v.1.1
2-450-1.1-c3-0-7 | 1.1 | 0 | 0 | 1.03050 | Modular form 450.4.a.l, 450.4.a.l.1.1
2-450-1.1-c3-0-8 | 1.1 | 0 | 0 | 1.14161 | Modular form 450.4.a.q, 450.4.a.q.1.1
2-450-1.1-c3-0-9 | 1.1 | 0 | 0 | 1.16312 | Modular form 450.4.a.r, 450.4.a.r.1.1
2-450-15.2-c3-0-0 | 15.2 | -0.384 | 0 | 0.102781 | Modular form 450.4.f.e.107.1
2-450-15.2-c3-0-1 | 15.2 | -0.295 | 0 | 0.104332 | Modular form 450.4.f.d.107.3
2-450-15.2-c3-0-10 | 15.2 | 0.0610 | 0 | 1.11270 | Modular form 450.4.f.c.107.1
2-450-15.2-c3-0-11 | 15.2 | 0.118 | 0 | 1.12168 | Modular form 450.4.f.f.107.4
2-450-15.2-c3-0-12 | 15.2 | 0.204 | 0 | 1.26641 | Modular form 450.4.f.f.107.1
2-450-15.2-c3-0-13 | 15.2 | 0.365 | 0 | 1.46261 | Modular form 450.4.f.a.107.1
2-450-15.2-c3-0-14 | 15.2 | 0.311 | 0 | 1.48307 | Modular form 450.4.f.b.107.1
2-450-15.2-c3-0-15 | 15.2 | 0.400 | 0 | 1.66866 | Modular form 450.4.f.f.107.3
2-450-15.2-c3-0-16 | 15.2 | 0.422 | 0 | 1.71749 | Modular form 450.4.f.d.107.4
2-450-15.2-c3-0-17 | 15.2 | -0.438 | 0 | 2.06963 | Modular form 450.4.f.a.107.2
2-450-15.2-c3-0-2 | 15.2 | -0.188 | 0 | 0.295188 | Modular form 450.4.f.e.107.3
2-450-15.2-c3-0-3 | 15.2 | -0.381 | 0 | 0.331403 | Modular form 450.4.f.d.107.2
2-450-15.2-c3-0-4 | 15.2 | -0.384 | 0 | 0.510694 | Modular form 450.4.f.e.107.2
2-450-15.2-c3-0-5 | 15.2 | -0.188 | 0 | 0.526100 | Modular form 450.4.f.e.107.4
2-450-15.2-c3-0-6 | 15.2 | -0.134 | 0 | 0.569940 | Modular form 450.4.f.c.107.2
2-450-15.2-c3-0-7 | 15.2 | -0.0991 | 0 | 0.675322 | Modular form 450.4.f.d.107.1
2-450-15.2-c3-0-8 | 15.2 | -0.0770 | 0 | 0.943393 | Modular form 450.4.f.f.107.2
2-450-15.2-c3-0-9 | 15.2 | 0.115 | 0 | 1.07019 | Modular form 450.4.f.b.107.2
2-450-15.8-c3-0-0 | 15.8 | -0.365 | 0 | 0.100079 | Modular form 450.4.f.a.143.1
2-450-15.8-c3-0-1 | 15.8 | 0.438 | 0 | 0.198460 | Modular form 450.4.f.a.143.2
2-450-15.8-c3-0-10 | 15.8 | -0.118 | 0 | 1.04106 | Modular form 450.4.f.f.143.4
2-450-15.8-c3-0-11 | 15.8 | 0.384 | 0 | 1.08525 | Modular form 450.4.f.e.143.1
2-450-15.8-c3-0-12 | 15.8 | 0.381 | 0 | 1.30958 | Modular form 450.4.f.d.143.2
2-450-15.8-c3-0-13 | 15.8 | 0.188 | 0 | 1.39206 | Modular form 450.4.f.e.143.3
2-450-15.8-c3-0-14 | 15.8 | 0.384 | 0 | 1.52129 | Modular form 450.4.f.e.143.2
2-450-15.8-c3-0-15 | 15.8 | 0.134 | 0 | 1.53507 | Modular form 450.4.f.c.143.2
https://www.greencarcongress.com/2007/07/ud-led-team-set/comments/page/2/ | ## UD-Led Team Sets Solar Cell Efficiency Record of 42.8%; Joins DuPont on $100M Project
##### 28 July 2007
The lateral solar cell architecture with a specially designed concentrator contributes to the enhanced performance.
Using a novel technology that adds multiple innovations to a very high-performance crystalline silicon solar cell platform, a consortium led by the University of Delaware has achieved a record-breaking combined solar cell efficiency of 42.8% from sunlight at standard terrestrial conditions. That number is a significant advance from the current record of 40.7% announced in December and demonstrates an important milestone on the path to the 50% efficiency goal set by the Defense Advanced Research Projects Agency (DARPA). In November 2005, the UD-led consortium received approximately $13 million in funding for the initial phases of the DARPA Very High Efficiency Solar Cell (VHESC) program to develop affordable portable solar cell battery chargers.
Combined with the demonstrated efficiency performance of the very high efficiency solar cells’ spectral splitting optics, which is more than 93%, these recent results put the pieces in place for a solar cell module with a net efficiency 30% greater than any previous module efficiency and twice the efficiency of state-of-the-art silicon solar cell modules.
As a result of the consortium’s technical performance, DARPA is initiating the next phase of the program by funding the newly formed DuPont-University of Delaware VHESC Consortium to transition the lab-scale work to an engineering and manufacturing prototype model. This three-year effort could be worth as much as $100 million, including industry cost-share. Allen Barnett, principal investigator and UD professor of electrical and computer engineering, and Christiana Honsberg, co-principal investigator and associate professor of electrical and computer engineering led the research. The two direct the University’s High Performance Solar Power Program and will continue working to achieve 50% efficiency, a benchmark that when reached would mean a doubling of the efficiency of terrestrial solar cells based around a silicon platform within a 50-month span. The highly efficient VHESC solar cell uses a novel lateral optical concentrating system that splits solar light into three different energy bins of high, medium and low, and directs them onto cells of various light sensitive materials to cover the solar spectrum. The system delivers variable concentrations to the different solar cell elements. The concentrator is stationary with a wide acceptance angle optical system that captures large amounts of light and eliminates the need for complicated tracking devices. Modern solar cell systems rely on the concentration of sunlight. The previous best of 40.7% efficiency was achieved with a high concentration device that requires sophisticated tracking optics and features a concentrating lens the size of a table and more than 30 centimeters, or about 1 foot, thick. The UD consortium’s devices are potentially far thinner at less than 1 centimeter. This is a major step toward our goal of 50% efficiency. The percentage is a record under any circumstance, but it’s particularly noteworthy because it’s at low concentration, approximately 20 times magnification. The low profile and lack of moving parts translates into portability, which means these devices easily could go on a laptop computer or a rooftop. —Allen Barnett Honsberg said the advance of 2 percentage points is noteworthy in a field where gains of 0.2 percent are the norm and gains of 1 percent are seen as significant breakthroughs. Many of us have been working with programs to take us to a real photovoltaic energy future. This project is already working in that future. DARPA has leapfrogged the ‘conventional,’ demonstrating that creativity and focus can significantly accelerate revolutionary research-bench concepts toward reality, demonstrating this does not have to take decades. This is a first step—but a significant one in making sure our energy future is what we know it should look like. —Lawrence L. Kazmerski, director of the US Department of Energy’s National Center for Photovoltaics at the National Renewable Energy Laboratory Barnett and Honsberg said that reaching the 42.8% mark is a significant advance in solar cell efficiency, particularly given the unique small and portable architecture being used by the consortium and the short time—21 months—in which it was developed. During the first 21 months of the VHESC program, a diverse team of academia, government lab and industrial partners, led by UD, was focused on developing the technology basis for a new extremely high efficiency solar cell. The rapid success of that effort has enabled the present transition to a focus on prototype product development. 
The team’s approach provides for affordability and also flexibility in the choice of materials and the integration of new technologies as they are developed. Barnett credits the early success of the program to the team approach taken to solving the problem. Partners in the initial phase included BP Solar, Blue Square Energy, Energy Focus, Emcore and SAIC. Key research contributors included the University of Delaware, National Renewable Energy Laboratory, Georgia Institute of Technology, Purdue University, University of Rochester, Massachusetts Institute of Technology, University of California Santa Barbara, Optical Research Associates and the Australian National University. The newly formed DuPont-University of Delaware VHESC consortium will be made up of industrial partners, national laboratories and universities. (A hat-tip to Marty!) Resources: ### Comments Stan, I suggest you retake your college courses on thermodynamics. There are so many flaws and misunderstandings in your arguments thar I don't even know where to begin refuting them. Good day. Stan, your argument is overly complicated and yet all wrong. A solar panel is not much different then black road which is all over the world. The only difference is that the solar panel makes electricity that would have been generated in some other way that would probably generate more heat. Burning coal for example makes heat way beyond the electricity it makes and co2 to cause global warming. The heat generated by the electricity's use is not a factor, it is already happening and is the same no mater the source. In all a solar panel is far less dangerous for global warming then a simple asphalt road. Mr. Peterson, In what way would the heat that is generated from the output of the solar cells have less of an opportunity to radiate back into space? You are just spreading FUD among the uneducated. The only problem that I can foresee is if a large enough concentrated plot of land is covered with very efficient solar cells and the electricity is transported elsewhere at high efficiency. Such a giant solar cell farm would be enough to cool deserts and thus disrupt local climates. That is why you create many smaller localized installations - i.e. on roofs and other man-made structures. Besides it would be extremely difficult and expensive to have a power grid transport that much electricity from such a small area to hundreds of destinations each of them thousands of miles away. Distributed (non obstructive roof top mounted) high efficiency (40%) low cost ($1/watt) PVs may be a very smart and clean way to supply 50% + of the electricity required for our homes, PHEVs and BEVs. A 4KW system could produce an average of about 20 KWh/day and would require about 10 X 1 square meter, high efficiency, panels.
The installed cost of this type of system would be about $8k to $10K per residence. For those of us who think that this is too much to pay for clean power, it represents about the home price inflation for ONE YEAR during the last 5 years. Inflation did not contribute to the reduction of GHG but we all voluntarily paid for it.
Eventually, domestic energy back-up storage units could (if desired) capture excess power and further reduce grid consumption during low or no sunshine hours.
People (10 million +) with oversized systems could sell excess power to the grid.
Multiply similar systems by 50 million and about half the current USA coal fired power generating plants could be closed.
With 100+ million PHEVs and BEVs (i.e. = 2 per home) about half the current oil refineries could be closed.
What use are very efficient panels if they are not cheap? If the dollars per watt are not low enough, it's not going to be economically competitive.
Right now solar systems are quite expensive. Installed by a pro, $8 a watt would be low. It needs to get down below $3 a watt and then it will take off. I am thinking of putting up a system now even at the high prices.
Man, I haven't seen such a pure psychobabbling intellectual meltdown like that in ages! Stan Peterson has just jumped the shark.
I think it's worth it to take a look at what a PV-powered world would really mean. For the sake of argument, let's assume that the entire electric supply of the world comes from PV, that the world consumes 4x as much as the USA alone, and the USA consumes 5000 TWH/year (roughly 25% more than at present). Let's also assume that PV supplies, as electricity, 25% of the energy pumped out of the ground as oil (8.5e7 bbl/day @ 6.1 GJ/bbl). The PV panels are 15% efficient, absorb 100% of the incident sunlight, and replace surfaces with an average albedo of 0.30. If the heat is distributed evenly, how much does the world heat up?
First, the average power: 20,000 TWH/yr is 2.28 TW, and 25% of 85 million bbl/day * 6.1 GJ/bbl is 1.50 TW for a total of 3.78 TW. Harvesting this at 15% efficiency requires intercepting 25.2 TW of sunlight. 30% of this would have been reflected before, so the additional heating is 7.56 TW.
The Earth has a surface area of about 511 trillion square meters. 7.56 TW of power over this area is 14.8 milliwatts per square meter. If the emissivity is 0.5 (allowing for the greenhouse effect), the temperature increase to radiate this extra heat is 0.0148/(4*0.5*280K^3*5.67e-8) which equals....
(drum roll)
0.006°K.
Now, I haven't checked this with a unit analysis, and my differentiation of the blackbody radiation equation might be wrong. But I'll bet a case of beer that I'm within an order of magnitude.
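A minimal Python sketch (added here, using the same assumptions stated in the comment above) that re-runs the arithmetic:
# Back-of-the-envelope check: 20,000 TWh/yr of electricity, 25% of oil output,
# 15% efficient panels replacing surfaces with albedo 0.30.
electricity_W = 20_000e12 / (365.25 * 24)      # ~2.28e12 W
oil_W = 0.25 * 85e6 * 6.1e9 / 86_400           # 25% of 85 Mbbl/day @ 6.1 GJ/bbl ~1.5e12 W
total_W = electricity_W + oil_W                # ~3.78e12 W delivered
intercepted_W = total_W / 0.15                 # sunlight intercepted at 15% efficiency
extra_heating_W = 0.30 * intercepted_W         # fraction that would otherwise be reflected
flux = extra_heating_W / 511e12                # W per square metre of Earth surface
dT = flux / (4 * 0.5 * 280**3 * 5.67e-8)       # linearised blackbody response, emissivity 0.5
print(round(flux * 1000, 1), "mW/m^2;", round(dT, 4), "K")   # ~14.8 mW/m^2; ~0.006 K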
Engineer-Poet, no bets from me, that looks about right!
Even if it was a problem though, remember that reflecting more light to regulate the Earth's temperature is trivial - just paint the deserts white.
Stan,
You misuse the term "Cassandra." Cassandra of Troy was given the ability to prophesy future events but with the curse that all the events would be bad and she wouldn't be believed. So she was right but no one believed her.
A certain divinity school dropout is wrong but many people believe him.
Since I attended the first Earth Day observance (1969) and learned about the coming ice age, raw material shortages, and mass starvations that would inevitably occur by the 1990's, I have concluded that people just like bad news and there are always those who will supply it. Their mantra is "all change is bad" unless it hurts those people or institutions that they don't like.
Considering that the "normal" climate of the last few 100,000 years is "ice age" and that we are currently enjoying a little break from massive ice sheets in our backyards, I have to think that if we can warm the earth and keep it warm, then that's a good thing. Let's keep that CO2 coming; plants love it. During past warm periods, plants (including agricultural plants) thrived farther north than they do today and equatorial jungles spread, too. Cold is death; warmth is life.
Meanwhile, let's keep looking for ways to get off the crude that isn't endless and that funds people who want us dead. No nation or state has ever conserved their way out of shortages (rationing is institutionalized shortages) and I'm not expecting that to work now. New energy sources are needed to avoid a return of the Dark Ages when the middle eastern oil fields play out.
Solar cells? Great idea. Where I live there are 300 sunny days a year. Just get the cost down to something that starts coming out ahead in a few years and not long after I'm dead. While we're at it, I want a car I can plug in and use for local driving without burning fuel. But you will have a very hard time powering industry with solar cells until heavy industry is willing to relocate to wide open spaces.
That's why I come to Green Car Congress; I'm looking for good news that will enhance my quality of life and give us the ability to improve the quality of lives in the third world. It is better to invent the future than to surrender to it.
@ HealthyBreeze -
PV panels that rely on heliostat arrays aren't operated at high temperatures, they are thin structures whose rear faces are aggressively cooled. You do have a point, however, in that the there has to be a temperature gradient in the thickness direction that drives the heat transfer. That means the exposed and rear faces experience differential expansion. If the panels are too stiff and the coefficient of thermal expansion too high, thermal stresses could break the panels.
Yeah, I'd love to have these on my roof, but high-efficiency solar cells are currently so expensive only NASA can afford them.
At today's prices, it's more like $10,000 per installed kW of PV for us regular folk. Given those prices you also look heavily at wind charts and local height ordinances. :) When, The article states they're moving into prototype phase soon. This design paradigm change for production purposes is 3yrs off according to the article. "The consortium's goal is to create solar cells that operate at 50 percent in production, Barnett said. With the fresh funding and cooperative efforts of the DuPont-UD consortium, he said it is expected new high efficiency solar cells could be in production by 2010." “By integrating the optical design with the solar cell design, we have entered previously unoccupied design space leading to a new paradigm about how to make solar cells, how to use them, and what they can do.” also, anticipating future cost... "The team's novel approach provides for affordability and also flexibility in the choice of materials and the integration of new technologies as they are developed." Since a heavyweight like Dupont is involved now, along with Defense backing, the cost should go down as they ramp up production in the future. Duponts involvement will promote an eventual lowering of cost due to their manufacturing conglomerate worldwide. They'll want to push this technology downline to all solar manufacturers. This shows competition is heating up for solar dollars. The goal is 50%, but if you read the article, the scientist believe they can achieve higher results in the future due to a paradigm shift in design techniques. Question for Stan or other engineers, How much heat increase does dark roofing material add? Does not roofing absorb heat? This new technology will actually decrease the size required for power generation. Whats the difference between putting these on a roof and the roof itself for absorbing heat? Well just about everybody is hitting the nail on the head (with the exception of Stan, yikes...) but not coming right out and saying it. As far as I'm concerned efficiency levels of modern day solar cells are wonderful. Besides, most people will never be able to go out and buy THE most efficient cells on the market, they just aren't cost effective. That's why most commercial cells for home use are still at about 25% efficiency. Kinda works like buying a graphics card for your computer; hey if you can afford to go out and spend$500 dollars for an Nvidia 8800GTX, but I digress...
The real effort needs to be made in the area of industrial processes; i.e reducing the COST of solar cells, not increasing efficiency. I'm not too impressed with news reports that someone increased efficiency by an extra few percent. I'll probably never even consider using those arrays. But if someone finds a way to make 20% efficient cells for half the price, that would have an immediate and significant impact on the consumer market for solar cells.
Once again, just my two cents.
Peace,
Cosmo
Saw a Discover or Science channel show last week about processes using compounds applied like ink to plastics or glass and mass producible with a goal of ten cents per Watt within 18 months.
This is in MIT country.
There is a company now formed to proceed. It has a name like Kornack or something similar.
Anyone tracking that? If so, please let me know at
[email protected]
Peace and enjoy.
Better not let Dick Cheney know we can get such efficiencies from solar or he'll send a rocket into the sun to blow it up.
The comments to this entry are closed.
https://stats.stackexchange.com/questions/214134/som-based-on-a-not-euclidean-distance | # SOM based on a non-Euclidean distance
Suppose one has trained a SOM on a certain amount of data. Without explaining the whole procedure, one can say that the SOM algorithm produces a certain number of prototypes, and new elements coming in as input are clustered based on their distance from the prototypes.
Two possible packages are:
1. kohonen::som (R)
2. somoclu (Python)
Here it is explained that in a high-dimensional context the Euclidean distance is not the best way to capture the difference among vectors. Nevertheless, in the two packages above it seems (coincidence?) that there is no possibility to choose a distance different from the Euclidean one in order to train the models.
Is there a reason for which the Euclidean distance could be the best (or only possible) one in the training process of a self-organizing map?
This problem is not only related to the SOM, it's a general problem. In high dimensions small changes in parameters can cause big differences in distances. This problem appears when you try to measure Euclidean distance in a high-dimensional space. Let's say you have two vectors. Each of them has dimension equal to 1000. In the first vector all elements are equal to 1, and in the second one all elements are equal to 0.99. Basically, all elements in the second vector are reduced by 1% compared to the first one. These vectors should be pretty close to each other when you try to train an algorithm to capture some relations in the data. But here is what happens when you try to compute the Euclidean distance.
>>> import numpy as np
>>> x = np.ones(1000)
>>> y = np.ones(1000) * 0.99
>>> x[:10]
array([ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])
>>> y[:10]
array([ 0.99, 0.99, 0.99, 0.99, 0.99, 0.99, 0.99, 0.99, 0.99, 0.99])
>>>
>>> from sklearn import metrics
>>> metrics.euclidean_distances(x,y)
array([[ 0.31622777]])
>>>
>>> metrics.pairwise.cosine_distances(x,y)
array([[ -3.55271368e-15]])
As you can see, the Euclidean distance shows that the vectors are not very close to each other. For comparison I've also added the cosine distance metric. The cosine distance is equal to zero (in the example above I got $-3 \cdot 10 ^ {-15}$, because of computational error), because the two vectors have the same direction and the angle between them is equal to zero.
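One practical consequence, shown in a small NumPy sketch added here (not part of the original answer): after L2-normalizing the inputs, the squared Euclidean distance equals 2·(1 − cosine similarity), so a Euclidean-based SOM implementation effectively trains with cosine distance.
import numpy as np

x = np.ones(1000)
y = np.ones(1000) * 0.99

# After projecting both vectors onto the unit sphere,
# ||x - y||^2 == 2 * (1 - cosine_similarity(x, y)),
# so minimizing Euclidean distance is the same as minimizing cosine distance.
xn = x / np.linalg.norm(x)
yn = y / np.linalg.norm(y)

print(np.sum((xn - yn) ** 2))   # ~0.0
print(2 * (1 - xn.dot(yn)))     # the same value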
• I agree. Nevertheless, what I have seen in the SOM context is that it seems like no one has thought of something different from the Euclidean distance. What you've answered strengthens the point: is there some implementation of SOMs with a non-Euclidean distance? May 24, 2016 at 6:41
• In the literature related to the SOM I've come across just Euclidean distance and cosine similarities. Here you can find an implementation of SOM that supports both of these measurements - github.com/itdxer/neupy/blob/master/neupy/algorithms/… May 24, 2016 at 8:26
Upon reading about Kohonen Self Organising Maps, my initial thoughts were the same. It seems there has been some research on it, but I don't know enough about the area to comment more. There is a paper called Extending the SOM Algorithm to Non Euclidean Distances via the Kernel Trick, but it is behind a paywall so I can't comment on the content. I've requested the paper on Research gate so we'll see how that goes.
With regards to the code: could you inherit from existing solutions and implement a different distance metric? Alternatively, could you transform the data set via feature scaling, say, and see how your results differ?
• The algorithm is written in C: I think that maybe it could be easier to implement it from the beginning than inheriting. About the scaling, it's an interesting idea, but it would be the transformation of a transformation (maybe too many passages?). I've seen the paper: probably it's a well-known topic outside stack! :) Jun 22, 2016 at 12:40
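Following up on that last comment, here is a from-scratch sketch added by the editor (an illustration only, not code from the kohonen or somoclu packages) of a single SOM training step in which only the best-matching-unit search uses cosine distance; the usual neighbourhood update is unchanged:
import numpy as np

rng = np.random.default_rng(0)

# Toy SOM: a 5x5 grid of prototype vectors in a 1000-dimensional space.
grid_h, grid_w, dim = 5, 5, 1000
weights = rng.normal(size=(grid_h * grid_w, dim))

def cosine_distance(w, x):
    """1 - cosine similarity between each prototype row of w and vector x."""
    num = w @ x
    den = np.linalg.norm(w, axis=1) * np.linalg.norm(x) + 1e-12
    return 1.0 - num / den

def train_step(weights, x, lr=0.1, sigma=1.0):
    # Best matching unit chosen by cosine distance instead of Euclidean.
    bmu = np.argmin(cosine_distance(weights, x))
    bmu_pos = np.array(divmod(bmu, grid_w))
    # Grid positions of all units, used by the neighbourhood function.
    pos = np.array([divmod(k, grid_w) for k in range(grid_h * grid_w)])
    grid_dist2 = ((pos - bmu_pos) ** 2).sum(axis=1)
    h = np.exp(-grid_dist2 / (2 * sigma ** 2))   # Gaussian neighbourhood kernel
    # Standard SOM update pulls prototypes toward the sample.
    weights += lr * h[:, None] * (x - weights)
    return weights

for _ in range(100):
    weights = train_step(weights, rng.normal(size=dim))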
https://vuorre.netlify.com/post/2018/12/12/rpihkal-stop-pasting-and-start-gluing/ | # rpihkal: Stop pasting and start gluing
Use the glue R package to join strings.
We’ve all been there; writing manuscripts with R Markdown and dreaming of easy in-text code bits for reproducible reporting. Say you’ve fit a regression model to your data, and would then like to report the model’s parameters in your text, without writing the values in the text. (If the data or model changes, you’d need to re-type the values again.)
For example, you can print this model summary easily in the R console:
fit <- lm(mpg ~ disp, data = mtcars)
summary(fit)
##
## Call:
## lm(formula = mpg ~ disp, data = mtcars)
##
## Residuals:
## Min 1Q Median 3Q Max
## -4.8922 -2.2022 -0.9631 1.6272 7.2305
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 29.599855 1.229720 24.070 < 2e-16 ***
## disp -0.041215 0.004712 -8.747 9.38e-10 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 3.251 on 30 degrees of freedom
## Multiple R-squared: 0.7183, Adjusted R-squared: 0.709
## F-statistic: 76.51 on 1 and 30 DF, p-value: 9.38e-10
And to cite those values in the text body of your manuscript, you can write the text in R Markdown like this:
The model intercept was `r round(coef(fit)[1], 2)`, great.
Which would show up in your manuscript like this:
The model intercept was 29.6, great.
## Paste
However, when you want to present more information, such as the parameter estimate with its standard error, you will have to paste() those strings together:
(x <- round(summary(fit)$coefficients, 3))
##             Estimate Std. Error t value Pr(>|t|)
## (Intercept)   29.600      1.230  24.070        0
## disp          -0.041      0.005  -8.747        0
intercept <- paste("b = ", x[1, 1], ", SE = ", x[1, 2], sep = "")
You can then just cite the intercept object in your text body:
The model intercept was very very significant (`r intercept`).
Which would render in your PDF or word document as:
The model intercept was very very significant (b = 29.6, SE = 1.23).
paste() is a base R function, and as such very robust and reproducible–all R installations will have it. However, as such it has a fairly terrible syntax where you have to quote strings, separate strings and variables with commas, etc. This task is made much easier with glue().
## Glue
glue is a small R package that allows you to join strings together in a neat, pythonific way. It replaces the need for quoting and separating arguments in paste(), by asking you to wrap variables in curly braces. Here's how to do the above pasting with glue:
library(glue)
intercept <- glue("b = {x[1, 1]}, SE = {x[1, 2]}")
Which gives you the same string as the much messier paste() approach:
b = 29.6, SE = 1.23
### Glue with data frames
Glue has other neat (more advanced) features, such as gluing variables row-by-row in a data frame:
library(dplyr)
as.data.frame(x) %>%
  glue_data(
    "{rownames(.)}'s point estimate was {Estimate}, with an SE of {Std. Error}."
  )
## (Intercept)'s point estimate was 29.6, with an SE of 1.23.
## disp's point estimate was -0.041, with an SE of 0.005.
## Appendix: papaja
For some models (like our simple linear model example here), the papaja R package (which deserves its own rpihkal post!) has very useful shortcuts:
library(papaja)
intercept <- apa_print(fit)$estimate$Intercept
If you now cite intercept in the text body of your manuscript, it renders into $$\LaTeX$$ (which is interpreted nicely if you are outputting PDF or Word documents; here on this website it looks odd):
The model intercept was rather significant (`r intercept`).
The model intercept was rather significant ($$b = 29.60$$, 95% CI $$[27.09$$, $$32.11]$$).
https://artofproblemsolving.com/wiki/index.php?title=Euler_line&diff=99292&oldid=99291 | # Difference between revisions of "Euler line"
In any triangle $\triangle ABC$, the Euler line is a line which passes through the orthocenter $H$, centroid $G$, circumcenter $O$, nine-point center $N$ and de Longchamps point $L$. It is named after Leonhard Euler. Its existence is a non-trivial fact of Euclidean geometry. Certain fixed orders and distance ratios hold among these points. In particular, $\overline{OGNH}$ and $OG:GN:NH = 2:1:3$
Euler line is the central line $L_{647}$.
Given the orthic triangle $\triangle H_AH_BH_C$ of $\triangle ABC$, the Euler lines of $\triangle AH_BH_C$, $\triangle BH_CH_A$, and $\triangle CH_AH_B$ concur at $N$, the nine-point center of $\triangle ABC$.
## Proof Centroid Lies on Euler Line
This proof utilizes the concept of spiral similarity, which in this case is a rotation followed by a homothety. Consider the medial triangle $\triangle O_AO_BO_C$. It is similar to $\triangle ABC$. Specifically, a rotation of $180^\circ$ about the midpoint of $O_BO_C$ followed by a homothety with scale factor $2$ centered at $A$ brings $\triangle O_AO_BO_C \to \triangle ABC$. Let us examine what else this transformation, which we denote as $\mathcal{S}$, will do.
It turns out $O$ is the orthocenter, and $G$ is the centroid of $\triangle O_AO_BO_C$. Thus, $\mathcal{S}(\{O_A, O, G\}) = \{A, H, G\}$. As a homothety preserves angles, it follows that $\measuredangle O_AOG = \measuredangle AHG$. Finally, as $\overline{AH} || \overline{O_AO}$ it follows that $$\triangle AHG \sim \triangle O_AOG$$ Thus, $O, G, H$ are collinear, and $\frac{OG}{HG} = \frac{1}{2}$.
## Another Proof
Let $M$ be the midpoint of $BC$. Extend $OG$ past $G$ to a point $H'$ such that $OG = \frac{1}{2} GH'$. We will show $H'$ is the orthocenter. Consider triangles $MGO$ and $AGH'$. Since $\frac{MG}{GA}=\frac{OG}{GH'} = \frac{1}{2}$, and they both share a vertical angle, they are similar by SAS similarity. Thus, $AH' \parallel OM \perp BC$, so $H'$ lies on the $A$ altitude of $\triangle ABC$. We can analogously show that $H'$ also lies on the $B$ and $C$ altitudes, so $H'$ is the orthocenter. $\square$
## Proof Nine-Point Center Lies on Euler Line
Assuming that the nine point circle exists and that $N$ is the center, note that a homothety centered at $H$ with factor $2$ brings the Euler points $\{E_A, E_B, E_C\}$ onto the circumcircle of $\triangle ABC$. Thus, it brings the nine-point circle to the circumcircle. Additionally, $N$ should be sent to $O$, thus $N \in \overline{HO}$ and $\frac{HN}{ON} = 1$.
## Analytic Proof of Existence
Let the circumcenter be represented by the vector $O = (0, 0)$, and let vectors $A,B,C$ correspond to the vertices of the triangle. It is well known that the orthocenter is $H = A+B+C$ and the centroid is $G = \frac{A+B+C}{3}$. Thus, $O, G, H$ are collinear and $\frac{OG}{HG} = \frac{1}{2}$.
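As a quick numerical sanity check of the identities used in this analytic proof (an editorial sketch, not part of the wiki page; the particular triangle below is an arbitrary choice inscribed in the unit circle so that the circumcenter sits at the origin):

```python
import numpy as np

# Pick a triangle inscribed in the unit circle, so the circumcenter O is the origin.
angles = np.array([0.3, 2.1, 4.0])
A, B, C = np.stack([np.cos(angles), np.sin(angles)], axis=1)

O = np.zeros(2)
H = A + B + C          # claimed orthocenter
G = (A + B + C) / 3    # centroid

# H really is the orthocenter: each line from a vertex to H is perpendicular to the opposite side.
print(np.allclose([(H - A) @ (B - C), (H - B) @ (C - A), (H - C) @ (A - B)], 0))  # True

# O, G, H are collinear with OG : GH = 1 : 2.
cross = (G - O)[0] * (H - O)[1] - (G - O)[1] * (H - O)[0]
print(np.isclose(cross, 0), np.isclose(np.linalg.norm(G - O) / np.linalg.norm(H - G), 0.5))  # True True
```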
https://lara.epfl.ch/w/sav08:relational_semantics_of_procedures?rev=1429630214&do=diff
====== Relational Semantics of Procedures ======

In [[Relational Semantics]] for a language without procedures we had a semantic function that maps programs to relations
\begin{equation*}
r_c : C \to {\cal R}
\end{equation*}
so $r_c(c_1)$ was the relation corresponding to the command $c_1$, and we denote it by $[\![c_1]\!]$.

Consider a recursive procedure move that, as long as $x>0$, decrements $x$, calls itself, and afterwards increments $y$. Suppose we have a relation $r_{move}$ describing the meaning of the procedure. By [[Relational Semantics]], this procedure should behave similarly as the meanings of commands, so we should have the following equation:
\begin{equation*}
r_{move} = ([\![assume(x>0)]\!]\ \circ\ [\![x=x-1]\!]\ \circ\ r_{move}\ \circ\ [\![y=y+1]\!])\ \bigcup\ [\![assume(!(x>0))]\!]
\end{equation*}
Denoting the right-hand side by $F(r_{move})$, we can write the above as
\begin{equation*}
r_{move} = F(r_{move})
\end{equation*}
Thus, the meaning of the procedure is the fixpoint of the function $F : {\cal R} \to {\cal R}$ given by
\begin{equation*}
F(t) = ([\![assume(x>0)]\!]\ \circ\ [\![x=x-1]\!]\ \circ\ t\ \circ\ [\![y=y+1]\!])\ \bigcup\ [\![assume(!(x>0))]\!]
\end{equation*}
Note that $F$ is monotonic in its argument, because the operators $\circ$, $\cup$ are monotonic:
  * $r_1 \subseteq r_2\ \rightarrow\ (r_1 \circ s) \subseteq (r_2 \circ s)$
  * $r_1 \subseteq r_2\ \rightarrow\ r_1 \cup s \subseteq r_2 \cup s$
Similarly, the function is $\omega$-continuous (see [[sav08:Tarski's Fixpoint Theorem]]). Thus, it has the least fixpoint, and this fixpoint is given by
\begin{equation*}
r_{move} = \bigcup_{n \ge 0} F^n(\emptyset)
\end{equation*}
In fact, $F^n(\emptyset)$ represents the result of inlining the recursive procedure $n$ times. Therefore, if $s_0$ is an initial state such that the computation in the procedure terminates within $n$ steps, then $(s_0,s) \in F^n(\emptyset)$ iff $(s_0,s) \in r_{move}$.

In this example, we have
\begin{equation*}
r_{move} = \{((x,y),(x',y')).\ (x > 0 \land x'=0 \land y'=x+y) \lor (x \le 0 \land x'=x \land y'=y) \}
\end{equation*}

===== Multiple Mutually Recursive Procedures =====

As an example of mutual recursion, consider procedures even and odd: when $x$ is $0$, even sets wasEven to true and odd sets it to false; otherwise each decrements $x$ and calls the other. We turn the mutually recursive definition into a simple recursive definition by taking the pair of the meanings of even and odd. We need to find relations $r_{even}$, $r_{odd}$ such that
\begin{equation*}
(r_{even},r_{odd}) = G(r_{even},r_{odd})
\end{equation*}
where the function $G : {\cal R}^2 \to {\cal R}^2$ is given by
\begin{equation*}
G(E,O) = \bigg( ([\![assume(x==0)]\!] \circ [\![wasEven=true]\!]) \cup ([\![assume(x!=0)]\!] \circ [\![x=x-1]\!] \circ O),\ ([\![assume(x==0)]\!] \circ [\![wasEven=false]\!]) \cup ([\![assume(x!=0)]\!] \circ [\![x=x-1]\!] \circ E) \bigg)
\end{equation*}
We define a lattice structure on ${\cal R}^2$ by
\begin{equation*}
(r_1,r_2) \sqsubseteq (r'_1,r'_2) \ \mbox{ iff } \ (r_1 \subseteq r'_1) \land (r_2 \subseteq r'_2)
\end{equation*}
\begin{equation*}
(r_1,r_2) \sqcup (r'_1,r'_2) = (r_1 \cup r'_1, r_2 \cup r'_2)
\end{equation*}
Note that:
\begin{equation*}
G(\emptyset,\emptyset) = ([\![assume(x==0)]\!] \circ [\![wasEven=true]\!],\ [\![assume(x==0)]\!] \circ [\![wasEven=false]\!])
\end{equation*}
and $G(G(\emptyset,\emptyset))$ is obtained by substituting the components $G(\emptyset,\emptyset).\_1$ and $G(\emptyset,\emptyset).\_2$ for $E$ and $O$ above, where $p.\_1$ and $p.\_2$ denote the first, respectively second, element of the pair $p$.

In the example above, we can prove that
\begin{equation*}
r_{even} = \{((x,wasEven),(x',wasEven')).\ x \ge 0 \land x' = 0 \land (wasEven' \leftrightarrow ((x \mod 2) = 0))\}
\end{equation*}
\begin{equation*}
r_{odd} = \{((x,wasEven),(x',wasEven')).\ x \ge 0 \land x' = 0 \land (wasEven' \leftrightarrow ((x \mod 2) \ne 0))\}
\end{equation*}

=== Remark ===
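To make the fixpoint iteration $F^n(\emptyset)$ concrete, here is a small editorial sketch (not part of the course page; the finite state range and helper names are assumptions made for illustration) that computes the least fixpoint for the move procedure, representing relations as sets of state pairs:

```python
# Editorial sketch: least-fixpoint iteration for the `move` procedure,
# with states (x, y) restricted to a small finite range so the iteration terminates.
STATES = [(x, y) for x in range(-3, 6) for y in range(-3, 6)]

def assume(pred):                      # [[assume(c)]] : keep states satisfying c
    return {(s, s) for s in STATES if pred(*s)}

def assign(f):                         # [[x := e]] : deterministic update
    return {(s, f(*s)) for s in STATES if f(*s) in STATES}

def compose(r1, r2):                   # relation composition r1 ; r2
    return {(a, c) for (a, b) in r1 for (b2, c) in r2 if b == b2}

dec_x = assign(lambda x, y: (x - 1, y))
inc_y = assign(lambda x, y: (x, y + 1))
pos   = assume(lambda x, y: x > 0)
neg   = assume(lambda x, y: not (x > 0))

def F(t):
    return compose(compose(compose(pos, dec_x), t), inc_y) | neg

r, prev = set(), None
while r != prev:                       # iterate F^n(empty set) until it stabilizes
    prev, r = r, F(r)

print(((3, 1), (0, 4)) in r)           # True: x=3, y=1 terminates with x'=0, y'=x+y=4
print(((-2, 5), (-2, 5)) in r)         # True: x<=0 leaves the state unchanged
```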
http://lukaspuettmann.com/page17/ | # "John Maynard Keynes: Hopes Betrayed, 1883–1920", by Robert Skidelski
By chance, I came across the first part of the biography of John Maynard Keynes by Robert Skidelsky and started reading it, not quite expecting to finish. It’s very well written and much more profound than I had hoped for in a biography.
Right from the first sentence, the book shines:
“John Maynard Keynes was not just a man of establishments; but part of the elite of each establishment of which he was a member. […] This position was largely achieved by the force of his dazzling intellect and by his practical genius. But he did not start life without considerable advantages which helped him slip easily into the parts for which his talents destined him. There was no nonsense about his being in the wrong place or having the wrong accent. Of his chief advantages was being born at Cambridge, into a community of dons, the son of John Neville and Florence Ada Keynes.
[…]
When he was five his great-grandmother Jane Elizabeth Ford wrote to him, ‘You will be expected to be very clever, having lived always in Cambridge.’” (p1)
For an economics student, Keynes is worth reading about in any way, but the more I read about Keynes the more fascinating he becomes.
Skidelsky wrote this three-volume biography over almost twenty years with this first volume coming out in 1983 and the last volume in 2000. In this first part, Skidelsky covers Keynes’ upbringing in Cambridge, his education at Eton and Cambridge, his time in London with his Bloomsbury friends and his time at the Treasury during and after World War I.
“He never did take an economics degree. In fact, his total professional training came to little more than eight weeks.” (p166)
and also
“Like most economists at the time, Keynes started teaching economics without having taken a university degree in the subject. […] Compared with today, there was little to learn, and that was not difficult. […] In this way he acquired a firm understanding of a fairly limited range of theory.” (p206)
I was surprised to read that Keynes never traveled further east than Egypt.
Skidelsky explains in length the intellectual development in Cambridge which included Keynes father John Neville Keynes who he sums up with:
“In producing one book on pure, and a second on applied, logic, Neville had circumnavigated the range of his intellectual interests. He was thirty-eight. He lived another sixty years. Apart from a few contributions […] and the odd essay, his pen was henceforth confined to revising previous work, writing his diary and letters, and drafting minutes. […] Perhaps he is to be admired, rather than pitied, for keeping silent when he had nothing to say.” (p64)
People at the time struggled with the gradual loss of theology during the transition from the Victorian to Edwardian period and searched for something to replace it. The names of the people involved are quite familiar to students of economics:
“Both [Marshall and Sidgwick] inherited the problems of collapsing theology and both engaged in essentially the same enterprise: the attempt to find authoritative theology-substitutes. […] Sidgwick was mainly a classicist; Marshall was mainly a mathematician. […] For many intellectuals brought up on Christianity still felt the need for authoritative guidance on how to conduct their lives – which they did not get from economics.” (p32)
“The difference was that Sidgwick had a need, which Keynes never had, to find a way to bring all these things into a rational, coherent, relationship with each other.” (p34)
There’s also a part on the Keynes’s family’s finances:
“The Keynes life-style was sustained by an income which was never less than comfortable, and grew more so.” (p55)
“As Maynard grew up his parents grew steadily more affluent. Capital and earnings went up, while prices went down. […] But what strikes one today is how secure his position was. He just went on getting richer without great effort on his part. That is what the Victorians meant by progress. Neville found his affluence all the more agreeable because his enjoyment of it was unclouded by any sense of guilt.” (p56)
This reminds me of Piketty’s “historical fact” $$r > g$$ (with $$r$$ the real return on wealth and $$g$$ the growth rate of incomes). Piketty cites Jane Austen, who implicitly states that real rates are about 5% which allows for a comfortable life of existing wealth.
It is amazing, how much of Keynes’ conversation is documented and how much of his life can be reconstructed. I recently came across this article which states that we are left with 20,000 of Goethe’s letters. How many of us keep records of our emails or our Facebook and Whatsapp conversations?
I liked this part on Keynes’ mind:
“One never feels that he had a sense of a single current of history carrying the world forward to the natural order described by the classical economists, or some other kind of utopia. Rather he was always impressed, some would say over-impressed, by the fragility of the civilisation inherited from the Victorians, by the feeling that it was an exceptional episode in human history.” (p92)
And this reminds me of many people I know who are very good at what they do:
“Once again [writing an essay at Eton] he was showing his ability to get totally absorbed in a subject remote from his official interests.” (p113)
Reading about Bloomsbury I’m reminded of my time at UCL when I often passed Keynes’ house at Gordon Square. Consider these bits about the Bloomsbury Group for example:
“For it was [G. E.] Moore who tried to redefine the content of ethical discussion by insisting that the primary question was not ‘what ought I to do’ but ‘what is good’; and that the primary question could be answered only by reference to some conception of the good life. The virtues, Moore said, have no value in themselves. They are valuable only as a means to what is good, and must be rationally proportioned to it. If Bloomsbury can be defined by a common attitude of mind – as it surely can – this is it.” (p245)
“Bloomsbury, it is true, was devoid of Christian belief. […] And there is no doubt that it encouraged, thought it did not entail, political passivity.” (p246)
“Bloomsberries, as they called themselves, might be curious about outsiders. They were also frightened of them, and could be chilling to them. […] Bloomsbury was a particular expression of, and gave direction to, the ‘revolt against the Victorians’.” (p248)
I was most touched by the parts on Keynes’ and his Bloomsbury friends’ response to World War I.
“Although the Archduke […] had been assassinated on 28 June, only a month later was there a first reference [in Keynes diaries] to the worsening international situation. Characteristically it was in the context of Stock Exchange speculation. […] Next day Germany invaded Belgium. On 4 August 1914 England declared war on Germany, and Bloomsbury’s – and Maynard’s – world collapsed.” (p285)
Keynes gradually came to oppose British participation in the war and so did his Bloomsbury friends. He considered quitting his job at the Treasury, but he thought it was better to be inside the circle of knowledge. He wrote to his mother that he would resign from the Treasury only if they started
[…] “torturing my friends”. (p324)
Tyler Cowen wrote in 2005:
“Robert Skidelsky [and Sylvia Nasar] raised the bar for economic biographies some time ago.”
I cannot compare what people used to expect from biographies, but the quality of this took me by surprise and makes me update my assessment of the genre in general.
# Interview with David Card and Alan Krueger
I liked this interview with David Card and Alan Krueger which I found here. Some good bits:
Krueger: […] So I encourage economists to use a variety of different research styles. What I think on the margin is more informative for economics is the type of quasi-experimental design that David and I emphasize in the book.
But the other thing I would say, which I think is underappreciated, is the great value of just simple measurement. Pure measurement. And many of the great advances in science, as well as in the social sciences, have come about because we got better telescopes or better microscopes, simply better measurement techniques.
In economics, the national income and product accounts is a good example. Collecting data on time use is another good example. And I think we underinvest in learning methods for collecting data—both survey data, administrative data, data that you can collect naturally through sensors and other means.
This reminds me of a talk by Hal Varian in Bonn last year, in which he said that one of the new frontiers in social science is to make use the data that is created when we use our smartphone or shop online.
And I knew that Scandinavia was famous for its administrative matched data, but I didn’t know that Germany stands out, too:
Krueger: We’ve long been behind Scandinavia, which has provided linked data for decades. And we’re now behind Germany, where a lot of interesting work is being done.
And this was interesting, although a bit general:
Krueger: And we haven’t caught up in terms of training students to collect original survey data. I’ve long thought we should have a course in economic methods — […] — and cover the topics that applied researchers really rely upon, but typically are forced to learn on their own. Index numbers, for example.
# "Mankind's single greatest waste of time and energy"
(No, not a PhD.)
In 300 BC, a new plow was developed in China. It required the effort of only one ox, where before several were needed for the same work. It had a heavier design, but overall reduced the necessary effort.
The stunning thing is that other parts of the world, and in particular Europe, did not use this better design until the 17th century AD. When it arrived, the adoption of this type of plow was important for Europe’s agricultural revolution.
Although the invention seems obvious and all necessary materials had been around for a long time, people did not come up with it. Maybe technological progress isn’t really as linear and inevitable as it seems in retrospect?
I found this in “1491” by Charles C. Mann. He uses this example of the Europeans' failure to invent or adopt the Chinese plow to put into perspective that the Maya used the wheel for toys, but not to grind maize or to carry burdens. Mann takes this from Robert Temple's “The Genius of China” and, expanding on Mann's original citation, we find (with added emphasis):
“Of all the advantages which China had for centuries over the rest of the world, the greatest was perhaps the superiority of its plows. Nothing underlines the backwardness of the West more than the fact that for thousands of years, millions of human beings plowed the earth in a manner which was so inefficient, so wasteful of effort, and so utterly exhausting that this deficiency of plowing may rank as mankind’s single greatest waste of time and energy.” (p17)
“For farmers, this was like going from the bow and arrow to the gun.” (p19)
“The increased friction meant that huge multiple teams of oxen were required, whereas Chinese plowmen could make do with a single ox, and rarely more than two. Europeans had to pool their resources, and waste valuable time and money in getting hold of six to eight oxen to plow the simplest field. […] It is no exaggeration to say that China was in the position of America or Western Europe today, and Europe was in the position of, say, Morocco.” (p20)
The following would be interesting to study:
1. One could first look at why people didn’t adopt the better plow much sooner. Maybe we have records on which cities or regions adopted this plow first. Do places, that adopted first, differ? I might expect larger urban places with more diverse populations that traded heavily to adopt this innovation first. This paper leads me to think that “openness to disruption” might have helped.
2. And there could be occasions when the arrival of the new technology could be treated as a “quasi-experiment”. Can we see the effects of new technology in action? How did output, wages and profits react?
Apparently it was the Dutch who first brought the plow back from China and then brought it to England as Dutch laborers working there. And from there it next went to America and France. And then:
“By the 1770s it was the cheapest and best plow available. […] There was no single more important element in the European agricultural revolution.” (p20)
# Reasonable Latex templates
For my PhD, I write papers and presentations and like many people in economics I use Latex for both.
You can find the codes for my templates here.
The preambles are a bit bloated, but they do what I want them to. You can use them and do with them what you like without attribution.
## Paper template (example)
Initially I started with Fabrizio Zilibotti’s template for dissertation theses. I took some inspiration from the style of the Journal of Economic Perspectives. I think some of the preamble might come from Rob Hyndman.
## Presentation template (example)
I started from the “Dresden” theme and changed it a bit, by removing some of the unnecessary information like affiliations and using less strong colors. Also I find a black side at the end (in the PowerPoint style) quite useful to end presentations. I took the blue color from colorbrewer2 (recommended here).
## Report template (example)
This template is nice for short reports or essays. It uses the tufte-latex class and I tweaked it a little (aligned text, spaces between paragraphs).
## CV template (example)
This uses Chris Paciorek’s template. I changed the top a bit and added site numbers and a date that automatically updates to the current month and year.
# Reading the original Lonely Planet "Across Asia on the Cheap" (1975)
I enjoy backpacking greatly and I often use the Lonely Planet’s travel guides to do so. I read Maureen and Tony Wheeler’s history of how they came to write the first Lonely Planet in which they tell how they travelled from London to Australia. This made me curious to read the original guide from 1975 and it turns out the electronic version is freely available on Amazon.
Here are my thoughts on it:
• Many countries back then turned you away at the border if you looked like a hippie.
• Back then, Bali was a real island paradise.
• The change in where it’s safe to travel to then and now is quite drastic. Back then, Iran, Afghanistan and Pakistan were safe to go to. People thought of them as a little boring and mostly rushed through. But Kabul, especially, was exciting. Again, mostly for the drugs. Compare that to Southeast Asia, were in those days it was only safe to go to Singapore, Malaysia, Thailand and Burma.
• Nowadays, the guides are more careful with their language.
• Prices in nominal terms seem to be ridiculously cheap to us, but in real terms they were also very cheap.
• Selling blood was a good source of income while travelling. Kuwait had the highest prices. Is this still the case?
• The highway through Yugoslavia (the “Autoput”) was what my father told me about it: quite dangerous.
• Money and communication were a much, much bigger hassle back then.
# "Superforecasters", by Philip Tetlock and Dan Gardner
How good are people at forecasting political or economic events? Why are some people better than others?
Philip Tetlock and Dan Gardner have written “Superforecasting” based on a tournament started in 2011 in which they have 2800 people predict a number of events. They then scored how they did and analyze the results.
Tetlock is famous for his 2005 book “Expert Political Judgment” in which he summarizes a 20 year study in which pundits, researchers, analysts and political experts forecasted events. He finds overall disappointing forecasting performance, but is able to draw a clear line between “foxes” (who are good forecasters) and “hedgehogs” (who are not). For this metaphor, he draws on an essay by Isaiah Berlin with reference to the ancient idea of: “The Fox knows many things, but the hedgehog knows one thing well.”
Hedgehogs are guided by the one thing they know – their ideology – and they form their forecasts to fit into their way of thinking. But foxes consider different possible explanations.
I was intrigued when I first read Tetlock’s 2005 book, because it seemed to play with the debate in economics on how “structural” vs. “reduced-form” our research should be. A structural model is backed by theory and tries to explain why things happen. A reduced-form model imposes less theory and tries to find patterns in data and predict what comes next, but it usually cannot explain why things happened.
Tetlock and Gardner’s new book does not resolve this conflict. They argue, again, that those people are good at prediction who produce good ballpark estimates (what they call “fermi-tizing”) and are carefully adjusting their probability estimates when new information becomes available. I liked this bit:
“Confidence and accuracy are positively correlated. But research shows we exaggerate the size of the correlation.” (p138)
and
“[…] granularity predicts accuracy […].” (p145)
They criticize after-the-fact explanations with: “Yeah, but any story would fit.” This is the basic criticism of structural models. Any number of models could fit your data points. How do we know which is right?
They say:
“Religion is not the only way to satisfy the yearning for meaning. Psychologists find that many atheists also see meaning in the significant events in their lives, and a majority of atheists said they believe in fate, defined as the view that “events happen for a reason and that there is an underlying order to life that determines how events turn out.” Meaning is a basic human need. As much research shows, the ability to find it is a marker of a healthy, resilient mind.” (p148)
In my opinion, the authors don't take the necessity for models seriously enough: We need models and we want them. And, actually, we will always have a model in our mind, even if we don't make it explicit and admit it. Even Nate Silver (who is famous for his accuracy in prediction) says:
“For example, I highly prefer […] regression-based modeling to machine learning, where you can’t really explain anything. To me, the whole value is in the explanation.”
And in fact the authors become more humble near the end:
“In reality, accuracy is often only one of many goals. Sometimes it is irrelevant. […] ‘kto, kogo?’” (p245)
This last reference is Lenin saying: “Who did what to whom?”
They describe how good forecasters average the estimates they derive from different methods. For example, taking the outside view “how likely is it that negotiations with terrorists ever work?” and then the inside view “what are the motivations of the Nigerian government and what drives Boko Haram?”.
But that only works because Tetlock's forecasts are quite specific. They're relevant, yet they exclude a large number of things. Off the top of my head, here's a list of what they didn't forecast:
• Long-run events: “What will the Gini coefficient in the United States be in 2060?”, “Will China have at least 80% of the number of airport carriers of the United States in 2100?”, “Will the growth rate of German real GDP per capita be above 1% per annum from 2020-2060?”, “How likely is it that there will be a virus that kills at least 10% of the global population within 3 years from 2020-2150?”
• High-frequency events: “How should we trade this stock in the next 10 seconds?”
• Predictions or classifications involving a large number of objects: “Can we identify clusters in these 3 million product descriptions?”, “Do these 10 million pictures show a cat?”
The first of these events might be the most relevant of all, but they are also the most difficult to form an expectation about. The questions are unanswerable if we don’t want to wait and if we did wait Tetlock’s superforecasters might well be good at forecasting them. So I have to grant them that.
The second kind (“high-frequency prediction”), I actually find the least relevant and interesting. I think here this would really just be number-crunching, pattern-matching, so “reduced-form” in its purest form and means writing or applying algorithms to do the work. Still, we don’t really learn anything about this kind of forecasting from Tetlock’s books, but it’s what a lot of people in finance think of when they hear “prediction”.
The third has recently become more relevant, but much more so in the realm of machine learning analysts and statisticians. They are the kind of problems one might find on kaggle. Again, they’re prediction but Tetlock’s recipes don’t work here.
I like the idea of ballpark estimates and “fermitization”, but something there irritated me. Isn't their whole point about taking all information into account and not sticking with narratives, but to instead make careful probabilistic estimates? Tetlock and Gardner discuss the example of how many piano-tuners there are in Chicago. They then go through a textbook example of how to answer a consulting interview question. They come up with an estimate of 62 and present a highly dubious empirical estimate of 80. A number of things strike me as odd: First, they next go on to say how the empirical frequencies of events should be our baseline. So shouldn't we first have googled for it and seen, “ok, there seem to be about 80 of these guys in Chicago”. Then, in the next step, we could think about where our estimate might have gone wrong. Maybe not every one of them has a website? Maybe we didn't find all? Maybe there's double counting? And then we could adjust for that. Or, you could do both, their Fermi “structural” estimate and the Googling “reduced-form” estimate, and then average both using weights that depend on how relatively certain you are.
Their iterated statement that we need to measure what we are talking about, reminds me of Thomas Piketty’s, Abhijit Banerjee and Esther Duflo’s and Angus Deaton’s books who also spend large portions of their texts arguing that we need to have good data about the things we care about. I completely agree.
I also liked their discussion on how all human beings need narratives and how that might even be good for our mental health and resilience. And I do suppose I would be miserable as a superforecaster. I already devour large amounts of news, blogs and more every day, but I dread getting updates by Google News about all the topics that I would have to cover. In fact, I did consider taking part in Tetlock’s superforecasting experiment. Back in 2011, it went through the blogs and I came across it. I looked at it a bit and I thought I might enjoy it, but really I didn’t want to commit so much time to something like that. With hindsight, I’m glad I didn’t participate.
He also discusses Keynes’ citation:
“When the facts change, I change my mind. What do you do, Sir?”
This sounds like a really foxish, Bayesian statement. I recently came across the assessment by Marc Blaug (in the introduction to his book) that Keynes was initially a Fox and became a Hedgehog. Tetlock then presents the nice twist that it’s unknown if Keynes really stated that. But he was ready to admit it, because it wasn’t fundamental to his (Tetlock’s) identity.
I also like the idea of a “pre-mortem” (p202), so thinking about reasons that my project might fail. (But as for research projects, maybe it’s better to actively resist this, otherwise you never get going.)
He ends with a plea for opposing parties to get together and use their different view to come up with a better forecast:
“The key is precision.” (p268)
The problem here is that we are talking about conditional vs. unconditional forecasts. Different groups want to change that condition. Also, some forecasts are political – such as those concerning GDP or population size – where the forecast itself might even have an impact on what will happen.
Last, I also agree with Tetlock's thank you note:
“[…] I thank my coauthor […] and editor […] who helped me tell my story far better than I could have – and who had to wrestle for two years against my professional propensity to “complexify” fundamentally simple points.” (p288)
When you compare Tetlock's two books on this topic, this last is much more pleasant to read without losing in accuracy or depth.
http://ask.cvxr.com/t/express-log-sum-of-the-squared-exponential-in-cvx/10559 | # Express -log(sum of the squared exponential) in cvx
Hi, I want to express -log(exp(-x.^2)+exp(-2*x.^2)). But I cannot find a way that CVX accepts it. Can anyone help?
log-convex can be added, so log(exp(x^2) + exp(2*x^2)) is allowed.
log-concave can’t be added, so log(exp(-x^2) + exp(-2*x^2)) is not allowed.
Read the answer by mcg at Log of sigmoid function to learn the CVX rules for log-convex and log-concave. This material is not really addressed in the CVX Users’ Guide.
Thanks so much for your reply. But I think **-**log(exp(-x.^2)+exp(-2*x.^2)) is convex at least for x\in [-20, 20].
You think? Or you’ve proved?
I plot it with MATLAB.
Did you read the link in my previous post? That is not a proof.
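For what it's worth, here is a quick finite-difference check (an editorial addition, not part of this thread; the grid and step size are arbitrary choices) of the claim that f(x) = -log(exp(-x^2)+exp(-2*x^2)) is convex on [-20, 20]. A positive numerical second derivative supports the claim, though, as noted above, it is still not a proof:

```python
# Editorial sketch: numerically estimate f''(x) for f(x) = -log(exp(-x^2) + exp(-2 x^2)).
# A positive second derivative on a grid supports (but does not prove) convexity there.
import numpy as np

def f(x):
    # Algebraically identical form x**2 - log(1 + exp(-x**2)), written with log1p to avoid underflow.
    return x**2 - np.log1p(np.exp(-x**2))

x = np.linspace(-20, 20, 400001)
h = x[1] - x[0]
second = (f(x[2:]) - 2 * f(x[1:-1]) + f(x[:-2])) / h**2   # central differences
print(second.min())   # positive everywhere on the grid (roughly 1.4 at its minimum)
```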
https://plainmath.net/18742/find-vertical-asymptotes-rational-functions-rational-function-equal | Question
Rational functions
(a) How do you find vertical asymptotes of rational functions? (b) Let s be the rational function $$s(x) = \frac{a_{n}x^{n} + a_{n-1}x^{n-1} + \cdots + a_{1}x + a_{0}}{b_{m}x^{m} + b_{m-1}x^{m-1} + \cdots + b_{1}x + b_{0}}$$
How do you find the horizontal asymptote of s? (c) Find the vertical and horizontal asymptotes of $$f(x) = \frac{5x^{2}+3}{x^{2}-4}$$
2021-05-05
a) We find the vertical asymptotes of a rational function by finding the real zeros of the denominator; that is, the line x = a is a vertical asymptote of the rational function when a is a zero of the denominator.
b) Let s(x) be a rational function
$$s(x) = \frac{a_{n}x^{n} + a_{n-1}x^{n-1} + \ldots + a_{1}x + a_{0}}{b_{m}x^{m} + b_{m-1}x^{m-1} + \ldots + b_{1}x + b_{0}}$$
then
1. If n < m, then s(x) has horizontal asymptote y = 0,
2. If n = m, then s has horizontal asymptote $$y = \frac{a_{n}}{b_{m}}$$
3. If n > m, then s has no horizontal asymptote.
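One way to see where these three cases come from (an editorial note, not part of the original answer) is to divide the numerator and denominator by $$x^{m}$$ and let $$x \to \pm\infty$$:
$$\lim_{x\to\pm\infty} s(x) = \lim_{x\to\pm\infty} \frac{a_{n}x^{n-m} + a_{n-1}x^{n-1-m} + \cdots + a_{0}x^{-m}}{b_{m} + b_{m-1}x^{-1} + \cdots + b_{0}x^{-m}}$$
Every term with a negative power of $$x$$ vanishes. If $$n < m$$ the numerator tends to 0; if $$n = m$$ it tends to $$a_{n}$$, giving $$y = \frac{a_{n}}{b_{m}}$$; and if $$n > m$$ the surviving term $$a_{n}x^{n-m}$$ is unbounded, so there is no horizontal asymptote.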
c) We want to write the denominator in factored form.
$$f(x) = \frac{5x^{2}+3}{x^{2}-4}$$
Since we can express the denominator in the factored form
$$x^{2}-4=(x-2)(x+2),$$
we can see that its zeros are 2 and -2; therefore, from part a), the lines x = 2 and x = -2 are vertical asymptotes.
We can see that n = 2 and m = 2, i.e. n = m. So, from part b), we obtain that the horizontal asymptote is $$y = \frac{5}{1} = 5.$$
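As a quick check of part c) (an editorial addition using sympy, not part of the original answer):

```python
# Editorial check of part (c): vertical asymptotes from the denominator's zeros,
# horizontal asymptote from the limit at infinity.
from sympy import symbols, limit, solve, oo

x = symbols('x')
f = (5*x**2 + 3) / (x**2 - 4)

print(solve(x**2 - 4, x))      # [-2, 2]  -> vertical asymptotes x = -2 and x = 2
print(limit(f, x, oo))         # 5        -> horizontal asymptote y = 5
print(limit(f, x, -oo))        # 5
```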
http://mathoverflow.net/revisions/104685/list
We know it converges for any prime p. I just want to know how to compute its exact value: $$\prod_{n=1}^{\infty} (1-p^{-n})$$
http://www.mzan.com/article/47683660-is-there-better-way-to-create-lazy-variable-initialization.shtml
# Is there better way to create lazy variable initialization?
netanelrevah (asked 2017-12-06):
I want to create code that initializes a variable only when I really need it. But initializing in the regular way:

    var = None
    if var is None:
        var = factory()
    var2 = var

makes too much noise in the code. I tried to create a fast solution but I feel there is a better option. This is my solution that is fast but can't get parameters and uses defaultdict for this:

    from collections import defaultdict

    def lazy_variable(factory):
        data = defaultdict(factory)
        return lambda: data['']

    var = lazy_variable(a_factory)
    var2 = var()

More questions: is there a fast Python container that holds only one variable? Is there a way to return the value without calling the function with parentheses?

EDIT: Please consider performance. I know I can create a class that can have this behavior, but it is slower than the simple solution and also the defaultdict solution. Trying some of the solutions:

define:

    import cachetools.func
    import random

    @cachetools.func.lru_cache(None)
    def factory(i):
        return random.random()

and run:

    %%timeit
    for i in xrange(100):
        q = factory(i)
        q = factory(i)

got:

    100 loops, best of 3: 2.63 ms per loop

naive:

    %%timeit
    for i in xrange(100):
        a = None
        if a is None:
            a = random.random()
        q = a
        q = a

got:

    The slowest run took 4.71 times longer than the fastest. This could mean that an intermediate result is being cached.
    100000 loops, best of 3: 14.8 µs per loop

I'm not sure what was cached.

defaultdict solution:

    %%timeit
    for i in xrange(100):
        a = lazy_variable(random.random)
        q = a()
        q = a()

got:

    The slowest run took 4.11 times longer than the fastest. This could mean that an intermediate result is being cached.
    10000 loops, best of 3: 76.3 µs per loop

Tnx!
Michael Butscher (replied 2017-12-06):
A simple container (but which needs the parentheses nevertheless) can be done e.g. like this:

    class Container:
        UNDEF = object()

        def __init__(self, factory):
            self.data = Container.UNDEF
            self.factory = factory

        def __call__(self):
            if self.data is Container.UNDEF:
                self.data = self.factory()
            return self.data

    # Test:
    var = Container(lambda: 5)
    print(var())
    print(var())
zwer (replied 2017-12-06):
If we're talking about instance variables, then yes - you can write your own wrapper and have it behave the way you want:

    class LazyVar(object):
        def __init__(self, factory, *args, **kwargs):
            self.id = "__value_" + str(id(self))  # internal store
            self.factory = factory
            self.args = args
            self.kwargs = kwargs

        def __get__(self, instance, owner):
            if instance is None:
                return self
            else:
                try:
                    return getattr(instance, self.id)
                except AttributeError:
                    value = self.factory(*self.args, **self.kwargs)
                    setattr(instance, self.id, value)
                    return value

    def factory(name):
        print("Factory called, initializing: " + name)
        return name.upper()  # just for giggles

    class TestClass(object):
        foo = LazyVar(factory, "foo")
        bar = LazyVar(factory, "bar")

You can test it as:

    test = TestClass()
    print("Foo will get initialized the moment we mention it")
    print("Foo's value is:", test.foo)
    print("It will also work for referencing, so even tho bar is not initialized...")
    another_bar = test.bar
    print("It gets initialized the moment we set its value to some other variable")
    print("They, of course, have the same value: {} vs {}".format(test.bar, another_bar))

Which will print:

    Foo will get initialized the moment we mention it
    Factory called, initializing: foo
    Foo's value is: FOO
    It will also work for referencing, so even tho bar is not initialized...
    Factory called, initializing: bar
    It gets initialized the moment we set its value to some other variable
    They, of course, have the same value: BAR vs BAR

Unfortunately, you cannot use the same trick for globally declared variables as __get__() gets called only when accessed as instance vars.
timgeb (replied 2017-12-06):
Well you could simply access locals() or globals() and type

    var2 = locals().get('var', factory())

but I have never been in a situation where that would be useful, so you should probably evaluate why you want to do what you want to do.
Paul Panzer (replied 2017-12-06):
If I understand you correctly then some of the functionality you are interested in is provided by functools.lru_cache:

    import functools as ft

    @ft.lru_cache(None)
    def lazy():
        print("I'm working soo hard")
        return sum(range(1000))

    lazy()  # 1st time factory is called
    # I'm working soo hard
    # 499500
    lazy()  # afterwards cached result is used
    # 499500

The decorated factory may also take parameters:

    @ft.lru_cache(None)
    def lazy_with_args(x):
        print("I'm working so hard")
        return sum((x+i)**2 for i in range(100))

    lazy_with_args(3.4)
    # I'm working so hard
    # 363165.99999999994
    lazy_with_args(3.4)
    # 363165.99999999994

    # new parameters, factory is used to compute new value
    lazy_with_args(-1.2)
    # I'm working so hard
    # 316614.00000000006
    lazy_with_args(-1.2)
    # 316614.00000000006

    # old value stays in cache
    lazy_with_args(3.4)
    # 363165.99999999994
netanelrevah (replied 2017-12-12):
Ok, I think I found a nice and fast solution using generators:

    def create_and_generate(creator):
        value = creator()
        while True:
            yield value

    def lazy_variable(creator):
        generator_instance = create_and_generate(creator)
        return lambda: next(generator_instance)

Another fast solution is:

    def lazy_variable(factory):
        data = []
        def f():
            if not data:
                data.extend((factory(),))
            return data[0]
        return f

but I think the generator is more clear.
https://physics.stackexchange.com/questions/463062/impulse-imparted-during-elastic-collision/463090 | # Impulse imparted during elastic collision [closed]
Let's say there are two objects: a ball of 1 kg and a very heavy block. The light mass has a velocity of $$12$$ $$ms^{-1}$$ towards the right. The heavy block has a velocity of $$10$$ $$ms^{-1}$$ towards the right. The light mass collides elastically with the heavy block. What is the impulse imparted by the heavy block? I am facing a problem calculating the change in velocity of the ball on hitting the moving block, which is required to calculate the impulse.
My Attempt: As this is an elastic collision
Coefficient of restitution $$=1$$
Therefore,
$$1=\frac{12-10}{10-v_{ball}}$$
Which gives final velocity of ball to be $$8$$ $$ms^{-1}$$ towards right.
Calculating change in momentum of ball that is momentum imparted to ball during collision:
Initial velocity of ball =$$12$$ $$ms^{-1}$$ towards right.
Final velocity of ball = $$8$$ $$ms^{-1}$$ towards right.
Velocity of approach of ball=$$(12-10)=2$$ $$ms^{-1}$$ towards right.
In the problem below velocity with which water hits the plate is taken as velocity of approach
So while calculating impulse imparted to ball(1kg) by the block, Should we take
$$1.$$ m(final velocity - velocity with which ball hits the block)
$$1(8-2)=6$$ $$kg$$ $$ms^{-1}$$ towards right.
$$2.$$ m(final velocity - initial velocity)
$$1(8-12)=4$$ $$kg$$ $$ms^{-1}$$ towards left.
I think that the $$(1)$$ can't be true as block would apply a pushing force towards left thus imparting impulse towards left. Which means $$(2)$$ should be correct. Am I wrong?
In such a case where $$(2)$$ is correct why was velocity of approach used to calculate force on wall in the water jet problem and why it is not applicable in block and ball case. I am facing a problem in calculating change in velocity of ball when it hits the moving block.
Your confusion comes from mixing two different frames of reference.

In the frame of reference where the ball initially travels at $$12 \frac{m}{s}$$, its final velocity is $$8 \frac{m}{s}$$ and the impulse is $$m\Delta v = 1 kg (8 - 12) \frac{m}{s} = -4 \frac{kg \cdot m}{s}$$ (or $$4 \frac{kg \cdot m}{s}$$ to the left as you said).

When you calculate using the "velocity of approach" of the ball, you're really just in a frame of reference where the large mass is taken to be stationary, so that all velocity is relative to the mass. The velocity of an object in this frame in terms of the old one is given by $$v_{new} = v_{old} - 10 \frac{m}{s}$$

In this frame, the ball is moving at $$(12 - 10) \frac{m}{s} = 2 \frac{m}{s}$$ to the right. The velocity of the ball when it rebounds is no longer $$8 \frac{m}{s}$$; that was in the old frame of reference. In this frame of reference, it is $$(8 - 10) \frac{m}{s} = -2 \frac{m}{s}$$, so the impulse is $$1 kg (-2 - 2) \frac{m}{s} = -4 \frac{kg \cdot m}{s}$$, the same as the original answer.

You can do calculations in any frame of reference, you just have to be consistent and not switch around in the middle.
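A tiny numerical restatement of this point (an editorial addition, not part of the thread): computing the ball's impulse in the ground frame and in the block's frame gives the same number, as long as both velocities in each difference come from the same frame.

```python
# Editorial sketch: the impulse on the 1 kg ball is the same in both frames.
m = 1.0                          # kg
u_ball, u_block = 12.0, 10.0     # initial velocities, m/s (rightward positive)
v_ball = 8.0                     # final ball velocity from the elastic-collision condition

# Ground frame:
impulse_ground = m * (v_ball - u_ball)                                # -4.0 kg*m/s (to the left)

# Frame moving with the heavy block (subtract 10 m/s from every velocity):
impulse_block_frame = m * ((v_ball - u_block) - (u_ball - u_block))   # also -4.0

print(impulse_ground, impulse_block_frame)
```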
https://socratic.org/questions/what-is-the-slope-of-a-line-that-is-perpendicular-to-3y-2x-6 | # What is the slope of a line that is perpendicular to 3y+2x=6?
Apr 2, 2018
$m = \frac{3}{2}$
#### Explanation:
The slope of a line is the negative reciprocal of the slope of any line perpendicular to it.
This means $m_{1} = - \frac{1}{m_{2}}$, where $m_{1}$ and $m_{2}$ are the two slopes.
Through manipulation of the equation we change it to $y = - \frac{2}{3} x + \frac{6}{3}$
The $- \frac{2}{3}$ in front of the $x$ represents the slope of the line.
Using the idea from earlier, we take the reciprocal of the gradient and multiply it by $-1$.
$- \frac{2}{3} = - \frac{1}{m}$ (cross multiply)
$2 m = 3$ (divide by the 2)
$m = \frac{3}{2}$
Apr 2, 2018
$\frac{3}{2}$
#### Explanation:
If two lines are perpendicular then the result of multiplying the two gradients together always equals -1
Rearrange the equation to find the gradient:
$3 y + 2 x = 6$
$\implies$ $3 y = - 2 x + 6$
$\implies$ $y = - \frac{2}{3} x + 2$
Gradient = $- \frac{2}{3}$; its negative reciprocal is $\frac{3}{2}$
$- \frac{2}{3} \times \frac{3}{2} = - 1$
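The negative-reciprocal relationship is easy to verify with exact arithmetic; a small illustrative snippet:

```python
from fractions import Fraction

m1 = Fraction(-2, 3)   # slope of 3y + 2x = 6 after rearranging to y = -2/3 x + 2
m2 = -1 / m1           # negative reciprocal: slope of any perpendicular line
print(m2)              # 3/2
assert m1 * m2 == -1   # perpendicular slopes multiply to -1
```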
https://dralb.com/2019/07/30/gram-schmidt-orthonormalization/ | # Gram-Schmidt Orthonormalization
Now that we have defined inner product spaces and angles between vectors, we can determine how much of a given vector lies in the direction of another vector. As we worked on this, we noted that the process was significantly simplified if we worked with unit vectors. Furthermore, if we wanted to break a vector into components in different directions, it was helpful if we had started with orthogonal vectors. As such, we will want to be able to find an orthonormal basis of a vector space if we have a generating set of vectors. We use the Gram-Schmidt Orthonormalization process to do this.
## Example
Let $$S=span(\left\{(1,1,2),(1,1,1),(0,1,1)\right\})$$ be a subspace of $$\mathbb{R}^{3}$$ with the inner product defined as the dot product. Find an orthonormal basis for $$S$$ using the Gram-Schmidt Orthonormalization process.
### First vector
As we work through the Gram-Schmidt Orthonormalization process for this vector space, we will focus on one step at a time. The first step is to find a unit vector in the direction of the first given vector. We will, therefore, let $$\mathbf{v}_{1}=(1,1,2)$$. In order to find the unit vector in this direction, we get
\begin{align*}
\mathbf{w}_{1}=\frac{(1,1,2)}{||(1,1,2)||}
\end{align*}
where $$||\mathbf{v}||$$ is the norm of $$\mathbf{v}$$.
In order to find the norm of $$\mathbf{v}_{1}$$, we note that by definition
\begin{align*}
||(1,1,2)||&=\sqrt{(1,1,2)\cdot (1,1,2)} \\
&=\sqrt{1^{2}+1^{2}+2^{2}} \\
&=\sqrt{6}.
\end{align*}
We now find that the first vector in the orthonormal basis is
\begin{align*}
\mathbf{w}_{1}&=\frac{(1,1,2)}{||(1,1,2)||} \\
&=\frac{(1,1,2)}{\sqrt{6}} \\
&=\left(\frac{1}{\sqrt{6}},\frac{1}{\sqrt{6}},\frac{2}{\sqrt{6}}\right).
\end{align*}
### Second Vector
As we work on finding the second vector for our orthonormal basis, we will need to perform two different tasks. First, we will need to make sure that the vector we find is orthogonal to the first vector in our basis. Secondly, we need to ensure that we are left with a unit vector.
#### Orthogonal
Here, we will start with the second vector in $$S$$, $$\mathbf{v}_{2}=(1,1,1)$$. In order to find a vector which is orthogonal to $$\mathbf{w}_{1}$$, we will take $$\mathbf{v}_{2}$$ and subtract off the portion of $$\mathbf{v}_{2}$$ in the direction of $$\mathbf{w}_{1}$$. This will then leave the portion of $$\mathbf{v}_{2}$$ that is orthogonal to $$\mathbf{w}_{1}$$. That is, we will define
\begin{align*}
\mathbf{u}_{2}=\mathbf{v}_{2}-\text{proj}_{\mathbf{w}_{1}}(\mathbf{v}_{2}).
\end{align*}
Recall that, in general,
\begin{align*}
\text{proj}_{\mathbf{w}}(\mathbf{v})=\frac{\langle \mathbf{v},\mathbf{w} \rangle}{\langle \mathbf{w}, \mathbf{w} \rangle}\mathbf{w}.
\end{align*}
However, since $$\mathbf{w}_{1}$$ is a unit vector, we know that $$\langle \mathbf{w}_{1}, \mathbf{w}_{1} \rangle=1$$, so we can simplify this to,
\begin{align*}
\text{proj}_{\mathbf{w}_{1}}(\mathbf{v}_{2})=\langle \mathbf{v}_{2},\mathbf{w}_{1} \rangle \mathbf{w}_{1}.
\end{align*}
We can now find that
\begin{align*}
\text{proj}_{\mathbf{w}_{1}}(\mathbf{v}_{2})&=\langle \mathbf{v}_{2},\mathbf{w}_{1} \rangle \mathbf{w}_{1}\\
&=\langle (1,1,1), \left(\frac{1}{\sqrt{6}},\frac{1}{\sqrt{6}},\frac{2}{\sqrt{6}}\right)\rangle \left(\frac{1}{\sqrt{6}},\frac{1}{\sqrt{6}},\frac{2}{\sqrt{6}}\right) \\
&=\left((1,1,1) \cdot \left(\frac{1}{\sqrt{6}},\frac{1}{\sqrt{6}},\frac{2}{\sqrt{6}}\right)\right)\left(\frac{1}{\sqrt{6}},\frac{1}{\sqrt{6}},\frac{2}{\sqrt{6}}\right) \\
&=\left(1*\frac{1}{\sqrt{6}}+1*\frac{1}{\sqrt{6}}+\frac{2}{\sqrt{6}}\right)\left(\frac{1}{\sqrt{6}},\frac{1}{\sqrt{6}},\frac{2}{\sqrt{6}}\right) \\
&=\frac{4}{\sqrt{6}}\left(\frac{1}{\sqrt{6}},\frac{1}{\sqrt{6}},\frac{2}{\sqrt{6}}\right) \\
&=\left(\frac{4}{6},\frac{4}{6},\frac{8}{6}\right) \\
&=\left(\frac{2}{3},\frac{2}{3},\frac{4}{3}\right).
\end{align*}
Now that we know the portion of $$\mathbf{v}_{2}$$ in the direction of $$\mathbf{w}_{1}$$ we can find the portion of $$\mathbf{v}_{2}$$ orthogonal to $$\mathbf{w}_{1}$$. Here, we get that
\begin{align*}
\mathbf{u}_{2}&=(1,1,1)-\left(\frac{2}{3},\frac{2}{3},\frac{4}{3}\right)\\
&=\left(\frac{1}{3},\frac{1}{3},-\frac{1}{3}\right).
\end{align*}
#### Unit Vector
We have now found the portion of $$\mathbf{v}_{2}$$ orthogonal to $$\mathbf{w}_{1}$$, however, the norm of the resulting vector is not $$1$$. Therefore, we will find a unit vector in this direction. This will be our $$\mathbf{w}_{2}$$. Here, we get
\begin{align*}
\mathbf{w}_{2}=\frac{\mathbf{u_{2}}}{||\mathbf{u}_{2}||}.
\end{align*}
As we work finding this, we will find the norm first. Here, we get
\begin{align*}
||\mathbf{u}_{2}||&=||\left(\frac{1}{3},\frac{1}{3},-\frac{1}{3}\right)|| \\
&=\sqrt{\left(\frac{1}{3},\frac{1}{3},-\frac{1}{3}\right) \cdot \left(\frac{1}{3},\frac{1}{3},-\frac{1}{3}\right)} \\
&=\sqrt{\left(\frac{1}{3}\right)^{2}+\left(\frac{1}{3}\right)^{2}+\left(\frac{-1}{3}\right)^{2}} \\
&=\sqrt{\frac{1}{3}} \\
&=\frac{1}{\sqrt{3}}.
\end{align*}
We, therefore, get that
\begin{align*}
\mathbf{w}_{2}&=\frac{\mathbf{u_{2}}}{||\mathbf{u}_{2}||} \\
&=\frac{\left(\frac{1}{3},\frac{1}{3},-\frac{1}{3}\right)}{\frac{1}{\sqrt{3}}} \\
&=\left(\frac{\sqrt{3}}{3},\frac{\sqrt{3}}{3},-\frac{\sqrt{3}}{3}\right).
\end{align*}
This is, therefore, the second vector in our orthonormal basis of $$S$$.
### Third Vector
Now that we have the first two vectors we will need to find the third vector in our basis. In order to do this, we will first need to find a vector which is orthogonal to both $$\mathbf{w}_{1}$$ and $$\mathbf{w}_{2}$$. Then, we will need to find a vector in the same direction of this vector.
#### Orthogonal
In order to find a vector that is orthogonal to both $$\mathbf{w}_{1}$$ and $$\mathbf{w}_{2}$$, we will start with $$\mathbf{v}_{3}$$. We will then subtract off the portions in both the $$\mathbf{w}_{1}$$ and $$\mathbf{w}_{2}$$ directions. We, therefore, define
\begin{align*}
\mathbf{u}_{3}=\mathbf{v}_{3}-\text{proj}_{\mathbf{w}_{1}}(\mathbf{v}_{3})-\text{proj}_{\mathbf{w}_{2}}(\mathbf{v}_{3}).
\end{align*}
In order to find $$\mathbf{u}_{3}$$, we will begin by finding
\begin{align*}
\text{proj}_{\mathbf{w}_{1}}(\mathbf{v}_{3})&=\langle \mathbf{v}_{3}, \mathbf{w}_{1} \rangle \mathbf{w}_{1} \\
&=\left((0,1,1) \cdot \left(\frac{1}{\sqrt{6}},\frac{1}{\sqrt{6}},\frac{2}{\sqrt{6}}\right)\right)\left(\frac{1}{\sqrt{6}},\frac{1}{\sqrt{6}},\frac{2}{\sqrt{6}}\right) \\
&=\left(\frac{1}{\sqrt{6}}+\frac{2}{\sqrt{6}}\right)\left(\frac{1}{\sqrt{6}},\frac{1}{\sqrt{6}},\frac{2}{\sqrt{6}}\right) \\
&=\frac{3}{\sqrt{6}}\left(\frac{1}{\sqrt{6}},\frac{1}{\sqrt{6}},\frac{2}{\sqrt{6}}\right) \\
&=\left(\frac{1}{2},\frac{1}{2},1\right).
\end{align*}
Next, we find that
\begin{align*}
\text{proj}_{\mathbf{w}_{2}}(\mathbf{v}_{3})&=\langle \mathbf{v}_{3}, \mathbf{w}_{2} \rangle \mathbf{w}_{2} \\
&=\left((0,1,1) \cdot \left(\frac{\sqrt{3}}{3},\frac{\sqrt{3}}{3},-\frac{\sqrt{3}}{3}\right)\right)\left(\frac{\sqrt{3}}{3},\frac{\sqrt{3}}{3},-\frac{\sqrt{3}}{3}\right) \\
&=\left( \frac{\sqrt{3}}{3}-\frac{\sqrt{3}}{3} \right) \left(\frac{\sqrt{3}}{3},\frac{\sqrt{3}}{3},-\frac{\sqrt{3}}{3}\right) \\
&=(0,0,0).
\end{align*}
Note that since we got $$\mathbf{v}_{3} \cdot \mathbf{w}_{2}=0$$, this tells us that these two were already orthogonal. However, we do still need to subtract off the portion in the $$\mathbf{w}_{1}$$ direction. We, therefore, get that
\begin{align*}
\mathbf{u}_{3}&=\mathbf{v}_{3}-\text{proj}_{\mathbf{w}_{1}}(\mathbf{v}_{3})-\text{proj}_{\mathbf{w}_{2}}(\mathbf{v}_{3}) \\
&=(0,1,1)-\left(\frac{1}{2},\frac{1}{2},1\right) \\
&=\left(-\frac{1}{2},\frac{1}{2},0\right).
\end{align*}
#### Unit Vector
Now that we’ve found a vector orthogonal to both $$\mathbf{w}_{1}$$ and $$\mathbf{w}_{2}$$, we need to find a vector in the same direction with a norm of $$1$$. We will, therefore, let
\begin{align*}
\mathbf{w}_{3}=\frac{\mathbf{u}_{3}}{||\mathbf{u}_{3}||}.
\end{align*}
In order to find this, we then just need to find
\begin{align*}
||\mathbf{u}_{3}||&=\sqrt{\mathbf{u}_{3} \cdot \mathbf{u}_{3}} \\
&=\sqrt{\left(-\frac{1}{2},\frac{1}{2},0\right) \cdot \left(-\frac{1}{2},\frac{1}{2},0\right)} \\
&=\sqrt{\frac{1}{4}+\frac{1}{4}} \\
&=\sqrt{\frac{1}{2}} \\
&=\frac{1}{\sqrt{2}}.
\end{align*}
We are now ready to find that
\begin{align*}
\mathbf{w}_{3}&=\frac{\mathbf{u}_{3}}{||\mathbf{u}_{3}||} \\
&=\frac{\left(-\frac{1}{2},\frac{1}{2},0\right)}{\frac{1}{\sqrt{2}}} \\
&=\left(-\frac{\sqrt{2}}{2},\frac{\sqrt{2}}{2},0\right).
\end{align*}
## Conclusion
We have now found three orthogonal unit vectors whose span is $$S$$. Therefore,
\begin{align*}
B&=\left\{\mathbf{w}_{1},\mathbf{w}_{2},\mathbf{w}_{3}\right\} \\
&=\left\{\left(\frac{1}{\sqrt{6}},\frac{1}{\sqrt{6}},\frac{2}{\sqrt{6}}\right),\left(\frac{\sqrt{3}}{3},\frac{\sqrt{3}}{3},-\frac{\sqrt{3}}{3}\right),\left(-\frac{\sqrt{2}}{2},\frac{\sqrt{2}}{2},0\right)\right\}
\end{align*}
is an orthonormal basis of $$S$$.
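For readers who want to double-check the arithmetic, the same basis can be reproduced numerically. The following is a small illustrative NumPy sketch of classical Gram-Schmidt applied to the three vectors above (classical rather than modified Gram-Schmidt, which would be preferred for numerical stability on larger problems):

```python
import numpy as np

def gram_schmidt(vectors):
    """Return an orthonormal basis for the span of the given vectors
    (classical Gram-Schmidt with the standard dot product)."""
    basis = []
    for v in vectors:
        u = np.array(v, dtype=float)
        for w in basis:
            u = u - np.dot(u, w) * w   # subtract the projection onto each basis vector
        norm = np.linalg.norm(u)
        if norm > 1e-12:               # skip vectors that are (numerically) dependent
            basis.append(u / norm)
    return basis

B = gram_schmidt([(1, 1, 2), (1, 1, 1), (0, 1, 1)])
for w in B:
    print(np.round(w, 4))
# [ 0.4082  0.4082  0.8165]  = (1/sqrt(6), 1/sqrt(6), 2/sqrt(6))
# [ 0.5774  0.5774 -0.5774]  = (sqrt(3)/3, sqrt(3)/3, -sqrt(3)/3)
# [-0.7071  0.7071  0.    ]  = (-sqrt(2)/2, sqrt(2)/2, 0)
```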
As always, I hoped this helped you as you studied and that you enjoyed the process. If you did, make sure to share this with anyone else that could use that help and subscribe to our YouTube channel.
http://adas-fusion.eu/element/detail/adf38/nrb05%5D%5Bc/nrb05%5D%5Bc_mo36ls1-2.dat | nrb05#c_mo36ls1-2.dat
Photoexcitation-autoionisation Rate Coefficients
Ion
Mo36+
Filename
nrb05#c_mo36ls1-2.dat
Full Path
Parent states
1s2 2s2 2p2 3P4.0
1s2 2s2 2p2 1D2.0
1s2 2s2 2p2 1S0.0
1s2 2s1 2p3 5S2.0
1s2 2s1 2p3 3D7.0
1s2 2s1 2p3 3P4.0
1s2 2s1 2p3 3S1.0
1s2 2s1 2p3 1D2.0
1s2 2s1 2p3 1P1.0
1s2 2p4 3P4.0
1s2 2p4 1D2.0
1s2 2p4 1S0.0
Recombined states
1s2 2s2 2p3 4S1.5
1s2 2s2 2p3 2D4.5
1s2 2s2 2p3 2P2.5
1s2 2s1 2p4 4P5.5
1s2 2s1 2p4 2D4.5
1s2 2s1 2p4 2S0.5
1s2 2s1 2p4 2P2.5
1s2 2p5 2P2.5
https://runestone.academy/runestone/books/published/thinkcspy/GUIandEventDrivenProgramming/12_model_view_controller.html | # 15.32. Managing GUI Program Complexity¶
As we explained in a previous lesson, GUI programs are best implemented as Python classes because this allows you to manage the scope of the variables in your GUI interface and callback functions. However, as GUI programs become more complex, it can become overwhelming to implement them as a single class. If a single class has more than 2,000 lines of code, it is probably getting too big to manage effectively. What are some ways to effectively break down complex problems into manageable pieces?
One of the most widely used ways to break down a GUI program into manageable pieces is called the Model-View-Controller software design pattern. This is often abbreviated as MVC (Model-View-Controller). It divides a problem into three pieces:
• Model - the model directly manages an application’s data and logic. If the model changes, the model sends commands to update the user’s view.
• View - the view presents the results of the application to the user. It is in charge of all program output.
• Controller - the controller accepts all user input and sends commands to the model to change the model’s state.
To say this in more general terms, the controller manages the application's input, the model manages the application's "state" and enforces application consistency, and the view updates the output, which is what the user sees on the screen. This is basically identical to what all computer processing is composed of, which is:
input --> processing --> output
The MVC design pattern renames the pieces and restricts which parts of the code can talk to the other parts. For MVC design:
controller (input) --> model (processing) --> view (output)
From the perspective of a GUI program, this means that the callback functions, which are called when a user causes events, are the controller, the model should perform all of the application logic, and the building and modification of the GUI widgets composes the view.
Let’s develop a Whack-a-mole game program using this design strategy. Instead of creating one Python Class for the entire game, the code will be developed as a set of cooperating objects. So where should we begin? I would suggest that the same stages of development we used in the previous lesson are a good approach, but we will create a separate Python class for most of the stages. Let’s walk through the code development.
## 15.32.1. Creating the View
Let’s create a Python class that builds the user interface for a Whack-a-mole game. The emphasis for this code is the creation of the widgets we need to display to the user. For this version let’s allow the moles to be placed at random locations inside the left frame. To do this we must specify an exact size for the left frame. Otherwise the code is the same as the previous version.
Code
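Purely as an illustration of the idea described above, a minimal view class might look like the following sketch; the class name View, the 400x400 frame size, and the single mole button are assumptions for this example, not the book's actual listing:

```python
import tkinter as tk
import random

class View:
    """Builds the widgets for the Whack-a-mole game; knows nothing about scoring."""
    def __init__(self, root):
        # Left frame with a fixed size so moles can be placed at random pixel positions.
        self.left_frame = tk.Frame(root, width=400, height=400, bg="green")
        self.left_frame.grid(row=0, column=0)
        self.left_frame.grid_propagate(False)

        # Right frame holds the control buttons and the score label.
        self.right_frame = tk.Frame(root)
        self.right_frame.grid(row=0, column=1, sticky="n")
        self.start_button = tk.Button(self.right_frame, text="Start")
        self.stop_button = tk.Button(self.right_frame, text="Stop")
        self.quit_button = tk.Button(self.right_frame, text="Quit")
        self.score_label = tk.Label(self.right_frame, text="Hits: 0  Misses: 0")
        for w in (self.start_button, self.stop_button,
                  self.quit_button, self.score_label):
            w.pack(fill="x")

        # A single mole, shown and moved around by the rest of the program.
        self.mole = tk.Button(self.left_frame, text="mole")

    def move_mole(self):
        """Place the mole at a random location inside the left frame."""
        self.mole.place(x=random.randint(0, 360), y=random.randint(0, 370))
```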
## 15.32.2. Creating the Model
The model for this Whack-a-mole game is fairly simple. We need to keep a counter for the number of user hits on moles that are visible, and a counter for the number of times a user clicks on a mole that is not visible (or just clicks on the left frame and not a mole widget.)
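As a sketch (again with illustrative names rather than the book's listing), the model can be as small as a class holding those two counters plus a flag for whether a game is in progress:

```python
class Model:
    """Holds the game state: counts of hits on visible moles and of misses."""
    def __init__(self):
        self.hits = 0
        self.misses = 0
        self.running = False

    def new_game(self):
        self.hits = 0
        self.misses = 0
        self.running = True

    def record_click(self, hit_a_visible_mole):
        """Update the counters for one click on the left frame."""
        if not self.running:
            return
        if hit_a_visible_mole:
            self.hits += 1
        else:
            self.misses += 1

    def stop_game(self):
        self.running = False
```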
## 15.32.3. Creating the Controller
The controller receives user events and sends messages to the model to update the model's state. For our Whack-a-mole game, we have the following four basic commands we need to send to the model:
• A user clicked on something on the left frame.
• The user wants to start a new game. (The user clicked on the “Start” button.)
• The user wants to stop playing a game. (The user clicked on the "Stop" button.)
• The user wants to quit the application. (The user clicked on the “Quit” button.)
The controller needs to recognize these events and send them to the appropriate methods in the model. The controller needs to define callback functions for these events and register the appropriate event with the appropriate callback. Therefore, the controller needs access to the widgets in the view object. This can easily be accomplished by passing a reference to the view object to the controller when it is created.
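Continuing the illustrative sketch (and assuming the View and Model classes sketched above), a controller that registers callbacks on the view's widgets and forwards them to the model might look like this; for brevity it also refreshes the view directly, whereas the description above has the model drive view updates:

```python
class Controller:
    """Registers callbacks on the view's widgets and forwards events to the model."""
    def __init__(self, model, view):
        self.model = model
        self.view = view
        # Register the callbacks (the controller owns all event handling).
        view.left_frame.bind("<Button-1>", self.left_frame_clicked)
        view.mole.configure(command=self.mole_clicked)
        view.start_button.configure(command=self.start_clicked)
        view.stop_button.configure(command=self.stop_clicked)
        view.quit_button.configure(command=self.quit_clicked)

    def mole_clicked(self):
        self.model.record_click(True)
        self.refresh()

    def left_frame_clicked(self, event):
        self.model.record_click(False)
        self.refresh()

    def start_clicked(self):
        self.model.new_game()
        self.refresh()

    def stop_clicked(self):
        self.model.stop_game()

    def quit_clicked(self):
        self.view.left_frame.winfo_toplevel().destroy()

    def refresh(self):
        """Push the model's state back out to the view (the view is the output)."""
        self.view.score_label.configure(
            text=f"Hits: {self.model.hits}  Misses: {self.model.misses}")
```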
https://codegolf.stackexchange.com/questions/28370/write-a-hello-world-gui-program-that-closes-itself-after-three-seconds/28451 | # Write a hello world GUI program that closes itself after three seconds
Write a program in any language, so long as it has GUI support for a window system (cannot be text-consoles, etc., and must be a GUI/toolkit/window).
The program must say hello world in any manner (splash image, menu bar, status bar, title, video, message box, other widget stuff, etc.), so long as a GUI-compliant window appears and shows this text for three seconds. The person who can code this in the least amount of code used, counted in bytes, will take the cake.
• The comments here have degraded into a pointless argument. Please refrain from extended discussion in the comments; if necessary, you may use chat instead. All comments have been purged. – Doorknob May 21 '14 at 20:53
• It doesn't matter where the argument came from - all users should avoid participating in disputes that are clearly noise or going nowhere. – Doorknob May 21 '14 at 20:56
• Does it have to be "Hello world" or can it be "Hello_world"? – slebetman May 23 '14 at 7:56
• Also, must it exit gracefully? – slebetman May 23 '14 at 8:04
• you don't specify how the program has to be run? for instance does it have to be started up by itself or can it be something loaded up in a already running environment? – Jordon Biondo May 25 '14 at 17:02
# Unix shell, 31 characters
xmessage -timeout 3 hello world
This program requires the xmessage(1) utility from X.Org. It uses the traditional black-and-white X Athena Widgets (Xaw).
• Grrr...I was going to post one using timeout and zenity but it was longer... – BenjiWiebe May 23 '14 at 15:21
• @BenjiWiebe There is a zenity answer by n.1 at codegolf.stackexchange.com/a/28451/4065 – kernigh May 23 '14 at 18:58
• You can save 1 byte: timeout 3 xmessage hello world – Glenn Randers-Pehrson May 24 '14 at 14:41
• @GlennRanders-Pehrson I can't do that, because my machine has no timeout command. Maybe someone else can post a new answer that uses it? – kernigh May 24 '14 at 19:09
# Shell and gedit - 27 characters
timeout 3 gedit Hello World
If Hello World needs to be displayed as a single string, then its 28 characters:
timeout 3 gedit Hello\ World
timeout utility runs a command for the duration specified. It ensures that gedit runs for 3 seconds, assuming minimal startup time.
Any editor can be used in place of gedit. If a shorter named editor is used like gvim, the length can be reduced by 1 or more characters.
Using an editor initially thought of by user80551.
• Escape the space, use Hello\ World – user80551 May 23 '14 at 5:58
• @user80551 The question requires "Hello World" to be displayed in any manner, so that shouldn't be necessary. – asheeshr May 23 '14 at 6:01
• Technically, that would make this Hello <space> <close icon> <Document icon> World – user80551 May 23 '14 at 6:03
• Can you confirm if kate would work instead of gedit? – user80551 May 23 '14 at 6:04
• @user80551 Dont have KDE installed, but going by Kate's man page, there doesnt seem to be any reason why it shouldn't work. – asheeshr May 23 '14 at 6:07
# Applescript, 45 bytes:
Not often Applescript is one of the shorter answers:
display alert "hello world" giving up after 3
Paste into the Applescript Editor and run, or run using osascript at the command line:
osascript -e 'display alert "hello world" giving up after 3'
# HTML+Javascript, 73 60 characters
<script>setTimeout("open('','_self','');close()",3e3)</script>Hello world
This works in Chrome, but may not be portable to other browsers.
Suggestions from the comments take this further:
<body onload=open('',name=setTimeout(close,3e3))>Hello world
• That's a lot of golfing I saw. I saw this go from 108 chars down to 73. Suddenly, the whole code is visible. – Justin May 21 '14 at 21:11
• @Quincunx: You missed the first few iterations then. :) – Greg Hewgill May 21 '14 at 21:11
• That's because I was busy posting my own code. :-) – Justin May 21 '14 at 21:12
• <body onload=open('','_self'),setTimeout(close,3e3)>Hello world seems to save a few more characters. – Ventero May 21 '14 at 22:40
• You have an edit suggestion from user3082537: save two chars by <body onload=open('',name=setTimeout(close,3e3))>Hello world – Justin May 23 '14 at 6:48
## shell script, 31
Not sure whether it qualifies. Requires notify-send. Works at least on Ubuntu 12.04.
notify-send -t 3000 Hello world
• Probably not since it isn't a GUI compliant window. Unfortunately, using zenity is much longer. – user80551 May 22 '14 at 11:10
• Its 34 chars with gedit - codegolf.stackexchange.com/a/28425/8766 – user80551 May 22 '14 at 11:13
• I believe that a window with no window decorations is still a window. Here in Enlightenment, the notification also has an X button to close it (but no other window decorations). – kernigh May 23 '14 at 19:07
gedit Hello\ World&sleep 3;kill $!
This assumes that gedit pops up instantly since the 3 seconds are counted from the start of issuing the command. Could be smaller if there's a GUI text editor shorter than gedit. geany works too for the same number of chars, just s/gedit/geany/g
EDIT: Using timeout is shorter. https://codegolf.stackexchange.com/a/28477/8766
EDIT2: Can anyone confirm if this works with kate?
## meld, 32
If exactly Hello World is not required, then meld can be used.
meld Hello World&sleep 3;kill $!
• Nice idea, gedit tries to open a file with this name even if there's no file. – A.L May 22 '14 at 18:04
• If you use gvim it will need only 33 chars – avall May 22 '14 at 18:17
• @Daniel halt would be shorter but I don't know if being destructive is allowed. Also, it requires root permissions so we either need to assume that we are root or use sudo which costs more (and needs the user to type the password). – user80551 May 23 '14 at 12:34
• Sorry, I was trying to be humorous. I was assuming the user has root permissions. – Daniel May 23 '14 at 12:48
• @Daniel No need to be sorry, this site is based on the most evil devious twisting of the rules to make your code shorter. – user80551 May 23 '14 at 13:02
## VBScript, 58
WScript.CreateObject("WScript.Shell").Popup"Hello world",3
## Python (pygame), 87
import pygame.display as d,time
d.set_mode()
d.set_caption('Hello world')
time.sleep(3)
• Not working on OSX, doesn't show window... – Harry Beadle May 22 '14 at 11:21
• @BritishColour it should, perhaps it's too small? Try changing the size to [999,999] – user12205 May 22 '14 at 11:23
• Still not working, the window isn't even appearing... – Harry Beadle May 22 '14 at 11:26
• @BritishColour Well it definitely works for me. See here. Perhaps it was showing up at the background or something? – user12205 May 22 '14 at 11:40
• It's interesting how our code is basically identical, but uses a different module. tkinter turns out to be shorter. – Justin May 22 '14 at 16:06
## Tcl - 32 bytes
I noticed that some of these submissions, like the shell or javascript ones, allow you to type the code into the console. If that's the case I can shorten it to:
wm ti . hello\ world;af 3000 exi
Must be typed into the console after running wish. Meaning, run wish without arguments, which will give you a REPL console, and then type the code above. This makes use of the fact that Tcl can be lenient and autocomplete command/function names, but only in interactive mode. So af actually executes the after command and exi executes exit. I wanted to use ex but my system has the ex editor installed.
## Original submission - 36 bytes
wm ti . hello\ world;after 3000 exit
Run using wish instead of tclsh.
## 52 51 chars with Mathematica
(Hope it counts as a GUI-compliant.)
NotebookClose/@{CreateDialog@"Hello world",Pause@3}
# Java, 136 bytes
class F{public static void main(String[]a)throws Exception{new java.awt.Frame("Hello World").show();Thread.sleep(3000);System.exit(0);}}
Displays the message Hello World as the title of a frame. After 3 seconds, the program closes.
Looks like this:
Drag it bigger:
class F {
    public static void main(String[] a) throws Exception {
        new java.awt.Frame("Hello World").show();
        Thread.sleep(3000);
        System.exit(0);
    }
}
• You can use enum instead of class to save another character. – Riking May 21 '14 at 22:34
• @Riking it doesn't work. – Justin May 21 '14 at 22:35
• Really? I could've sworn I actually used that once... Dang. – Riking May 21 '14 at 22:36
• – Justin May 21 '14 at 22:36
## R, 44
x11(ti="Hello World");Sys.sleep(3);dev.off()
## PowerShell - 63 52
(new-object -c wscript.shell).popup('Hello World',3)
# Visual FoxPro - 23 characters
WAIT"hello world"TIME 3
This abuses the fact that VFP allows to not to put a space between the string to be printed (which I just discovered) and that it allows to shorten every keyword to up to its first 4 characters.
Ungolfed version:
WAIT "hello world" TIMEOUT 3
# GTK+, 47 45
zenity --info --text=Hello\ World --timeout=3
Old version (score 47):
zenity --info --title="Hello World" --timeout=3
For some reason, zenity display a text which can be translated as All update are done.
• Just to add some extra info, All updates are complete. is what I get in English. – user12205 May 22 '14 at 18:50
• You can reduce one character by changing --title to --text – asheeshr May 23 '14 at 3:59
• You can change "Hello World" to Hello\ World – kernigh May 23 '14 at 19:00
• Thanks AsheeshR and kernigh, I have 2 less characters with your help. – A.L May 23 '14 at 19:47
## C, 151 characters
#include<allegro.h>
main(){textout_ex(screen,font,"Hello World",0,0,7,set_gfx_mode('SAFE',8,8,install_timer(),
allegro_init()));rest(3e3);}END_OF_MAIN()
Not the smallest answer. I like it though.
• Good job.      – Sut Dip May 21 '14 at 20:54
• How does that even compile? 'SAFE' isn't a single char. – heinrich5991 May 23 '14 at 19:40
• @heinrich5991 SAFE is likely defined in allegro.h as a single character. – Adam Davis May 23 '14 at 21:28
• @AdamDavis C evaluates macros in character constants? – heinrich5991 May 24 '14 at 13:34
• No macro, it's really just a 32-bit integer written as four bytes in what's called multi-character constant notation, a too-clever-for-its-own-good way to write four-byte tag strings. Apple used it for file type magic numbers once. Compilers nowadays support it but emit a warning. Example for nonbelievers – Wander Nauta May 25 '14 at 15:38
# C# 101 151
This will for sure not be the shortest answer (since there are already other good answers being way shorter) but codegolf.SE needs a lot more C# contributions in my opinion!
using t=System.Threading;class P{static void Main(){using(t.Tasks.Task.Run(()=>System.Windows.MessageBox.Show("hello world"))){t.Thread.Sleep(3000);}}}
# C# 121
An alternative based on Bob's answer, but with WPF instead of WinForms:
class P{static void Main(){new System.Windows.Window(){Title="hello world"}.Show();System.Threading.Thread.Sleep(3000);}}
Saves 3 characters thanks to the shorter namespace ...
• You need to include the using declarations in the character count. Alternatively, you could leave them out but then you'd have to do things like System.Windows.Forms.MessageBox.Show (slightly fewer characters if it's only a single use). As your code currently is, it won't compile or run. – Bob May 23 '14 at 5:02
• Well, I just assumed it was not neccesary because I see a lot of answers in C, C++, C#, Java etc without any using/imports/whatever. I will edit it, though. BTW, is there some explicit rule on this? Sure it would compile and run with the right compile settings and/or compiler. (e. g. resolving using directives on compile time, as long as they can be resolved distinct) – Num Lock May 23 '14 at 6:48
• It is a little grey - for example, I had to add a reference to System.Windows.Forms.dll, which is part of the msbuild config/the compile command line. However, the general consensus seems to be that when a full program is requested, using/import/#include/etc. where necessary for the code to compile and run are required, and attempting to use compiler command line tricks to dodge that is bad. – Bob May 23 '14 at 7:42
• I will keep that in mind. Thank you for the references. – Num Lock May 23 '14 at 7:54
## Batch (24)
msg/time:3 * hello world
Tested on Windows 7, but should work on any NT-based version of Windows, assuming you have MSG.EXE in your System32 folder.
EDIT: Apparently MSG.EXE is not available by default on home versions of Windows. On Windows 7, for example, this is only available in the Ultimate or Business editions. However, you can copy the file over to your System32 folder and get it to work. (You must also copy over the appropriate MSG.EXE.MUI file to get proper error messages, but my "script" works without them.)
You have to install software for most of these other responses to work, too, so I don't think that should be a disqualifier.
• Why won't this call a program named time:3 in a folder called msg in the current directory? – cat Jul 12 '16 at 23:11
• @cat Windows uses \. – jimmy23013 Jun 6 '17 at 5:16
• @jimmy wow that was a year ago. i'm just used to writing / on all platforms now oops – cat Jun 6 '17 at 10:40
• @jimmy23013 But Windows supports / too – MilkyWay90 Dec 30 '18 at 3:56
• @MilkyWay90 Windows supports / in some places, but in cmd, /xxx is interpreted as an argument. – jimmy23013 Dec 30 '18 at 5:52
## APL (40)
X.Close⊣⎕DL 3⊣'X'⎕WC'Form' 'Hello World'
• 39: X.Close⊣⎕DL⍴⍕'X'⎕WC'Form' 'Hello World' – Adám Jun 28 '16 at 19:48
## Lua + LÖVE, 67 bytes
l=love l.window.setTitle"hello world"l.timer.sleep(3)l.event.quit()
# Perl on Windows (56)
use Win32;fork?kill+sleep+3,$$:Win32'MsgBox"Hello World"
• Use '-MWin32' to save four bytes – DarkHeart Jun 6 '17 at 8:49
## Perl 5, 47
Using Perl/Tk:
perl -MTk -e'alarm 3;tkinit-title,"Hello World!";MainLoop'
# 123 45678901234567890123456789012345678901234567
• It seems that the Tk module is required. – A.L May 22 '14 at 18:03
• Yes, of course you need some GUI toolkit. I choose Tk because of tkinit(). – Matthias May 23 '14 at 5:57
• I count 44 bytes. Are you counting the -MTk flag as well? – slebetman May 23 '14 at 8:12
• @slebetman Yes I count that as 3 chars, I added the count to the post. – Matthias May 23 '14 at 8:15
• @n.1 I added it below the title with a link to the CPAN documentation of the Tk module. None of the perl core modules is a GUI module, therefore you always have to install external modules. Sorry, this was clear to me (as I am working with perl quite often), but you are right, it is surprising for those who work in other languages. – Matthias May 23 '14 at 13:35
# Rebol View (r3gui), 49
view/no-wait[title"hello world"]wait 3 unview/all
Ungolfed:
view/no-wait [title "hello world"]
wait 3
unview/all
# Processing, 77
int x=millis();void draw(){text("Hello world",0,9);if(millis()>x+3e3)exit();}
Screenshot:
Edit 1: Y position of the text can be 9 instead of 10, like noted by @ace.
Edit 2: 3000 can be represented as 3e3 to shave one character off, also noted by @ace
• Using 9 for Y position instead of 10 works for me. – user12205 May 23 '14 at 10:16
• Just edited the code. Thanks! – segfaultd May 23 '14 at 14:48
• Just noticed you can use 3e3 instead of 3000 to save one more char – user12205 May 23 '14 at 17:11
## bash + ImageMagick (36 bytes)
timeout 3 display label:Hello\ world
Tested on Ubuntu 14.04 LTS and on Fedora 20.
Nicer-looking, but 10 bytes larger:
timeout 3 display -size 800 label:Hello\ world
## CMD / Batch - 33 Bytes
I believe the window that the Windows CMD terminal runs in counts as GUI compliant.
start "Hello world" cmd /csleep 3
If you don't have the sleep command on your system - then you can use timeout which comes default in Windows 7. For two more bytes.
start "Hello world" cmd /ctimeout 3
Starts a new CMD window with the title "Hello World" (NOT displayed in the terminal itself, but as the title of the GUI window that the terminal runs in), this window will close as soon as all parsed commands have executed - so after sleep 3 or timeout 3 has completed.
The window looks like this -
Note; start runs the given commands in a new window - not the window that you are running the above commands from.
• Changing it to cmd "Hello world" cmd /ctimeout 3 puts Hello world in the title bar for 3 seconds, but that probably doesn't count. – Chris Kent May 26 '14 at 7:15
• I wouldn't think it does, because it doesn't spawn a new window. The question says a GUI-compliant window appears (appears being the key word), implying that it has to display a new window. Good idea though. – unclemeat May 26 '14 at 22:50
• start "Hello world" You sir, are a genius. – user8397947 Jul 13 '16 at 1:26
# Python 3, 83 72 bytes
from tkinter import*
f=Tk()
f.wm_title("Hello World")
f.after(3000,exit)
Save bytes by using tkinter.
The old method added a Label to the frame. This method sets the title of the frame to Hello World. f.after(3000,exit) runs exit() after 3000 milliseconds have passed.
# Cobra - 180
use System.Windows.Forms
class M
def main
Environment.exit(0)
def w
MessageBox.show("hello world")
# Ruby [with Shoes] (44 chars)
Shoes.app{para "Hello world";every(3){exit}}
# C# 124
Far from the shortest :(
class P{static void Main(){new System.Windows.Forms.Form(){Text="Hello World"}.Show();System.Threading.Thread.Sleep(3000);}}
http://mathoverflow.net/questions/52389/conjecture-on-signed-sum-of-integer-fractions-x-y-from-1-n?sort=oldest | Conjecture on signed sum of integer fractions x/y from 1..N?
Here is a generalization of an integer challenge that was asked on Yahoo!Answers in 2009. I believe it could be original; it defies induction and has exponential complexity. I am not aware of any theory that covers it.
Using the natural numbers 1 through N exactly once each, write a (signed) sum of $\lceil N/2 \rceil$ fractions x/y giving the smallest positive minimum, S(N), and ideally try to achieve $S(N) \equiv 0$. In particular is $S(N)\equiv 0$ achievable for all even $N\geq6$ ? Or else for what values of N is that achievable, or not achievable? Is there any pattern?
Examples:
people found exact-zero solutions for even N up to 20, thereafter approximations
$S(6)=0 = \dfrac12 -\dfrac43 +\dfrac56$
$S(8)=0 = \dfrac13 -\dfrac54 +\dfrac76 -\dfrac28$
$S({10})=0 = \dfrac12 -\dfrac39 -\dfrac76 +\dfrac48 +\dfrac5{10}$
$S({12})=0=-\dfrac23 +\dfrac1{12} -\dfrac76 -\dfrac95 +\dfrac8{10} +\dfrac{11}4$
$... S({20})=0 = \dfrac{18}2 +\dfrac93 +\dfrac{11}{15} +\dfrac7{14} +\dfrac4{10} +\dfrac5{12} +\dfrac{13}8 +\dfrac6{16} -\dfrac{17}1 +\dfrac{19}{20}$
For $N={50}$, user Vašek found this one with $S({50}) < {10}^{-6}$: $9.844*{10}^{-7} =-\dfrac {26}{18} +\dfrac {44}7 -\dfrac {35}6 -\dfrac {13}{46} +\dfrac {39}{50} +\dfrac {27}2 -\dfrac {21}{14} -\dfrac {34}{41} +\dfrac 3{47} -\dfrac {29}{19} +\dfrac {49}{48} -\dfrac 1{10} -\dfrac {42}{12} -\dfrac {28}{20} -\dfrac {24}{22} +\dfrac {33}{32} -\dfrac {25}9 -\dfrac 5{11} +\dfrac {38}{31} -\dfrac {40}{16} -\dfrac {15}{36} +\dfrac {43}{37} +\dfrac 84 -\dfrac {45}{17} -\dfrac {23}{30}$
ksoileau using a hill-climbing algorithm found these near-zeros: $S(40)≤4.38055291*10^ {-8} = \dfrac 2{1}-\dfrac{3}{9}-\dfrac{5}{6}-\dfrac{7}{8}-\dfrac{4}{10}+\dfrac{21}{25}+\dfrac{17}{11 }-\dfrac{13}{23}-\dfrac{15}{39}+\dfrac{35}{38}+\dfrac{14}{32}+\dfrac{19}{33 }-\dfrac{24}{29}-\dfrac{27}{28}-\dfrac{16}{12}-\dfrac{26}{22}-\dfrac{30}{34 }+\dfrac{31}{36}+\dfrac{37}{20}-\dfrac{18}{40}$
$S(50)≤5.56460829*10^ {-8} = -\dfrac{1}{48}-\dfrac{3}{4}-\dfrac{5}{9}-\dfrac{7}{8}-\dfrac{20}{6}+\dfrac{10}{18 }+\dfrac{13}{30}-\dfrac{15}{16}+\dfrac{45}{12}+\dfrac{34}{17 }-\dfrac{39}{42}-\dfrac{23}{24}-\dfrac{25}{26}-\dfrac{2}{31 }+\dfrac{14}{29}+\dfrac{28}{32}+\dfrac{33}{19}-\dfrac{35}{36 }-\dfrac{37}{38}-\dfrac{21}{40}-\dfrac{41}{44}+\dfrac{43}{22 }+\dfrac{11}{46}+\dfrac{47}{27}-\dfrac{49}{50}$
Can you say anything at all (analytically or statistically) about the behavior of S(N)? For what values of N should $S(N) \equiv 0$ (even if you can't show the solution)?
What's interesting is that it seems to have no pattern and to defeat induction: knowing all the results for numbers < N doesn't seem to help at all with S(N).
Odd-N cases:
$S(3) = \dfrac13 = 1 -\dfrac23$
$S(5) =0 = \dfrac31 +\dfrac42 - 5$
$S(7) =0 = \dfrac13 +\dfrac52 +\dfrac76 - 4$
$S(9) =0 = \dfrac12 -\dfrac68 -\dfrac74 -\dfrac93 + 5$ or $\dfrac13 +\dfrac76 -\dfrac84 -\dfrac92 + 5$ etc.
$... S({19}) ≤ 1E-5 = \dfrac12 +\dfrac34 +\dfrac56 +\dfrac78 +\dfrac{12}{10} +\dfrac{14}{11} +\dfrac{16}{15} +\dfrac{18}{13} +\dfrac{19}{17} - 9 ,$
Presumably it makes most sense to break out odd and even N separately. i.e. S(2M) forms one decreasing(?) sequence, and S(2M+1) forms another. Anyone with time on their hands, feel free to compute and post tables of S(N) for N.
See if you can even prove whether the even-N case {S(2M)} is or is not monotone decreasing (at least for some subrange of 2M).
To eliminate duplicates with order of terms swapped, let us adopt some (arbitrary) ordering principle such as e.g. require the denominators {y_i} to be in increasing order.
I had one thought about a probabilistic proof: write each of the $\lceil N/2 \rceil$ terms as $(x_i/y_i) = u_i$ and call $\sigma_i$ the sign chosen for each term $u_i$. Then consider our sum $\sum \sigma_i (x_i/y_i)$.
Noting that each of the terms is $u_i = \exp[\ln(x_i) - \ln(y_i)]$, consider the distribution of all possible $N! (N-1)!$ values of the $u_i$. The $u_i$ are discrete, but look how fast $N! (N-1)!$ grows with $N$. It seems intuitive that the more possible values for the $u_i$ we have, the more likely it is that we can choose some signed sum of the $u_i$ to minimize S(N), and specifically to make S(N) < S(N-2). Try to calculate that probability?
(PS be careful of precision and roundoff errors if you program this.)
PPS: A note on the complexity of this problem:
There are $N!$ choices to assign the $N$ numbers into $\lceil N/2 \rceil$ fraction terms $x_i/y_i$, and an additional $\lceil N/2 \rceil$ sign choices $\{\sigma_i\}$; thus the complexity is exponential ($2^N$). Without loss of generality, choose an ordered notation where the fractions $\sigma (x/y)$ are written in order of increasing numerators $x$. Then there are:
$\;\;\;\;\;\;\;\;\; \binom{N}{\lceil N/2 \rceil}$ ways to pick the numerators {x_i}
$\;\;\;\;\;\;\;\;\; \lfloor N/2 \rfloor !$ ways to pick all the denominators y_i for each x_i ;
$\;\;\;\;\;\;\;\;\; 2^{\lceil N/2 \rceil}$ ways to choose signs σ_i
$\implies complexity(N) \sim \lfloor N/2 \rfloor ! * \binom{N}{\lceil N/2 \rceil} * 2^{\lceil N/2 \rceil}$
and that boils down to: $2^{2M}$ (even case) and $2^{2(M+1)} / (M+2)$ (odd case).
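For what it's worth, the enumeration above is easy to carry out directly for small even N. Here is a brute-force sketch with exact rational arithmetic (purely illustrative; it makes no attempt to exploit the ordering convention or to prune equivalent arrangements):

```python
from fractions import Fraction
from itertools import permutations, product

def S(n):
    """Smallest |signed sum| of n/2 fractions using 1..n once each (even n only)."""
    best = None
    for perm in permutations(range(1, n + 1)):
        # Pair consecutive entries as numerator/denominator: x1/y1, x2/y2, ...
        terms = [Fraction(perm[i], perm[i + 1]) for i in range(0, n, 2)]
        for signs in product((1, -1), repeat=n // 2):
            total = abs(sum(s * t for s, t in zip(signs, terms)))
            if total == 0:
                return total          # an exact zero exists, stop searching
            if best is None or total < best:
                best = total
    return best

print(S(6), S(8))   # 0 0, matching the examples above
```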
User steppenwolf (see reference 1) sketched a proof that, at least for even N, $\lim_{2M\to\infty} S(2M)=0$
and also a weak upper bound $S(N) \leq 3.25/N$
I originally asked this on Yahoo!Answers as a generalization of a previous question by user ksoileau: http://answers.yahoo.com/question/index?qid=20090330224143AA2zDfL
Sorry -- what's the conjecture referred to in the title? – Todd Trimble Jan 18 '11 at 9:06
In particular is S(N)=0 achievable for all even N (≥6) ? – smci Jan 18 '11 at 9:44
Also, does it exhibit no pattern and defeat induction: knowing all the results for numbers < N doesn't help at all with S(N)? – smci Jan 18 '11 at 9:46
Can you re-edit equations with LaTeX? – Nurdin Takenov Jan 18 '11 at 10:52
Any prime (or prime power) between $N/2$ or $N$ cannot occur in the denominator if $S(N)$ is to vanish. More generally, numbers up to $N$ with large prime factors would make it difficult for $S(N)$ to vanish if they were in the denominator, however given that half of the numbers up to $N$ have to become denominators, I would suspect that $S(N)$ will not vanish for all large $N$. Also the irregularity of the primes suggests that an inductive construction to try to prove $S(N)=0$ is possible for large $N$ is likely to fail. – Terry Tao Feb 2 at 20:00
This is a suggestion to not dismiss induction too readily.
If I were to attempt an inductive proof, one approach I would take would be an inductive definition of T(2m), the set of all sums arrived at by forming m fractions as directed and then taking all signed sums. T(2m+2) is an incremental change to T(2m), but with most likely more than exponential growth. If you can prove that T(2m) contains either fraction (2m+1)/(2m+2) or its multiplicative inverse, you can conclude S(2m+2) is 0. That would be too easy, though. I suspect you will need solve equations like x/(2m+2) + (2m+1)/y = P for some value of P that is related to a value in T(2m).
I'm surprised, as I hadn't noticed the negative vote. Also, (I think it was) your observation about s(primorial(n))>= n was useful in the happy prime new year thread, and I was sorry to see it go. Perhaps it will return. Gerhard "Thank You For Your SUpport" Paseman, 2011.01.19 – Gerhard Paseman Jan 19 '11 at 17:51
Even if you can't prove the inductive step, any reaction to my suggestion on a probabilistic proof? – smci Jan 19 '11 at 22:18
Since you asked, my reaction is that it is illformed. Is u_i one of the fractions or the sum? For a given 2m, there are less than 4m^2 such fractions allowed, and you don't want the sum of any of them, but of a particular subset of them which is challenging to enumerate. Until you clean up that part, I am not encouraged to proceed to the estimate of the enumeration, much less to the argument. Also, it is likely that some of the sums will be equal, and it is not clear how the probabilistic argument will handle that. Gerhard "Ask Me About System Design" Paseman, 2011.01.19 – Gerhard Paseman Jan 19 '11 at 22:39
Dude, I wrote that u_i = (x_i/y_i) is one term, σ_i is its sign, and the summation is Σ σ_i (x_i/y_i) Yes, rarely some of the sums are equal, if we adopt some (arbitrary) ordering principle such as e.g. require the numerators {x_i} to be in increasing order. This throws out duplicate sequences with terms swapped. After that there is nearly-zero probability of two different sequences having the same sum. So the probabilistic argument gives us a (quadratic?) range of choices for each term. – smci Jan 27 '11 at 21:23
In fact a better ordering principle is probably denominators {y_i} must be in increasing order. This makes for better legibility of the solution, but programming the recursion gets a little bit more annoying. – smci Jan 27 '11 at 21:28
https://math.stackexchange.com/questions/985177/defining-strict-self-similarity | # Defining strict self-similarity
I have been reading through John Hutchinson's paper "Fractals and Self-Similarity" and some other things, and I haven't really found a definition of strict self-similarity to work with that makes much sense to me. Heuristically we want (any?) part of a strictly self-similar object to look like a smaller copy of the whole object, but how do we formalise this so we can test (ask) whether some set is strictly self-similar?
• A set is self-similar if it is the invariant set of an iterated function system, as described on the Wiki page for self-similarity. – Mark McClure Oct 22 '14 at 2:06
• I understand that this is what we want, but is there a direct definition of self-similar coming from the idea of what we (heuristically) mean by self similar, so we could prove that the invariant set is self similar? I am wondering because it is not entirely clear that the invariant set satisfies my intuition of what a self similar set should be. – Dom Oct 22 '14 at 3:25
• Well, it seems to me that this definition does exactly capture what we intuitively mean by self-similar - at least, in the case where the functions in the list are similarity transformations! Perhaps, though, I've used it too long. :) – Mark McClure Oct 22 '14 at 10:00
• I kind of thought so too, but then I thought my intuition is that self-similar should mean that (I guess) every point should have a neighbourhood that looks like the whole object, but then you could have multiple similarities of the IFS mapping into this neighbourhood with their images overlapping, which may not look like the whole image anymore. I can't quite see how to resolve this. – Dom Oct 22 '14 at 11:05
You've heard of the Open Set Condition? It's a technical definition that captures the intuition behind "non-overlapping". The fact that Hutchinson had to define it means that he, too, found overlap to be problematic in the analysis of self-similar sets. So I'd say your intuition is spot-on. The resolution is to consider the sub-class of self-similar sets that satisfy the open set condition. Of course, this begs the question - what can we do lacking that condition? – Mark McClure Oct 22 '14 at 11:51
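To make the "invariant set of an iterated function system" idea from these comments concrete, here is a tiny illustrative script, assuming the three standard ratio-1/2 similarities whose invariant set is the Sierpinski triangle (an added example, not something stated in the discussion above):

```python
import random

# Three contracting similarities f_i(x) = (x + p_i) / 2; their unique invariant
# (attractor) set A satisfies A = f_1(A) U f_2(A) U f_3(A): the Sierpinski triangle.
corners = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]

def chaos_game(n_points=50_000):
    """Approximate the attractor by iterating randomly chosen maps (the 'chaos game')."""
    x, y = 0.3, 0.3          # any starting point works; early iterates are discarded
    points = []
    for i in range(n_points):
        cx, cy = random.choice(corners)
        x, y = (x + cx) / 2, (y + cy) / 2
        if i > 20:           # skip the transient before the orbit settles onto the attractor
            points.append((x, y))
    return points

pts = chaos_game()
print(len(pts), pts[:3])     # a cloud of points filling out the self-similar set
```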
http://ir.lib.uwo.ca/physicspub/4/ | ## Physics and Astronomy Publications
#### Title
The Close Circumstellar Environment of Betelgeuse: Adaptive Optics Spectro-imaging in the Near-IR with VLT/NACO
#### Document Type
Article
#### Publication Date
9-2009
#### Journal
Astronomy & Astrophysics
Volume 504, Issue 1, Pages 115-125
#### URL with Digital Object Identifier
http://dx.doi.org/10.1051/0004-6361/200912521
#### Abstract
Context. Betelgeuse is one of the largest stars in the sky in terms of angular diameter. Structures on the stellar photosphere have been detected in the visible and near-infrared as well as a compact molecular environment called the MOLsphere. Mid-infrared observations have revealed the nature of some of the molecules in the MOLsphere, some being the precursors of dust.
Aims. Betelgeuse is an excellent candidate to understand the process of mass loss in red supergiants. Using diffraction-limited adaptive optics (AO) in the near-infrared, we probe the photosphere and close environment of Betelgeuse to study the wavelength dependence of its extension, and to search for asymmetries.
Methods. We obtained AO images with the VLT/NACO instrument, taking advantage of the “cube” mode of the CONICA camera to record separately a large number of short-exposure frames. This allowed us to adopt a “lucky imaging” approach for the data reduction, and obtain diffraction-limited images over the spectral range $1.04{-}2.17~\mu$m in 10 narrow-band filters.
Results. In all filters, the photosphere of Betelgeuse appears partly resolved. We identify an asymmetric envelope around the star, with in particular a relatively bright “plume” extending in the southwestern quadrant up to a radius of approximately six times the photosphere. The CN molecule provides an excellent match to the 1.09 $\mu$m bandhead in absorption in front of the stellar photosphere, but the emission spectrum of the plume is more difficult to interpret.
Conclusions. Our AO images show that the envelope surrounding Betelgeuse has a complex and irregular structure. We propose that the southwestern plume is linked either to the presence of a convective hot spot on the photosphere, or to the rotation of the star.
https://compgenomr.github.io/book/gene-expression-analysis-using-high-throughput-sequencing-technologies.html | ## 8.3 Gene expression analysis using high-throughput sequencing technologies
With the advent of the second-generation (a.k.a next-generation or high-throughput) sequencing technologies, the number of genes that can be profiled for expression levels with a single experiment has increased to the order of tens of thousands of genes. Therefore, the bottleneck in this process has become the data analysis rather than the data generation. Many statistical methods and computational tools are required for getting meaningful results from the data, which comes with a lot of valuable information along with a lot of sources of noise. Fortunately, most of the steps of RNA-seq analysis have become quite mature over the years. Below we will first describe how to reach a read count table from raw fastq reads obtained from an Illumina sequencing run. We will then demonstrate in R how to process the count table, make a case-control differential expression analysis, and do some downstream functional enrichment analysis.
### 8.3.1 Processing raw data
#### 8.3.1.1 Quality check and read processing
The first step in any experiment that involves high-throughput short-read sequencing should be to check the sequencing quality of the reads before starting to do any downstream analysis. The quality of the input sequences holds fundamental importance in the confidence for the biological conclusions drawn from the experiment. We have introduced quality check and processing in Chapter 7, and those tools and workflows also apply in RNA-seq analysis.
#### 8.3.1.2 Improving the quality
The second step in the RNA-seq analysis workflow is to improve the quality of the input reads. This step could be regarded as an optional step when the sequencing quality is very good. However, even with the highest-quality sequencing datasets, this step may still improve the quality of the input sequences. The most common technical artifacts that can be filtered out are the adapter sequences that contaminate the sequenced reads, and the low-quality bases that are usually found at the ends of the sequences. Commonly used tools in the field (trimmomatic (Bolger, Lohse, and Usadel 2014), trimGalore (Andrews 2010)) are again not written in R, however there are alternative R libraries for carrying out the same functionality, for instance, QuasR (Gaidatzis, Lerch, Hahne, et al. 2015) (see QuasR::preprocessReads function) and ShortRead (Morgan, Anders, Lawrence, et al. 2009) (see ShortRead::filterFastq function). Some of these approaches are introduced in Chapter 7.
The sequencing quality control and read pre-processing steps can be visited multiple times until achieving a satisfactory level of quality in the sequence data before moving on to the downstream analysis steps.
### 8.3.2 Alignment
Once a decent level of quality in the sequences is reached, the expression level of the genes can be quantified by first mapping the sequences to a reference genome, and secondly matching the aligned reads to the gene annotations, in order to count the number of reads mapping to each gene. If the species under study has a well-annotated transcriptome, the reads can be aligned to the transcript sequences instead of the reference genome. In cases where there is no good quality reference genome or transcriptome, it is possible to de novo assemble the transcriptome from the sequences and then quantify the expression levels of genes/transcripts.
For RNA-seq read alignments, apart from the availability of reference genomes and annotations, probably the most important factor to consider when choosing an alignment tool is whether the alignment method considers the absence of intronic regions in the sequenced reads, while the target genome may contain introns. Therefore, it is important to choose alignment tools that take into account alternative splicing: in the basic setting, a read that originates from a cDNA sequence spanning an exon-exon junction needs to be split into two parts when aligned against the genome. There are various tools that consider this factor such as STAR (Dobin, Davis, Schlesinger, et al. 2013), Tophat2 (Kim, Pertea, Trapnell, et al. 2013), Hisat2 (Kim, Langmead, and Salzberg 2015), and GSNAP (Wu, Reeder, Lawrence, et al. 2016). Most alignment tools are written in C/C++ languages because of performance concerns. There are also R libraries that can do short read alignments; these are discussed in Chapter 7.
### 8.3.3 Quantification
After the reads are aligned to the target, a SAM/BAM file sorted by coordinates should have been obtained. The BAM file contains all alignment-related information of all the reads that have been attempted to be aligned to the target sequence. This information consists of - most basically - the genomic coordinates (chromosome, start, end, strand) of where a sequence was matched (if at all) in the target, specific insertions/deletions/mismatches that describe the differences between the input and target sequences. These pieces of information are used along with the genomic coordinates of genome annotations such as gene/transcript models in order to count how many reads have been sequenced from a gene/transcript. As simple as it may sound, it is not a trivial task to assign reads to a gene/transcript just by comparing the genomic coordinates of the annotations and the sequences, because of confounding factors such as overlapping gene annotations, overlapping exon annotations from different transcript isoforms of a gene, and overlapping annotations from opposite DNA strands in the absence of a strand-specific sequencing protocol. Therefore, for read counting, it is important to consider:
1. Strand specificity of the sequencing protocol: Are the reads expected to originate from the forward strand, reverse strand, or unspecific?
2. Counting mode:
   - When counting at the gene level: when there are overlapping annotations, which features should the read be assigned to? Tools usually have a parameter that lets the user select a counting mode.
   - When counting at the transcript level: when there are multiple isoforms of a gene, which isoform should the read be assigned to? This is usually an algorithmic consideration that is not modifiable by the end-user.
Some tools can couple alignment to quantification (e.g. STAR), while some assume the alignments are already calculated and require BAM files as input. On the other hand, in the presence of good transcriptome annotations, alignment-free methods (Salmon (Patro, Duggal, Love, et al. 2017), Kallisto (Bray, Pimentel, Melsted, et al. 2016), Sailfish (Patro, Mount, and Kingsford 2014)) can also be used to estimate the expression levels of transcripts/genes. There are also reference-free quantification methods that can first de novo assemble the transcriptome and estimate the expression levels based on this assembly. Such a strategy can be useful in discovering novel transcripts or may be required in cases when a good reference does not exist. If a reference transcriptome exists but of low quality, a reference-based transcriptome assembler such as Cufflinks (Trapnell, Williams, Pertea, et al. 2010) can be used to improve the transcriptome. In case there is no available transcriptome annotation, a de novo assembler such as Trinity (Haas, Papanicolaou, Yassour, et al. 2013) or Trans-ABySS (Robertson, Schein, Chiu, et al. 2010) can be used to assemble the transcriptome from scratch.
Within R, quantification can be done using:

- Rsubread::featureCounts
- QuasR::qCount
- GenomicAlignments::summarizeOverlaps
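To illustrate how the strandedness and counting-mode considerations above translate into actual parameters, below is a sketch of a gene-level read counting call with Rsubread::featureCounts(); the BAM and GTF file names are placeholders, and the strandSpecific, allowMultiOverlap and isPairedEnd settings must match the actual library protocol.

library(Rsubread)
# count reads over exons and summarize them per gene (file names are placeholders)
fc <- featureCounts(files = c("case_1.bam", "ctrl_1.bam"),
                    annot.ext = "genes.gtf",
                    isGTFAnnotationFile = TRUE,
                    GTF.featureType = "exon",   # features to count over
                    GTF.attrType = "gene_id",   # summarize counts at the gene level
                    strandSpecific = 0,         # 0 = unstranded, 1/2 = stranded protocols
                    allowMultiOverlap = FALSE,  # a counting-mode choice for overlapping features
                    isPairedEnd = FALSE)
# fc$counts holds the gene-by-sample count matrix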
### 8.3.4 Within sample normalization of the read counts
The most common application after a gene’s expression is quantified (as the number of reads aligned to the gene) is to compare the gene’s expression in different conditions, for instance, in a case-control setting (e.g. disease versus normal) or in a time-series (e.g. along different developmental stages). Making such comparisons helps identify the genes that might be responsible for a disease or an impaired developmental trajectory. However, there are multiple caveats that need to be addressed before making a comparison between the read counts of a gene in different conditions (Maza, Frasse, Senin, et al. 2013).
• Library size (i.e. sequencing depth) varies between samples coming from different lanes of the flow cell of the sequencing machine.
• Longer genes will have a higher number of reads.
• Library composition (i.e. relative size of the studied transcriptome) can be different in two different biological conditions.
• GC content biases across different samples may lead to a biased sampling of genes (Risso, Schwartz, Sherlock, et al. 2011).
• Read coverage of a transcript can be biased and non-uniformly distributed along the transcript (Mortazavi, Williams, McCue, et al. 2008).
Therefore these factors need to be taken into account before making comparisons.
The most basic normalization approaches address the sequencing depth bias. Such procedures normalize the read counts per gene by dividing each gene’s read count by a certain value and multiplying it by 10^6. These normalized values are usually referred to as CPM (counts per million reads):
• Total Counts Normalization (divide counts by the sum of all counts)
• Upper Quartile Normalization (divide counts by the upper quartile value of the counts)
• Median Normalization (divide counts by the median of all counts)
Popular metrics that improve upon CPM are RPKM/FPKM (reads/fragments per kilobase per million mapped reads) and TPM (transcripts per million). RPKM is obtained by dividing the CPM value of a gene by the gene’s length in kilobases. FPKM is the same as RPKM, but counts fragments (read pairs) rather than reads and is used for paired-end libraries. Thus, RPKM/FPKM methods account for, firstly, the library size, and secondly, the gene lengths.

TPM also controls for both the library size and the gene lengths; however, with the TPM method, the read counts are first normalized by the gene length (per kilobase), and then the gene-length normalized values are divided by the sum of the gene-length normalized values and multiplied by 10^6. Thus, the sum of normalized values for TPM will always be equal to 10^6 for each library, while the sum of RPKM/FPKM values does not sum to 10^6. Therefore, it is easier to interpret TPM values than RPKM/FPKM values.
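In fact, the two metrics are directly related; rescaling the RPKM values of a sample so that they sum to 10^6 yields the TPM values, which follows directly from the definitions above:

$$\mathrm{TPM}_i = \frac{\mathrm{RPKM}_i}{\sum_j \mathrm{RPKM}_j} \times 10^{6}$$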
### 8.3.5 Computing different normalization schemes in R
Here we will assume that there is an RNA-seq count table comprising raw counts, meaning the number of reads counted for each gene has not been exposed to any kind of normalization and consists of integers. The rows of the count table correspond to the genes and the columns represent different samples. Here we will use a subset of the RNA-seq count table from a colorectal cancer study. We have filtered the original count table for only protein-coding genes (to improve the speed of calculation) and also selected only five metastasized colorectal cancer samples along with five normal colon samples. There is an additional column width that contains the length of the corresponding gene in the unit of base pairs. The length of the genes are important to compute RPKM and TPM values. The original count tables can be found from the recount2 database (https://jhubiostatistics.shinyapps.io/recount/) using the SRA project code SRP029880, and the experimental setup along with other accessory information can be found from the NCBI Trace archive using the SRA project code SRP029880.
#colorectal cancer
counts_file <- system.file("extdata/rna-seq/SRP029880.raw_counts.tsv",
package = "compGenomRData")
coldata_file <- system.file("extdata/rna-seq/SRP029880.colData.tsv",
package = "compGenomRData")
counts <- as.matrix(read.table(counts_file, header = T, sep = '\t'))
#### 8.3.5.1 Computing CPM
Let’s do a summary of the counts table. Due to space limitations, the summary for only the first three columns is displayed.
summary(counts[,1:3])
## CASE_1 CASE_2 CASE_3
## Min. : 0 Min. : 0 Min. : 0
## 1st Qu.: 5155 1st Qu.: 6464 1st Qu.: 3972
## Median : 80023 Median : 85064 Median : 64145
## Mean : 295932 Mean : 273099 Mean : 263045
## 3rd Qu.: 252164 3rd Qu.: 245484 3rd Qu.: 210788
## Max. :205067466 Max. :105248041 Max. :222511278
To compute the CPM values for each sample (excluding the width column):
cpm <- apply(subset(counts, select = c(-width)), 2,
function(x) x/sum(as.numeric(x)) * 10^6)
Check that the sum of each column after normalization equals 10^6 (the width column was excluded from the calculation).
colSums(cpm)
## CASE_1 CASE_2 CASE_3 CASE_4 CASE_5 CTRL_1 CTRL_2 CTRL_3 CTRL_4 CTRL_5
## 1e+06 1e+06 1e+06 1e+06 1e+06 1e+06 1e+06 1e+06 1e+06 1e+06
#### 8.3.5.2 Computing RPKM
# create a vector of gene lengths
geneLengths <- as.vector(subset(counts, select = c(width)))
# compute rpkm
rpkm <- apply(X = subset(counts, select = c(-width)),
MARGIN = 2,
FUN = function(x) {
10^9 * x / geneLengths / sum(as.numeric(x))
})
Check the column sums of the RPKM matrix. Notice that the sums differ from sample to sample.
colSums(rpkm)
## CASE_1 CASE_2 CASE_3 CASE_4 CASE_5 CTRL_1 CTRL_2 CTRL_3
## 158291.0 153324.2 161775.4 173047.4 172761.4 210032.6 301764.2 241418.3
## CTRL_4 CTRL_5
## 291674.5 252005.7
#### 8.3.5.3 Computing TPM
#find gene length normalized values
rpk <- apply( subset(counts, select = c(-width)), 2,
function(x) x/(geneLengths/1000))
#normalize by the sample size using rpk values
tpm <- apply(rpk, 2, function(x) x / sum(as.numeric(x)) * 10^6)
Check the column sums of the TPM matrix. Notice that the sum for every sample is equal to 10^6.
colSums(tpm)
## CASE_1 CASE_2 CASE_3 CASE_4 CASE_5 CTRL_1 CTRL_2 CTRL_3 CTRL_4 CTRL_5
## 1e+06 1e+06 1e+06 1e+06 1e+06 1e+06 1e+06 1e+06 1e+06 1e+06
None of these metrics (CPM, RPKM/FPKM, TPM) account for the other important confounding factor when comparing expression levels of genes across samples: the library composition, which may also be referred to as the relative size of the compared transcriptomes. This factor is not dependent on the sequencing technology; it is rather biological. For instance, when comparing transcriptomes of different tissues, there can be sets of genes in one tissue that consume a big chunk of the reads, while in the other tissues they are not expressed at all. This kind of imbalance in the composition of compared transcriptomes can lead to wrong conclusions about which genes are actually differentially expressed. This consideration is addressed in two popular R packages: DESeq2 (Love, Huber, and Anders 2014) and edgeR (Robinson, McCarthy, and Smyth 2010), each with a different algorithm. edgeR uses a normalization procedure called Trimmed Mean of M-values (TMM). DESeq2 implements a “median of ratios” normalization: for each gene, its count in a sample is divided by the geometric mean of that gene’s counts across all samples, and the size factor of a sample is the median of these ratios over all genes. The raw read counts of a sample are finally divided by its size factor (the median of ratios) to obtain the normalized counts.
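To make the median-of-ratios idea concrete, here is a minimal hand-rolled sketch applied to the raw count table used above (the width column is excluded); the variable names are our own and this is only an illustration of the principle, not the exact DESeq2 implementation, which handles zeros and other edge cases more carefully.

# manual "median of ratios" size factors (illustrative sketch)
rawCounts <- subset(counts, select = c(-width))
# geometric mean of each gene across samples; genes containing a zero end up
# with a geometric mean of 0 and are excluded from the median below
geoMeans <- exp(rowMeans(log(rawCounts)))
use <- is.finite(geoMeans) & geoMeans > 0
# per-sample size factor: median of the gene-wise ratios to the geometric mean
sizeFactorsManual <- apply(rawCounts[use, ], 2,
                           function(x) median(x / geoMeans[use]))
# normalized counts: divide each sample (column) by its size factor
normCounts <- sweep(rawCounts, 2, sizeFactorsManual, FUN = "/")
# DESeq2 offers the same idea via DESeq2::estimateSizeFactorsForMatrix(rawCounts)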
### 8.3.6 Exploratory analysis of the read count table
A typical quality control, in this case interrogating the RNA-seq experiment design, is to measure the similarity of the samples with each other in terms of the quantified expression level profiles across a set of genes. One important observation to make is whether the most similar samples to any given sample are the biological replicates of that sample. This can be computed using unsupervised clustering techniques such as hierarchical clustering and visualized as a heatmap with dendrograms. Another commonly applied technique is a dimensionality reduction method called Principal Component Analysis (PCA), whose results are visualized as a two-dimensional (or in some cases three-dimensional) scatter plot. In order to find out more about the clustering methods and PCA, please refer to Chapter 4.
#### 8.3.6.1 Clustering
We can combine clustering and visualization of the clustering results by using heatmap functions that are available in a variety of R libraries. The basic R installation comes with the stats::heatmap function. However, there are other libraries available in CRAN (e.g. pheatmap (Kolde 2019)) or Bioconductor (e.g. ComplexHeatmap (Z. Gu, Eils, and Schlesner 2016a)) that come with more flexibility and more appealing visualizations.
Here we demonstrate a heatmap using the pheatmap package and the previously calculated tpm matrix. As these matrices can be quite large, both computing the clustering and rendering the heatmaps can take a lot of resources and time. Therefore, a quick and informative way to compare samples is to select a subset of genes that are, for instance, most variable across samples, and use that subset to do the clustering and visualization.
Let’s select the top 100 most variable genes among the samples.
#compute the variance of each gene across samples
V <- apply(tpm, 1, var)
#sort the results by variance in decreasing order
#and select the top 100 genes
selectedGenes <- names(V[order(V, decreasing = T)][1:100])
Now we can quickly produce a heatmap where samples and genes are clustered (see Figure 8.1 ).
library(pheatmap)
pheatmap(tpm[selectedGenes,], scale = 'row', show_rownames = FALSE)
We can also overlay some annotation tracks to observe the clusters. Here it is important to observe whether the replicates of the same sample cluster most closely with each other, or not. Overlaying the heatmap with such annotation and displaying sample groups with distinct colors helps quickly see if there are samples that don’t cluster as expected (see Figure 8.2 ).
colData <- read.table(coldata_file, header = T, sep = '\t',
stringsAsFactors = TRUE)
pheatmap(tpm[selectedGenes,], scale = 'row',
show_rownames = FALSE,
annotation_col = colData)
#### 8.3.6.2 PCA
Let’s make a PCA plot to see the clustering of replicates as a scatter plot in two dimensions (Figure 8.3).
library(stats)
library(ggplot2)
library(ggfortify)
#transpose the matrix
M <- t(tpm[selectedGenes,])
# transform the counts to log2 scale
M <- log2(M + 1)
#compute PCA
pcaResults <- prcomp(M)
#plot PCA results making use of ggplot2's autoplot function
#ggfortify is needed to let ggplot2 know about PCA data structure.
autoplot(pcaResults, data = colData, colour = 'group')
We should observe here whether the samples from the case group (CASE) and samples from the control group (CTRL) can be split into two distinct clusters on the scatter plot of the first two largest principal components.
We can use the summary function to summarize the PCA results to observe the contribution of the principal components in the explained variation.
summary(pcaResults)
## Importance of components:
## PC1 PC2 PC3 PC4 PC5 PC6 PC7
## Standard deviation 24.396 2.50514 2.39327 1.93841 1.79193 1.6357 1.46059
## Proportion of Variance 0.957 0.01009 0.00921 0.00604 0.00516 0.0043 0.00343
## Cumulative Proportion 0.957 0.96706 0.97627 0.98231 0.98747 0.9918 0.99520
## PC8 PC9 PC10
## Standard deviation 1.30902 1.12657 4.616e-15
## Proportion of Variance 0.00276 0.00204 0.000e+00
## Cumulative Proportion 0.99796 1.00000 1.000e+00
#### 8.3.6.3 Correlation plots
Another complementary approach to see the reproducibility of the experiments is to compute the correlation scores between each pair of samples and draw a correlation plot.
Let’s first compute pairwise correlation scores between every pair of samples.
library(stats)
correlationMatrix <- cor(tpm)
Let’s have a look at how the correlation matrix looks (8.1) (showing only two samples each of case and control samples):
TABLE 8.1: Correlation scores between samples
CASE_1 CASE_2 CTRL_1 CTRL_2
CASE_1 1.0000000 0.9924606 0.9594011 0.9635760
CASE_2 0.9924606 1.0000000 0.9725646 0.9793835
CTRL_1 0.9594011 0.9725646 1.0000000 0.9879862
CTRL_2 0.9635760 0.9793835 0.9879862 1.0000000
We can also draw more visually appealing correlation plots using the corrplot package (Figure 8.4). Using the addrect argument, we can split clusters into groups and surround them with rectangles. By setting the addCoef.col argument to ‘white’, we can display the correlation coefficients as numbers in white color.
library(corrplot)
corrplot(correlationMatrix, order = 'hclust',
         addrect = 2, addCoef.col = 'white',
         number.cex = 0.7)
Here pairwise correlation levels are visualized as colored circles. Blue indicates positive correlation, while Red indicates negative correlation.
We could also plot this correlation matrix as a heatmap (Figure 8.5). As all the samples have a high pairwise correlation score, using a heatmap instead of a corrplot helps to see the differences between samples more easily. The annotation_col argument helps to display sample annotations and the cutree_cols argument is set to 2 to split the clusters into two groups based on the hierarchical clustering results.
library(pheatmap)
# split the clusters into two based on the clustering similarity
pheatmap(correlationMatrix,
annotation_col = colData,
cutree_cols = 2)
### 8.3.7 Differential expression analysis
Differential expression analysis allows us to test tens of thousands of hypotheses (one test for each gene) against the null hypothesis that the activity of the gene stays the same in two different conditions. There are multiple limiting factors that influence the power of detecting genes that have real changes between two biological conditions. Among these are the limited number of biological replicates, non-normality of the distribution of the read counts, and higher uncertainty of measurements for lowly expressed genes than highly expressed genes (Love, Huber, and Anders 2014). Tools such as edgeR and DESeq2 address these limitations using sophisticated statistical models in order to maximize the amount of knowledge that can be extracted from such noisy datasets. In essence, these models assume that for each gene, the read counts are generated by a negative binomial distribution. This is a popular distribution that is used for modeling count data. This distribution can be specified with a mean parameter, $$m$$, and a dispersion parameter, $$\alpha$$. The dispersion parameter $$\alpha$$ is directly related to the variance as the variance of this distribution is formulated as: $$m+\alpha m^{2}$$. Therefore, estimating these parameters is crucial for differential expression tests. The methods used in edgeR and DESeq2 use dispersion estimates from other genes with similar counts to precisely estimate the per-gene dispersion values. With accurate dispersion parameter estimates, one can estimate the variance more precisely, which in turn improves the result of the differential expression test. Although statistical models are different, the process here is similar to the moderated t-test and qualifies as an empirical Bayes method which we introduced in Chapter 3. There, we calculated gene-wise variability and shrunk each gene-wise variability towards the median variability of all genes. In the case of RNA-seq the dispersion coefficient $$\alpha$$ is shrunk towards the value of dispersion from other genes with similar read counts.
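To get a feel for this mean-variance relationship, the short simulation below (with arbitrary, purely illustrative parameter values) draws counts from a negative binomial distribution and checks that the observed variance is close to $$m+\alpha m^{2}$$; note that in R’s rnbinom() parameterization, size corresponds to $$1/\alpha$$.

# simulate negative binomial counts and check the variance formula
set.seed(42)
m     <- 100    # mean parameter (illustrative value)
alpha <- 0.25   # dispersion parameter (illustrative value)
x <- rnbinom(n = 1e5, mu = m, size = 1 / alpha)
mean(x)  # close to m = 100
var(x)   # close to m + alpha * m^2 = 2600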
Now let us take a closer look at the DESeq2 workflow and how it calculates differential expression:
1. The read counts are normalized by computing size factors, which addresses the differences not only in the library sizes, but also the library compositions.
2. For each gene, a dispersion estimate is calculated. The dispersion value computed by DESeq2 is equal to the squared coefficient of variation (the coefficient of variation being the standard deviation divided by the mean).
3. A line is fit across the dispersion estimates of all genes computed in step 2 versus the mean normalized counts of the genes.
4. Dispersion values of each gene are shrunk towards the fitted line in step 3.
5. A Generalized Linear Model is fitted which considers additional confounding variables related to the experimental design such as sequencing batches, treatment, temperature, patient’s age, sequencing technology, etc., and uses negative binomial distribution for fitting count data.
6. For a given contrast (e.g. treatment type: drug-A versus untreated), a test for differential expression is carried out against the null hypothesis that the log fold change of the normalized counts of the gene in the given pair of groups is exactly zero.
7. It adjusts p-values for multiple-testing.
In order to carry out a differential expression analysis using DESeq2, three kinds of inputs are necessary:
1. The read count table: This table must be raw read counts as integers that are not processed in any form by a normalization technique. The rows represent features (e.g. genes, transcripts, genomic intervals) and columns represent samples.
2. A colData table: This table describes the experimental design.
3. A design formula: This formula is needed to describe the variable of interest in the analysis (e.g. treatment status) along with (optionally) other covariates (e.g. batch, temperature, sequencing technology).
Let’s define these inputs:
#remove the 'width' column
countData <- as.matrix(subset(counts, select = c(-width)))
#define the experimental setup
colData <- read.table(coldata_file, header = T, sep = '\t',
                      stringsAsFactors = TRUE)
#define the design formula
designFormula <- "~ group"
Now, we are ready to run DESeq2.
library(DESeq2)
library(stats)
#create a DESeq dataset object from the count matrix and the colData
dds <- DESeqDataSetFromMatrix(countData = countData,
colData = colData,
design = as.formula(designFormula))
#print dds object to see the contents
print(dds)
## class: DESeqDataSet
## dim: 19719 10
## assays(1): counts
## rownames(19719): TSPAN6 TNMD ... MYOCOS HSFX3
## rowData names(0):
## colnames(10): CASE_1 CASE_2 ... CTRL_4 CTRL_5
## colData names(2): source_name group
The DESeqDataSet object contains all the information about the experimental setup, the read counts, and the design formulas. Certain functions can be used to access this information separately: rownames(dds) shows which features are used in the study (e.g. genes), colnames(dds) displays the studied samples, counts(dds) displays the count table, and colData(dds) displays the experimental setup.
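For example, a quick inspection of the object through these accessors could look like this:

# peek into the DESeqDataSet object via its accessor functions
head(rownames(dds))        # feature (gene) names
colnames(dds)              # sample names
head(DESeq2::counts(dds))  # the raw count table
colData(dds)               # the experimental setup table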
Remove genes that have almost no information in any of the given samples.
#For each gene, we count the total number of reads for that gene in all samples
#and remove those that don't have more than 1 read in total.
dds <- dds[ rowSums(DESeq2::counts(dds)) > 1, ]
Now, we can use the DESeq() function of DESeq2, which is a wrapper function that implements estimation of size factors to normalize the counts, estimation of dispersion values, and computing a GLM model based on the experimental design formula. This function returns a DESeqDataSet object, which is an updated version of the dds variable that we pass to the function as input.
dds <- DESeq(dds)
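If you are curious about what DESeq() does under the hood, the wrapper roughly corresponds to the following individual calls; this is a sketch assuming the default Wald-test workflow, and the single DESeq() call above is all that is needed in practice.

# the individual steps wrapped by DESeq() (illustrative, default Wald-test workflow)
dds <- estimateSizeFactors(dds)  # normalize counts via size factors
dds <- estimateDispersions(dds)  # estimate, fit and shrink per-gene dispersions
dds <- nbinomWaldTest(dds)       # fit the GLM and test the coefficients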
Now, we can compare and contrast the samples based on different variables of interest. In this case, we currently have only one variable, which is the group variable that determines if a sample belongs to the CASE group or the CTRL group.
#compute the contrast for the 'group' variable where 'CTRL'
#samples are used as the control group.
DEresults = results(dds, contrast = c("group", 'CASE', 'CTRL'))
#sort results by increasing p-value
DEresults <- DEresults[order(DEresults$pvalue),]

Thus we have obtained a table containing the differential expression status of case samples compared to the control samples. It is important to note that the sequence of the elements provided in the contrast argument determines which group of samples are to be used as the control. This impacts the way the results are interpreted, for instance, if a gene is found up-regulated (has a positive log2 fold change), the up-regulation status is only relative to the factor that is provided as control. In this case, we used samples from the “CTRL” group as control and contrasted the samples from the “CASE” group with respect to the “CTRL” samples. Thus genes with a positive log2 fold change are called up-regulated in the case samples with respect to the control, while genes with a negative log2 fold change are down-regulated in the case samples. Whether the deregulation is significant or not warrants assessment of the adjusted p-values.

Let’s have a look at the contents of the DEresults table.

#shows a summary of the results
print(DEresults)

## log2 fold change (MLE): group CASE vs CTRL
## Wald test p-value: group CASE vs CTRL
## DataFrame with 19097 rows and 6 columns
##              baseMean log2FoldChange     lfcSE       stat       pvalue
##             <numeric>      <numeric> <numeric>  <numeric>    <numeric>
## CYP2E1        4829889        9.36024  0.215223    43.4909  0.00000e+00
## FCGBP        10349993       -7.57579  0.186433   -40.6355  0.00000e+00
## ASGR2          426422        8.01830  0.216207    37.0863 4.67898e-301
## GCKR           100183        7.82841  0.233376    33.5442 1.09479e-246
## APOA5          438054       10.20248  0.312503    32.6477 8.64906e-234
## ...               ...            ...       ...        ...          ...
## CCDC195       20.4981      -0.215607   2.89255 -0.0745386           NA
## SPEM3         23.6370     -22.154765   3.02785 -7.3170030           NA
## AC022167.5    21.8451      -2.056240   2.89545 -0.7101618           NA
## BX276092.9    29.9636       0.407326   2.89048  0.1409199           NA
## ETDC          22.5675      -1.795274   2.89421 -0.6202983           NA
##                    padj
##               <numeric>
## CYP2E1      0.00000e+00
## FCGBP       0.00000e+00
## ASGR2      2.87741e-297
## GCKR       5.04945e-243
## APOA5      3.19133e-230
## ...                 ...
## CCDC195              NA
## SPEM3                NA
## AC022167.5           NA
## BX276092.9           NA
## ETDC                 NA

The first three lines in this output show the contrast and the statistical test that were used to compute these results, along with the dimensions of the resulting table (number of columns and rows). Below these lines is the actual table with 6 columns: baseMean represents the average normalized expression of the gene across all considered samples. log2FoldChange represents the base-2 logarithm of the fold change of the normalized expression of the gene in the given contrast. lfcSE represents the standard error of the log2 fold change estimate, and stat is the statistic calculated in the contrast, which is translated into a pvalue and adjusted for multiple testing in the padj column. To find out about the importance of adjusting for multiple testing, see Chapter 3.

#### 8.3.7.1 Diagnostic plots

At this point, before proceeding to do any downstream analysis and jumping to conclusions about the biological insights that are reachable with the experimental data at hand, it is important to do some more diagnostic tests to improve our confidence about the quality of the data and the experimental setup.

##### 8.3.7.1.1 MA plot

An MA plot is useful to observe if the data normalization worked well (Figure 8.6). The MA plot is a scatter plot where the x-axis denotes the average of normalized counts across samples and the y-axis denotes the log fold change in the given contrast. Most points are expected to be on the horizontal 0 line (most genes are not expected to be differentially expressed).

library(DESeq2)
DESeq2::plotMA(object = dds, ylim = c(-5, 5))

##### 8.3.7.1.2 P-value distribution

It is also important to observe the distribution of raw p-values (Figure 8.7). We expect to see a peak around low p-values and a uniform distribution at p-values above 0.1. Otherwise, adjustment for multiple testing does not work and the results are not meaningful.

library(ggplot2)
ggplot(data = as.data.frame(DEresults), aes(x = pvalue)) +
  geom_histogram(bins = 100)

##### 8.3.7.1.3 PCA plot

A final diagnosis is to check the biological reproducibility of the sample replicates in a PCA plot or a heatmap. To plot the PCA results, we need to extract the normalized counts from the DESeqDataSet object. It is possible to color the points in the scatter plot by the variable of interest, which helps to see if the replicates cluster well (Figure 8.8).

library(DESeq2)
# extract normalized counts from the DESeqDataSet object
countsNormalized <- DESeq2::counts(dds, normalized = TRUE)

# select top 500 most variable genes
selectedGenes <- names(sort(apply(countsNormalized, 1, var),
                            decreasing = TRUE)[1:500])

plotPCA(countsNormalized[selectedGenes,],
        col = as.numeric(colData$group), adj = 0.5,
        xlim = c(-0.5, 0.5), ylim = c(-0.5, 0.6))
Alternatively, the normalized counts can be transformed using the DESeq2::rlog function and DESeq2::plotPCA() can be readily used to plot the PCA results (Figure 8.9).
rld <- rlog(dds)
DESeq2::plotPCA(rld, ntop = 500, intgroup = 'group') +
ylim(-50, 50) + theme_bw()
##### 8.3.7.1.4 Relative Log Expression (RLE) plot
A similar plot to the MA plot is the RLE (Relative Log Expression) plot that is useful in finding out if the data at hand needs normalization (Gandolfo and Speed 2018). Sometimes, even the datasets normalized using the explained methods above may need further normalization due to unforeseen sources of variation that might stem from the library preparation, the person who carries out the experiment, the date of sequencing, the temperature changes in the laboratory at the time of library preparation, and so on and so forth. The RLE plot is a quick diagnostic that can be applied on the raw or normalized count matrices to see if further processing is required.
Let’s do RLE plots on the raw counts and normalized counts using the EDASeq package (Risso, Schwartz, Sherlock, et al. 2011) (see Figure 8.10).
library(EDASeq)
par(mfrow = c(1, 2))
plotRLE(countData, outline=FALSE, ylim=c(-4, 4),
        col=as.numeric(colData$group), main = 'Raw Counts')
plotRLE(DESeq2::counts(dds, normalized = TRUE), outline=FALSE, ylim=c(-4, 4),
        col = as.numeric(colData$group),
main = 'Normalized Counts')
Here the RLE plot is comprised of boxplots, where each box-plot represents the distribution of the relative log expression of the genes expressed in the corresponding sample. Each gene’s expression is divided by the median expression value of that gene across all samples. Then this is transformed to log scale, which gives the relative log expression value for a single gene. The RLE values for all the genes from a sample are visualized as a boxplot.
Ideally the boxplots are centered around the horizontal zero line and are as tightly distributed as possible (Risso, Ngai, Speed, et al. 2014). From the plots that we have made for the raw and normalized count data, we can observe how the normalized dataset has improved upon the raw count data for all the samples. However, in some cases, it is important to visualize RLE plots in combination with other diagnostic plots such as PCA plots, heatmaps, and correlation plots to see if there is more unwanted variation in the data, which can be further accounted for using packages such as RUVSeq (Risso, Ngai, Speed, et al. 2014) and sva (Leek, Johnson, Parker, et al. 2012). We will cover details about the RUVSeq package to account for unwanted sources of noise in RNA-seq datasets in later sections.
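To make the connection to the description above explicit, here is a minimal hand-rolled version of the RLE computation for the raw counts; the pseudocount of 1 and the variable names are our own additions for illustration, and plotRLE() remains the convenient way to do this in practice.

# manual RLE: log-ratio of each gene to its median across samples
rleRaw <- log2((countData + 1) / apply(countData + 1, 1, median))
# one boxplot per sample, analogous to plotRLE()
boxplot(rleRaw, outline = FALSE, las = 2, main = 'Raw Counts (manual RLE)')
abline(h = 0, col = 'red')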
### 8.3.8 Functional enrichment analysis
#### 8.3.8.1 GO term analysis
In a typical differential expression analysis, thousands of genes are found differentially expressed between two groups of samples. While prior knowledge of the functions of individual genes can give some clues about what kind of cellular processes have been affected, e.g. by a drug treatment, manually going through the whole list of thousands of genes would be very cumbersome and not be very informative in the end. Therefore a commonly used tool to address this problem is to do enrichment analyses of functional terms that appear associated to the given set of differentially expressed genes more often than expected by chance. The functional terms are usually associated to multiple genes. Thus, genes can be grouped into sets by shared functional terms. However, it is important to have an agreed upon controlled vocabulary on the list of terms used to describe the functions of genes. Otherwise, it would be impossible to exchange scientific results globally. That’s why initiatives such as the Gene Ontology Consortium have collated a list of Gene Ontology (GO) terms for each gene. GO term analysis is probably the most common analysis applied after a differential expression analysis. GO term analysis helps quickly find out systematic changes that can describe differences between groups of samples.
In R, one of the simplest ways to do functional enrichment analysis for a set of genes is via the gProfileR package.
Let’s select the genes that are significantly differentially expressed between the case and control samples. Let’s extract genes that have an adjusted p-value below 0.1 and that show a 2-fold change (either negative or positive) in the case compared to control. We will then feed this gene set into the gProfileR function. The top 10 detected GO terms are displayed in Table 8.2.
library(DESeq2)
library(gProfileR)
library(knitr)
# extract differential expression results
DEresults <- results(dds, contrast = c('group', 'CASE', 'CTRL'))
#remove genes with NA values
DE <- DEresults[!is.na(DEresults$padj),]
#select genes with adjusted p-values below 0.1
DE <- DE[DE$padj < 0.1,]
#select genes with absolute log2 fold change above 1 (two-fold change)
DE <- DE[abs(DE$log2FoldChange) > 1,]
#get the list of genes of interest
genesOfInterest <- rownames(DE)
#calculate enriched GO terms
goResults <- gprofiler(query = genesOfInterest,
                       organism = 'hsapiens',
                       src_filter = 'GO',
                       hier_filtering = 'moderate')

TABLE 8.2: Top GO terms sorted by p-value.
     p.value term.size precision domain term.name
64         0      2740     0.223     CC plasma membrane part
23         0      1609     0.136     BP ion transport
16         0      3656     0.258     BP regulation of biological quality
30         0       385     0.042     BP extracellular structure organization
34         0      7414     0.452     BP multicellular organismal process
78         0      1069     0.090     MF transmembrane transporter activity
47         0      1073     0.090     BP organic acid metabolic process
5          0       975     0.083     BP response to drug
18         0      1351     0.107     BP biological adhesion
31         0      4760     0.302     BP system development

#### 8.3.8.2 Gene set enrichment analysis

A gene set is a collection of genes with some common property. This shared property among a set of genes could be a GO term, a common biological pathway, a shared interaction partner, or any biologically relevant commonality that is meaningful in the context of the pursued experiment. Gene set enrichment analysis (GSEA) is a valuable exploratory analysis tool that can associate systematic changes to a high-level function rather than individual genes. Analysis of coordinated changes of expression levels of gene sets can provide complementary benefits on top of per-gene-based differential expression analyses. For instance, consider a gene set belonging to a biological pathway where each member of the pathway displays a slight deregulation in a disease sample compared to a normal sample. In such a case, individual genes might not be picked up by the per-gene-based differential expression analysis. Thus, the GO/Pathway enrichment on the differentially expressed list of genes would not show an enrichment of this pathway. However, the additive effect of slight changes of the genes could amount to a large effect at the level of the gene set, thus the pathway could be detected as a significant pathway that could explain the mechanistic problems in the disease sample.

We use the bioconductor package gage (Luo, Friedman, Shedden, et al. 2009) to demonstrate how to do GSEA using normalized expression data of the samples as input. Here we are using only two gene sets: one from the top GO term discovered from the previous GO analysis, one that we compile by randomly selecting a list of genes. However, annotated gene sets can be used from databases such as MSIGDB (Subramanian, Tamayo, Mootha, et al. 2005), which compile gene sets from a variety of resources such as KEGG (Kanehisa, Sato, Kawashima, et al. 2016) and REACTOME (Antonio Fabregat, Jupe, Matthews, et al. 2018).

#Let's define the first gene set as the list of genes from one of the
#significant GO terms found in the GO analysis.
#order go results by pvalue
goResults <- goResults[order(goResults$p.value),]
#restrict the terms that have at most 100 genes overlapping with the query
go <- goResults[goResults$overlap.size < 100,]
# use the top term from this table to create a gene set
geneSet1 <- unlist(strsplit(go[1,]$intersection, ','))
#Define another gene set by just randomly selecting 25 genes from the counts table
#get normalized counts from DESeq2 results
normalizedCounts <- DESeq2::counts(dds, normalized = TRUE)
geneSet2 <- sample(rownames(normalizedCounts), 25)
geneSets <- list('top_GO_term' = geneSet1,
'random_set' = geneSet2)
Using the defined gene sets, we’d like to do a group comparison between the case samples with respect to the control samples.
library(gage)
#use the normalized counts to carry out a GSEA.
gseaResults <- gage(exprs = log2(normalizedCounts+1),
                    ref = match(rownames(colData[colData$group == 'CTRL',]),
                                colnames(normalizedCounts)),
                    samp = match(rownames(colData[colData$group == 'CASE',]),
colnames(normalizedCounts)),
gsets = geneSets, compare = 'as.group')
We can observe if there is a significant up-regulation or down-regulation of the gene set in the case group compared to the controls by accessing gseaResults$greater as in Table 8.3 or gseaResults$less as in Table 8.4.
TABLE 8.3: Up-regulation statistics
p.geomean stat.mean p.val q.val set.size exp1
top_GO_term 0.0000 7.1994 0.0000 0.0000 32 0.0000
random_set 0.5832 -0.2113 0.5832 0.5832 25 0.5832
TABLE 8.4: Down-regulation statistics
p.geomean stat.mean p.val q.val set.size exp1
random_set 0.4168 -0.2113 0.4168 0.8336 25 0.4168
top_GO_term 1.0000 7.1994 1.0000 1.0000 32 1.0000
We can see that the random gene set shows no significant up- or down-regulation (Tables 8.3 and 8.4), while the gene set we defined using the top GO term shows a significant up-regulation (adjusted p-value < 0.0007, Table 8.3). It is worthwhile to visualize these systematic changes in a heatmap as in Figure 8.11.
library(pheatmap)
# get the expression data for the gene set of interest
M <- normalizedCounts[rownames(normalizedCounts) %in% geneSet1, ]
# log transform the counts for visualization
# scaling by row helps visualizing
# relative change of expression of a gene in multiple conditions
pheatmap(log2(M+1),
annotation_col = colData,
show_rownames = TRUE,
fontsize_row = 8,
scale = 'row',
cutree_cols = 2,
cutree_rows = 2)
We can see that almost all genes from this gene set display an increased level of expression in the case samples compared to the controls.
### 8.3.9 Accounting for additional sources of variation
When doing a differential expression analysis in a case-control setting, the variable of interest, i.e. the variable that explains the separation of the case samples from the control, is usually the treatment, genotypic differences, a certain phenotype, etc. However, in reality, depending on how the experiment and the sequencing were designed, there may be additional factors that might contribute to the variation between the compared samples. Sometimes, such variables are known, for instance, the date of the sequencing for each sample (batch information), or the temperature under which samples were kept. Such variables are not necessarily biological but rather technical, however, they still impact the measurements obtained from an RNA-seq experiment. Such variables can introduce systematic shifts in the obtained measurements. Here, we will demonstrate: firstly how to account for such variables using DESeq2, when the possible sources of variation are actually known; secondly, how to account for such variables when all we have is just a count table but we observe that the variable of interest only explains a small proportion of the differences between case and control samples.
#### 8.3.9.1 Accounting for covariates using DESeq2
For demonstration purposes, we will use a subset of the count table obtained for a heart disease study, where there are RNA-seq samples from subjects with normal and failing hearts. We again use a subset of the samples, focusing on 6 case and 6 control samples and we only consider protein-coding genes (for speed concerns).
Let’s import count and colData for this experiment.
counts_file <- system.file('extdata/rna-seq/SRP021193.raw_counts.tsv',
package = 'compGenomRData')
colData_file <- system.file('extdata/rna-seq/SRP021193.colData.tsv',
package = 'compGenomRData')
counts <- read.table(counts_file, header = T, sep = '\t')
colData <- read.table(colData_file, header = T, sep = '\t',
                      stringsAsFactors = TRUE)
Let’s take a look at how the samples cluster by calculating the TPM counts as displayed as a heatmap in Figure 8.12.
library(pheatmap)
#find gene length normalized values
geneLengths <- counts$width

rpk <- apply( subset(counts, select = c(-width)), 2,
              function(x) x/(geneLengths/1000))

#normalize by the sample size using rpk values
tpm <- apply(rpk, 2, function(x) x / sum(as.numeric(x)) * 10^6)

selectedGenes <- names(sort(apply(tpm, 1, var),
                            decreasing = T)[1:100])

pheatmap(tpm[selectedGenes,],
         scale = 'row',
         annotation_col = colData,
         show_rownames = FALSE)

Here we can see from the clusters that the dominating variable is the ‘Library Selection’ variable rather than the ‘diagnosis’ variable, which determines the state of the organ from which the sample was taken. Case and control samples are mixed between the two major clusters. However, ideally, we’d like to see a separation of the case and control samples regardless of the additional covariates. When testing for differential gene expression between conditions, such confounding variables can be accounted for using DESeq2. Below is a demonstration of how we instruct DESeq2 to account for the ‘library selection’ variable:

library(DESeq2)
# remove the 'width' column from the counts matrix
countData <- as.matrix(subset(counts, select = c(-width)))
# set up a DESeqDataSet object
dds <- DESeqDataSetFromMatrix(countData = countData,
                              colData = colData,
                              design = ~ LibrarySelection + group)

When constructing the design formula, it is very important to pay attention to the sequence of variables. We leave the variable of interest to the last and we can add as many covariates as we want to the beginning of the design formula. Please refer to the DESeq2 vignette if you’d like to learn more about how to construct design formulas.

Now, we can run the differential expression analysis as has been demonstrated previously.

# run DESeq
dds <- DESeq(dds)
# extract results
DEresults <- results(dds, contrast = c('group', 'CASE', 'CTRL'))

#### 8.3.9.2 Accounting for estimated covariates using RUVSeq

In cases when the sources of potential variation are not known, it is worthwhile to use tools such as RUVSeq or sva that can estimate potential sources of variation and clean up the counts table from those sources of variation. Later on, the estimated covariates can be integrated into DESeq2’s design formula.

Let’s see how to utilize the RUVseq package to first diagnose the problem and then solve it. Here, for demonstration purposes, we’ll use a count table from a lung carcinoma study in which a transcription factor (Ets homologous factor - EHF) is overexpressed and compared to the control samples with baseline EHF expression. Again, we only consider protein coding genes and use only five case and five control samples. The original data can be found on the recount2 database with the accession ‘SRP049988’.

counts_file <- system.file('extdata/rna-seq/SRP049988.raw_counts.tsv',
                           package = 'compGenomRData')
colData_file <- system.file('extdata/rna-seq/SRP049988.colData.tsv',
                            package = 'compGenomRData')
counts <- read.table(counts_file)
colData <- read.table(colData_file, header = T, sep = '\t',
                      stringsAsFactors = TRUE)
# simplify condition descriptions
colData$source_name <- ifelse(colData$group == 'CASE',
                              'EHF_overexpression', 'Empty_Vector')

Let’s start by making heatmaps of the samples using TPM counts (see Figure 8.13).

#find gene length normalized values
geneLengths <- counts$width
rpk <- apply( subset(counts, select = c(-width)), 2,
function(x) x/(geneLengths/1000))
#normalize by the sample size using rpk values
tpm <- apply(rpk, 2, function(x) x / sum(as.numeric(x)) * 10^6)
selectedGenes <- names(sort(apply(tpm, 1, var),
decreasing = T)[1:100])
pheatmap(tpm[selectedGenes,],
scale = 'row',
annotation_col = colData,
cutree_cols = 2,
show_rownames = FALSE)
We can see that the overall clusters look fine, except that one of the case samples (CASE_5) clusters more closely with the control samples than the other case samples. This mis-clustering could be a result of some batch effect, or any other technical preparation steps. However, the colData object doesn’t contain any variables that we can use to pinpoint the exact cause of this. So, let’s use RUVSeq to estimate potential covariates to see if the clustering results can be improved.
First, we set up the experiment:
library(EDASeq)
# remove 'width' column from counts
countData <- as.matrix(subset(counts, select = c(-width)))
# create a seqExpressionSet object using EDASeq package
set <- newSeqExpressionSet(counts = countData,
phenoData = colData)
Next, let’s make a diagnostic RLE plot on the raw count table.
# make an RLE plot and a PCA plot on raw count data and color samples by group
par(mfrow = c(1,2))
plotRLE(set, outline=FALSE, ylim=c(-4, 4), col=as.numeric(colData$group))
plotPCA(set, col = as.numeric(colData$group), adj = 0.5,
        ylim = c(-0.7, 0.5), xlim = c(-0.5, 0.5))
## make RLE and PCA plots on TPM matrix
par(mfrow = c(1,2))
plotRLE(tpm, outline=FALSE, ylim=c(-4, 4), col=as.numeric(colData$group))
plotPCA(tpm, col=as.numeric(colData$group), adj = 0.5,
        ylim = c(-0.3, 1), xlim = c(-0.5, 0.5))
Both RLE and PCA plots look better on normalized data (Figure 8.15) compared to raw data (Figure 8.14), but still suggest the necessity of further improvement, because the CASE_5 sample still clusters with the control samples. We haven’t yet accounted for the source of unwanted variation.
#### 8.3.9.3 Removing unwanted variation from the data
RUVSeq has three main functions for removing unwanted variation: RUVg(), RUVs(), and RUVr(). Here, we will demonstrate how to use RUVg and RUVs. RUVr will be left as an exercise for the reader.
##### 8.3.9.3.1 Using RUVg
One way of removing unwanted variation depends on using a set of reference genes that are not expected to change by the sources of technical variation. One strategy along this line is to use spike-in genes, which are artificially introduced into the sequencing run (Jiang, Schlesinger, Davis, et al. 2011). However, there are many sequencing datasets that don’t have this spike-in data available. In such cases, an empirical set of genes can be collected from the expression data by doing a differential expression analysis and discovering genes that are unchanged in the given conditions. These unchanged genes are used to clean up the data from systematic shifts in expression due to the unwanted sources of variation. Another strategy could be to use a set of house-keeping genes as negative controls, and use them as a reference to correct the systematic biases in the data. Let’s use a list of ~500 house-keeping genes compiled here: https://www.tau.ac.il/~elieis/HKG/HK_genes.txt.
library(RUVSeq)
#source for house-keeping genes collection:
#https://m.tau.ac.il/~elieis/HKG/HK_genes.txt
# the gene list is shipped with the compGenomRData package; the exact file
# path below is assumed to follow the package's extdata layout
HK_genes <- read.table(file = system.file("extdata/rna-seq/HK_genes.txt",
                                          package = 'compGenomRData'),
                       header = FALSE)
# let's take an intersection of the house-keeping genes with the genes available
# in the count table
house_keeping_genes <- intersect(rownames(set), HK_genes$V1)

We will now run RUVg() with different numbers of factors of unwanted variation. We will plot the PCA after removing the unwanted variation. We should be able to see which k values, i.e. numbers of factors, produce better separation between sample groups.

# now, we use these genes as the empirical set of genes as input to RUVg.
# we try different values of k and see how the PCA plots look
par(mfrow = c(2, 2))
for(k in 1:4) {
  set_g <- RUVg(x = set, cIdx = house_keeping_genes, k = k)
  plotPCA(set_g, col=as.numeric(colData$group), cex = 0.9, adj = 0.5,
          main = paste0('with RUVg, k = ',k),
          ylim = c(-1, 1), xlim = c(-1, 1))
}
Based on the separation of case and control samples in the PCA plots in Figure 8.16, we choose k = 1 and re-run the RUVg() function with the house-keeping genes to do more diagnostic plots.
# choose k = 1
set_g <- RUVg(x = set, cIdx = house_keeping_genes, k = 1)
Now let’s do diagnostics: compare the count matrices with or without RUVg processing, comparing RLE plots (Figure 8.17) and PCA plots (Figure 8.18) to see the effect of RUVg on the normalization and separation of case and control samples.
# RLE plots
par(mfrow = c(1,2))
plotRLE(set, outline=FALSE, ylim=c(-4, 4),
        col=as.numeric(colData$group), main = 'without RUVg')
plotRLE(set_g, outline=FALSE, ylim=c(-4, 4),
        col=as.numeric(colData$group), main = 'with RUVg')
# PCA plots
par(mfrow = c(1,2))
plotPCA(set, col=as.numeric(colData$group), adj = 0.5,
        main = 'without RUVg',
        ylim = c(-1, 0.5), xlim = c(-0.5, 0.5))
plotPCA(set_g, col=as.numeric(colData$group), adj = 0.5,
main = 'with RUVg',
ylim = c(-1, 0.5), xlim = c(-0.5, 0.5))
We can observe that using RUVg() with house-keeping genes as reference has improved the clusters, however not yielded ideal separation. Probably the effect that is causing the ‘CASE_5’ to cluster with the control samples still hasn’t been completely eliminated.
##### 8.3.9.3.2 Using RUVs
Another RUVSeq strategy, the RUVs() function, works better in the presence of replicates, as long as the experimental design is not confounded. Let’s see how it performs with this data. This time we don’t use the house-keeping genes; rather, we use all genes as input to RUVs(). This function estimates the correction factors by assuming that replicates should have constant biological variation; that is, the variation observed among replicates is treated as the unwanted variation.
# make a table of sample groups from colData
differences <- makeGroups(colData$group)
## looking for two different sources of unwanted variation (k = 2)
## use information from all genes in the expression object
par(mfrow = c(2, 2))
for(k in 1:4) {
  set_s <- RUVs(set, unique(rownames(set)), k=k, differences) #all genes
  plotPCA(set_s, col=as.numeric(colData$group),
cex = 0.9, adj = 0.5,
main = paste0('with RUVs, k = ',k),
ylim = c(-1, 1), xlim = c(-0.6, 0.6))
}
Based on the separation of case and control samples in the PCA plots in Figure 8.19, we can see that the samples are better separated even at k = 2 when using RUVs(). Here, we re-run the RUVs() function using k = 2, in order to do more diagnostic plots. We try to pick a value of k that is good enough to distinguish the samples by the condition of interest. While setting k to higher values could improve the percentage of variation explained by the first principal component to up to 61%, we try to avoid setting the value unnecessarily high to avoid removing factors that might also correlate with important biological differences between conditions.
# choose k = 2
set_s <- RUVs(set, unique(rownames(set)), k=2, differences)
Now let’s do diagnostics again: compare the count matrices with or without RUVs processing, comparing RLE plots (Figure 8.20) and PCA plots (Figure 8.21) to see the effect of RUVs on the normalization and separation of case and control samples.
## compare the initial and processed objects
## RLE plots
par(mfrow = c(1,2))
plotRLE(set, outline=FALSE, ylim=c(-4, 4),
        col=as.numeric(colData$group), main = 'without RUVs')
plotRLE(set_s, outline=FALSE, ylim=c(-4, 4),
        col=as.numeric(colData$group),
main = 'with RUVs')
## PCA plots
par(mfrow = c(1,2))
plotPCA(set, col=as.numeric(colData$group),
        main = 'without RUVs', adj = 0.5,
        ylim = c(-0.75, 0.75), xlim = c(-0.75, 0.75))
plotPCA(set_s, col=as.numeric(colData$group),
main = 'with RUVs', adj = 0.5,
ylim = c(-0.75, 0.75), xlim = c(-0.75, 0.75))
Let’s compare PCA results from RUVs and RUVg with the initial raw counts matrix. We will simply run the plotPCA() function on different normalization schemes. The resulting plots are in Figure 8.22:
par(mfrow = c(1,3))
plotPCA(countData, col=as.numeric(colData$group),
        main = 'without RUV - raw counts', adj = 0.5,
        ylim = c(-0.75, 0.75), xlim = c(-0.75, 0.75))
plotPCA(set_g, col=as.numeric(colData$group),
        main = 'with RUVg', adj = 0.5,
        ylim = c(-0.75, 0.75), xlim = c(-0.75, 0.75))
plotPCA(set_s, col=as.numeric(colData$group),
        main = 'with RUVs', adj = 0.5,
        ylim = c(-0.75, 0.75), xlim = c(-0.75, 0.75))
https://socratic.org/questions/what-is-the-limit-of-sqrt-x-4-6x-2-x-2-as-x-goes-to-infinity

What is the limit of (sqrt(x^4 - 6x^2)) - x^2 as x goes to infinity?
Oct 19, 2015
${\lim}_{x \rightarrow \infty} \left(\sqrt{{x}^{4} - 6 {x}^{2}} - {x}^{2}\right) = - 3$
Explanation:
${\lim}_{x \rightarrow \infty} \left(\sqrt{{x}^{4} - 6 {x}^{2}} - {x}^{2}\right)$ has indeterminate form $\infty - \infty$.
So we'll do some algebra:
$\sqrt{{x}^{4} - 6 {x}^{2}} - {x}^{2} = \frac{\left(\sqrt{{x}^{4} - 6 {x}^{2}} - {x}^{2}\right)}{1} \cdot \frac{\left(\sqrt{{x}^{4} - 6 {x}^{2}} + {x}^{2}\right)}{\left(\sqrt{{x}^{4} - 6 {x}^{2}} + {x}^{2}\right)}$
$= \frac{\left({x}^{4} - 6 {x}^{2}\right) - {x}^{4}}{\sqrt{{x}^{4} - 6 {x}^{2}} + {x}^{2}}$ $\text{ }$ (has indeterminate form $\frac{\infty}{\infty}$)
$= \frac{- 6 {x}^{2}}{\sqrt{{x}^{4} \left(1 - \frac{6}{{x}^{2}}\right)} + {x}^{2}}$ for $x \ne 0$
$= \frac{- 6 {x}^{2}}{\sqrt{{x}^{4}} \sqrt{1 - \frac{6}{{x}^{2}}} + {x}^{2}}$ for $x \ne 0$
$= \frac{- 6 {x}^{2}}{{x}^{2} \left(\sqrt{1 - \frac{6}{{x}^{2}}} + 1\right)}$ for $x \ne 0$
$= \frac{- 6}{\sqrt{1 - \frac{6}{{x}^{2}}} + 1}$ for $x \ne 0$
Now as $x \rightarrow \infty$, we see that the ratio $\rightarrow \frac{- 6}{2}$
So, ${\lim}_{x \rightarrow \infty} \left(\sqrt{{x}^{4} - 6 {x}^{2}} - {x}^{2}\right) = - 3$
Note that because the identity $\sqrt{{x}^{4}} = {x}^{2}$ is true for both positive and negative values of $x$, we also have
${\lim}_{x \rightarrow - \infty} \left(\sqrt{{x}^{4} - 6 {x}^{2}} - {x}^{2}\right) = - 3$
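A quick numerical check supports this. The short Python sketch below (not part of the original answer; any sufficiently large values of $x$ would do) evaluates the expression directly:

```python
import math

def f(x):
    # sqrt(x^4 - 6x^2) - x^2, the expression from the question
    return math.sqrt(x**4 - 6 * x**2) - x**2

# The values approach -3 as |x| grows, for both positive and negative x.
for x in [10.0, 100.0, 10000.0, -10000.0]:
    print(x, f(x))
```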
https://projecteuclid.org/euclid.nmj/1418307267

## Nagoya Mathematical Journal
### Deformations with constant Lê numbers and multiplicity of nonisolated hypersurface singularities
#### Abstract
We show that the possible jump of the order in a $1$-parameter deformation family of (possibly nonisolated) hypersurface singularities, with constant Lê numbers, is controlled by the powers of the deformation parameter. In particular, this applies to families of aligned singularities with constant topological type—a class for which the Lê numbers are “almost” constant. In the special case of families with isolated singularities—a case for which the constancy of the Lê numbers is equivalent to the constancy of the Milnor number—the result was proved by Greuel, Plénat, and Trotman.
As an application, we prove equimultiplicity for new families of nonisolated hypersurface singularities with constant topological type, partially answering the Zariski multiplicity conjecture.
#### Article information
Source: Nagoya Math. J., Volume 218 (2015), 29–50.
First available in Project Euclid: 11 December 2014
https://projecteuclid.org/euclid.nmj/1418307267
Digital Object Identifier: doi:10.1215/00277630-2847026
Mathematical Reviews number (MathSciNet): MR3345623
Zentralblatt MATH identifier: 06451293
#### Citation
Eyral, Christophe; Ruas, Maria Aparecida Soares. Deformations with constant Lê numbers and multiplicity of nonisolated hypersurface singularities. Nagoya Math. J. 218 (2015), 29--50. doi:10.1215/00277630-2847026. https://projecteuclid.org/euclid.nmj/1418307267
#### References
• [1] O. M. Abderrahmane, On the deformation with constant Milnor number and Newton polyhedron, preprint, http://www.rimath.saitama-u.ac.jp/lab.jp/Fukui/ould/dahm.pdf (accessed 21 November 2004).
• [2] J. Fernández de Bobadilla and T. Gaffney, The Lê numbers of the square of a function and their applications, J. Lond. Math. Soc. (2) 77 (2008), 545–557.
• [3] C. Eyral, Zariski’s multiplicity question and aligned singularities, C. R. Math. Acad. Sci. Paris 342 (2006), 183–186.
• [4] C. Eyral, Zariski’s multiplicity question—A survey, New Zealand J. Math. 36 (2007), 253–276.
• [5] W. Fulton, Intersection Theory, Ergeb. Math. Grenzgeb. (3) 2, Springer, Berlin, 1984.
• [6] G.-M. Greuel, Constant Milnor number implies constant multiplicity for quasihomogeneous singularities, Manuscripta Math. 56 (1986), 159–166.
• [7] G.-M. Greuel and G. Pfister, Advances and improvements in the theory of standard bases and syzygies, Arch. Math. (Basel) 66 (1996), 163–176.
• [8] H. A. Hamm and Lê Dũng Tráng, Un théorème de Zariski du type de Lefschetz, Ann. Sci. Éc. Norm. Supér. (4) 6 (1973), 317–355.
• [9] A. G. Kouchnirenko, Polyèdres de Newton et nombres de Milnor, Invent. Math. 32 (1976), 1–31.
• [10] Lê Dũng Tráng, “Topologie des singularités des hypersurfaces complexes” in Singularités à Cargèse (Cargèse, 1972), Astérisque 7/8, Soc. Math. France, Paris, 1973, 171–182.
• [11] Lê Dũng Tráng and C. P. Ramanujam, The invariance of Milnor number implies the invariance of the topological type, Amer. J. Math. 98 (1976), 67–78.
• [12] Lê Dũng Tráng and K. Saito, La constance du nombre de Milnor donne des bonnes stratifications, C. R. Acad. Sci. Paris Sér. A–B 277 (1973), 793–795.
• [13] D. B. Massey, Lê cycles and hypersurface singularities, Lecture Notes in Math. 1615, Springer, Berlin, 1995.
• [14] D. Massey, Numerical Control over Complex Analytic Singularities, Mem. Amer. Math. Soc. 163 (2003), no. 778.
• [15] D. O’Shea, Topologically trivial deformations of isolated quasihomogeneous hypersurface singularities are equimultiple, Proc. Amer. Math. Soc. 101 (1987), 260–262.
• [16] C. Plénat and D. Trotman, On the multiplicities of families of complex hypersurface-germs with constant Milnor number, Internat. J. Math. 24 (2013), article no. 1350021.
• [17] M. J. Saia and J. N. Tomazella, Deformations with constant Milnor number and multiplicity of complex hypersurfaces, Glasg. Math. J. 46 (2004), 121–130.
• [18] B. Teissier, “Cycles évanescents, sections planes et conditions de Whitney” in Singularités à Cargèse (Cargèse, 1972), Astérisque 7/8, Soc. Math. France, Paris, 1973, 285–362.
• [19] D. Trotman, Partial results on the topological invariance of the multiplicity of a complex hypersurface, lecture, Université Paris 7, France, 1977.
• [20] O. Zariski, Some open questions in the theory of singularities, Bull. Amer. Math. Soc. 77 (1971), 481–491.
• [21] O. Zariski, On the topology of algebroid singularities, Amer. J. Math. 54 (1932), 453–465.
http://mathhelpforum.com/differential-geometry/120484-image-principal-value-mapping.html

## image, principal value mapping
Sketch an image under $w = \text{Log} (z)$ of
i) the line $y=x$
ii) the line $x=e$
I have not done many problems with mappings under the principal value mapping of the logarithm. I know that
$\text{Log} (z) := \ln r + i \theta = \ln | z | + i \text{Arg} (z)$.
However, I do not see what these images would look like. I need a few pointers here on what to do.
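One way to build intuition is to tabulate a few image points numerically before sketching. The following is a minimal Python sketch (my own addition, assuming NumPy is available; the helper name is arbitrary):

```python
import numpy as np

# Principal value: Log z = ln|z| + i Arg(z), with Arg(z) in (-pi, pi].
def principal_log(z):
    return np.log(np.abs(z)) + 1j * np.angle(z)

# (i) points on the line y = x (the origin is excluded from the domain of Log)
t = np.array([-3.0, -1.0, -0.5, 0.5, 1.0, 3.0])
print(principal_log(t + 1j * t))     # Im(w) = pi/4 for t > 0 and -3*pi/4 for t < 0

# (ii) points on the vertical line x = e
y = np.array([-5.0, -1.0, 0.0, 1.0, 5.0])
print(principal_log(np.e + 1j * y))  # Re(w) >= 1 and Im(w) in (-pi/2, pi/2)
```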
http://www.lofoya.com/Solved/1500/a-shopkeeper-sells-three-items-p-q-and-r-and-incurs-a-loss-of

# Difficult Percentages Solved Question - Aptitude Discussion
Q. A shopkeeper sells three items $P$, $Q$ and $R$ and incurs a loss of $21\%$, $11\%$ and $10\%$ respectively. The overall loss percentage on selling $P$ and $Q$ items is $14.33\%$ and that of $Q$ and $R$ items is $10.4\%$. Find the overall loss percentage on selling the three items?
A. $15\%$   B. $12.16\%$ ✔   C. $13.4\%$   D. $12.5\%$
Solution:
Option (B) is correct
Let the cost of the item $P$ = Rs $p$
Let the cost of the item $Q$ = Rs $q$
Let the cost of the item $R$ = Rs $r$
SP of the item $P =0.79p$
SP of the item $Q =0.89q$
SP of the item $R =0.9r$
Overall loss percentage of the 1st two items = $14.33\%$
$\Rightarrow \dfrac{0.21p+0.11q}{p+q}=0.1433$
$\Rightarrow \dfrac{p}{q}=\dfrac{1}{2}$
Overall loss percentage of the 2nd and 3rd item =$10.4\%$
$\Rightarrow \dfrac{0.11q+0.1r}{q+r}=0.104$
$\Rightarrow \dfrac{q}{r}=\dfrac{2}{3}$
Overall loss percentage:
$\Rightarrow \dfrac{0.21p+0.11q+0.1r}{p+q+r}\times 100$
$\Rightarrow \dfrac{1(0.21)+2(0.11)+3(0.1)}{1+2+3}\times 100$
$\Rightarrow 0.1216 \times 100$
$= 12.16\%$
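The arithmetic can be verified with a short script (a minimal sketch; the cost ratio $p:q:r = 1:2:3$ comes from the solution above):

```python
# Costs in the ratio p : q : r = 1 : 2 : 3 (any common scale factor works)
p, q, r = 1.0, 2.0, 3.0

# Pairwise overall loss percentages quoted in the problem
print(round((0.21 * p + 0.11 * q) / (p + q) * 100, 2))   # 14.33
print(round((0.11 * q + 0.10 * r) / (q + r) * 100, 2))   # 10.4

# Overall loss on all three items
print((0.21 * p + 0.11 * q + 0.10 * r) / (p + q + r) * 100)  # 12.166..., quoted as 12.16%
```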
https://sites.psu.edu/math033spring16/2016/02/19/final-post-reducing-water-use-at-home/

# Final Post: Reducing Water Use At Home
Water usage has become one of the World’s most pressing issues. Droughts have become more prevalent, water is being polluted, and rising populations have made water scarcer. Many people think that the issue can only be addressed at large-scale levels. This is not the case. Even from the comfort of their own home, everyday citizens can do their part to reduce water-usage. By implementing some simple new items, how much can you reduce your household’s water consumption?
Think about how much water you use at home every day. Every time you flush the toilet, wash your hands, brush your teeth, do the laundry, water the plants, or fill up your dog's bowl, water is used. There are a few very simple things you can do to reduce water use without having to buy anything. While brushing your teeth, turn off the faucet while actually brushing and only turn it on to wash. Instead of letting the water run while rubbing your hands, keep it off until you need to wash the soap off. I measured how much water I use when brushing my teeth with the faucet running the whole time and measured 15 cups. When I turned off the faucet while brushing, I only used two cups! Considering I brush twice per day, that results in a savings of 9,490 cups of water per year.

$13 \text{ cups} \times 2 \text{ per day} \times 365 \text{ days} = 9{,}490 \text{ cups per year}$
I also measured that I use seven cups of water when washing my hands when the sink is on the whole time, but only three cups when I rub my hands with soap while the water is off. I wash my hands seven times per day on average, resulting in savings of 10,220 cups per year.

$4 \text{ cups} \times 7 \text{ per day} \times 365 \text{ days} = 10{,}220 \text{ cups per year}$
By simply doing these two things, I could save 19,710 cups of water per year.
There are a few items you could purchase that drastically reduce household water usage. The Earth Massage Showerhead can reduce water usage by 10,000 gallons per year. The 9-jet stream releases only 1.5 gallons per minute. An Adjustable Toilet Flapper can reduce 5-7 gallon per flush toilets by 2.5 gallons, reducing toilet water usage by 6,400 gallons per year. Finally, a Front-Load Washer uses 50 percent less water than a typical washer. This saves 6,390 gallons per year. In addition, less water use means less electricity use, so it is a win-win situation.
In total, these three items can save households 22,790 gallons per year. State College water costs \$4.30 per 1,000 gallons.

$22{,}790 \text{ gallons} \div 1{,}000 \text{ gallons} \times \$4.30 \approx \$98 \text{ per year}$

Purchasing those three items (including three toilet flaps for all toilets in the house) costs \$675.80. However, by saving \$98 per year with the reduced water usage, you would get your money back in seven years!

$\$675.80 \div \$98 \approx 7 \text{ years}$
Saving water goes a long way for homeowners. Besides its beneficial environmental impact, it also saves money! Household water conservation is an easy thing that can help our world in many ways.
Bibliography
Source 1: eartheasy, “Water conservation at home” Relevant claims: Lists 25 ways to conserve water at home. Credibility: Good. The website provides links to back up its claims
Source 2: eartheasy shop, “Earth Massage Showerhead” Relevant claims: The showerhead can reduce water use by 10,000 gallons per year; 1.5 GPM. Credibility: Fair. Trying to promote an item.
Source 3: eartheasy shop, “Adjustable Toilet Flap” Relevant claims: Can reduce water use by 2.5 gallons per flush. Credibility: Fair. Trying to promote a product.
Source 4: eartheasy shop, “Front-Load Washer” Relevant claims: Can reduce water use by 50 percent per wash. Credibility: Fair. Trying to promote a product.
https://www.gradesaver.com/textbooks/science/chemistry/chemistry-10th-edition/chapter-3-chemical-equations-and-reaction-of-stoichiometry-exercises-limiting-reactant-page-108/34

# Chapter 3 - Chemical Equations and Reaction of Stoichiometry - Exercises - Limiting Reactant - Page 108: 34
(a) $S_8$ is the limiting reactant. (b) 67.5 g of $S_2Cl_2$ (c) 35.6 g of $Cl_2$
#### Work Step by Step
- Calculate or find the molar mass of $S_8$: ( 32.07 $\times$ 8 ) = 256.56 g/mol. Using the molar mass as a conversion factor, find the amount in moles:
$$32.0 \space g \times \frac{1 \space mole}{ 256.56 \space g} = 0.125 \space mole$$
- Calculate or find the molar mass of $Cl_2$: ( 35.45 $\times$ 2 ) = 70.90 g/mol. Using the molar mass as a conversion factor, find the amount in moles:
$$71.0 \space g \times \frac{1 \space mole}{ 70.90 \space g} = 1.00 \space mole$$
- Find the amount of product if each reactant is completely consumed:
$$0.125 \space mole \space S_8 \times \frac{ 4 \space moles \space S_2Cl_2 }{ 1 \space mole \space S_8 } = 0.500 \space mole \space S_2Cl_2$$
$$1.00 \space mole \space Cl_2 \times \frac{ 4 \space moles \space S_2Cl_2 }{ 4 \space moles \space Cl_2 } = 1.00 \space mole \space S_2Cl_2$$
Since the reaction of $S_8$ produces less $S_2Cl_2$ for these quantities, it is the limiting reactant.
- Calculate or find the molar mass of $S_2Cl_2$: ( 35.45 $\times$ 2 ) + ( 32.07 $\times$ 2 ) = 135.04 g/mol. Using the molar mass as a conversion factor, find the mass in g:
$$0.500 \space mole \times \frac{ 135.04 \space g}{1 \space mole} = 67.5 \space g$$
- Find the amount of $Cl_2$ consumed:
$$0.125 \space mole \space S_8 \times \frac{ 4 \space moles \space Cl_2 }{ 1 \space mole \space S_8 } = 0.500 \space mole \space Cl_2$$
$$0.500 \space mole \times \frac{ 70.90 \space g}{1 \space mole} = 35.4 \space g$$
- Excess = Initial - Consumed = 71.0 g - 35.4 g = 35.6 g
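The same arithmetic can be checked with a short script (a sketch only, not part of the textbook solution; the molar masses are the ones used above and the reaction is taken as $S_8 + 4Cl_2 \rightarrow 4S_2Cl_2$):

```python
# Molar masses (g/mol), as in the worked solution
M_S8 = 8 * 32.07                  # 256.56
M_Cl2 = 2 * 35.45                 # 70.90
M_S2Cl2 = 2 * 32.07 + 2 * 35.45   # 135.04

mol_S8 = 32.0 / M_S8              # ~0.125 mol
mol_Cl2 = 71.0 / M_Cl2            # ~1.00 mol

# Reaction: S8 + 4 Cl2 -> 4 S2Cl2
yield_if_S8_limits = 4 * mol_S8        # ~0.50 mol S2Cl2 if all S8 reacts
yield_if_Cl2_limits = 4 * mol_Cl2 / 4  # ~1.00 mol S2Cl2 if all Cl2 reacts
limiting = "S8" if yield_if_S8_limits < yield_if_Cl2_limits else "Cl2"

# ~67.4 g at full precision; rounding the moles to 0.125 first, as above, gives 67.5 g
mass_S2Cl2 = min(yield_if_S8_limits, yield_if_Cl2_limits) * M_S2Cl2
excess_Cl2 = 71.0 - 4 * mol_S8 * M_Cl2   # ~35.6 g of Cl2 left over
print(limiting, mass_S2Cl2, excess_Cl2)
```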
https://www.coin-or.org/CppAD/Doc/parallel_ad.htm
Enable AD Calculations During Parallel Mode
Syntax
parallel_ad<Base>()
Purpose
The function parallel_ad<Base>() must be called before any AD<Base> objects are used in parallel mode. In addition, if this routine is called after one is done using parallel mode, it will free extra memory used to keep track of the multiple AD<Base> tapes required for parallel execution.
Discussion
By default, for each AD<Base> class there is only one tape that records AD of Base operations. This tape is a global variable and hence it cannot be used by multiple threads at the same time. The parallel_setup function informs CppAD of the maximum number of threads that can be active in parallel mode. This routine does extra setup (and teardown) for the particular Base type.
CheckSimpleVector
This routine has the side effect of calling the routines CheckSimpleVector< Type, CppAD::vector<Type> >() where Type is Base and AD<Base> .
Example
The files team_openmp.cpp, team_bthread.cpp, and team_pthread.cpp contain examples and tests that implement this function.
Restriction
This routine cannot be called in parallel mode or while there is a tape recording AD<Base> operations.
Input File: cppad/core/parallel_ad.hpp
https://physics.stackexchange.com/questions/530962/forced-oscillation-and-resonance-formula-for-the-externally-applied-force

# Forced oscillation and resonance: formula for the externally applied force
In forced oscillation the formula for the externally applied force is $$F = \cos(\omega t)$$ in almost every book except one book, which uses $$F = \sin(\omega t)$$.
If the equation for the position is $$x = A\cdot\cos(\omega t+\phi)$$ and velocity is the derivative of position, then the velocity of the oscillator should be proportional to $$\sin(\omega t)$$ (because the derivate of $$\cos$$ is $$-\sin$$), and hence match the function 'drawn' by the applied force which is actually '$$\sin$$' only in one book. But if it's '$$\cos$$' it just doesn't make any sense in terms of the resonance.
Which is correct? Are both of them correct? Why?
• I think you mean "then the velocity of the oscillator should be proportional to $\sin(\omega t +\phi)$." Feb 13, 2020 at 20:36
• Do you know about Fourier transforms? Feb 14, 2020 at 16:02
In general there is a phase difference between the displacement, x, and the applied force, F. The phase difference depends on the frequency of F relative to the natural frequency of the oscillatory system. At resonance (or, more precisely, when the driving force frequency is the same as the system's undamped natural frequency) the displacement lags behind the driving force by $$\tfrac{\pi}{2}$$ (a quarter of a cycle).
It's usual to express both F and x as cosines or both as sines, so that the phase difference is simply the difference in the phase constants that are added to or subtracted from, $$\omega t$$. For example if $$F=F_0 \cos (\omega t)$$ and $$x=x_0 \cos (\omega t +\phi)$$, the displacement will be ahead of the driving force by a phase angle of $$(\phi-0)=\phi$$.
But it's perfectly possible to use $$F=F_0 \sin (\omega t)$$ for the force and $$x=x_0 \cos (\omega t +\phi)$$ for the displacement. Simply remember that $$\sin (\omega t) =\cos (\omega t-\tfrac{\pi}{2})$$. So in this case the displacement will be ahead of the driving force by a phase angle of $$[\phi -(-\tfrac{\pi}{2})]=(\phi+\tfrac{\pi}{2})$$. At resonance this phase angle is $$-\tfrac{\pi}{2}$$, so $$(\phi+\tfrac{\pi}{2})=-\tfrac{\pi}{2}$$, that is $$\phi=-\pi$$, which is indistinguishable from $$\phi=\pi$$.
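A short numerical illustration (my own addition, not part of the answer above; the damping value is arbitrary) shows the displacement lag sweeping through $\tfrac{\pi}{2}$ at the undamped natural frequency, independently of whether the force is written as a sine or a cosine:

```python
import numpy as np

omega0 = 1.0   # undamped natural frequency (arbitrary units)
gamma = 0.1    # damping constant in  x'' + 2*gamma*x' + omega0**2 * x = f0*cos(omega*t)

# Steady state: x(t) = A*cos(omega*t - delta), with
# tan(delta) = 2*gamma*omega / (omega0**2 - omega**2)
for omega in [0.5, 0.9, 1.0, 1.1, 2.0]:
    delta = np.arctan2(2 * gamma * omega, omega0**2 - omega**2)
    print(omega, delta / np.pi)   # lag in units of pi; exactly 0.5 at omega = omega0
```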
https://www.atmos-meas-tech.net/12/2183/2019/
Atmospheric Measurement Techniques, an interactive open-access journal of the European Geosciences Union
Atmos. Meas. Tech., 12, 2183–2199, 2019
https://doi.org/10.5194/amt-12-2183-2019
Research article | 10 Apr 2019
# Characterization of atmospheric aerosol optical properties based on the combined use of a ground-based Raman lidar and an airborne optical particle counter in the framework of the Hydrological Cycle in the Mediterranean Experiment – Special Observation Period 1
Dario Stelitano1,a, Paolo Di Girolamo1, Andrea Scoccione2,b, Donato Summa1, and Marco Cacciani2
• 1Scuola di Ingegneria, Università degli Studi della Basilicata, 85100 Potenza, Italy
• 2Dipartimento di Fisica, Università di Roma “La Sapienza”, 00100 Rome, Italy
• a now at: Osservatorio Nazionale Terremoti, Istituto Nazionale di Geofisica e Vulcanologia, 00143 Rome, Italy
• b now at: Centro Operativo per la Meteorologia, Aeronautica Militare, 00040 Pomezia, Italy
Abstract
Vertical profiles of the particle backscattering coefficient at 355, 532 and 1064 nm measured by the University of Basilicata Raman lidar system (BASIL) have been compared with simulated particle backscatter profiles obtained through a Mie scattering code based on the use of simultaneous and almost co-located profiles provided by an airborne optical particle counter. Measurements were carried out during dedicated flights of the French research aircraft ATR42 in the framework of the European Facility for Airborne Research (EUFAR) project “WaLiTemp”, as part of the Hydrological Cycle in the Mediterranean Experiment – Special Observation Period 1 (HyMeX-SOP1). Results from two selected case studies are reported and discussed in the paper, and a dedicated analysis approach is illustrated and applied to the dataset. Results reveal a good agreement between measured and simulated multi-wavelength particle backscattering profiles. Specifically, simulated and measured particle backscattering profiles at 355 and 532 nm for the second case study are found to deviate less than 15 % (mean value =5.9 %) and 50 % (mean value =25.9 %), respectively, when considering the presence of a continental–urban aerosol component, while slightly larger deviation values are found for the first study. The reported good agreement between measured and simulated multi-wavelength particle backscatter profiles testifies to the ability of multi-wavelength Raman lidar systems to infer aerosol types at different altitudes.
1 Introduction
Aerosols are a key atmospheric component, playing a major role in meteo-climatic processes. Aerosols influence precipitation processes and the water cycle primarily through two effects: the direct effect, as a result of the scattering/absorption of solar radiation (among others, Haywood and Boucher, 2000; Takemura et al., 2005), and the indirect effect, as a result of the interaction with clouds (among others, Sekiguchi et al., 2003; Yang et al., 2011). A semi-direct effect can also arise in the presence of high aerosol loading, determining scattering and absorption enhancement, ultimately leading to an alteration of atmospheric stability (e.g. Mitchell, 1971). Despite the well-recognized importance of aerosols in meteorological processes and climate evolution, only a limited number of remote sensing techniques can provide vertically resolved measurements of the microphysical properties of aerosol particles (among others, Bellantone et al., 2008; Granados-Muñoz et al., 2016; Mhawish et al., 2018). For example, in situ sensors transported by aerostatic balloons or other airborne platforms allow the vertical profile of aerosol size and microphysical properties to be measured, with high vertical resolution (of the order of 10 m) but typically with a limited temporal resolution. Any experiment aimed at characterizing the temporal evolution of aerosol microphysical properties would require several consecutive balloon launches or flights, with the time lag between two consecutive launches/flights unlikely to be shorter than 1 h, with a consequent degradation of the temporal resolution. Additionally, in situ particle sensors are quite heavy and bulky, which – in the case of balloon-borne experiments – implies the use of quite large aerostatic balloons. This makes monitoring by in situ particle sensors very expensive and logistically difficult to implement.
Remote sensing techniques can overcome these limitations. A variety of passive optical remote sensors (i.e. spectroradiometers, sun and sky photometers, etc.) have demonstrated their capability to characterize aerosol microphysical properties, but they lack in vertical resolution, which makes them scarcely suited for vertically resolved measurements of aerosol size and microphysical properties. Low vertical resolution is combined with a limited temporal resolution when these techniques are implemented on sun-synchronous orbiting platforms, with a typical “revisit time” of several hours. Active remote sensing systems may overcome this limitation. Specifically, lidar systems with aerosol measurement capability are characterized by high accuracies and temporal/vertical resolutions, which makes them particularly suited for aerosol typing applications. Lidar measurements of aerosol optical properties have been reported since the early 1960s (among others, Fiocco and Grams, 1964; Elterman, 1966). Originally, measurements were carried out with single-wavelength elastic backscatter lidars capable of providing vertical profiles of the particle backscattering coefficient at the laser wavelength. In these systems the particle backscattering coefficient is determined from the elastic lidar signals based on the application of the Klett–Fernald–Sasano approach (Klett, 1981, 1985; Fernald, 1984) or similar derived approaches (Di Girolamo et al., 1995, 1999). More recently, the acquired capability to measure roto-vibrational Raman lidar echoes from nitrogen and oxygen molecules has made the determination of the particle extinction coefficient also possible (Ansmann et al., 1990, 1992). The possibility of retrieving particle size and microphysical parameters from multi-wavelength lidar data of particle backscattering, extinction and depolarization has been recently demonstrated by a variety of authors (Müller et al., 2001, 2007, 2009; Veselovskii et al., 2002, 2009, 2010). These measurements can be combined with simultaneous measurements of the atmospheric thermodynamic profiles (Wulfmeyer et al., 2005; Di Girolamo et al., 2008, 2018a) to characterize aerosol–cloud interaction mechanisms. The ground-based University of Basilicata Raman lidar system (BASIL) has demonstrated the capability to provide multi-wavelength Raman lidar measurements with high quality and accuracy for the retrieval of particle size and microphysical parameters (Veselovskii et al., 2010; Di Girolamo et al., 2012a). The system was deployed in Candillargues (southern France) in the period from August to November 2012 in the framework of the Hydrological cycle in the Mediterranean Experiment (HyMeX) Special Observation Period 1 (SOP1). In the present paper, measurements carried out by BASIL are illustrated with the purpose of characterizing atmospheric aerosol optical properties. These measurements, in combination with in situ measurements from an airborne optical particle counter and the application of a Mie scattering code, are used to infer aerosol types. Back-trajectory analyses from a Lagrangian model (HYSPLIT) are used in support of the assessment of aerosol types (Man and Shih, 2001; Methven et al., 2001; Estellés et al., 2007; Toledano et al., 2009). The outline of the paper is as follows: Sect. 2 provides a description of the Raman lidar system BASIL and the airborne optical particle counter; Sect. 3 illustrates HyMeX-SOP1. The methodology is illustrated in Sect. 4, while measurements and simulations are reported in Sect. 5. 
Finally, Sect. 6 summarizes all results and provides some indications for possible future follow-up activities.
2 Instrumental setup
## 2.1 BASIL
The Raman lidar BASIL has been developed around a pulsed Nd:YAG laser, emitting pulses at 355, 532 and 1064 nm, with a repetition rate of 20 Hz. The system includes a large aperture telescope in Newtonian configuration, with a 400 mm diameter primary mirror, primarily aimed at the collection of Raman and higher range signals. Two additional smaller telescopes, developed around two 50 mm diameter 200 mm focal length lenses, are used to collect the backscatter echoes at 1064 nm and the total and cross-polarized backscatter echoes at 532 nm. The laser emission at 355 nm (average power of 10 W) is used to stimulate Raman scattering from water vapour and nitrogen and oxygen molecules (Di Girolamo et al., 2004, 2006, 2009a), which are ultimately used to measure the vertical profiles of atmospheric temperature, water vapour mixing ratio and aerosol extinction coefficient at 355 nm. Elastic backscattering echoes from aerosol and molecular species at 355, 532 and 1064 nm, in combination with the Raman scattering echoes from molecular nitrogen, are used to measure the vertical profiles of the aerosol backscattering coefficient at these three wavelengths. More details of the considered approaches are given in Sect. 4. Raman echoes are very weak and degraded by solar radiation in daytime. Consequently, high laser powers and large aperture telescopes are required to measure daytime Raman signals with a sufficient signal-to-noise ratio throughout a large portion of the troposphere. The instrumental setup of BASIL has been described in detail in several previous papers (Di Girolamo et al., 2009a, b, 2012a, b, 2016, 2017; Bhawar et al., 2011). BASIL was deployed in a variety of international field campaigns (among others, Bhawar et al., 2008; Serio et al., 2008; Wulfmeyer et al., 2008; Bennett et al., 2011; Ducrocq et al., 2014; Macke et al., 2017; Di Girolamo et al., 2018b).
## 2.2 Optical particle counter
An optical particle counter (OPC), manufactured by GRIMM Aerosol Technik GmbH (model Sky-OPC 1.129), is used to measure the size-resolved particle number concentration dN/dr in the size range 0.25–32 µm. The sensor includes 31 size bins. The laser beam generated by a 683 nm diode laser illuminates the aerosol particles exiting from a pump chamber; the scattered radiation is deflected by two separate mirrors and detected by a photon sensor (Heim et al., 2008). By summing up the particle number over all the size intervals, the total number concentration is derived (Grimm and Eatough, 2009). The OPC model used in the present effort has a specific airborne design (McMeeking et al., 2010). The use of a differential pressure sensor and an external pump allows OPC measurements to be performed independently of environmental pressure conditions. The OPC was installed on board the French research aircraft ATR42, operated by the Service des Avions Instrumentés pour la Recherche en Environnement (SAFIRE), as part of an ensemble of in situ sensors for the characterization of aerosol and cloud size and microphysical properties. Dedicated flights by the ATR42 were performed during HyMeX-SOP 1 in the framework of the European Facility for Airborne Research (EUFAR) project “WaLiTemp”, with the aircraft looping up and down in the proximity of the Raman lidar system.
3 HyMeX and the Special Observation Period 1
The Hydrological Cycle in the Mediterranean Experiment was conceived with the overarching goal of collecting a large set of atmospheric and oceanic data to be used to gain a better understanding of the hydrological cycle in the Mediterranean area. Within this experiment a major field campaign, the Special Observation Period 1 (SOP1), took place over the north-western Mediterranean area in the period September–November 2012 (Ducrocq et al., 2014). During HyMeX-SOP1 the Raman lidar system BASIL was deployed at the Cévennes-Vivarais atmospheric “supersite”, located in Candillargues (43°37′ N, 4°04′ E; elevation: 1 m). BASIL was operated from 5 September to 5 November 2012, collecting more than 600 h of measurements, distributed over 51 measurement days and covering 19 Intensive Observation Periods (IOPs).
The French research aircraft ATR42, hosting the OPC, was stationed at Montpellier Airport. Its main payload consisted of the airborne DIAL LEANDRE 2, profiling water vapour mixing ratio beneath the aircraft. The ATR42 payload also included in situ sensors for turbulence measurements, as well as aerosol and cloud microphysics probes, including the OPC. During HyMeX-SOP1, the ATR42 performed more than 60 flight hours: 8 were supported by the EUFAR project WaLiTemp, and the remaining hours were supported by the “Mediterranean Integrated STudies at Regional and Local Scales” (MISTRALS) programme. A specific flight pattern was defined for the purposes of the WaLiTemp project (Fig. 1), with the aircraft making spirals (hippodromes) up and down around a central location, originally intended to be the atmospheric supersite in Candillargues. Unfortunately, because of air traffic restrictions, aircraft sensors' operation was typically started 20 km eastward of the supersite, and the central location of the hippodromes was also moved 20 km eastward. Flight hours in the framework of the WaLiTemp project were carried out on 13 September, 2 and 29 October and 5 November 2012.
Figure 1. ATR42 flight pattern in the framework of the WaLiTemp project (red line). The light blue dot represents the position of Montpellier Airport, where the ATR42 took off and landed, while the red dot represents the position of the Raman lidar BASIL. The red curve represents the footprint of the aircraft pattern, including the positions of the spirals (hippodromes) up and down and the ground track from the airport to the spiraling position. The distance between the lidar site and the flight pattern is approx. 20 km.
Spiral ascents and descents were carried out with a vertical speed of 150 m min−1. During each flight, except in the presence of specific logistic issues, a minimum of two ascent–descent spirals were carried out. For the purposes of the present comparisons, in order to minimize the effect associated with the sounding of different air masses, we selected days characterized by horizontally homogeneous atmospheric conditions.
4 Methodology
The particle volume backscattering coefficient can be expressed as
$$\beta_{\lambda_0}^{\mathrm{par}} = \int_0^{\infty} Q_{\mathrm{back}}(r)\, n(r)\, \mathrm{d}r, \tag{1}$$

with $Q_{\mathrm{back}}(r)$ being the particle backscattering efficiency and $n'(r) = \mathrm{d}N/\mathrm{d}r$ being the particle size distribution. $Q_{\mathrm{back}}(r)$ can be expressed as (Grainger et al., 2004)

$$Q_{\mathrm{back}} = \frac{2}{x^2} \sum_{n=1}^{\infty} (2n+1)\left(|a_n|^2 + |b_n|^2\right), \tag{2}$$

where the terms $a_n$ and $b_n$ represent the Mie scattering amplitudes of the $n$th magnetic partial wave ($n$ being the function order). $a_n$ and $b_n$ are obtained through the following expressions:

$$a_n = \frac{\psi_n(x)\,\psi_n'(mx) - m\,\psi_n'(x)\,\psi_n(mx)}{\xi_n^{(1)}(x)\,\psi_n'(mx) - m\,\xi_n^{(1)\prime}(x)\,\psi_n(mx)} \tag{3}$$

$$b_n = \frac{\psi_n'(x)\,\psi_n(mx) - m\,\psi_n(x)\,\psi_n'(mx)}{\xi_n^{(1)\prime}(x)\,\psi_n'(mx) - m\,\xi_n^{(1)}(x)\,\psi_n(mx)}, \tag{4}$$

where $m$ is the complex refractive index and $x = 2\pi r/\lambda$ is the particle size parameter, with $\lambda$ being the laser wavelength and $r$ being the particle radius, assumed to be a sphere. $\psi_n(x)$ and $\xi_n^{(1)}$ are Riccati–Bessel functions defined in terms of the spherical Bessel function of the first kind (Temme, 1996). A log-normal size distribution is considered in this study, with an analytical expression for each mode of the form (Grainger et al., 2004):

$$n'(r) = \frac{N_0}{\sqrt{2\pi}}\,\frac{1}{\ln S}\,\frac{1}{r}\,\exp\!\left[-\frac{\left(\ln r - \ln r_{\mathrm{m}}\right)^2}{2\ln^2 S}\right], \tag{5}$$
where $n'(r) = \mathrm{d}N/\mathrm{d}r$ is the number of particles within the size interval $\mathrm{d}r$, with $N(r)$ representing the cumulative particle number distribution for particles larger than $r$; $r_{\mathrm{m}}$ is the median radius of the distribution, $S$ is the standard deviation of the distribution and $N_0$ is the particle integral concentration for the considered mode. $S$ is a measure of the particle polydispersity, with $S$ equal to 1 (i.e. $\ln S = 0$) for monodisperse particles. The log-normal distribution is completely described by $N_0$, $r_{\mathrm{m}}$ and $S$. Three modes are typically considered to describe the different aerosol components (d'Almeida et al., 1991): a fine or nucleation particle mode, a large or accumulation particle mode and a giant or coarse particle mode.
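For illustration only, the following minimal Python sketch (not part of the original analysis; all mode parameters shown are placeholders, and `q_back` stands in for a Mie routine evaluating Eqs. 2–4, which is not reproduced here) shows how Eqs. (1) and (5) are combined numerically:

```python
import numpy as np

def lognormal_mode(r, n0, r_m, s):
    """dN/dr for one log-normal mode (Eq. 5): amplitude n0, median radius r_m, width s."""
    return (n0 / (np.sqrt(2 * np.pi) * np.log(s) * r)
            * np.exp(-(np.log(r) - np.log(r_m)) ** 2 / (2 * np.log(s) ** 2)))

# Placeholder (N0, r_m in um, S) for fine, accumulation and coarse modes; in the
# actual analysis r_m and S come from Table 1 and N0 from the fit to the OPC data.
modes = [(100.0, 0.05, 2.0), (50.0, 0.30, 2.2), (0.1, 2.00, 2.5)]

def q_back(r):
    # Stand-in for the Mie backscattering efficiency Q_back(r) at a given wavelength
    # and refractive index (Eqs. 2-4); a constant value is used here.
    return 0.1 * np.ones_like(r)

r = np.logspace(-2, 1.5, 2000)                      # radius grid, 0.01-31.6 um
dn_dr = sum(lognormal_mode(r, *m) for m in modes)   # tri-modal dN/dr
beta_par = np.trapz(q_back(r) * dn_dr, r)           # Eq. (1), numerical quadrature
print(beta_par)
```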
For the purposes of this research effort, the particle concentration $N_0$ is obtained by minimizing differences between the size distribution measured by the OPC and the simulated distribution, while the values of $r_{\mathrm{m}}$ and $S$ are those identified in the following section based on literature results. Simulated backscatter profiles $\beta_{\lambda_0}^{\mathrm{par}}(z)$ are obtained through the application of Eq. (1) for all altitudes covered by the OPC, considering different refractive index and size parameter values for the three distribution modes, depending on the aerosol type, and integrating the expression over the three distribution modes. To perform these computations a specific Mie scattering code was developed by the authors in an IDL environment. The possibility of retrieving the particle size and microphysical properties from multi-wavelength measurements of the particle backscattering and extinction coefficient has been demonstrated by several authors (among others, Müller et al., 2001; Veselovskii et al., 2002) based on the application of retrieval schemes employing Tikhonov's inversion with regularization, which apply Mie scattering theory to an ensemble of particles with spherical shape. However, an appropriate and effective application of this approach requires particle backscatter and extinction profiles with a statistical uncertainty not exceeding 5 %–10 %. Multi-wavelength Raman lidar measurements of the particle backscattering and extinction coefficient for the considered case studies were not characterized by such a low level of uncertainty, this being especially true for the particle backscatter measurements at 1064 nm.
In order to determine aerosol typology, deviations between measured and simulated particle backscattering profiles at 355 and 532 nm were minimized. Initial values in terms of modal radius, $\bar{r}$, standard deviation, $\sigma$, and refractive index for the different aerosol components were taken from d'Almeida et al. (1991). At each altitude, the particle size distribution measured by the optical particle counter is compared with the five aerosol typologies listed in d'Almeida et al. (1991), which for the sake of clarity are reproduced below:
• average continental (continental environment influenced by anthropogenic pollution);
• urban (continental environment heavily influenced by anthropogenic pollution);
• maritime polluted (polluted marine environment, such as the Mediterranean Sea or the North Atlantic);
• clean–polar (Arctic environment during summer period);
• clean continental–rural (rural continental environment without pollution).
Specifically, both urban and continental aerosols include a soot and pollution fine-mode component (as both aerosol types include the same aerosol components, they are treated in what follows as a single aerosol type), a water-soluble accumulation-mode component and a dust-like coarse-mode component; the maritime polluted aerosol type includes a soot and pollution fine-mode component, a water-soluble accumulation-mode component and a sea-salt coarse-mode component; the summertime Arctic aerosol type includes a sulfate fine-mode component and a sea salt and mineral accumulation-mode component; the rural aerosol type includes a water-soluble accumulation-mode component and a dust-like coarse-mode component.
Table 1. Modal radius, standard deviation and refractive index (real and imaginary part) for the different considered aerosol components (from d'Almeida et al., 1991).
D'Almeida et al. (1991), Junge and Jaenicke (1971) and Junge (1972) suggested the use of a tri-modal log-normal size distribution (see Eq. 5), indicating specific values for the two primary size distribution parameters, i.e. the modal radius, $\bar{r}$, and the standard deviation, $\sigma$. Values of the modal radius, the standard deviation and the real, $n_r$, and imaginary, $n_i$, parts of the refractive index at the three lidar wavelengths (355, 532 and 1064 nm) for the three different aerosol components considered in the present computations are inferred from different papers in the literature (d'Almeida et al., 1991; Shettle and Fenn, 1976, 1979; WCP–112, 1986) and are listed in Table 1.
The log-normal size distribution has been computed considering the OPC data in the dimensional range 0.25–2.5 µm, with a 300 m vertical integration window. Results are illustrated in Fig. 2 (bold black line). In this same figure the size distribution computed from the OPC data is compared with the theoretical distributions for the three different modes (fine mode – red line, accumulation mode – violet line, coarse mode – light blue line).
Figure 2. Size distribution computed from the OPC data (bold black line), together with the total theoretical distribution (thin black line) and theoretical distributions for the three different modes: fine mode (soot and pollution, red line), accumulation mode (water-soluble aerosols, violet line) and coarse mode (sea salt, light blue line).
For each of the three modes, the number of particles has been varied in order for the total theoretical distribution (thin black line) to match the experimental distribution computed with the OPC data. The matching between the experimental and theoretical distributions has been optimized based on the application of a best fit procedure. This approach was applied to each altitude level. In Fig. 2, we consider experimental and theoretical distributions at an altitude of 1529 m, this being the lowest altitude at which aerosols larger than 0.7–0.8 µm were measured by the OPC.
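A minimal sketch of this fitting step is given below (it is not part of the original processing chain; SciPy is assumed, the OPC values are invented for illustration, and only the mode amplitudes $N_0$ are adjusted while $r_{\mathrm{m}}$ and $S$ stay fixed):

```python
import numpy as np
from scipy.optimize import nnls

def unit_lognormal(r, r_m, s):
    """Log-normal mode shape of Eq. (5) with N0 = 1."""
    return (np.exp(-(np.log(r) - np.log(r_m)) ** 2 / (2 * np.log(s) ** 2))
            / (np.sqrt(2 * np.pi) * np.log(s) * r))

# Hypothetical binned OPC size distribution at one altitude
opc_radii = np.array([0.25, 0.35, 0.5, 0.7, 1.0, 1.5, 2.0, 2.5])   # um
opc_dn_dr = np.array([120., 60., 25., 9., 3., 1., 0.4, 0.2])       # arbitrary units

# Fixed (r_m, S) per mode, e.g. from Table 1; only N0 of each mode is fitted
mode_params = [(0.05, 2.0), (0.30, 2.2), (2.00, 2.5)]
A = np.column_stack([unit_lognormal(opc_radii, r_m, s) for r_m, s in mode_params])

# Non-negative least squares: find N0 >= 0 minimizing ||A @ N0 - opc_dn_dr||
n0, residual = nnls(A, opc_dn_dr)
print(n0, residual)
```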
The vertical profiles of the particle backscattering coefficient at 355, 532 and 1064 nm have been simulated through the above-mentioned Mie scattering code from the OPC data, considering values of $\bar{r}$ and $\sigma$ for the different aerosol components. Measured profiles of the particle backscattering coefficient at 355 and 532 nm are obtained from the Raman lidar signals through the application of the Raman technique, which relies on the ratio between the 355/532 nm elastic signal and the corresponding simultaneous molecular nitrogen roto-vibrational Raman signal. The two signals are characterized by an almost identical overlap function, and therefore the overlap effect is cancelled out when ratioing the signals. Conversely, particle backscattering coefficient profiles at 1064 nm are obtained through the application of a Klett-modified inversion approach (Di Girolamo et al., 1995, 1999). The specific approach used in the present analysis considers a height-dependent lidar ratio profile and an iterative procedure converging to a final particle backscattering profile (Di Girolamo et al., 1995, 1999). Additionally, the elastic backscatter signal at 1064 nm and an additional elastic backscatter signal at 532 nm are collected with two small telescopes, developed around two 50 mm diameter, 200 mm focal length lenses, with overlap regions not extending above 300–400 m.
A modified version of the approach defined by Di Iorio et al. (2003) was applied in order to determine the sounded aerosol typology. This approach is based on the minimization of the relative deviation between the measured and the simulated particle backscattering coefficient; i.e.
$$\Delta = \frac{1}{N_p} \sum_{k=1}^{N_p} \frac{\left|\beta_{\lambda(\mathrm{simulated})}(z_k) - \beta_{\lambda(\mathrm{measured})}(z_k)\right|}{\beta_{\lambda(\mathrm{measured})}(z_k)}, \tag{6}$$
where $z_k$ is the altitude.
In the attempt to simultaneously minimize deviations between measured and simulated particle backscattering profiles at 355, 532 and 1064 nm, a total deviation can be computed as the root sum square of the single deviations at the three wavelengths, which can be expressed as
$$\Delta_{\mathrm{tot}} = \sqrt{\Delta_{355}^2 + \Delta_{532}^2 + \Delta_{1064}^2}. \tag{7}$$
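In practice, the aerosol type is selected by evaluating these two quantities for each candidate aerosol model; a minimal sketch of this step (the profile values below are placeholders) is:

```python
import numpy as np

def mean_relative_deviation(beta_sim, beta_meas):
    """Eq. (6): mean relative deviation between simulated and measured profiles."""
    return np.mean(np.abs(beta_sim - beta_meas) / beta_meas)

def total_deviation(per_wavelength):
    """Eq. (7): root sum square of the per-wavelength deviations."""
    return np.sqrt(np.sum(np.asarray(per_wavelength) ** 2))

# Placeholder backscatter profiles (one value per altitude bin)
beta_meas = {355: np.array([2.1e-6, 1.5e-6, 0.9e-6]),
             532: np.array([1.2e-6, 0.8e-6, 0.5e-6]),
             1064: np.array([4.0e-7, 2.5e-7, 1.5e-7])}
beta_sim = {wl: b * 1.1 for wl, b in beta_meas.items()}   # pretend 10 % high everywhere

per_wl = [mean_relative_deviation(beta_sim[wl], beta_meas[wl]) for wl in (355, 532, 1064)]
print(per_wl, total_deviation(per_wl))   # the aerosol type minimizing this is retained
```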
5 Results
## 5.1 Case study on 13 September 2012
During the first ascending spiral, in situ sensors on board the ATR42 were operated in the altitude region from 650 to 5700 m above sea level (hereafter in the paper all altitudes are given above sea level), covering the 40 min time interval between 19:55 and 20:35 UTC. BASIL was operated in the time interval 19:00–23:00 UTC. Figure 3 illustrates the temporal evolution of the particle backscattering coefficient at 532 nm over the time interval 19:30–21:30 UTC. The figure shows a succession of 5 min vertical profiles with a vertical resolution of 7.5 m. The figure reveals the presence of a shallow nocturnal boundary layer, testified by the presence of an aerosol layer extending up to 500–600 m, and the presence of a residual layer extending up to 1500–2100 m.
Figure 3. Time evolution of the particle backscattering coefficient at 532 nm over the time interval 19:30–21:30 UTC on 13 September 2012.
Wind direction measurements performed by the on-board flight sensors reveal a primarily northerly wind, with direction varying in the range $±\mathrm{30}{}^{\circ }$ depending on altitude. The NOAA HYSPLIT Lagrangian back-trajectory model (Draxler and Rolph, 1998; Rolph et al., 2017; Stein et al., 2015) has been used to determine the origin of the sounded air masses. The HYSPLIT model computes air parcel trajectories, but it can also be used to simulate complex transport, dispersion, chemical transformation and deposition mechanisms. A common application of the HYSPLIT model is the back- and forward-trajectory analysis, which is used to determine the origin or destination of the investigated air masses and establish source–receptor relationships.
In the present effort the HYSPLIT model is used to determine air mass trajectories at specific altitude levels in the days preceding their arrival at the lidar site in Candillargues. Specifically, Fig. 4 illustrates back trajectories of the air masses overpassing the lidar site at 20:00 UTC on 13 September 2012 at an altitude of 600 (red line), 4000 (blue line) and 6000 m (green line). The trajectories extend back in time for 5 days, thus illustrating the air masses' path since 20:00 UTC on 8 September 2012.
Figure 4. Air mass back trajectories at 600 (red), 4000 (blue) and 6000 m (green) ending over the lidar site at 20:00 UTC on 13 September 2012.
Air masses reaching the measurement site at altitudes of 600 and 4000 m originated in the vicinity of Iceland and Greenland and passed at low altitudes (< 400 m) over the North Atlantic Ocean and over industrialized areas in France, while air masses at 5826 m originated in the North Atlantic Ocean in the proximity of the Canadian coasts and persisted in a marine environment for almost 5 days before reaching France.
Figure 5 compares the vertical profiles of the measured and simulated particle backscattering coefficient at 355 nm. The measured profile is obtained from the Raman lidar data integrated over the 40 min time interval coincident with the airplane ascent time (19:55–20:35 UTC on 13 September 2012), with a vertical resolution of 300 m. Simulated particle backscatter profiles are shown for the aerosol components specified above: the continental–urban component (red dashed line), the continental (rural) component (green dashed line), the Arctic summer component (black dashed line) and the marine (polluted) component (blue dashed line). Figure 5 reveals a good agreement between the measured backscattering coefficient profile at 355 nm and the coefficients simulated at this same wavelength assuming either a continental–urban or a marine (polluted) aerosol component.
Figure 5. Vertical profiles of the measured (black line) and simulated particle backscattering coefficient at 355 nm over the time interval 19:55–20:35 UTC on 13 September 2012. The error bar in lidar measurements accounts for the statistical uncertainty.
The same analysis approach was also applied to the data at 532 nm. Figure 6 compares the vertical profiles of the measured (black line) and simulated (red line) particle backscattering coefficient at 532 nm over the same 40 min time interval on 13 September 2012, again with a vertical resolution of 300 m. Simulated particle backscatter profiles include the five above specified aerosol components. Lidar data at 532 nm are affected by a larger statistical uncertainty than those at 355 nm. Also in this case, the agreement between measured and simulated profiles appears to be quite good up to 3500–4000 m.
Figure 6 reveals that the measured particle backscattering coefficient profile at 532 nm is well reproduced by the simulated profiles at this same wavelength, especially the profiles considering a continental–urban aerosol component and a marine (polluted) aerosol component, with simulated profiles slightly overestimating the measured profile but being within or slightly exceeding the measurement error bar. Deviations between measured and simulated profiles are larger within the aerosol layer centred at 2800 m.
Figure 6. Same as Fig. 5 but for the particle backscattering coefficient at 532 nm.
Figure 7 compares the vertical profiles of the measured and simulated particle backscattering coefficient at 1064 nm over the same 40 min time interval considered in Figs. 5 and 6, again with a vertical resolution of 300 m. Particle backscatter measurements at 1064 nm are affected by a statistical uncertainty larger than the one affecting the measurements at 532 nm. This larger uncertainty is the result of the use of a reduced laser emission power at 1064 nm because of the restrictions imposed by the air traffic control authorities. In this case, the agreement between measured and simulated profiles is poorer but still acceptable up to 2500 m.
Figure 7. Same as Fig. 5 but for the particle backscattering coefficient at 1064 nm.
Figure 8 illustrates the deviations between the measured and the simulated particle backscattering coefficient profile at 355 nm. The smallest deviations between the two profiles up to 4500 m are obtained when considering the presence of a marine polluted aerosol component (smaller than 53 %, with a mean deviation of 23.2 %). Simulated profiles obtained considering a continental–urban aerosol component (not exceeding 54 %, with a mean deviation of 24.9 %) deviate less only within the altitude interval 1200–1300 m, while deviations are very similar above 2600 m. The simulated profiles obtained considering the presence of either a continental rural or an Arctic summer aerosol component largely deviate from the measured profile (up to 80 % and 92 %, with mean deviations of 50.9 % and 25.9 %, respectively). The Arctic component deviates less only above 4500 m, where the high signal noise level and the limited particle loading make aerosol type discrimination difficult to accomplish.
Figure 8. Deviation, expressed in percentage, between measured and simulated particle backscattering coefficient profiles at 355 nm. Simulated profiles are Arctic summer (black dashed line), continental–urban (red dashed line), marine (polluted) (blue dashed line) and continental (rural) (green dashed line).
Figure 9 illustrates the deviations between the measured and the simulated particle backscattering coefficient profile at 532 nm. Again, the maximum altitude for aerosol type retrieval is 4340 m. The smallest deviations between measured and simulated particle backscattering coefficient profiles are obtained when considering the presence of a continental–urban aerosol component (not exceeding 105 %, with a mean value of 30.8 %) or a marine polluted aerosol component (smaller than 106 %, with a mean value of 30.9 %), while simulated profiles obtained considering the presence of either a continental rural or an Arctic summer aerosol component largely deviate from the measured profile (up to 60.6 % and 87 %, respectively, with a mean deviation of 39.6 % and 79.2 %). The only exception is given by the interval 2300–3000 m, where the simulated profile obtained considering a rural aerosol component deviates less.
Figure 9. Same as Fig. 8 but obtained considering particle backscattering coefficient profiles at 532 nm.
Figure 10 illustrates the deviations between the measured and the simulated particle backscattering coefficient profile at 1064 nm considering altitudes up to 2500 m. The smallest deviations between the two profiles over the considered altitude range are obtained when considering the presence of a continental–urban aerosol component (not exceeding 61.4 %, with a mean deviation of 21.2 %). Deviations between the measured and the simulated profile obtained considering a marine polluted aerosol component are slightly larger (smaller than 55 %, with a mean deviation of 28.6 %), while the simulated profiles obtained considering the presence of either a continental rural or an Arctic summer aerosol component largely deviate from the measured profile (up to 58 % and 82.7 %, with mean deviations of 40.9 % and 67.3 %, respectively). Again, the only exception is found in the interval 1600–1900 m, where the simulated profile obtained considering the marine polluted aerosol component deviates less.
Figure 10. Same as Fig. 8 but obtained considering particle backscattering coefficient profiles at 1064 nm up to 2500 m.
The overall deviation was calculated for the five distinct aerosol components. Figure 11 illustrates the overall deviations between the measured and the simulated particle backscattering coefficient profiles at 355, 532 and 1064 nm for the different aerosol components. In order to facilitate the interpretation of results, the overall deviation between measured and simulated particle backscattering coefficient profiles, for the different aerosol components, has been plotted together with the measured particle backscattering profiles at all wavelengths (Fig. 12). In the lowest portion of the atmosphere up to 1700 m, i.e. inside the planetary boundary layer, the continental–urban aerosol component is predominant. The upper layer between 1700 and 2400 m is characterized by the presence of a maritime aerosol component in the lower part and again an urban aerosol component in the upper part. Deviations including the particle backscattering coefficient at 1064 nm were computed only up to 2500 m because of the high statistical noise of the 1064 nm lidar signal. Additional layers are visible in the altitude ranges 2400–3100 and 3800–4500 m. Above 2400 m simulations based on the urban and maritime components show similar deviations from the measurements, except in the central part of the layer between 2600 and 2900 m and between 4300 and 4500 m, where rural aerosols deviate less. HYSPLIT back-trajectory analysis confirms that the sounded air masses overpassed industrialized areas in France, Belgium and England in the previous days.
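The layer-by-layer attribution used above amounts to picking, at each altitude, the component with the smallest total deviation; a minimal sketch of that selection step, with invented deviation values, is given below.

```python
# Sketch: assign a predominant aerosol component to each altitude level by
# taking the component with the smallest total deviation. Values are invented.
import numpy as np

components = ["continental-urban", "continental-rural",
              "arctic-summer", "marine-polluted"]
altitudes = np.array([800.0, 1500.0, 2200.0, 2900.0])    # m, illustrative
delta_tot = np.array([[12.0, 45.0, 60.0, 20.0],          # % per component
                      [15.0, 40.0, 55.0, 18.0],
                      [22.0, 30.0, 50.0, 25.0],
                      [35.0, 28.0, 48.0, 33.0]])

for z, idx in zip(altitudes, np.argmin(delta_tot, axis=1)):
    print(f"{z:6.0f} m -> {components[idx]}")
```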
Figure 11. Total deviation, in percentage, between measured and simulated particle backscattering coefficient profiles at 355, 532 and 1064 nm (up to 2500 m) for the different aerosol components. Simulated profiles are Arctic summer (black dashed line), continental–urban (red dashed line), marine (polluted) (blue dashed line) and continental (rural) (green dashed line).
## 5.2 Case study on 2 October 2012
A second flight took place on 2 October 2012. During the ascending path, in situ sensors on board the ATR42 were operated in the altitude region from 680 to 5700 m, covering a 44 min time interval between 19:43 and 20:27 UTC. BASIL was operated over the time interval 16:00–24:00 UTC.
Wind direction measurements performed by the on-board flight sensors reveal a north-westerly wind, with direction varying in the range 220–320° depending on altitude. Figure 13 shows the 5-day back trajectories from the NOAA HYSPLIT model at 600, 4000 and 6000 m (in red, blue and green, respectively), ending on the lidar site at 20:00 UTC on 2 October 2012.
Figure 12. Total deviation, in percentage, between measured and simulated particle backscattering coefficient profiles for the different aerosol components (Arctic summer: black dashed line, continental–urban: red dashed line, marine polluted: blue dashed line; continental rural: green dashed line) and measured particle backscattering profiles at both 355 (blue line) and 532 nm (red line). The horizontal blue and red axes refer to the particle backscattering coefficient at 355 and 532 nm, respectively, while the horizontal black axis refers to the total deviations. Horizontal orange lines are also drawn at specific altitudes to identify different aerosol types in support of the interpretation of the reported results.
Figure 13. Back trajectories at 600 (red), 4000 (blue) and 6000 m (green) ending on the lidar site at 20:00 UTC on 2 October 2012.
Back-trajectory analysis results reveal that air masses reaching the measurement site at an altitude of 600 m originated in the North Atlantic Ocean, south of Iceland, and passed at low altitudes (500–600 m) over highly anthropogenic continental areas (Ireland, England and northern France). A different path characterizes air masses at 4000 m. These originated over the North Atlantic Ocean, offshore of the Canadian coast, and overpassed an area north of the Azores and then the northern coast of Spain before reaching the measurement site. Finally, air masses reaching the measurement site at 6000 m originated over the North Pacific Ocean and overpassed Canada, the North Atlantic Ocean and the northern coast of Spain before reaching the measurement site.
In the analysis of this second case study, we applied the same methodology considered for the first case study. As for the previous case study, given the microphysical parameters and aerosol typology for each of the three given modes, the number of particles has been varied in order for the theoretical distribution to match the experimental distribution computed with the OPC data, with the matching between the experimental and theoretical distributions again obtained through a best fit procedure. The modal radius, standard deviation and refractive index reported by d'Almeida et al. (1991) for the different considered aerosol components are listed in Table 1.
Figure 14 illustrates the vertical profiles of measured (black line) and simulated particle backscattering coefficient at 355 nm over the 44 min time interval between 19:43 and 20:27 UTC on 2 October 2012. Simulated particle backscatter profiles are shown for the aerosol components specified above: the continental–urban component (red dashed line), the continental (rural) component (green dashed line), the Arctic summer component (black dashed line) and the marine (polluted) component (blue dashed line). Figure 14 reveals a good agreement between the measured backscattering coefficient profile at 355 nm and the coefficients simulated at this same wavelength assuming either a continental–urban or a marine (polluted) aerosol component.
Figure 14. Vertical profiles of measured (black line) and simulated particle backscattering coefficient at 355 nm over the time interval 19:43–20:27 UTC on 2 October 2012. Simulated particle backscatter profiles include the following components: continental–urban (red dashed line), continental (rural) (green dashed line), Arctic summer (black dashed line) and marine (polluted) (blue dashed line). The error bar in lidar measurements accounts for the statistical uncertainty.
Figure 15. Same as Fig. 14 but with particle backscattering coefficient profiles at 532 nm.
We also applied this same analysis approach to the data at 532 nm, with Fig. 15 illustrating the vertical profiles of the measured and simulated particle backscattering coefficient at 532 nm over the same time interval considered in Fig. 14. Again, simulated particle backscatter profiles include the five above-specified aerosol components. Figure 15 reveals that the measured particle backscattering coefficient profile at 532 nm is well reproduced by the simulated profiles at this same wavelength, especially the profiles considering a continental–urban aerosol component and a marine (polluted) aerosol component, with simulated profiles slightly underestimating the measured profile but being within or slightly exceeding the measurement error bar. Deviations between measured and simulated profiles are larger within the aerosol layers centred at 3000 and 4000 m. Due to the limited laser power at 1064 nm for this specific measurement session, measured profiles of the particle backscattering coefficient at 1064 nm are characterized by high statistical noise, which prevents us from considering the use of the comparison between measured and simulated particle backscatter profiles at this wavelength in the present analysis.
Figure 16 illustrates the deviations between the measured and the simulated particle backscattering coefficient profiles at 355 nm. The smallest deviations between the measured and the simulated particle backscattering coefficient profile over the considered altitude range are obtained when considering the presence of a continental–urban aerosol component (not exceeding 15 % up to 5000 m, with a mean deviation of 5.9 %). Deviations between the measured and simulated profile obtained considering a marine polluted aerosol component slightly exceed these values (smaller than 20 % up to 5000 m, with a mean deviation of 9.5 %), while the simulated profile obtained considering the presence of either a continental rural or an Arctic summer aerosol component largely deviates from the measured profile (up to 80 %, with a mean deviation of 50.9 % and 25.9 %, respectively).
Figure 17 illustrates the deviations between measured and simulated particle backscattering coefficient profiles at 532 nm. Again, the smallest deviations between the two profiles over the considered altitude range are obtained when considering a continental–urban aerosol component (not exceeding 50 % up to 5000 m, with a mean deviation of 25.9 %), with the only exception for the interval 3100–3700 m, where the simulated profile obtained considering a marine polluted aerosol component deviates less. Above 3700 m simulated profiles obtained considering a continental–urban and a marine polluted aerosol component equally deviate from the measured profile.
Figure 16. Deviation, expressed in percentage, between measured and simulated particle backscattering coefficient profiles at 355 nm. Simulated profiles are Arctic summer (black dashed line), continental–urban (red dashed line), marine (polluted) (blue dashed line) and continental (rural) (green dashed line).
Figure 17. Same as Fig. 16 but obtained considering particle backscattering coefficient profiles at 532 nm.
In the attempt to simultaneously minimize deviations between measured and simulated particle backscattering profiles at both 355 and 532 nm, following Eq. (7), a total deviation can be computed as the root sum square of the single deviations at the two wavelengths, which can be expressed as
$\begin{array}{}\text{(8)}& {\mathrm{\Delta }}_{\mathrm{tot}}=\sqrt{{\mathrm{\Delta }}_{\mathrm{355}}^{\mathrm{2}}+{\mathrm{\Delta }}_{\mathrm{532}}^{\mathrm{2}}}.\end{array}$
This quantity was calculated for the five distinct aerosol components. Figure 18 illustrates the total deviations between the measured and the simulated particle backscattering coefficient profiles at 355 and 532 nm for the different aerosol components. In order to facilitate the interpretation of these results, the total deviation between measured and simulated particle backscattering coefficient profiles for the different aerosol components has been plotted together with the measured particle backscattering profiles at both 355 and 532 nm (Fig. 19).
Figure 18. Total deviation, in percentage, between measured and simulated particle backscattering coefficient profiles at 355 and 532 nm for the different aerosol components. Simulated profiles are Arctic summer (black dashed line), continental–urban (red dashed line), marine (polluted) (blue dashed line) and continental (rural) (green dashed line).
Figure 19. Total deviation, in percentage, between measured and simulated particle backscattering coefficient profiles for the different aerosol components (Arctic summer: black dashed line, continental–urban: red dashed line, marine polluted: blue dashed line; continental rural: green dashed line) and measured particle backscattering profiles at both 355 (blue line) and 532 nm (red line). The horizontal blue and red axes refer to the particle backscattering coefficient at 355 and 532 nm, respectively, while the horizontal black axis refers to the total deviations. Horizontal orange lines are also drawn at specific altitudes to identify different aerosol types in support of the interpretation of the reported results.
Figure 19 allows the following considerations to be made. In the lowest portion of the atmosphere, up to an altitude of ∼1300 m (altitude 1), aerosol particles are most likely characterized by a predominant continental–urban component. This aerosol layer extends up to ∼1600 m, which is the altitude at which the boundary layer height is located, as also indicated by the simultaneous radiosonde data (not shown here). In the upper portion of the boundary layer, in the vertical interval 1300–1600 m, deviations associated with continental–urban, marine polluted and continental rural components overlap, which suggests that all three aerosol components are possible. However, while this upper portion of the boundary layer is typically characterized by entrainment effects (interfacial region), which may allow different aerosol components to be ingested, the continental–urban component is likely to be the predominant component.
Above the top of the boundary layer and up to ∼2700 m (altitude 2), particle backscatter decreases with altitude. The typology analysis suggests that continental–urban aerosols are likely to be the predominant component, as the total deviation between the measured and the simulated particle backscattering coefficient profile for this aerosol component is far lower than for all other aerosol components.
In the altitude interval 2700–3600 m (altitudes 2–3, with max. at 3000 m) the measured particle backscatter profiles reveal the presence of a distinct aerosol layer. The typology analysis indicates that both the continental–urban and the marine polluted components are possible. An additional distinct aerosol layer is found in the altitude interval 3600–4600 m (altitudes 3–4, with max. at 4000 m). Again, the typology analysis suggests the continental–urban component is possible. Sounded aerosol particles at 3000 and 4000 m are compatible with continental polluted aerosols, this possibility being supported by the back-trajectory analysis at 3000 and 4000 m.
A sensitivity study has also been carried out to assess how the results vary with changes in the values of specific size and microphysical parameters. The sensitivity study reveals that the considered methodology for aerosol typing is successfully applicable in the altitude region up to 3900 m, since above this altitude the statistical uncertainty affecting the lidar signals is high, which severely reduces the effectiveness of the aerosol typing methodology. The sensitivity analysis also reveals that in the lower levels, typically within the boundary layer where the aerosol loading is larger, deviations between measured and simulated particle backscattering coefficients at the three wavelengths may vary by up to 20 % as a result of a ±5 % variability of specific size and microphysical parameters (for example, the real part of the refractive index), which reduces confidence in the aerosol typing approach but does not compromise its outcome. Based on the results from this study we may conclude that the use of particle backscattering measurements at two wavelengths in combination with OPC measurements allows a sufficiently reliable assessment of the aerosol types to be obtained, which can be verified and refined based on the use of back-trajectory analyses.
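A minimal sketch of such a perturbation test is given below; `forward_model` is a stand-in for the full Mie-based simulation and every number is illustrative.

```python
# Sketch: perturb a microphysical parameter by +/-5 % and record the spread of
# the relative deviation between simulated and measured backscatter.
import numpy as np

rng = np.random.default_rng(0)
beta_meas = np.array([2.1e-3, 1.5e-3, 9.0e-4])          # invented profile

def forward_model(scale):
    # Placeholder: a real run would repeat the Mie computation with the
    # perturbed refractive index; here the profile simply scales.
    return scale * np.array([2.2e-3, 1.45e-3, 9.5e-4])

def relative_deviation(beta_sim, beta_meas):
    return np.mean(np.abs(beta_sim - beta_meas) / beta_meas)

baseline = relative_deviation(forward_model(1.0), beta_meas)
perturbed = [relative_deviation(forward_model(s), beta_meas)
             for s in rng.uniform(0.95, 1.05, size=200)]
print(f"baseline {baseline:.3f}, spread under +/-5 %: {np.ptp(perturbed):.3f}")
```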
6 Summary
During HyMeX-SOP1, the Raman lidar system BASIL was deployed in Candillargues (southern France) and operated almost continuously over a 2-month period in the time frame September–November 2012. Dedicated flights of the French research aircraft ATR42 were carried out in the framework of the EUFAR-WaLiTemp Project. The ATR42 payload included in situ sensors for turbulence measurements, as well as aerosol and cloud microphysics probes, together with an optical particle counter (GRIMM Aerosol Technik GmbH, model: Sky-OPC 1.129) capable of measuring particle number concentration in the size interval 0.25–2.5 µm. A specific flight pattern was considered for the purpose of this study, with the aircraft making spirals up and down around a central location approximately 20 km eastward of the lidar site. Vertical profiles of the particle backscattering coefficient at 355, 532 and 1064 nm have been simulated through the use of a Mie scattering code, using the data provided by the optical particle counter. The simulated particle backscatter profiles have been compared with the profiles measured by the Raman lidar system BASIL. Results from two selected case studies (on 13 September and on 2 October 2012) are reported and discussed. The analysis approach, based on the application of a Mie scattering code, ultimately allows the sounded aerosol types to be inferred. The added value of the reported methodology is the possibility of inferring the presence of different aerosol types based on the use of multi-wavelength Raman lidar measurements from a ground-based system in combination with an independent measurement of the particle concentration profile (in our case the one provided by an optical particle counter mounted on board an aircraft overpassing the lidar site). This methodology is applicable when the sounded particles are spherical or almost spherical, which allows Mie scattering theory to be applied for the determination of the particle backscattering coefficient. The HYSPLIT-NOAA back-trajectory model was used to verify the origin of the sounded aerosol particles.
Five different aerosol typologies are considered, i.e. continental polluted, clean continental–rural, urban, maritime polluted and clean–polar, with their size and microphysical properties taken from the literature. The approach leads to an assessment of the predominant aerosol component based on the application of a minimization approach applied to the deviations between measured and simulated particle backscattering profiles at 355 and 532 nm, and for the first case study also at 1064 nm, considering all five aerosol typologies.
The application of this approach to the case study on 13 September 2012 suggests the presence of urban and maritime aerosols throughout the entire vertical extent of the sounded column, except in the altitude ranges 2600–2900 and 4300–4500 m, where the presence of a rural component is also possible. The application of the approach to the case study on 2 October 2012 reveals that continental–urban aerosols are likely to be the predominant component up to ∼1600 m, while the two distinct aerosol layers located in the altitude regions 2700–3600 m (with a maximum at 3000 m) and 3600–4600 m (with a maximum at 4000 m) are identified as likely consisting of continental–urban and/or marine polluted aerosols. The correctness of the results has been verified based on the application of the HYSPLIT-NOAA back-trajectory model, with the analysis extending back in time for 5 days, allowing the origin of the sounded aerosol particles to be assessed.
Finally, a sensitivity study has been carried out to assess the sensitivity of the aerosol typing approach to variations in size and microphysical parameters. The study reveals that the reported approach is successfully applicable in the altitude region up to ∼4 km, while above this altitude the effectiveness of the approach is substantially reduced by the high statistical uncertainty affecting the lidar signals. The sensitivity study also reveals that the within-boundary-layer deviations between measured and simulated particle backscattering coefficients at 355, 532 and 1064 nm may vary by up to 20 % as a result of a ±5 % variability of specific size and microphysical parameters. Such results reveal that the application of the reported approach, based on the use of particle backscattering measurements at two wavelengths in combination with OPC measurements, allows a sufficiently reliable assessment of aerosol typing to be obtained.
Data availability
Data used in this study, together with the related metadata, are available from the public data repository HyMeX database, which is freely accessible by all users through the following link: http://mistrals.sedoo.fr/HyMeX/ (Di Girolamo, 2019).
Author contributions
PDG designed and developed the main experiment, and MC designed and developed the additional receiving unit. PDG, MC, DS, AS and DS carried out the measurements. DS, AS and DS developed the data analysis algorithms and carried out the data analysis. DS and PDG prepared the manuscript with contributions from DS.
Competing interests
The authors declare that they have no conflict of interest.
Special issue statement
Acknowledgements
This work is a contribution to the HyMeX Program supported by MISTRALS and ANR IODA-MED grant ANR-11-BS56-0005. This research effort was supported by the European Commission under the European Facility for Airborne Research of the Seventh Framework Programme (WaLiTemp project). This research effort was also supported by the project “Smart Cities – Basilicata” by the Italian Ministry of Education, University and Research. The authors gratefully acknowledge NOAA Air Resources Laboratory (ARL) for the provision of the HYSPLIT transport and dispersion model used in this publication.
Review statement
This paper was edited by Domenico Cimini and reviewed by three anonymous referees.
References
Ansmann, A., Riebesell, M., and Weitkamp, C.: Measurement of atmospheric aerosol extinction profiles with a Raman lidar, Opt. Lett., 15, 746–748, https://doi.org/10.1364/OL.15.000746, 1990.
Ansmann, A., Wandinger, U., Riebesell, M., Weitkamp, C., and Michaelis, W.: Independent measurement of extinction and backscatter profiles in cirrus clouds by using a combined Raman elastic-backscatter lidar, Appl. Optics, 31, 7113–7131, https://doi.org/10.1364/AO.31.007113, 1992.
Bellantone, V., Carofalo, I., De Tomasi, F., Perrone, M. R., Santese, M., Tafuro, A. M., and Turnone, A.: In situ samplings and remote sensing measurements to characterize aerosol properties over South-East Italy, J. Atmos. Ocean. Tech., 25, 1341–1356, 2008.
Bennett, L. J., Blyth, A. M., Burton, R. R., Gadian, A. M., Weckwerth, T. M., Behrendt, A., Di Girolamo, P., Dorninger, M., Lock, S.-J., Smith, V. H., and Mobbs, S. D.: Initiation of convection over the Black Forest mountains during COPS IOP15a, Q. J. Roy. Meteor. Soc., 137, 176–189, https://doi.org/10.1002/qj.760, 2011.
Bhawar, R., Bianchini, G., Bozzo, A., Cacciani, M., Calvello, M.R., Carlotti, M., Castagnoli, F., Cuomo, V., Di Girolamo, P., Di Iorio, T., Di Liberto, L., di Sarra, A., Esposito, F., Fiocco, G., Fua, D., Grieco, G., Maestri, T., Masiello, G., Muscari, G., Palchetti, L., Papandrea, E., Pavese, G., Restieri, R., Rizzi, R., Romano, F., Serio, C., Summa, D., Todini, G., and Tosi, E.: Spectrally Resolved Observations of Atmospheric Emitted Radiance in the H2O Rotation Band, Geophys. Res. Lett., 35, L04812, https://doi.org/10.1029/2007GL032207, 2008.
Bhawar, R., Di Girolamo, P., Summa, D., Flamant, C., Althausen, D., Behrendt, A., Kiemle, C., Bosser, P., Cacciani, M., Champollion, C., Di Iorio, T., Engelmann, R., Herold, C., Müller, D., Pal, S., Wirth, M., and Wulfmeyer, V.: The water vapour intercomparison effort in the framework of the Convective and Orographically-induced Precipitation Study: airborne-to-ground-based and airborne-to-airborne lidar systems, Q. J. Roy. Meteor. Soc., 137, 325–348, https://doi.org/10.1002/qj.697, 2011.
d'Almeida, G. A., Koepke, P., and Shettle, E. P.: Atmospheric aerosols: Global climatology and radiative characteristics, A Deepak Publishing, Hampton, Virginia, 561 pp., 1991.
Di Girolamo, P.: Particle backscattering coefficient at 355 nm, Particle backscattering coefficient at 532 nm, Particle backscattering coefficient at 1064 nm, available at: http://mistrals.sedoo.fr/HyMeX/, last access: 3 April 2019.
Di Girolamo, P., Gagliardi, R. V., Pappalardo, G., Spinelli, N., Velotta, R., and Berardi, V.: Two wavelength lidar analysis of stratospheric aerosol size distribution, J. Aerosol Sci., 26, 989–1001, https://doi.org/10.1016/0021-8502(95)00025-8, 1995.
Di Girolamo, P., Ambrico, P. F., Amodeo, A., Boselli, A., Pappalardo, G., and Spinelli, N.: Aerosol observations by lidar in the nocturnal boundary layer, Appl. Optics, 38, 4585–4595, https://doi.org/10.1364/AO.38.004585, 1999.
Di Girolamo, P., Marchese, R., Whiteman, D. N., and Demoz, B. B.: Rotational Raman Lidar measurements of atmospheric temperature in the UV, Geophys. Res. Lett., 31, L01106, https://doi.org/10.1029/2003GL018342, 2004.
Di Girolamo, P., Behrendt, A., and Wulfmeyer, V.: Spaceborne profiling of atmospheric temperature and particle extinction with pure rotational Raman lidar and of relative humidity in combination with differential absorption lidar: performance simulations, Appl. Optics, 45, 2474–2494, https://doi.org/10.1364/AO.45.002474, 2006.
Di Girolamo, P., Behrendt, A., Kiemle, C., Wulfmeyer, V., Bauer, H., Summa, D., Dörnbrack, A., and Ehret, G.: Simulation of satellite water vapour lidar measurements: Performance assessment under real atmospheric conditions, Remote Sens. Environ., 112, 1552–1568, https://doi.org/10.1016/j.rse.2007.08.008, 2008.
Di Girolamo, P., Summa, D., and Ferretti, R.: Multiparameter Raman Lidar Measurements for the Characterization of a Dry Stratospheric Intrusion Event, J. Atmos. Ocean. Tech., 26, 1742–1762, https://doi.org/10.1175/2009JTECHA1253.1, 2009a.
Di Girolamo, P., Summa, D., Lin, R.-F., Maestri, T., Rizzi, R., and Masiello, G.: UV Raman lidar measurements of relative humidity for the characterization of cirrus cloud microphysical properties, Atmos. Chem. Phys., 9, 8799–8811, https://doi.org/10.5194/acp-9-8799-2009, 2009b.
Di Girolamo, P., Summa, D., Bhawar, R., Di Iorio, T., Cacciani, M., Veselovskii, I., Dubovik, O., and Kolgotin, A.: Raman lidar observations of a Saharan dust outbreak event: Characterization of the dust optical properties and determination of particle size and microphysical parameters, Atmos. Environ., 50, 66–78, https://doi.org/10.1016/j.atmosenv.2011.12.061, 2012a.
Di Girolamo, P., Summa, D., Cacciani, M., Norton, E. G., Peters, G., and Dufournet, Y.: Lidar and radar measurements of the melting layer: observations of dark and bright band phenomena, Atmos. Chem. Phys., 12, 4143–4157, https://doi.org/10.5194/acp-12-4143-2012, 2012b.
Di Girolamo, P., Flamant, C., Cacciani, M., Richard, E., Ducrocq, V., Summa, D., Stelitano, D., Fourrié, N., and Saïd, F.: Observation of low-level wind reversals in the Gulf of Lion area and their impact on the water vapour variability, Q. J. Roy. Meteor. Soc., 142, 153–172, https://doi.org/10.1002/qj.2767, 2016.
Di Girolamo, P., Cacciani, M., Summa, D., Scoccione, A., De Rosa, B., Behrendt, A., and Wulfmeyer, V.: Characterisation of boundary layer turbulent processes by the Raman lidar BASIL in the frame of HD(CP)2 Observational Prototype Experiment, Atmos. Chem. Phys., 17, 745–767, https://doi.org/10.5194/acp-17-745-2017, 2017.
Di Girolamo, P., Behrendt, A., and Wulfmeyer, V.: Space-borne profiling of atmospheric thermodynamic variables with Raman lidar: performance simulations, Opt. Express, 26, 8125–8161, https://doi.org/10.1364/OE.26.008125, 2018a.
Di Girolamo, P., Scoccione, A., Cacciani, M., Summa, D., De Rosa, B., and Schween, J. H.: Clear-air lidar dark band, Atmos. Chem. Phys., 18, 4885–4896, https://doi.org/10.5194/acp-18-4885-2018, 2018b.
Di Iorio, T., Di Sarra, A., Junkermann, W., Cacciani, M., Fiocco, G., and Fuà, D.: Tropospheric aerosols in the Mediterranean: 1. Microphysical and optical properties, J. Geophys. Res.-Atmos., 108, 4316, https://doi.org/10.1029/2002JD002815, 2003.
Draxler, R. R. and Hess, G. D.: An overview of the HYSPLIT_4 modeling system for trajectories, dispersion and deposition, Aust. Meteorol. Mag., 47, 295–308, 1998.
Ducrocq, V., Braud, I., Davolio, S., Ferretti, R., Flamant, C., Jansa, A., Kalthoff, N., Richard, E., Taupier-Letage, I., Ayral, P., Belamari, S., Berne, A., Borga, M., Boudevillain, B., Bock, O., Boichard, J., Bouin, M., Bousquet, O., Bouvier, C., Chiggiato, J., Cimini, D., Corsmeier, U., Coppola, L., Cocquerez, P., Defer, E., Delanoë, J., Di Girolamo, P., Doerenbecher, A., Drobinski, P., Dufournet, Y., Fourrié, N., Gourley, J. J., Labatut, L., Lambert, D., Le Coz, J., Marzano, F. S., Molinié, G., Montani, A., Nord, G., Nuret, M., Ramage, K., Rison, W., Roussot, O., Said, F., Schwarzenboeck, A., Testor, P., Van Baelen, J., Vincendon, B., Aran, M., and Tamayo, J.: HyMeX-SOP1: The Field Campaign Dedicated to Heavy Precipitation and Flash Flooding in the Northwestern Mediterranean, B. Am. Meteor. Soc., 95, 1083–1100, https://doi.org/10.1175/BAMS-D-12-00244.1, 2014.
Elterman, L.: Aerosol Measurements in the Troposphere and Stratosphere, Appl. Optics, 5, 1769–1776, https://doi.org/10.1364/AO.5.001769, 1966.
Estellés, V., Martínez-Lozano, J. A., and Utrillas, M. P.: Influence of air mass history on the columnar aerosol properties at Valencia, Spain, J. Geophys. Res., 112, D15211, https://doi.org/10.1029/2007JD008593, 2007.
Fernald, F. G.: Analysis of atmospheric lidar observations: some comments, Appl. Optics, 23, 652–653, https://doi.org/10.1364/AO.23.000652, 1984.
Fiocco, G. and Grams, G.: Observations of the Aerosol Layer at 20 km by Optical Radar, J. Atmos. Sci., 21, 323–324, https://doi.org/10.1175/1520-0469(1964)021<0323:OOTALA>2.0.CO;2, 1964.
Grainger, R. G., Lucas, J., Thomas, G. E., and Ewen, G. B. L.: Calculation of Mie derivatives, Appl. Optics, 43, 5386–5393, https://doi.org/10.1364/AO.43.005386, 2004.
Granados-Muñoz, M. J., Bravo-Aranda, J. A., Baumgardner, D., Guerrero-Rascado, J. L., Pérez-Ramírez, D., Navas-Guzmán, F., Veselovskii, I., Lyamani, H., Valenzuela, A., Olmo, F. J., Titos, G., Andrey, J., Chaikovsky, A., Dubovik, O., Gil-Ojeda, M., and Alados-Arboledas, L.: A comparative study of aerosol microphysical properties retrieved from ground-based remote sensing and aircraft in situ measurements during a Saharan dust event, Atmos. Meas. Tech., 9, 1113–1133, https://doi.org/10.5194/amt-9-1113-2016, 2016.
Grimm, H. and Eatough, D. J.: Aerosol Measurement: The Use of Optical Light Scattering for the Determination of Particulate Size Distribution, and Particulate Mass, Including the Semi-Volatile Fraction, J. Air Waste Manage., 59, 101–107, https://doi.org/10.3155/1047-3289.59.1.101, 2009.
Haywood, J. and Boucher, O.: Estimates of the direct and indirect radiative forcing due to tropospheric aerosols: A review, Rev. Geophys., 38, 513–543, https://doi.org/10.1029/1999RG000078, 2000.
Heim, M., Mullins, B. J., Umhauer, H., and Kasper, G.: Performance evaluation of three optical particle counters with an efficient “multimodal” calibration method, J. Aerosol Sci., 39, 1019–1031, https://doi.org/10.1016/j.jaerosci.2008.07.006, 2008.
Junge, C. and Jaenicke, R.: New results in background aerosols studies from the Atlantic expedition of the R.V. Meteor, Spring 1969, J. Aerosol Sci., 2, 305–314, https://doi.org/10.1016/0021-8502(71)90055-3, 1971.
Junge, C. E.: Our knowledge of the physico-chemistry of aerosols in the undisturbed marine environment, J. Geophys. Res., 77, 5183–5200, https://doi.org/10.1029/JC077i027p05183, 2012.
Klett, J. D.: Stable analytical inversion solution for processing lidar returns, Appl. Optics, 20, 211–220, https://doi.org/10.1364/AO.20.000211, 1981.
Klett, J. D.: Lidar inversion with variable backscatter/extinction ratios, Appl. Optics, 24, 1638–1643, https://doi.org/10.1364/AO.24.001638, 1985.
Macke, A., Seifert, P., Baars, H., Barthlott, C., Beekmans, C., Behrendt, A., Bohn, B., Brueck, M., Bühl, J., Crewell, S., Damian, T., D eneke, H., Düsing, S., Foth, A., Di Girolamo, P., Hammann, E., Heinze, R., Hirsikko, A., Kalisch, J., Kalthoff, N., Kinne, S., Kohler, M., Löhnert, U., Madhavan, B. L., Maurer, V., Muppa, S. K., Schween, J., Serikov, I., Siebert, H., Simmer, C., Späth, F., Steinke, S., Träumner, K., Trömel, S., Wehner, B., Wieser, A., Wulfmeyer, V., and Xie, X.: The HD(CP)2 Observational Prototype Experiment (HOPE) – an overview, Atmos. Chem. Phys., 17, 4887–4914, https://doi.org/10.5194/acp-17-4887-2017, 2017.
Man, C. K. and Shih, M. Y.: Identification of sources of PM10 aerosols in Hong Kong by wind trajectory analysis, J. Aerosol Sci., 32, 1213–1223, https://doi.org/10.1016/S0021-8502(01)00052-0, 2001.
McMeeking, G. R., Hamburger, T., Liu, D., Flynn, M., Morgan, W. T., Northway, M., Highwood, E. J., Krejci, R., Allan, J. D., Minikin, A., and Coe, H.: Black carbon measurements in the boundary layer over western and northern Europe, Atmos. Chem. Phys., 10, 9393–9414, https://doi.org/10.5194/acp-10-9393-2010, 2010.
Methven, J., Evans, M., Simmonds, P., and Spain, G.: Estimating relationships between air mass origin and chemical composition, J. Geophys. Res., 106, 5005–5019, 2001.
Mhawish, A., Kumar, M., Mishra A. K., Srivastava P. K., and Banerjee, T.: Chapter 3 – Remote Sensing of Aerosols From Space: Retrieval of Properties and Applications, Remote Sensing of Aerosols, Clouds, and Precipitation, Elsevier, 45–83, https://doi.org/10.1016/B978-0-12-810437-8.00003-7, 2018.
Mitchell, J. M.: The Effect of Atmospheric Aerosols on Climate with Special Reference to Temperature near the Earth's Surface, J. Appl. Meteor., 10, 703–714, https://doi.org/10.1175/1520-0450(1971)010<0703:TEOAAO>2.0.CO;2, 1971.
Müller, D., Wandinger, U., Althausen, D., and Fiebig, M.: Comprehensive particle characterization from three-wavelength Raman-lidar observations: case study, Appl. Optics, 40, 4863–4869, https://doi.org/10.1364/AO.40.004863, 2001.
Müller D., Mattis, I., Ansmann, A., Wandinger, U., Ritter, C., and Kaiser, D.: Multiwavelength Raman lidar observations of particle growth during long-range transport of forest-fire smoke in the free troposphere, Geophys. Res. Lett., 34, L05803, https://doi.org/10.1029/2006GL027936, 2007.
Müller, T., Schladitz, A., Massling, A., Kaaden, N., Kandler, K., and Wiedensohler, A.: Spectral absorption coefficients and imaginary parts of refractive indices of Saharan dust during SAMUM-1, Tellus B, 61, 79–95, https://doi.org/10.1111/j.1600-0889.2008.00399.x, 2009.
Rolph, G., Stein, A., and Stunder, B.: Real-time Environmental Applications and Display sYstem: READY, Environ. Modell. Softw., 95, 210–228, https://doi.org/10.1016/j.envsoft.2017.06.025, 2017.
Shettle, P. and Fenn, R. W.: Models of the Atmospheric Aerosols and Their Optical Properties, in: AGARD Conference Proceedings No. 183, Optical Propagation in the Atmosphere, Lyngby, Denmark, 1976.
Shettle, E. P. and Fenn, R. W.: Models for the Aerosols of the Lower Atmosphere and the Effects of Humidity Variations on Their Optical Properties, AFGL-TR-79-0214, 675, p. 94, 1979.
Sekiguchi, M., Nakajima, T., Suzuki, K., Kawamoto, K., Higurashi, A., Rosenfeld, D., Sano, I., and Mukai, S.: A study of the direct and indirect effects of aerosols using global satellite data sets of aerosol and cloud parameters, J. Geophys. Res.-Atmos., 108, 4699, https://doi.org/10.1029/2002JD003359, 2003.
Serio, C., Masiello, G., Esposito, F., Di Girolamo, P., Di Iorio, T., Palchetti, L., Bianchini, G., Muscari, G., Pavese, G., Rizzi, R., Carli, B., and Cuomo, V.: Retrieval of foreign-broadened water vapor continuum coefficients from emitted spectral radiance in the H2O rotational band from 240 to 590 cm−1, Opt. Express, 16, 15816–15833, https://doi.org/10.1364/OE.16.015816, 2008.
Stein, A. F., Draxler, R. R., Rolph, G. D., Stunder, B. J. B., Cohen, M. D., and Ngan, F.: NOAA's HYSPLIT Atmospheric Transport and Dispersion Modeling System, B. Am. Meteor. Soc., 96, 2059–2077, https://doi.org/10.1175/BAMS-D-14-00110.1, 2015.
Takemura, T., Nozawa, T., Emori, S., Nakajima, T. Y., and Nakajima, T.: Simulation of climate response to aerosol direct and indirect effects with aerosol transport-radiation model, J. Geophys. Res.-Atmos., 110, D02202, https://doi.org/10.1029/2004JD005029, 2005.
Temme, N. M.: Special Functions: An introduction to the classical functions of mathematical physics, (2nd print ed.), New York, Wiley, 228–231, ISBN 0471113131, 1996.
Toledano, C., Cachorro, V. E., De Frutos, A. M., Torres, B., Berjon, A., Sorribas, M., and Stone, R. S.: Airmass Classification and Analysis of Aerosol Types at El Arenosillo (Spain), J. Appl. Meteorol. Clim., 8, 962–981, https://doi.org/10.1175/2008jamc2006.1, 2009.
Veselovskii, I., Kolgotin, A., Griaznov, V., Müller, D., Wandinger, U., and Whiteman, D. N.: Inversion with regularization for the retrieval of tropospheric aerosol parameters from multiwavelength lidar sounding, Appl. Optics, 41, 3685–3699, https://doi.org/10.1364/AO.41.003685, 2002.
Veselovskii, I., Whiteman, D. N., Kolgotin, A., Andrews, E., and Korenskii, M.: Demonstration of Aerosol Property Profiling by Multiwavelength Lidar under Varying Relative Humidity Conditions, J. Atmos. Ocean. Tech., 26, 1543–1557, https://doi.org/10.1175/2009JTECHA1254.1, 2009.
Veselovskii, I., Dubovik, O., Kolgotin, A., Lapyonok, T., Di Girolamo, P., Summa, D., Whiteman, D. N., Mishchenko, M., and Tanré, D.: Application of randomly oriented spheroids for retrieval of dust particle parameters from multiwavelength lidar measurements, J. Geophys. Res.-Atmos., 115, D21203, https://doi.org/10.1029/2010JD014139, 2010.
WCP-112: A Preliminary Cloudless Standard Atmosphere for Radiation Computation, WCP/IAMAP Radiation Commission, Geneva, WCP, (ICSU/WMO/WCP/IAMAP) (WCP-112), ii, 53 pp., Call no: WCP 112 TD 241986, 1986.
Wulfmeyer, V., Bauer, H., Di Girolamo, P., and Serio, C.: Comparison of active and passive water vapor remote sensing from space: An analysis based on the simulated performance of IASI and space borne differential absorption lidar, Remote Sens. Environ., 95, 211–230, https://doi.org/10.1016/j.rse.2004.12.019, 2005.
Wulfmeyer, V., Behrendt, A., Bauer, H. S., Kottmeier, C., Corsmeier, U., Blyth, A., Craig, G., Schumann, U., Hagen, M., Crewell, S., Di Girolamo, P., Flamant, C., Miller, M., Montani, A., Mobbs, S., Richard, E., Rotach, M. W., Arpagaus, M., Russchenberg, H., Schlüssel, P., König, M., Gärtner, V., Steinacker, R., Dorninger, M., Turner, D. D., Weckwerth, T., Hense, A., and Simmer, C.: Research campaign: The convective and orographically induced precipitation study – A research and development project of the World Weather Research Program for improving quantitative precipitation forecasting in low-mountain regions, B. Am. Meteorol. Soc., 89, 1477–1486, https://doi.org/10.1175/2008BAMS2367.1, 2008.
Yang, F., Tan, J., Zhao, Q., Du, Z., He, K., Ma, Y., Duan, F., Chen, G., and Zhao, Q.: Characteristics of PM2.5 speciation in representative megacities and across China, Atmos. Chem. Phys., 11, 5207–5219, https://doi.org/10.5194/acp-11-5207-2011, 2011.
https://weblib.cern.ch/collection/CERN%20Published%20Articles?ln=ka&as=1 | # CERN Published Articles
Latest additions:
2021-10-25
09:38
Microbes and space travel – hope and hazards / Wei-Tze Tang, Julian (Leicester Royal Infirmary) ; Henriques, Andre (CERN) ; Ping Loh, Tze (Laboratory Medicine, National University Hospital, Singapore) 2021 - 6 p. - Published in : Future Microbiology 16 (2021) 0196
2021-10-22
10:44
New results in $\Lambda_{\mathrm{b}}^0$ baryon physics at CMS / Petrov, Nikita (Moscow, MIPT) /CMS Collaboration The study of excited $\Lambda_{\mathrm{b}}^0$ states and their mass measurement by the CMS experiment is reported, as well as the observation of the $\Lambda_{\mathrm{b}}^0\rightarrow \mathrm{J}/\psi\Lambda\phi$ decay and the measurement of its branching fraction, relative to the $\Lambda_{\mathrm{b}}^0\rightarrow \psi(\mathrm{2S})\Lambda$ decay. Both analyses use proton-proton collision data collected at $\sqrt{s} = 13$ TeV.. SISSA, 2021 - 6 p. - Published in : PoS ICHEP2020 (2021) 409 Fulltext: PDF; In : 40th International Conference on High Energy Physics (ICHEP), Prague, Czech Republic, 28 Jul - 6 Aug 2020, pp.409
2021-10-22
10:44
Higgs couplings: constraints and sensitivity on Supersymmetry / Mahmoudi, Farvah (IP2I, Lyon ; CERN) ; Arbey, Alexandre (IP2I, Lyon ; CERN) ; Battaglia, Marco (UC, Santa Cruz) ; Djouadi, Abdelhak (Annecy, LAPTH ; NICPB, Tallinn) ; Mühlleitner, Margarete (KIT, Karlsruhe) ; Spira, Michael (PSI, Villigen) The determination of the fundamental properties of the Higgs boson with sufficient accuracy has crucial implications for physics beyond the Standard Model. Here, we consider Supersymmetry and highlight the complementarity of the Higgs measurements and the superparticle searches at the LHC, based on the 8 and 13 TeV results, in probing its phenomenological minimal version, the pMSSM. [...] SISSA, 2021 - 6 p. - Published in : PoS ICHEP2020 (2021) 261 Fulltext: PDF; In : 40th International Conference on High Energy Physics (ICHEP), Prague, Czech Republic, 28 Jul - 6 Aug 2020, pp.261
2021-10-22
10:44
Search for New Physics with the SHiP experiment at CERN / Shirobokov, Sergey (Imperial Coll., London) /SHiP Collaboration The SHiP Collaboration has proposed a general-purpose experimental facility operating in beam dump mode at the CERN SPS accelerator with the aim of searching for light, long-lived exotic particles. The detector system aims at measuring the visible decays of hidden sector particles to both fully reconstructible final states and to partially reconstructible final states with neutrinos, in a nearly background free environment. [...] SISSA, 2021 - 6 p. - Published in : PoS ICHEP2020 (2021) 282 Fulltext: PDF; In : 40th International Conference on High Energy Physics (ICHEP), Prague, Czech Republic, 28 Jul - 6 Aug 2020, pp.282
2021-10-22
10:44
Tau identification exploiting deep learning techniques / Cardini, Andrea (DESY) /CMS Collaboration The recently deployed DeepTau algorithm for the discrimination of taus from light flavor quark or gluon induced jets, electrons, or muons is an ideal example for the exploitation of modern deep learning neural network techniques. With the current algorithm a suppression of misidentification rates by factors of two and more have been achieved for the same identification efficiency for taus compared to the MVA identification algorithms used for the LHC Run-1, leading to significant performance gains for many tau related analyses. [...] SISSA, 2021 - 6 p. - Published in : PoS ICHEP2020 (2021) 723 Fulltext: PDF; In : 40th International Conference on High Energy Physics (ICHEP), Prague, Czech Republic, 28 Jul - 6 Aug 2020, pp.723
2021-10-22
10:44
Hadronic resonance production measured by ALICE at the LHC / Ganoti, Paraskevi (Athens U.) /ALICE Collaboration Hadronic resonances with different lifetimes are very useful to probe the hadronic phase of heavy-ion collisions. Due to their relatively short lifetimes compared to the duration of the hadronic phase, resonances are good candidates to investigate the interplay between particle re-scattering and regeneration in the hadronic phase. [...] SISSA, 2021 - 6 p. - Published in : PoS ICHEP2020 (2021) 539 Fulltext: PDF; In : 40th International Conference on High Energy Physics (ICHEP), Prague, Czech Republic, 28 Jul - 6 Aug 2020, pp.539
2021-10-22
10:44
FASER: Forward Search Experiment at the LHC / Queitsch-Maitland, Michaela (CERN) /Faser Collaboration FASER is an approved small and inexpensive experiment designed to search for light, weakly-interacting particles during Run-3 of the LHC. Such particles may be produced in large numbers along the beam collision axis, travel for hundreds of meters without interacting, and then decay to Standard Model particles. [...] SISSA, 2021 - 6 p. - Published in : PoS ICHEP2020 (2021) 273 Fulltext: PDF; In : 40th International Conference on High Energy Physics (ICHEP), Prague, Czech Republic, 28 Jul - 6 Aug 2020, pp.273
2021-10-22
10:44
Searches for Dark Photons at LHCb / Cid Vidal, Xabier (Santiago de Compostela U.) /LHCb Collaboration In the past years, LHCb has shown its capabilities to perform interesting searches for beyond the Standard Model particles. The most clear example of this is that of Dark Photons, which are hypothetical new particles that serve as a "portal" between the Standard Model and a potential hidden dark sector. [...] SISSA, 2021 - 6 p. - Published in : PoS ICHEP2020 (2021) 231 Fulltext: PDF; In : 40th International Conference on High Energy Physics (ICHEP), Prague, Czech Republic, 28 Jul - 6 Aug 2020, pp.231
2021-10-22
10:44
Beauty to charmonium decays at LHCb / Li, Peilian (Heidelberg U.) /LHCb Collaboration The latest results of beauty meson decays to final states with charmonium resonances from LHCb are presented. This includes measurements of time-dependent CP violation parameters in $B_s^0\to J/\psi K^+K^-$ and $B_s^0\to J/\psi\pi^+\pi^-$ decay modes. [...] SISSA, 2021 - 6 p. - Published in : PoS ICHEP2020 (2021) 390 Fulltext: PDF; In : 40th International Conference on High Energy Physics (ICHEP), Prague, Czech Republic, 28 Jul - 6 Aug 2020, pp.390
2021-10-22
10:44
Commissioning and prospects of the first GEM station at the CMS experiment / Mocellin, Giovanni (RWTH Aachen U.) /CMS Collaboration The CMS Collaboration has been developing a Gas Electron Multiplier (GEM) detector in the endcap regions of the CMS muon system to maintain the high level of performance achieved during Run 2 in the challenging environment of the high-luminosity phase of the LHC (HL-LHC). The GEM detectors at endcap station 1 (GE1/1) were installed during the second long shutdown. [...] SISSA, 2021 - 6 p. - Published in : PoS ICHEP2020 (2021) 755 Fulltext: PDF; In : 40th International Conference on High Energy Physics (ICHEP), Prague, Czech Republic, 28 Jul - 6 Aug 2020, pp.755
https://www.picostat.com/dataset/r-dataset-package-boot-aircondit7 | # R Dataset / Package boot / aircondit7
Attachment: CSV file, 82 bytes. License: GNU General Public License v2.0.
## Dataset Help
On this Picostat.com statistics page, you will find information about the aircondit7 data set which pertains to Failures of Air-conditioning Equipment. The aircondit7 data set is found in the boot R package. You can load the aircondit7 data set in R by issuing the following command at the console data("aircondit7"). This will load the data into a variable called aircondit7. If R says the aircondit7 data set is not found, you can try installing the package by issuing this command install.packages("boot") and then attempt to reload the data. If you need to download R, you can go to the R project website. You can download a CSV (comma separated values) version of the aircondit7 R data set. The size of this file is about 82 bytes.
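For a quick look at the data outside R, the CSV download mentioned above can also be analysed with a short script; the sketch below assumes the file is saved locally as aircondit7.csv and contains a single hours column (both names are assumptions) and bootstraps the mean time between failures.

```python
# Sketch: nonparametric bootstrap of the mean time between failures from the
# downloaded CSV. File name and column name ("hours") are assumed here.
import numpy as np
import pandas as pd

hours = pd.read_csv("aircondit7.csv")["hours"].to_numpy(dtype=float)

rng = np.random.default_rng(42)
boot_means = np.array([
    rng.choice(hours, size=hours.size, replace=True).mean()
    for _ in range(10_000)
])

print("sample mean:", hours.mean())
print("95% percentile interval:", np.percentile(boot_means, [2.5, 97.5]))
```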
Documentation
## Failures of Air-conditioning Equipment
### Description
Proschan (1963) reported on the times between failures of the air-conditioning equipment in 10 Boeing 720 aircraft. The aircondit data frame contains the intervals for the ninth aircraft while aircondit7 contains those for the seventh aircraft.
Both data frames have just one column. Note that the data have been sorted into increasing order.
### Usage
aircondit
### Format
The data frames contain the following column:
hours
The time interval in hours between successive failures of the air-conditioning equipment
### Source
The data were taken from
Cox, D.R. and Snell, E.J. (1981) Applied Statistics: Principles and Examples. Chapman and Hall.
### References
Davison, A.C. and Hinkley, D.V. (1997) Bootstrap Methods and Their Application. Cambridge University Press.
Proschan, F. (1963) Theoretical explanation of observed decreasing failure rate. Technometrics, 5, 375-383.
--
Dataset imported from https://www.r-project.org.
https://met-calc.com/articles/calculation_dimensions_for_drive_by_parallel_or_diagonal_square_head_iso_5211 | ## Calculation dimensions for drive by parallel or diagonal square head ISO 5211
The advantage of this connection is easy assembly and disassembly. The disadvantage is the low manufacturing precision, which limits the connection to low speeds and small torques.
For a simplified calculation, it is assumed that the joint has no clearance (play) and that the torque is transmitted by contact stress acting on half of each working face of the square head. A triangular distribution of this stress can be assumed.
The actual load distribution will differ from this assumption because of manufacturing inaccuracy, looseness or prestressing of the joint, and shaft deformation under the torsion torque. These deviations can be included in the calculation through a maximum-stress coefficient $s_s=1,3-2$; the lower value applies to short joints ($l\le s$) and to high manufacturing accuracy.
Fig. 1 - The square head joint
Torsion stress:
$\tau=\cfrac{0,601M_T}{\left(0,5s\right)^3}$
where:
- $τ$: torsion stress, $\mathrm{MPa}$
- $M_T$: torque, $\mathrm{Nm}$
- $s$: width of the square head, $\mathrm{mm}$
- $τ_{all}$: allowable torsion stress, $\mathrm{MPa}$
Allowable torsion stress:
$\tau_{all}=\cfrac{0,4R_{p0,2/T}}{S_F}\cdot C_c$
where:
- $τ_{all}$: allowable torsion stress, $\mathrm{MPa}$
- $R_{p 0,2 /T}$: the minimum yield strength or 0,2% proof strength at the calculation temperature, $\mathrm{MPa}$
- $S_F$: safety factor (dimensionless)
- $C_c$: coefficient of use of the joint according to load (dimensionless)
Bearing stress:
$p=\cfrac{M_T\cdot s_s}{2a\cdot l\cdot b}\le\sigma_{all}$
where:
- $p$: bearing stress, $\mathrm{MPa}$
- $M_T$: torque, $\mathrm{Nm}$
- $s_s$: coefficient of maximum stress increase (dimensionless)
- $a$: loaded length of the square head face, $\mathrm{mm}$
- $l$: length of the square head in the hub, $\mathrm{mm}$
- $b$: distance of the resultant of the pressure, $\mathrm{mm}$
Distance of the resultant of the pressure:
$b=a_1+\cfrac{2}{3}a$
where:
- $b$: distance of the resultant of the pressure, $\mathrm{mm}$
- $a_1$: unloaded length of the square head face, $\mathrm{mm}$
- $a$: loaded length of the square head face, $\mathrm{mm}$
$a_1=\cfrac{d_9}{2}\sin{\left(\cos^{-1}{\cfrac{s}{d_9}}\right)}$
where:
- $a_1$: unloaded length of the square head face, $\mathrm{mm}$
- $d_9$: free diameter, $\mathrm{mm}$
- $s$: width of the square head, $\mathrm{mm}$
$a=\cfrac{d_8}{2}\sin{\left(\cos^{-1}{\cfrac{s}{d_8}}\right)}-a_1$
where:
- $a$: loaded length of the square head face, $\mathrm{mm}$
- $d_8$: diameter of the square head, $\mathrm{mm}$
- $s$: width of the square head, $\mathrm{mm}$
- $a_1$: unloaded length of the square head face, $\mathrm{mm}$
Allowable bearing stress:
$\sigma_{all}=\cfrac{0,9R_{p0,2/T}}{S_F}\cdot C_c$
where:
- $\sigma_{all}$: allowable bearing stress, $\mathrm{MPa}$
- $R_{p 0,2 /T}$: the minimum yield strength or 0,2% proof strength at the calculation temperature, $\mathrm{MPa}$
- $S_F$: safety factor (dimensionless)
- $C_c$: coefficient of use of the joint according to load (dimensionless)
Example:
We have to determine the safety factor for the bearing stress of the hub. Dimensions will be from the standard EN ISO 5211 flange type F25. The hub material will be GGG70. $M_T=8000\ \mathrm{Nm}$; $R_{p0,2/T}=380\ \mathrm{MPa}$; $s_s=1,5$; $s=55\ \mathrm{mm}$; $l=52\ \mathrm{mm}$; $d_8=72,2\ \mathrm{mm}$; $d_9=57,9\ \mathrm{mm}$.
$a_1=\cfrac{d_9}{2}\sin{\left(\cos^{-1}{\cfrac{s}{d_9}}\right)}=\cfrac{57,9}{2}\sin{\left(\cos^{-1}{\cfrac{55}{57,9}}\right)}=9,05\ \mathrm{mm}$
$a=\cfrac{d_8}{2}\sin{\left(\cos^{-1}{\cfrac{s}{d_8}}\right)}-a_1=\cfrac{72,2}{2}\sin{\left(\cos^{-1}{\cfrac{55}{72,2}}\right)}-9,05=14,34\ \mathrm{mm}$
Distance of the resultant of the pressure:
$b=a_1+\cfrac{2}{3}a=9,05+\cfrac{2}{3}14,34=18,61\ \mathrm{mm}$
Bearing stress:
$p=\cfrac{M_T\cdot s_s}{2a\cdot l\cdot b}=\cfrac{8000000\cdot1,5}{2\cdot14,34\cdot52\cdot18,61}=432,4\ \mathrm{MPa}$
Safety factor under bearing stress:
$S_F=\cfrac{0,9R_{p0,2/T}}{p}\cdot C_c=\cfrac{0,9\cdot380}{432,4}\cdot0,8=0,63$
$\rightarrow$ does not comply
The safety factor for the bearing stress is lower than 1, so the square head does not provide sufficient safety for the connection.
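To reproduce the worked example numerically, here is a minimal Python sketch of the bearing-stress check based on the formulas above. It assumes the torque is entered in N·mm (8000 Nm = 8 000 000 N·mm, as used in the numerical substitution above) and uses $C_c = 0,8$; both of these are assumptions made for the sketch rather than statements from the original page.

```python
import math

def square_head_bearing_check(M_T, s, l, d8, d9, s_s, Rp02, C_c):
    """Bearing-stress check of an ISO 5211 square head; torque M_T in N*mm, lengths in mm."""
    a1 = (d9 / 2) * math.sin(math.acos(s / d9))       # unloaded length of the face [mm]
    a = (d8 / 2) * math.sin(math.acos(s / d8)) - a1   # loaded length of the face [mm]
    b = a1 + (2 / 3) * a                              # distance of the pressure resultant [mm]
    p = (M_T * s_s) / (2 * a * l * b)                 # bearing stress [MPa]
    S_F = 0.9 * Rp02 * C_c / p                        # resulting safety factor
    return a1, a, b, p, S_F

# Values of the worked example (flange F25, GGG70 hub); C_c = 0.8 is an assumption.
a1, a, b, p, S_F = square_head_bearing_check(
    M_T=8_000_000, s=55, l=52, d8=72.2, d9=57.9, s_s=1.5, Rp02=380, C_c=0.8)
print(f"a1 = {a1:.2f} mm, a = {a:.2f} mm, b = {b:.2f} mm")
print(f"p = {p:.1f} MPa, S_F = {S_F:.2f}")            # S_F < 1 -> the connection does not comply
```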
Literature:
- AISC: Specification for structural steel buildings: Allowable Stress design and plastic design 1989.
- František Boháček: Části a mechanismy strojů I. 1984.
- Joseph E. Shigley, Charles R. Mischke, Richard G. Budynas: Konstruování strojních součástí 2010.
https://math.stackexchange.com/questions/3310312/has-anyone-got-a-reference-as-to-why-%c3%a9tale-galois-representations-are-de-rham | # Has anyone got a reference as to why étale Galois representations are de Rham?
I am currently studying $$p$$-adic Hodge theory and searching for help.
If $$X$$ is a variety over a $$p$$-adic field $$K$$ (should we take it global? $$p$$-adic?), then the étale cohomology $$H^i(X_{\bar K}, \mathbb{Q}_p)$$ is equipped with the Galois action of $$G_K = Gal(\bar K \mid K)$$ induced functorially by the action on $$X_{\bar K} = X \underset{Spec(K)}{\times} {Spec(\bar K)}$$. This gives a $$p$$-adic representation.
It seems to be common knowledge that this representation should be de Rham. In fact, people somehow go as far as saying things like "any Galois representation that comes from geometry is de Rham", which seems too vague to me to be a provable statement. What do people mean by that ? Where can I find a complete proof/reference for these facts.
The mentions of such facts I've found so far :
• Berger says (in "An introduction to the theory of $$p$$-adic representations", page 19-20) the following :
[Tsuji] showed that if $$X$$ has semi-stable reduction, then $$V=H^i_ {ét}(X_{\bar K},\mathbb{Q}_p)$$ is $$B_{st}$$-admissible. A different proof was given by Niziol (in the good reduction case) and also by Faltings (who proved that $$V$$ is crystalline if $$X$$ has good reduction and that $$V$$ is de Rham otherwise).
The article of Tsuji which it refers to was a bit unhelpful, but I might have missed something. The fact that interests me seems to be the one Faltings proved, but I can't get a reference.
Another weird thing is that some people talk about the case where $$K$$ is a global field (like in the mathoverflow question I linked), while some others take $$K$$ to be a $$p$$-adic local field (like the thesis I referred to). Are both cases true ? Also, why do people seem to be handwavy about being de Rham being always true as soon as representations arise from geometry, when the only precise statement I found requires precise hypotheses like "being a smooth projective variety".
This seems to be the starting point of the Fontaine-Mazur conjecture, which is a kind of converse, so this should probably be an easy to find result, but I can't see it proven anywhere...
Has anyone got a reference which would answer all my questions ?
• what do you mean specifically when you say "Are both cases true?"?
– user691994
Aug 1 '19 at 11:28
• Can we state the theorem for global fields AND for $p$-adic local fields ? The statement of the question doesn't seem to require any condition on the field K. Aug 1 '19 at 11:30
• if you are over a global field, then you also have to choose an embedding of the Galois group of the local field to the Galois group of the global field, right? How are you making this choice?
– user691994
Aug 1 '19 at 11:31
• If I have an element $g$ of the Galois group $G_K = Gal(\bar K \mid K)$, it is a map $\bar K \rightarrow \bar K$, so I functorially get a map $Spec(\bar K) \rightarrow Spec(\bar K)$ and then a map $X_{\bar K} \rightarrow X_{\bar K}$ (because the action on $Spec(K)$ is trivial so I can apply the fiber product universal property), and then functorially a map $H^i(X_{\bar K}, \mathbb{Q}_p) \rightarrow H^i(X_{\bar K}, \mathbb{Q}_p)$, which I may call $\rho(g)$, and thus $\rho$ is a $p$-adic representation of $G_K$. At which point don't you agree ? I didn't have to make any choice. Aug 1 '19 at 11:38
• what is a de Rham representation of the Galois group of a global field? I think there is no standard definition (not involving a choice that does not matter).
– user691994
Aug 1 '19 at 11:42
Where can you find a proof that for a smooth proper variety over a finite extension of $$\mathbb{Q}_p$$ the representation of the absolute Galois group on the $$p$$-adic etale cohomology is de Rham? Some options:
http://mhso.freunde-des-historischen-schiffsmodellbaus.de/convert-xyz-to-mesh-excel.html | There is a good point though. I'm trying to use python to export the x,y,z vertices to a file and then import that file back into blender to recreate the same image using just the x,y,z data points. You can take an Excel XYZ file into PC-DMIS but not directly from the Excel file You first have to remove anything thats not a coordinate, then in one cell at the top, put in XYZ or XYZIJK if you have the vectors as well. In CasaXPS, open your file and click on the spectrum of interest in the blocks area (right side of screen). Convert Munsell colors to computer-friendly RGB triplets The Munsell color system was designed as a series of discrete color chips which closely approximation to the color sensitivity of the human eye. Mesh Generator 2 MIKE Zero accepted by the MIKE Zero software. This page was put together originally to help a group of students work with 3D applications and 3D data sets. Coordinate files can be GPS files, XYZ files, generaly files with the following format: Number, X (East), Y (North), Z (Elevation), Description. I think that I need to convert them to table to make the chart (A would be a surface chart in Excel. Our measurement converter was especially designed to make conversion of units a whole lot easier. NOTE: UTM and NATO easting and northing values are rounded to the nearest meter. A standardized 3x3 matrix describes how to convert from sRGB' data to CIE-XYZ data. Deployed as an add-in for Microsoft Excel, ThreeDify Excel Grapher (XLGrapher) makes 3D graphing and plotting as easy as highlighting a range of cells in a worksheet. convert excel xyz to mesh free download. 3D 3 Point Circle Fix (4dp) (10 Kb) ** New May 2010 **. In Windows, start a command shell with Start → Run → cmd (enter), then use the cd command to change to the folder where you have meshconv. a rotation is nothing more then something that tells you how you are rotated , in which direction you are looking, and that can also be expressed as vector. The xyz data are in CSV file format. This online mesh converter uses the great Open Asset Import Lib. Color math and programming code examples. In CasaXPS, open your file and click on the spectrum of interest in the blocks area (right side of screen). MIKE 21 Quick Start Guide – Flexible Mesh Series DHI Water Environments (UK) Ltd (V4 3 July 2012) Contents 1. Importing Data from Excel. I have several (about 100) text files with coordinates, each set of coordinates will make an aerofoil. The application can quickly convert between most major industry-standard file formats, and offers the ability to convert a large number at once with batch conversion. Please note that the customizations that we aim to provide you will pertain only to survey drawing and no other fields. If the argument is a formula of type zvar ~ xvar + yvar(cf. There’s a new Sketch Scale tool, a couple of bug fixes, and a Deep Update. Use Excel to Convert GPS Coordinates for Tom Tom. Input Format. TableBuilder is designed to export AutoCAD table and the table drawn with lines and text in AutoCAD. Figure 1 shows a sample Excel spreadsheet, and figure 2 shows the corresponding comma-delimited text file. use mechanical design>>part design(This works for XM2 license) a. Hi Guys I am not a Revit user myself but the architects we use, tried to import the point clouds into Revit without a great result. 
Each conversion formula is written as a "neutral programming function", easy to be translate in any specific programming language:. In a similar question, the author ended up with matplotlib: 3D Plotting from X, Y, Z Data, Excel or other Tools. The x,y data is in State Plane and the Z data is in NAVD88. In CasaXPS, open your file and click on the spectrum of interest in the blocks area (right side of screen). This How-To Guide covers the basics of importing and exporting common file types. If you have X-Y-Z in. The Z-axis, is parrallel to the axis of rotation of the earth. Import a text file to a sheet with or without delimiter. Use Unity to build high-quality 3D and 2D games, deploy them across mobile, desktop, VR/AR, consoles or the Web, and connect with loyal and enthusiastic players and customers. Convert KML to CSV/Excel. You can convert any grid to an XYZ text file and open it in any text editor, in the Surfer worksheet, or in Excel (as long as you don't exceed Excel's row limitations of 65,535 rows in Excel 97-2003 or 1,048,576 in Excel 2007). Answer: You can create your Excel formula using nested IF functions with the AND function. It is intended for surveyor produced drawings like the one to the right here (click the image for more details), but is equally suited to any 2D drawing where the Z levels are represented as text items. I have found many spreadsheets on the internet, but none of the ones I've tried seem to work correctly. • (3) Select the adult mesh and scale it down by 0. Solo Build It! (aka "SBI!") is the only all-in-1 package of step-by-step process, software tools, comprehensive guidance, 24/7 support and "auto-updating" that enables solopreneurs to build profitable online businesses. However, the base of the file is typical point data - X, Y, Z, intensity, R, G, B (RBG being optional) - and then the header information contains the transformation info. Right now, I output the results to. For GPS coordinates, select the WGS84 system; for example, to convert coordinates GPS in UTM Zone 10N coordinates, choose left WGS84 and UTM Zone 10N right. 65 to 200 which increase in the ratio of the 4th root of 2 or 1. Calculation example. How to write a GPS coordinate in Excel. Example 3: In this example, another company wants to give bonus to its senior employees. I know how to > export a surface to a DEM file, which can be then added to a grid surface, > what I need to find out is whether it's possible to extract points from a > grid (or tin) surface so I can use the export points command, or whether > there is a different way of achieving this. It provides a set of tools for editing, cleaning, healing, inspecting, rendering, texturing and converting meshes. mesh files? #1 GameNoobTV, May 16, 2017. Upload your obj models. How do I convert that info into a format that can be printed on a 3D printer? So you can easily check if the mesh will be printed in the. Convert cartesian coordinate (X, Y, Z) to geodetic coordinate (Latitude, Longitude, h). Download and unzip the ZIP package into a folder of your choice. Bender to XYZ is software converts bender data to XYZ coordinates that represent the centerline of a tube shape. How To: Convert points in XYZ file format to raster Summary. How can I export contours to an XYZ data file from Surfer? Follow If you want the XYZ coordinates of the contour lines, there are two ways you can do this easily in Surfer:. Surface Roughness Value Equivalents. If you don't have the vectors, and no model to get those, your pretty much screwed. 
I have decided to keep the full poly mesh and creating a low poly mesh over that. You can convert one file for free. Photo & Graphics tools downloads - ACAD DWG to XLS Converter by ACAD Systems, Inc. How to Convert Measurements Easily in Microsoft Excel. The three cities will be weighted by time. How To: Convert points in XYZ file format to raster Summary. Use the ASCII 3D To Feature Class tool to convert the. Please update any links and bookmarks that currently point to this page. This data can be converted into points or structured grids. [X,Y,Z] = meshgrid(x,y,z) produces three-dimensional arrays used to evaluate functions of three variables and three-dimensional volumetric plots. Excel's convert function ("=CONVERT()") converts a measurement from one unit to another. Turning a mesh into a parametric model in SolidWorks will require that the model be re-modeled feature by feature. Beyond format translation FME's point cloud manipulation tools allow you to restructure, resize and reproject a dataset to produce a file that fits your exact requirements. I received some bathymetry data in. You must type the reference to the step value in cell B1 as an absolute reference (with dollar signs). The lightness method averages the most. Conditional formatting A feature that enables you to have Excel 2010 change the appearance of your cell data if certain conditions (that you specify) are met. 64 seconds North and 2 degrees 17 minutes 34. Explore large point cloud datasets on any hardware. I have a list of latitude / longitude coordinates, formatted as columns in an excel doc. Color matching functions and chromaticity diagrams. Unfortunately, Microsoft removed the capabilities of visualizing spreadsheet data using maps via its native map function starting with Office 2000 (for step by step instructions for making a map with older versions of Excel, read the Creating Simple Maps with Microsoft Excel article). This is useful if you are comparing various spectra. convert excel to xyz free download. It converts XYZ data into a MESH format that Microsoft Excel can read. Convert point data into a spatially-organized and compressed. Can anyone share the ideas,if they have done similar ones beforeThanks -Meera (1 Reply). Hi all, Need to plot a 3D surface from X,Y,Z 1D arrays. Save images as jpg, png, tiff, gif and more. It tries to auto-detect the delimiters (space, comma, etc) in the text file and interpret. As shown in this pic below, we have X, Y, Z coordinates, otherwise Easting, Northing & Reduced levels of more than 10000 points in an Excel sheet. XML Validation As a first step, we open an XML source file and validate it. Turning a mesh into a parametric model in SolidWorks will require that the model be re-modeled feature by feature. R core (2004b) for details on formulas), xvar, yvarand zvarare used as x, y and z variables. To convert the angle unit of geographical coordinates Latitude-longitude (degree, minute seconds (dms), grad, radians), just use the angle units converter. My solution is to cut the full poly mesh, into chunks that I work on seperatly. Size is no problem. Till now I've successfully extracted Points and normals, you can get the ply file from here FaceReconstruct. Converter also supports more than 90 others vector and rasters GIS/CAD formats and more than 3 000 coordinate reference systems. I have 3 co-ordiantes X,Y, and Z , i need to draw a 3d scatter chart by using Microsoft Excel or in some other way. 
xyz file which you want to view or convert simply drag the file into the main window: Now you will be asked to choose the format, click on Pure Numbers (XYZ) and 3D as shown in the screenshot below:. SpreadsheetConverter is an add-on to Microsoft Excel that you download from our website and install on your Windows PC. This is an effective and fast online Lat Long to UTM converter. plot3D: Tools for plotting 3-D and 2-D data. Most pointclouds have multiple times more points. Could you please any one help me to do this. When you think to import XML file in Excel, it is much easier to convert created xml file into Excel format and open it in Microsoft Office Excel. This Excel tutorial explains how to use the Excel LN function with syntax and examples. Detailed steps: 1. Interesting things for digital imaging and color science. This website is a masterpiece when it comes to converting like a pro. Files Available to Download MeSH files now available via the NLM Data Distribution Program. In CasaXPS, open your file and click on the spectrum of interest in the blocks area (right side of screen). However, if the data is not in a format that Excel recognizes, the data can be left as text. Step-by-step Guide for Creating a Flexible Mesh for MIKE Products 2. Simple way how vizualize 3D charts, plots, graphs and other XYZ coordinates in Excel. But more important is the question, why do you need pointcloud coordinates in Excel? I think the limit of Excel is still 1,048,576 rows. It is possible to convert a. This tool assumes vertically ordered tabular data. There are still more file formats out there than most of us know what to do with. the moles of magnesium. When importing the. If a Child is involved, this location is the relative position in relation to the Parent. The result shown as "file is severely corrupted, cannot be previewed. The Z-coordinate is positive toward the North pole. 32(Free Shipping). How to manually convert a CSV file to Excel in a few easy steps. How do you convert a color image to grayscale? If each color pixel is described by a triple (R, G, B) of intensities for red, green, and blue, how do you map that to a single number giving a grayscale value? The GIMP image software has three algorithms. Short Ans there is a command called "import" in Digitize shape editor (DSE) workbench. Expand Collapse. 1 following How to convert angles to xyz coordinate?. XYX (also known as Euler angles) when all 3 rotation axes are distinct, eg. 0 for two profiles). Convert 3D Meshes to 3D Solids. You can also maintain the mesh as a mesh model in SOLIDWORKS that allows graphical viewing of the mesh model. The datum points can exist in either a Pro/ENGINEER part or assembly. When importing the. Steps to Accomplish. I have 3 co-ordiantes X,Y, and Z , i need to draw a 3d scatter chart by using Microsoft Excel or in some other way. September 26, 2009. We will learn about various R packages and extensions to read and import Excel files. I need to convert it to XYZ to draw a centerline in Solidworks, because I need to have a 3D model. Y coordinate is the same for all points, so all of them are in one plain. You can view more details on each measurement unit: microns or inches The SI base unit for length is the metre. Question: In Excel, I am trying to create a formula that will show the following: If column B = Ross and column C = 8 then in cell AB of that row I want it to show 2013, If column B = Block and column C = 9 then in cell AB of that row I want it to show 2012. 
XYZ-Plot provides interactive buttons for rotating the viewing perspective, printing, and selecting other display options. It is the only program available that exports all converted data into Excel's native formats. 3D points in XYZ file format can be first converted to multipoints, and then interpolated to a raster. What this means is once the data is convert it’s possible to export the data directly into a new Excel document or copy and paste it into a preexisting one. I know there are programming based software, like Gnuplot and matplotlib, can do this, but I want to use something with UI like excel to be able to mouse-select those columns to plot. If you aren’t expecting this behavior, you’ll end up with variations in your converted color’s numbers. Hello, I'm trying to create a surface plot using X, Y & Z data from an analysis. The color coordinate systems considered include CIELAB, CIELUV, CIExyY, CIEXYZ, CMY, CMYK, HLS, HSI, HSV, HVC, LCC, NCS, PhotoYCC, RGB, Y'CbCr, Y'IQ, Y'PbPr and Y'UV. Unlike other additive rgb like Rgb, Xyz is a purely mathmatical space and the primary components are "imaginary", meaning you can't create the represented color in the physical by shining any sort of lights representing x, y, and z. X-Y-Z Scatter Plot. 1 Spur Gear Design Calculator a When gears are preshave cut on a gear shaper the dedendum will usually need to be increased to 1. ⭐ ️ Convert your XLS file to DBF online in a few seconds. 7, PLS-POLE section 4. An example follows. While the conversion accuracy is poor for mesh sizes below 45, the equation provides very reasonable ac-curacy (5. If not, then KML (Keyhole Markup Language) is a file format used for displaying geographic data for programs like Google Earth or Google Maps. Select STLs from a folder or by drag-and-dropping them directly into the reaConverter window. • (2) import the adult mesh you want to convert over the Teen one (For Sims 2 Do not import the second rig (skeleton)). Decimate: Yes, you can remove points when converting to meshes, so its bit lighter, but the drawcall count is the main limit there. These include waves, tidal propagation, wind- or wave-induced water level setup, flow induced by salinity or temperature gradients, sand and mud transport, water quality and changing bathymetry (morphology). The three cities will be weighted by time. Each conversion formula is written as a "neutral programming function", easy to be translate in any specific programming language:. One type is the x,y,z position and the other type is the label text. Efficiently Transform XYZ Point Cloud Data for Use in ArcGIS. This tutorial will specifically show how to assemble, clean and reconstruct data from a 3D laser scanner. You have many text elements that you would like to convert to text objects (geometry) for engraving. These are the formulas used by our Color Calculator to convert color data in different color spaces. How to convert matrix style table to three columns in Excel? Supposing you have a matrix-style table which contains column headings and row headings, and now you would like to convert this style table to three columns table, it also called list table as following screenshot shown, do you have any good ways to solve this problem in Excel?. You may have to write a custom function to import your file. > suitable XYZ file format, which I can import into PDMS. So we are in a position to copy paste that tables in excel locally and convert it to either xml or json and save it in a config folder. 
CSV to KML Conversion is Quick and Easy with ExpertGPS. The new formula returns the allowance off the % Done column. A typical PLY object definition is simply a list of (x,y,z) triples for vertices and a list of faces that are described by indices into the list of vertices. Importing coordinates using excel formula. They also show how to adjust its size, translate and rotate it. Hi All, I would like to convert XYZ data from drawing to LRA and implement it into my software (C# language). Webucator provides instructor-led training to students throughout the US and Canada. Can anyone offer me advice on how to go about doing this? I have access to the following software ArcEditor MapInfo Professional Erdas Imagine Global Mapper Surfer Thanks, Sean. ExpertGPS is an all-in-one mapping solution and file converter, so you can import data, preview it over maps and aerial photos, make corrections, and. x meshes with Blender, thanks to the export script included. The chapter 5 doesn't give a detailed procedure. Well, I managed to get a nice mesh (thank you Mona) with marching cubes and Poisson. 370078740157 inches. "Weed Washer" What is a Micron? (Micron v/s Mesh) Reference: Mesh Micron Conversion Chart The chart below details the equivalents to convert from mesh to micron or vice versa. How to manually convert a CSV file to Excel in a few easy steps. Bear Photo -- An instant and no frills image editing tool. To access the XYZ Text tools, navigate to the menu bar Tools > Dimensions > XYZ Text, and select Export Coordinate tool (notice all the other cool tools in there as well). But if you keep your coordinate as a string, no calculation is possible. Breaking news from around the world Get the Bing + MSN extension. Get the 2 left digits of the hex color code and convert to decimal value to get the red color level. However, if the data is not in a format that Excel recognizes, the data can be left as text. Here is a concrete example, using Open Office, but it works in the same way with Excel or Access (Windows XP, Windows VISTA or Windows 7). ASCII to raster can convert my data to raster, is this raster DEM? I was try to build a terrain then create a DTM, but the frist step I should convert my. Mesh Generator 2 MIKE Zero accepted by the MIKE Zero software. Solo Build It! Success. This menu copies information from the active Object to (other) selected Objects. Open the excel file 6. Extrude Mesh Volume • Go to: Tools > Mesh > Create Meshes > Extrude Mesh Volume or Key-in: FACET EXTRUDEVOLUME. Simply leave the voucher field empty. PointCloudSplitter: Splitting Point Clouds by Components. I can import these individually and make a curve simply using the curve through xyz feature and i can import all the points using a macro i found to make a cloud of points. txt file containing xyz data which I would like to use to create a DEM. Step 1) Select the list of points in the Calc spreadsheet. To go from a regular angle of $\theta$ to a heading, the heading is $\frac {\pi}2 - \theta$ in radians or $90^\circ -\theta$ in degrees. I have found many spreadsheets on the internet, but none of the ones I've tried seem to work correctly. The inverse matrix (i. You can select and reference the converted facets, verticals, and facet edges. It is valuable for creating macros that automate routine task in the MS Office suite of products. 
With a few mouse clicks, you can easily create a wide range of X-Y-Z scatter graphs, line graphs, 3D voxel and bar charts, Cartesian, polar and spherical plots, surfaces (TIN and gridded meshes) as well as water-tight solids. There are two main ways to describe a rotation in terms of angles and XYZ axes: when the 3rd axis is the same as the 1st axis, eg. For some time it has been possible to export and import Direct3D. 75748161 78 97 52. Convert kml to shp file. Converter also supports more than 90 others vector and rasters GIS/CAD formats and more than 3 000 coordinate reference systems. say the call is from a point south 66 deg 48 min 00 secs west and you are at the ending point for this call. You can apply these changes to one character, a range of characters, or all characters in a type object. While the mesh editing functionality is the biggest change in the July update, there are a few other changes as well. September 26, 2009. You will see the point cloud model shown as the following picture. Archicad's tech support said "each point has 6 numbers in the file and that's why it might not work" data looks like this: 490. It instantly draws the Excel spreadsheet in CAD using native geometry and links it. Dont ask me why, its a long story, but i didnt find any info about this. STL (stereolithography) > > In this file (. To convert files automatically without using Excel, through a desktop icon, scheduled tasks or from command prompt, use FileSculptor. The CSV Viewer is very powerful, in the display filed, click the column heading it will sort the columns, move cursor to right side of column heading resize your columns and so on. Excel will automatically start up a "Text Import Wizard" that has 3 steps. • Fixed components are: o Copy Loc: the X,Y,Z location of the Object. > > I want to visualize 3 physical volume and to convert to. These include waves, tidal propagation, wind- or wave-induced water level setup, flow induced by salinity or temperature gradients, sand and mud transport, water quality and changing bathymetry (morphology). I have a lot of clsoed polyines and I want to export the Area as well as the Z coordiante. 8% over the period 2018 - 2023. In our example, it is 1979/01/01 00:00 UTC. This is a very easy to use nanometer to millimeter converter. o Copy Rot: the X,Y,Z rotation of the Object. stl file to a mesh file or some other format, or generate a point cloud file out of it? Top. Color indices, color differences and spectral data conversion/analysis. com It is useful when you have a special curve or diagram and needed to Import it to CATIA. What special tools if any are required to convert my firearm for the use of UTM ammunition? How do I convert my INTERACTER programs to Winteracter? Are there any programs to convert ECEF X,Y,Z to UTM X,Y?. A Mesh to Micron Conversion Table can be made using this screen scale as its base with an opening of 0. you can import point data into excel using 2 methods 1. There are programs available to do this; moreover, for those on a budget, there is a completely free. ) If you have a. Effectively, we need to convert a matrix like data layout to a tabular layout. Convert 3D models between file formats (i. A Mesh to Micron Conversion Table can be made using this screen scale as its base with an opening of 0. 5, OOO340m1 (Build:1505), on OpenSuse 12. The cartesian coordinate system is a right-hand, rectangular, three-dimensional, earth-fixed coordinate system with an origin at (0, 0, 0). 
RGB color Consists of the components red, green and blue. 1 metre is equal to 1000000 microns, or 39. It is the base color model for the converter. STL (stereolithography) > > In this file (. This way you can create terrain meshes in a standard 3D app such as Blender or Maya and convert it to a Unity terrain. js lon_lat_to_cartesian. To use this function, you will enter data and units into the formula: =Convert(number,. This is a very easy to use nanometer to millimeter converter. This Excel tutorial explains how to use the Excel LN function with syntax and examples. rcp file first and feel it is counter productive to purchase another program just for the conversion. Converting Text to Geometry. XML Validation As a first step, we open an XML source file and validate it. When converting from a set of polygons to a mesh, a single mesh will result only if: more than one polygon is in the input, and. I need to generate some bathymetry profile graphs for a client. Maybe this tutorial is helpful for others too. Hi Jamal, Several things can point to your issue. For some time it has been possible to export and import Direct3D. each polygon has exactly four points, and. This switch of columns heading are known as R1C1 reference style. We have trained over 90,000 students from over 16,000 organizations on technologies such as Microsoft ASP. Decimate: Yes, you can remove points when converting to meshes, so its bit lighter, but the drawcall count is the main limit there. You can open the resultant file on EXCEL. To retain the original shape, set the mesh smoothness to None. You must type the reference to the step value in cell B1 as an absolute reference (with dollar signs). Each workbook or file contains the following geographies:. Free online Excel converter from Coolutils is safe, we require no email address or other personal data. Whats a reliable and free method for converting. X-Y-Z Scatter Plot. The tricky part is pushing the right numbers into the faces array so that it'll display everything as intended. Convert data in a worksheet(XYZ) from spherical coordinate to Cartesian coordinate(XYZ) and make a 3D space curve. How to Convert Measurements Easily in Microsoft Excel. Open the file in Notepad. 3D-2D persective calculations are done. I have tried the export wizard for EXCEL, but it gives to much data that isn't needed and is very time consuming to edit (see below). If there are multiple. As shown in this pic below, we have X, Y, Z coordinates, otherwise Easting, Northing & Reduced levels of more than 10000 points in an Excel sheet. 7, PLS-POLE section 4. Efficiently Transform XYZ Point Cloud Data for Use in ArcGIS. There are packages for importing/exporting data from/to Excel, but I have found them to be hard to work with or only work with old versions of Excel (*. The Z-axis, is parrallel to the axis of rotation of the earth. To convert the angle unit of geographical coordinates Latitude-longitude (degree, minute seconds (dms), grad, radians), just use the angle units converter. Second is a simple Excel formula for converting a distance in chains, rods, and links to feet. Convert a vector to a scalar. We have trained over 90,000 students from over 16,000 organizations on technologies such as Microsoft ASP. XYZ MESH is the only program I have found that can take XYZ and convert it into a mesh format that excel can use. Web Scraping Service. 
This software can be useful for those who wish to use existing GIS or other in-house software for processing LAS file and do not want to invest in procuring the dedicated LiDAR processing software. With a few mouse clicks, you can easily create a wide range of X-Y-Z scatter graphs, line graphs, 3D voxel and bar charts, Cartesian, polar and spherical plots, surfaces (TIN and gridded meshes) as well as water-tight solids. Hi, I have three 1D-arrays x,y,z. Second is a simple Excel formula for converting a distance in chains, rods, and links to feet. I have also tried just copying the features and editing the info I don't need out, but this is also very time consuming. The data should be in two adjacent columns with the x data in the left column. I want to convert XYZ to Lat Long and viceversa, but I don't know how. To convert 3D Face objects into a single mesh object: Use the Convert to Surface command (CONVTOSSURFACE) to convert the 3D faces into surfaces. STL (stereolithography) > > In this file (. Convert plain text (letters, sometimes numbers, sometimes punctuation) to obscure characters from Unicode. 90 seconds East. NET projects using Elerium Excel to HTML. Curve Resampling - increase, decrease curve/ribbon points count. Quickly and easily converts data into contour maps and surface plots. Convert kml to shp. Cross-stitch is a wonderful medium for needlework art, but what if you want to use something other than a pre-made pattern or kit? Perhaps you have a picture of a pet or a unique drawing to adapt. Here is the Visual Basic macro for converting a surveyor bearing to a numeric value. I am going to be using this plot to get the velocity, as this would be a position versus time plot. The Z-coordinate is positive toward the North pole. My data file(for Z coordinata i just added 0. This online mesh converter uses the great Open Asset Import Lib. XYZ Mesh is a software specifically created to convert XYZ data into a MESH format specifically used by Microsoft Excel. Learn more about tables, arrays, matrix Image Processing Toolbox. OR Use a lisp or vba routine to do the same. Methods and data for color science - color conversions by observer, illuminant and gamma. Open the file in Notepad. Polygons and meshes In what follows are various notes and algorithms dealing with polygons and meshes. 31, 2019 /PRNewswire-PRWeb/ -- Representatives with Organic Cotton Mart today announced the company has launched its organic cotton mesh & fabric tote bags that have a. The mesh will be joined into 1 mesh and a combined texture will be generated for the new mesh. The fastest way to parse text in Excel. Veriscian then convert it to point cloud by sampling the xyz position, color and normal at many locations on the mesh surface. CSV to XML Converter. Switching data from columns to rows or from rows to columns is known as transposing the data. Large sieve openings (1 in. In This method you can change the way excel […]. MCNPX Mesh Tally Convert MCNPX binary mesh tally data files to MS Excel (c) database and contour data. svg because, as opposed to most free online converters, reaConverter supports batch conversion. Introduction. 1: MySQL Excel; Excel-MySQL converter is a powerful tool to convert any Excel file to a MySQL database or convert any MySQL database to an Excel file utilizing a very easy to use wizard style interface that support so many advanced options like (scheduling,command line. 
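As a small illustration of the hex-colour conversion mentioned earlier on this page (taking the two left digits of a hex colour code and converting them to decimal for the red level, and so on for green and blue), here is a short Python sketch; the example colour value is made up.

```python
def hex_to_rgb(hex_color):
    """Convert a hex colour code such as '#3FA0C8' into decimal (R, G, B) levels."""
    h = hex_color.lstrip('#')
    r = int(h[0:2], 16)   # two left digits -> red level
    g = int(h[2:4], 16)   # middle two digits -> green level
    b = int(h[4:6], 16)   # last two digits -> blue level
    return r, g, b

print(hex_to_rgb('#3FA0C8'))   # (63, 160, 200)
```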
https://homework.cpm.org/category/CCI_CT/textbook/pc3/chapter/12/lesson/12.2.2/problem/12-121 | ### Home > PC3 > Chapter 12 > Lesson 12.2.2 > Problem12-121
12-121.
Three thieves in ancient times stole three bags of gold. The gold in the heaviest bag had three times the value of the gold in the lightest bag and twice the value of the gold in the medium bag. All together, the gold amounted to $330$ florins. How much gold was in each bag? Write a system of equations and use matrices to solve the system.
$h = 3l$
$h = 2m$
$h + m + l = 330$
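One way to carry out the matrix solution the problem asks for, using the three equations above with the unknowns ordered as $h$ (heaviest), $m$ (medium), and $l$ (lightest), is sketched below; the use of NumPy here is just for illustration.

```python
import numpy as np

# Unknowns ordered as [h, m, l], all in florins.
A = np.array([[1,  0, -3],   # h = 3l   ->  h - 3l = 0
              [1, -2,  0],   # h = 2m   ->  h - 2m = 0
              [1,  1,  1]])  # h + m + l = 330
b = np.array([0, 0, 330])

h, m, l = np.linalg.solve(A, b)
print(h, m, l)   # 180.0 90.0 60.0
```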
https://physics.stackexchange.com/questions/665181/how-it-is-possible-that-a-ket-precedes-a-bra-in-a-matrix-expression | How it is possible that a ket precedes a bra in a matrix expression?
Is it possible to rewrite $$\langle a| M|b\rangle$$ as $$|b\rangle \langle a|M$$?
• I beg pardon but why is this physics and not just math? Sep 11 at 13:04
• Some context for the more casual reader: bra–ket notation. And perhaps Duality. Sep 11 at 14:14
• @Greendrake: Related to quantum mechanics? Though it isn't revealed. Sep 11 at 14:17
• @Greendrake The language employed is Dirac's bra-ket notation, which is used almost exclusively in physics. Translated into math, the question collapses to an evident "no" question. Sep 11 at 14:18
• The second expression is an outer product.
– J.G.
Sep 11 at 14:25
In general, you cannot rewrite $$\langle a | M | b \rangle$$ as $$|b\rangle \langle a| M$$. You can see that the two are not the same by just comparing what type of mathematical entity they are: $$\langle a | M | b \rangle$$ is a matrix element (of the operator $$M$$), which is a (complex) number. On the other hand, $$|b \rangle \langle a|$$ is an operator, as is $$M$$, so the product of the two is another operator, which is represented by a matrix, not just an element of one.
• “$\langle a|M|b\rangle$ is a matrix element (of the matrix $M$)” – no, unless both $a$ and $b$ happen to be basis vectors. Sep 11 at 15:20
• @leftaroundabout Yes, but one is free to choose whichever basis they like, so there is a matrix representation of the operator $M$ for which $\langle a | M | b \rangle$ is an actual element of that matrix. Apart from that, Dirac's notation is designed to be independent of the choice of basis and at least in the quantum mechanics lectures I heard and the books I read the term "matrix element" was used for any bra-operator-ket-expression. If this is not common practice, I'd appreciate being pointed to literature confirming this and will of course reformulate my answer accordingly.
– nu.
Sep 12 at 12:14
• Fair enough, but then don't say “of the matrix $M$”. That suggests that you already committed to representing operator-$M$ in a given basis, which will in general not include $a$ or $b$. Sep 13 at 12:02
• @leftaroundabout I replaced "matrix" with "operator" now where appropriate.
– nu.
Sep 13 at 15:50
The expressions you write are extensions in infinite-dimensional Hilbert space of plain matrix expressions.
Their analogs for finite dimensional real vector spaces and their matrices indexed by a finite set of indices i,j, whose repeated form implies summation over the whole set, are $$|a\rangle ~~\mapsto ~~ a_i \\ |b\rangle ~~\mapsto ~~b_i\\ M~~\mapsto M_{ij}\\ \langle a|M|b\rangle ~~\mapsto a_iM_{ij}b_j , ~~\hbox{ a scalar},\\ |b\rangle \langle a|M ~~\mapsto ~~ b_j a_i M_{ik}, ~~\hbox { a dyadic matrix,} \leadsto \\ \operatorname{Tr}(|b\rangle \langle a|M ) =\langle a|M|b\rangle ~~\mapsto ~~ a_i M_{ij} b_j .$$
Thinking in component form makes it easier to predict what kind of objects you will be getting out of Dirac notation. The expression $$\langle a|M$$ gives a bra (row vector): $$\begin{pmatrix}a_1^*& a_2^* \end{pmatrix}\begin{pmatrix}M_{11} & M_{12}\\ M_{21}&M_{22}\end{pmatrix}=\begin{pmatrix} a_1^*M_{11}+a_2^*M_{21} &a_1^*M_{12}+a_2^*M_{22}\end{pmatrix}$$ If we define $$\langle c|=\langle a|M$$ then your question becomes "is $$\langle c|b\rangle$$ the same as $$|b\rangle\langle c|$$?" They are not the same; the first expression is a scalar $$\langle c|b\rangle=\begin{pmatrix}c_1^*&c_2^*\end{pmatrix}\begin{pmatrix}b_1\\b_2\end{pmatrix}=c_1^*b_1+c_2^*b_2$$ while the second expression is a linear operator (matrix): $$|b\rangle\langle c|=\begin{pmatrix}b_1\\b_2\end{pmatrix}\begin{pmatrix}c_1^*&c_2^*\end{pmatrix}=\begin{pmatrix}b_1c_1^* &b_1c_2^*\\ b_2c_1^*&b_2c_2^*\end{pmatrix}$$ Heuristically you can see that $$|b\rangle$$ expects a bra on the left while $$\langle c|$$ expects a ket on the right. So the expression $$|b\rangle\langle c|$$ expects a bra on the left as well as a ket on the right.
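The dimension counting in these answers can be checked numerically. The short NumPy sketch below uses made-up vectors and a made-up $2\times 2$ operator, purely as an illustration: $\langle a|M|b\rangle$ comes out as a single complex number, $|b\rangle\langle a|M$ comes out as a $2\times 2$ matrix, and the trace relation quoted in the second answer holds.

```python
import numpy as np

a = np.array([1 + 1j, 2 - 1j])        # components of |a>  (made-up values)
b = np.array([0.5j, 3.0 + 0j])        # components of |b>
M = np.array([[1, 2j],
              [0, 1 - 1j]])           # some operator M as a 2x2 matrix

bra_a = a.conj()                      # <a| : complex-conjugated components of |a>

scalar = bra_a @ M @ b                # <a|M|b>  -> a single complex number
operator = np.outer(b, bra_a) @ M     # |b><a|M  -> a 2x2 matrix (an operator)

print(np.shape(scalar), operator.shape)   # () versus (2, 2)
print(scalar, np.trace(operator))         # the trace of |b><a|M reproduces <a|M|b>
```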
https://www.houseofmath.com/encyclopedia/numbers-and-quantities/quantities/time/learning-about-the-clock-quarters-of-an-hour | # Learning About the Clock (Quarters of an Hour)
Here you will learn about the time in terms of quarters of an hour. The word quarter means one fourth, so when we talk about a quarter of an hour, we mean one fourth of an hour, which is $15$ minutes.
On a digital clock, we say that the time is a quarter to or past something when the last number is either $15$ or $45$. When the last number is $15$, we say that the time is a quarter past the hour we’re in. If the last number is $45$, we say that the time is a quarter to the next hour.
An analog clock shows a quarter to or past something when the minute hand points at $3$ or $9$. In the same way as we did with the digital clock, we say that the time is “a quarter to” or “a quarter past” some hour. The time is a quarter past the current hour when the minute hand points at $3$. When the minute hand points at $9$, the time is a quarter to the next hour. Below you can see a couple of examples of times.
Can you see the connection between the images and what you read about quarters of an hour? Here are some examples for you to try.
Example 1
Draw the times on a piece of paper.
Example 2
What time is it?
Example 3
Write down the times and describe the times in words on a piece of paper.
https://byjus.com/maths/ordinary-differential-equations/ | # Ordinary Differential Equations
In Mathematics, a differential equation is an equation that involves a function and one or more of its derivatives. There are different types of differential equations: ordinary differential equations, partial differential equations, linear and non-linear differential equations, and homogeneous and non-homogeneous differential equations. In this article, let us discuss one of these types, the “Ordinary Differential Equation (ODE)”, in detail along with its types, applications, examples, and solved problems.
## Ordinary Differential Equations Definition
In mathematics, the term “Ordinary Differential Equation”, also known as ODE, refers to an equation that contains only one independent variable and one or more derivatives of a dependent variable with respect to that variable. In other words, an ODE is a relation involving one real independent variable $x$, a real dependent variable $y$, and some of its derivatives
$y', y'', \ldots, y^{(n)}, \ldots$ with respect to $x$.
The order of an ordinary differential equation is defined to be the order of the highest derivative that occurs in the equation. The general form of an $n$-th order ODE is given as:
$F(x, y, y', \ldots, y^{(n)}) = 0$
Note that $y'$ can be either $dy/dx$ or $dy/dt$, and $y^{(n)}$ can be either $d^{n}y/dx^{n}$ or $d^{n}y/dt^{n}$.
An $n$-th order ordinary differential equation is linear if it can be written in the form:
$a_0(x)y^{(n)} + a_1(x)y^{(n-1)} + \cdots + a_n(x)y = r(x)$
The functions $a_j(x)$, $0\le j\le n$, are called the coefficients of the linear equation. The equation is said to be homogeneous if $r(x)=0$; if $r(x)\neq 0$, it is said to be non-homogeneous. Also, learn about the first-order differential equation here.
## Ordinary Differential Equations Types
The ordinary differential equation is further classified into three types. They are:
• Autonomous ODE
• Linear ODE
• Non-linear ODE
Autonomous Ordinary Differential Equations
A differential equation in which the independent variable, say $x$, does not appear explicitly is known as an autonomous differential equation.
Linear Ordinary Differential Equations
If a differential equation can be written as a linear combination of the derivatives of $y$, then it is known as a linear ordinary differential equation. Linear ODEs are further classified into two types:
• Homogeneous linear differential equations
• Non-homogeneous linear differential equations
Non-linear Ordinary Differential Equations
If a differential equation cannot be written as a linear combination of the derivatives of $y$, then it is known as a non-linear ordinary differential equation.
## Ordinary Differential Equations Application
ODEs have remarkable applications and the ability to describe the world around us. They are used in a variety of disciplines such as biology, economics, physics, chemistry and engineering. They help to predict exponential growth and decay, and population and species growth. Some of the uses of ODEs are:
• Modelling the growth of diseases
• Describes the movement of electricity
• Describes the motion of the pendulum, waves
• Used in Newton’s second law of motion and Law of cooling.
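As a concrete instance of the last item above, Newton's law of cooling is itself a first-order ODE; written here for illustration, with $T$ the body temperature, $T_s$ the surrounding temperature and $k$ a positive constant:
$\frac{dT}{dt} = -k\,(T - T_s)$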
### Ordinary Differential Equations Examples
Some examples of ODEs, such as $y' = 2x + 1$ and $y^{4}y' + y' + x^{2} + 1 = 0$, are worked as solved problems below.
### Ordinary Differential Equations Problems and Solutions
Solutions of ordinary differential equations can often be found with the help of integration. Work through the solved problems below to see how.
Question 1:
Find the solution to the ordinary differential equation y’=2x+1
Solution:
Given, y’=2x+1
Now integrate on both sides, ∫ y’dx = ∫ (2x+1)dx
Which gives $y = \frac{2x^{2}}{2} + x + c$, i.e.
$y = x^{2} + x + c$
Where c is an arbitrary constant.
Question 2:
Solve $y^{4}y' + y' + x^{2} + 1 = 0$
Solution:
Taking $y'$ as a common factor,
$y'(y^{4} + 1) = -x^{2} - 1$
Now, integrating both sides, we get
$\frac{y^{5}}{5}+y=-\frac{x^{3}}{3}-x+c$
Where c is an arbitrary constant.
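Both worked answers can be double-checked symbolically. The script below is only an illustrative sketch (it assumes the SymPy library; the variable and function names are mine, not from the article):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Question 1: y' = 2x + 1  ->  y = x^2 + x + C
print(sp.dsolve(sp.Eq(y(x).diff(x), 2*x + 1), y(x)))   # Eq(y(x), C1 + x**2 + x)

# Question 2 is separable: (y^4 + 1) dy = -(x^2 + 1) dx, so integrate each side.
t = sp.symbols('t')                  # t stands in for y during the integration
print(sp.integrate(t**4 + 1, t))     # t**5/5 + t
print(sp.integrate(-(x**2 + 1), x))  # -x**3/3 - x
```

Adding an arbitrary constant to one side reproduces the implicit solution $\frac{y^{5}}{5}+y=-\frac{x^{3}}{3}-x+c$ given above.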
For more maths concepts, keep visiting BYJU’S and get various maths related videos to understand the concept in an easy and engaging way. | 2020-04-01 11:11:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8650075197219849, "perplexity": 554.4730690429325}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370505730.14/warc/CC-MAIN-20200401100029-20200401130029-00078.warc.gz"} |
https://homework.cpm.org/category/CCI_CT/textbook/pc3/chapter/11/lesson/11.2.5/problem/11-159 | ### Home > PC3 > Chapter 11 > Lesson 11.2.5 > Problem11-159
11-159.
Write and solve an absolute value inequality for the following situation.
The gas consumption of a particular car averages $30$ miles per gallon on the highway, but the actual mileage varies from the average by at most $4$ miles per gallon. How far can the car travel on a $12$-gallon tank of gas?
Let $m = \text{miles}$. Since the information given is in miles per gallon, write an inequality where all terms represent miles per gallon.
$\text{actual mileage − predicted average milage = mpg variance}$ | 2021-03-05 23:14:15 | {"extraction_info": {"found_math": true, "script_math_tex": 5, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8634302616119385, "perplexity": 1084.273353990422}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178373761.80/warc/CC-MAIN-20210305214044-20210306004044-00072.warc.gz"} |
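One possible way to finish the setup (a sketch; it follows the hint above in letting $m$ be the total miles driven on the full $12$-gallon tank, so the actual mileage is $m/12$):
$\left|\frac{m}{12}-30\right|\le 4 \;\Rightarrow\; 26\le \frac{m}{12}\le 34 \;\Rightarrow\; 312\le m\le 408$
So the car can travel between $312$ and $408$ miles on a $12$-gallon tank of gas.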
https://strutt.arup.com/help/Groundborne%20Vibration%20(TRL)/TunnellingNoise.htm | ### Strutt Help
Prediction of Groundborne Noise from Tunnelling (TRL) 1/1, 1/3
This equation can be used for the prediction of groundborne noise:
L_p=127-54log_(10)r
where:
• r is the slope (shortest) distance (m) from the vibration source to the measurement location
• L_p is the predicted groundborne noise level in dB(A)
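As a quick numerical illustration of the equation above, the short script below evaluates it at a couple of distances (a minimal sketch; the function and variable names are my own, not part of the TRL report):

```python
import math

def groundborne_noise_level(r_m: float) -> float:
    """Predicted groundborne noise level, dB(A), at slope distance r_m metres."""
    return 127.0 - 54.0 * math.log10(r_m)

print(round(groundborne_noise_level(10.0), 1))   # 73.0 dB(A) at 10 m
print(round(groundborne_noise_level(50.0), 1))   # 35.3 dB(A) at 50 m
```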
Because TRL Tunnelling equations 24 and 25 are derived from a limited range of materials, it is possible that they may underestimate noise levels caused by tunnelling in stronger rocks and from very high energy sources such as hydraulic hammers. Therefore care should be taken in their application in these circumstances. Similarly, care should be taken in extrapolating these relations over a wider range of distances than that covered by the data from which they have been derived.
Source: Transport Research Laboratory Report 429 Groundborne vibration caused by mechanised construction works (2000), Equation 25 | 2021-02-25 05:06:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5878353714942932, "perplexity": 1768.876331342206}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178350717.8/warc/CC-MAIN-20210225041034-20210225071034-00364.warc.gz"} |
http://tex.stackexchange.com/questions/69901/how-to-typeset-greek-letters?answertab=active | # How to typeset greek letters
I want to write a sentence like "physics (from ancient greek φύσις)". But I don't know how to typeset the character "ύ" properly. Also, I am not sure if it is a good idea to simply use math mode here to typeset the Greek letters. So, what's the best way to typeset the ancient Greek word φύσις?
Edit: I should add that I just copied the word from wikipedia in this case, so the xelatex or babel solutions work very well since I can just copy the greek word into my latex source. But I don't know how I can insert those greek letters directly with my normal german keyboard layout.
-
When you copy Greek words from Wikipedia, make sure the diacriticals are correct. Modern Greek indicates stress with a diacritical very much like the oxeia (acute accent), but not necessarily quite like it. You may be using a Unicode precomposed form meant for Modern Greek which may not render quite right (e.g. accent sign pointing up). Depends on the fonts. – Alexios Sep 3 '12 at 12:24
@student In regard of your edit. All you have to do is to add another keyboard layout at your operating system and a keyboard shortcut to change between the keyboard layouts. It is rather easy. Google is your friend. – pmav99 Sep 4 '12 at 18:01
If you just need a few words, then a simple approach can solve your problem:
\documentclass{article}
\usepackage[LGR,T1]{fontenc}
\newcommand{\textgreek}[1]{\begingroup\fontencoding{LGR}\selectfont#1\endgroup}
\begin{document}
physics (from ancient greek \textgreek{f'usis})
\end{document}
For longer passages, perhaps loading the polutoniko option with babel may be recommended. Check in the documentation of babel for the translitteration scheme used.
You may also choose different fonts for Greek (the GFS fonts support many of them).
## Update
With recent and uptodate TeX distributions, one can also input directly the Greek characters:
% -*- coding: utf-8 -*-
\documentclass{article}
\usepackage[LGRx,T1]{fontenc} % notice LGRx instead of LGR
\usepackage[utf8]{inputenc} % utf8 is required
\newcommand{\textgreek}[1]{\begingroup\fontencoding{LGR}\selectfont#1\endgroup}
\begin{document}
physics (from ancient greek \textgreek{φύσις})
\end{document}
-
Update: as of TeX Live 2013, loading the LGRx encoding is not necessary any more. – egreg Jul 11 at 15:10
A quick comment on the 'Update' part of egreg's answer. You can also give instructions for a particular Greek font to be used in the output, rather than a default, like this (using GFS Porson for Greek and Tex Gyre Pagella for English):
\documentclass{article}
\usepackage[LGR,T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{gfsporson}
\usepackage{tgpagella}
\let\textgreek=\textporson
\begin{document}
An accurate picture of the world:
\textgreek{οὐ γάρ τις πρῆξις πέλεται κρυεροῖο γόοιο.}
\end{document}
This also simplifies (at least superficially) the definition of \textgreek{}. I guess it works because the GFS fonts use LGR as their encoding.
-
Welcome to TeX.SX! You can have a look at our starter guide to familiarize yourself further with our format. – Paul Gessler Jul 11 at 13:21
Thank you! I'll have a look at that. – lovecraftian Jul 11 at 13:33
Simply load \usepackage{betababel} instead of babel, and you can type any text in ancient Greek directly from your keyboard (for information on typing betacode, see the package manual). Example:
\documentclass[10pt, a4paper]{scrbook}
\usepackage[brazil]{betababel}
\begin{document}
text \bcode{fu/sis} text
\end{document}
result: φύσις typeset in Greek.
-
Another way to do this:
\documentclass[10pt,a4paper]{article}
\usepackage[utf8x]{inputenc}
\usepackage[greek,english]{babel}
\begin{document}
physics (from ancient greek \textgreek{φύσις})
\end{document}
Here you can take advantage of LaTeX' ability to recognize Greek characters when babel loads the greek support module. utf8x (Extended UTF-8) encoding of the input file makes sure the characters are mapped correctly. As you can see, with this solution you can keep the Greek letters, no need to transcribe them with Latin characters. (Unlike egreg's solution, here I set the input encoding, not the font encoding.)
-
The font encoding is implicitly chosen by \textgreek – egreg Sep 3 '12 at 11:39
@egreg: Doesn't seem to work properly here... If I remove \usepackage[utf8x]{inputenc}, I get gibberish. – Count Zero Sep 3 '12 at 11:43
Of course! Without it, LaTeX doesn't know how to interpret the characters. – egreg Sep 3 '12 at 11:48
@egreg: Ah, sorry I misread your comment... font encoding is implicitly chosen! (Somehow I thought input encoding... Gotta be more thorough... ˙:)˙) – Count Zero Sep 3 '12 at 11:50
use xelatex or lualatex. Then it is really simple:
\documentclass[12pt]{article}
\usepackage{fontspec}
\setmainfont{DejaVu Serif}
\begin{document}
foo φύσις bar
\end{document}
-
I feel that this is the only clean solution. We have created a universal provision for typesetting international letters – Unicode. Everything else is a hack. – Konrad Rudolph Sep 3 '12 at 12:14 | 2014-09-02 11:53:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9784297943115234, "perplexity": 4302.593147674301}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535921957.9/warc/CC-MAIN-20140901014521-00254-ip-10-180-136-8.ec2.internal.warc.gz"} |
https://calendar.math.illinois.edu/?year=2014&month=09&day=02&interval=day | # Department of Mathematics
Seminar Calendar
for events the day of Tuesday, September 2, 2014.
Questions regarding events or the calendar should be directed to Tori Corkery.
Tuesday, September 2, 2014
1:00 pm in Altgeld Hall 243, Tuesday, September 2, 2014
#### Counting non-simple closed curves on surfaces
###### Jenya Sapir (UIUC Math)
Abstract: We show how to get coarse bounds on the number of (non-simple) closed geodesics on a surface, given upper bounds on both length and self-intersection number. Recent work by Mirzakhani and by Rivin has produced asymptotics for the growth of the number of simple closed curves and curves with one self-intersection (respectively) with respect to length. However, no asymptotics, or even bounds, were previously known for other bounds on self-intersection number. Time permitting, we will discuss some applications of this result. A video recording of this talk can be found at http://gear.math.illinois.edu/events/Geometry-Groups-Dynamics-GEAR-Seminar.html
1:00 pm in 345 Altgeld Hall, Tuesday, September 2, 2014
#### Regular cross-sections of Borel flows
###### Kostya Slutsky (UIC)
Abstract: A cross-section of a Borel flow is a Borel set that has countable intersection with each orbit of the flow. We shall be interested in constructing cross-sections with a prescribed set of possible distances between adjacent points within orbits. The main result of the talk is that given any two rationally independent positive reals and a free Borel flow one can always find a cross-section with distances between adjacent points being only these two real numbers. We shall give an overview of the subject from both ergodic theoretical and descriptive points of view and an application of the above result to orbit equivalence of flows will be presented.
1:00 pm in 347 Altgeld Hall, Tuesday, September 2, 2014
#### Invariant Gibbs Measures and Hamiltonian PDEs
###### Samantha Xu (UIUC Math)
Abstract: Liouville's Theorem says that the flow of a Hamiltonian ODE preserves Lebesgue measure, and a corresponding Gibbs measure as well. This notion can be generalized to infinite dimensional systems for various Hamiltonian PDEs. In this talk, we will discuss the role of randomization on initial datum, methods of constructing an invariant Gibbs measure, and some recent results.
3:00 pm in 241 Altgeld Hall, Tuesday, September 2, 2014
#### On the applications of counting independent sets in hypergraphs
###### Jozsef Balogh [email] (UIUC Math)
Abstract: Recently, Balogh-Morris-Samotij and Saxton-Thomason developed a method of counting independent sets in hypergraphs. During the talk, I show a recent application of the method; solving the following Erdos problem: What is the number of maximal triangle-free graphs? If there is some extra time in the talk, I will survey some other recent applications. Note that despite similar abstracts in the past, the talk is significantly new. These applications are partly joint with Shagnik Das, Michelle Delcourt, Hong Liu, Richard Mycroft, Sarka Petrickova, Maryam Sharifzadeh and Andrew Treglown. | 2022-08-10 04:59:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5826157331466675, "perplexity": 685.7132666181717}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571147.84/warc/CC-MAIN-20220810040253-20220810070253-00533.warc.gz"} |
https://www.physicsforums.com/threads/sample-statistics-vs-population-statistics.589204/ | # Homework Help: Sample statistics vs population statistics
1. Mar 21, 2012
### 939
1. The problem statement, all variables and given/known data
My task is to explain why the sample statistics I have obtained differ from the population statistics I have obtained from some data - using "concepts taught in class, if they exist". I have calculated x̄ and s, as well as σ and µ.
2. Relevant equations
First of all, the distribution is not normal, thus the empirical rule is invalid.
3. The attempt at a solution
Part of me thinks it's a trick question because there are very few "concepts" I can think of. The only thing I can come up with is that the mean differs because it is merely one sample; according to the Central Limit Theorem, with a bigger sample the sample mean would be closer to the population mean. Similarly, the standard deviation differs because it is merely one sample. Is this all there is to it or am I missing something?
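(A quick illustration of that sampling variability, not part of the original thread and assuming NumPy: repeated samples give means and standard deviations that scatter around μ and σ, and the scatter shrinks as the sample size grows.)

```python
import numpy as np

rng = np.random.default_rng(0)
population = rng.exponential(scale=2.0, size=100_000)   # deliberately non-normal
mu, sigma = population.mean(), population.std()

for n in (10, 100, 10_000):
    sample = rng.choice(population, size=n, replace=False)
    # difference between sample statistics and population parameters
    print(n, round(sample.mean() - mu, 3), round(sample.std(ddof=1) - sigma, 3))
```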
2. Mar 22, 2012
### camillio
Sample statistics are obtained by sampling from a population. The idea is that the statistical properties of a population can (usually) be only estimated. In this respect, I slightly doubt about your data-based $\mu, \sigma^2$ :-) | 2018-11-16 22:57:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7906538844108582, "perplexity": 532.1210301925368}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743216.58/warc/CC-MAIN-20181116214920-20181117000920-00555.warc.gz"} |
https://chemistry.stackexchange.com/questions/51793/which-is-more-amphoteric-zinc-or-copper-hydroxide?noredirect=1 | # Which is more amphoteric: zinc or copper hydroxide?
$\ce{K_{a} ~Cu(H2O)_6^2+}{=~5*10^{-7}}$
$\ce{K_{a} ~Zn(H2O)_6^2+}{=~2.5*10^{-10}}$
Given this information, which forms a more amphoteric hydroxide?
My reasoning is that copper hydroxide is more amphoteric than zinc hydroxide: because $\ce{K_{a} ~Cu(H2O)_6^2+}$ > $\ce{K_{a} ~Zn(H2O)_6^2+}$, $\ce{Cu(OH)_2}$ should be more amphoteric, since it will probably have the bigger $\ce{K_{a}}$. This fulfills half of the definition of amphoteric - behaving as both an acid and a base.
But how do I compare the base strengths of $\ce{Zn(OH)_2}$ and $\ce{Cu(OH)_2}$? I need to, because being amphoteric means the substance acts as both an acid and a base!
Consider the following:
$1. ~{K_{a1}:} ~\ce{Cu(H2O)_6^2+ + H_2O -> Cu(H2O)_5(OH)^+ + H_3O^+}$
$2. ~{K_{a2}:} ~\ce{Cu(H2O)_5(OH)^+ + H_2O -> Cu(H2O)_4(OH)_2 + H_3O^+}$
$3. ~{K_{a3}:} ~\ce{Cu(H2O)_4(OH)_2 + H_2O -> Cu(H2O)_3(OH)_3^- + H_3O^+}$
Basic character of copper hydroxide is illustrated by the below equation:
$4. ~\ce{Cu(H2O)_4(OH)_2 + H_2O -> Cu(H2O)_5(OH)^+ + HO^-}$
The above equation, reversed, is:
$5. ~\ce{Cu(H2O)_5(OH)^+ + HO^- -> H_2O + Cu(H2O)_4(OH)_2 }$
This equation is very similar to $~{K_{a2}}$ except that the base is hydroxide ion instead of water.
We can write a similar equation for zinc hydroxide:
$6. ~\ce{Zn(H2O)_5(OH)^+ + HO^- -> H_2O + Zn(H2O)_4(OH)_2}$
Let's compare equations $5$ and $6$. We can expect reaction $5$ to proceed to a larger extent than reaction $6$, because the hydrated copper ion is a stronger acid than the hydrated zinc ion to start with. So:
${K_{rxn5} > K_{rxn6}}$
And therefore,
${K_{rxn4} < K_{rxn7}}$
$7. ~\ce{H_2O + Zn(H2O)_4(OH)_2 -> Zn(H2O)_5(OH)^+ + HO^-}$
• This info isn't enough to tell, it's even misleading I'm afraid. – Mithoron May 26 '16 at 18:10
• @Mithoron - could you expand your comment into an answer perhaps? :D. I'm very curious as to what your thoughts are. – Dissenter May 26 '16 at 22:15
• Can you define "more amphoteric"? More acidic and more basic are unambiguous, but this is not. – Zhe Oct 19 '16 at 20:22
• OK. Apply your definition to this scenario: Compounds A and B are amphoteric. A is a better acid, B is a better base. Which is more amphoteric? – Zhe Oct 19 '16 at 20:26
• I think you should come to chat to discuss questions. It may be cheaper then massive bounty. – Mithoron Oct 19 '16 at 20:31
Zinc hydroxide is more amphoteric as quantitatively investigated in Effect of pH, Concentration and Temperature on Copper and Zinc Hydroxide Formation/Precipitation in Solution
In summary, in the case of Cu(OH)2, only at extremely low concentrations and very high pH can a third OH- be coordinated.
At higher pH values, copper hydroxide, Cu(OH)2, is the dominant species up to pH 12.3 where the copper ion Cu(OH)3- forms according to Equation 3. At higher copper concentrations, solid Cu(OH)2 is formed and precipitates out of solution at copper concentrations above the solubility product of copper hydroxide at 1×10^-8 M. It is important to note that the domain of stability of solid Cu(OH)2 is expanding to lower and higher pH values with increasing copper concentration.
On the other hand:
Above pH 11.4, Zn(OH)3- forms according to Equation 7. At higher zinc concentrations, solid Zn(OH)2 forms and precipitates out of solution at zinc concentrations above the solubility product of zinc hydroxide of 1×10^-5 M.
So qualitatively both display similar amphoteric behavior in the limit of infinite dilution, but at an appreciably non-zero concentration zinc is more amphoteric.
The article says it is getting the equilibrium constant values from Mineral Equilibria, Low Temperature and Pressure.
An alternative source of this information is Hydrolysis of cations. Formation constants and standard free energies of formation of hydroxy complexes Inorg. Chem., 1983, 22 (16), pp 2297–2305, which gives quantitative values for OP equations 1, 2 and 3 as well as the corresponding equations for zinc, in terms of Gibbs energy of formation and in terms of cumulative equilibrium constants.
Simplifying the general equation in the paper to only mononuclear (one metal atom) complexes:
$$K_y$$ $$\ce{M^{n+} + yH2O <=> M(OH)_y^{n-y} + yH+}$$
$\ce{Cu^{2+}}$ :
$pK_1 = 7.96$
$pK_2 = 16.26$
$pK_3 = 26.7$
$pK_4 = 39.6$
$\ce{Zn^{2+}}$ :
$pK_1 = 8.96$
$pK_2 = 16.9$
$pK_3 = 28.4$
$pK_4 = 41.2$
(These are all experimental values, calculated values are also given)
So really, without considering the solid phase (solubility), Cu2+ and Zn2+ seem very similar, with Cu2+ showing slightly more acidity.
Another study, which does consider the solid phases, is Zinc Hydroxide: Solubility Product and Hydroxy-complex Stability Constants from 12.5-75 [degrees] C Can. J. Chem. 53, 3841.
This study approaches the problem with 5 equilibria involving the solid phase. (These authors use the symbol "c" for the solid phase.)
$\ce{Zn(OH)2(c) <=> Zn(OH)+ + OH-}$
$K_1 = [\ce{Zn(OH)+}][\ce{OH-}] = 2.54 \times 10^{-11}$
$\ce{Zn(OH)2(c) <=> Zn(OH)2(aq)}$
$K_2 = [\ce{Zn(OH)2}] = 2.62 \times 10^{-6}$
$\ce{Zn(OH)2(c) +OH- <=> Zn(OH)3-}$
$K_3 = [\ce{Zn(OH)3-}]/[\ce{OH-}] = 1.32 \times 10^{-3}$
$\ce{Zn(OH)2(c) +2OH- <=> Zn(OH)4^{2-}}$
$K_4 = [\ce{Zn(OH)4^{2-}}]/[\ce{OH-}]^2 = 6.47 \times 10^{-2}$
$\ce{Zn(OH)2(c) <=> Zn^{2+}} + \ce{2OH-}$
$K_{sp} = [\ce{Zn^{2+}}][\ce{OH-}]^2 = 1.74 \times 10^{-17}$
This study (see Fig. 1) shows that aqueous $\ce{Zn(OH)2}$ is only the major aqueous species in the pH range 9-11, having acted as an acid or base outside this range.
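Those constants also allow a rough back-of-the-envelope check of the amphoteric behaviour (a sketch of my own, not taken from the cited papers): over solid $\ce{Zn(OH)2}$, the equilibrium concentration of $\ce{Zn^2+}$ falls with increasing pH while that of $\ce{Zn(OH)3-}$ rises, so dissolution is favoured at both ends of the pH scale.

```python
# Constants quoted above (25 C): Ksp = [Zn2+][OH-]^2, K3 = [Zn(OH)3-]/[OH-]
Ksp = 1.74e-17
K3 = 1.32e-3

for pH in (8, 10, 12, 14):
    OH = 10.0 ** (pH - 14)   # [OH-] from Kw = 1e-14
    zn2 = Ksp / OH**2        # [Zn2+] in equilibrium with Zn(OH)2(c)
    znoh3 = K3 * OH          # [Zn(OH)3-] in equilibrium with Zn(OH)2(c)
    print(pH, f"[Zn2+] = {zn2:.2e} M", f"[Zn(OH)3-] = {znoh3:.2e} M")
```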
Instead, if I were just given the information in the OP, meaning the two supposed Ka values, as some kind of test or homework question, I would say copper is more amphoteric. My reasoning would be that, for zinc, just to remove one proton you need to get up to almost pH 10. Removing two more protons, as required to get to Zn(OH)2 and then lose yet another proton to act as an acid, seems impossible, because I would expect the pKas to be spaced apart; and if you need to approach or go beyond pH 14 to observe the acidic behavior, that isn't reasonably considered amphoteric.
Note that EXAFS Investigations of Zn(II) in Concentrated Aqueous Hydroxide Solutions J. Phys. Chem. 1995, 99, 11967-11973 finds that rather than being 6-coordinate as indicated in the OP, 4 hydroxides coordinate Zn2+ tetrahedrally with no water ligands. | 2021-01-17 10:34:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5029019713401794, "perplexity": 2613.458035327544}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703511903.11/warc/CC-MAIN-20210117081748-20210117111748-00208.warc.gz"} |
https://codegolf.stackexchange.com/questions/79762/zeroes-at-the-end-of-a-factorial/79778 | # Zeroes at the end of a factorial
Write a program or function that finds the number of zeroes at the end of n! in base 10, where n is an input number (in any desired format).
It can be assumed that n is a positive integer, meaning that n! is also an integer. There are no zeroes after a decimal point in n!. Also, it can be assumed that your programming language can handle the value of n and n!.
Test cases
1
==> 0
5
==> 1
100
==> 24
666
==> 165
2016
==> 502
1234567891011121314151617181920
==> 308641972752780328537904295461
This is code golf. Standard rules apply. The shortest code in bytes wins.
• Related.
– xnor
May 12 '16 at 2:39
• Can we assume that n! will fit within our languages' native integer type? May 12 '16 at 2:45
• @AlexA. Yes you can. May 12 '16 at 3:01
• Can n be an input string? May 12 '16 at 3:10
• I think this would be a better question if you were not allowed to assume n! would fit into your integer type! Well, maybe another time. May 12 '16 at 10:45
## Python 2, 27 bytes
f=lambda n:n and n/5+f(n/5)
The ending zeroes are limited by factors of 5. The number of multiples of 5 that are at most n is n/5 (with floor division), but this doesn't count the repeated factors in multiples of 25, 125, .... To get those, divide n by 5 and recurse.
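Equivalently (just restating the counting argument above), the number being computed is
$$\sum_{i\ge 1}\left\lfloor \frac{n}{5^i}\right\rfloor,$$
the exponent of 5 in the prime factorisation of $n!$; the exponent of 2 is always at least as large, so the fives are what limit the trailing zeroes.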
# Mornington Crescent, 1949 1909 1873 bytes
Take Northern Line to Bank
Take Circle Line to Bank
Take District Line to Parsons Green
Take District Line to Cannon Street
Take Circle Line to Victoria
Take Victoria Line to Seven Sisters
Take Victoria Line to Victoria
Take Circle Line to Victoria
Take Circle Line to Bank
Take Circle Line to Hammersmith
Take District Line to Turnham Green
Take District Line to Hammersmith
Take District Line to Upminster
Take District Line to Hammersmith
Take District Line to Turnham Green
Take District Line to Bank
Take Circle Line to Hammersmith
Take Circle Line to Blackfriars
Take Circle Line to Hammersmith
Take Circle Line to Notting Hill Gate
Take Circle Line to Notting Hill Gate
Take Circle Line to Bank
Take Circle Line to Hammersmith
Take District Line to Upminster
Take District Line to Upney
Take District Line to Upminster
Take District Line to Upney
Take District Line to Upminster
Take District Line to Upney
Take District Line to Upminster
Take District Line to Bank
Take Circle Line to Blackfriars
Take District Line to Upminster
Take District Line to Temple
Take Circle Line to Hammersmith
Take Circle Line to Cannon Street
Take Circle Line to Bank
Take Circle Line to Blackfriars
Take Circle Line to Hammersmith
Take District Line to Upney
Take District Line to Cannon Street
Take District Line to Upney
Take District Line to Cannon Street
Take District Line to Upney
Take District Line to Blackfriars
Take Circle Line to Bank
Take District Line to Upminster
Take District Line to Upney
Take District Line to Upminster
Take District Line to Upney
Take District Line to Upminster
Take District Line to Upney
Take District Line to Bank
Take Circle Line to Bank
Take Northern Line to Angel
Take Northern Line to Bank
Take Circle Line to Bank
Take District Line to Upminster
Take District Line to Bank
Take Circle Line to Bank
Take Northern Line to Mornington Crescent
Try it online!
-40 bytes thanks to NieDzejkob
-45 bytes thanks to Cloudy7
• And this is now my most upvoted answer. May 17 '16 at 13:25
• A brief explanation for those of us who are Mornington Crescent-challenged would be cool. :) Mar 23 '17 at 13:26
• -40 bytes by using shorter line names where possible. Mar 15 '18 at 17:02
• -45 bytes by using Upney instead of Becontree. Mar 12 at 3:42
• Yeah, I wasn't very efficient at choosing station names when I wrote this code in 2016. Mar 12 at 5:06
# Jelly, 5 bytes
!Æfċ5
Uses the counterproductive approach of finding the factorial then factorising it again, checking for the exponent of 5 in the prime factorisation.
Try it online!
! Factorial
Æf List of prime factors, e.g. 120 -> [2, 2, 2, 3, 5]
ċ5 Count number of 5s
• yikes. Talk about trade-offs! To get the code down to 5 bytes, increase the memory and time by absurd amounts. May 12 '16 at 16:18
## Pyth, 6 bytes
/P.!Q5
Try it here.
/ 5 Count 5's in
P the prime factorization of
.!Q the factorial of the input.
st.u/N5
The cumulative reduce .u/N5 repeatedly floor-divides by 5 until it gets a repeat, which in this case happens after it hits 0.
34 -> [34, 6, 1, 0]
The first element is then removed (t) and the rest is summed (s).
## Actually, 10 bytes
!$R;≈$l@l-
Try it online!
Note that the last test case fails when running Seriously on CPython because math.factorial uses a C extension (which is limited to 64-bit integers). Running Seriously on PyPy works fine, though.
Explanation:
!$R;≈$l@l-
! factorial of input
$R stringify, reverse
;≈$ make a copy, cast to int, then back to string (removes leading zeroes)
l@l- difference in lengths (the number of leading zeroes removed by the int conversion)
• Oh wow, I like how this method doesn't use the dividing by 5 trick. May 12 '16 at 4:36
• I count 12 bytes on this one May 14 '16 at 2:09
• @Score_Under Actually uses the CP437 code page, not UTF-8. Each character is one byte.
– user45941
May 14 '16 at 2:41
f 0=0
f n=(+)=<<f$div n 5 Floor-divides the input by 5, then adds the result to the function called on it. The expression (+)=<<f takes an input x and outputs x+(f x). Shortened from: f 0=0 f n=div n 5+f(div n 5) f 0=0 f n|k<-div n 5=k+f k A non-recursive expression gave 28 bytes: f n=sum[ndiv5^i|i<-[1..n]] • Is i a counter from 1..n? May 12 '16 at 3:35 • @CᴏɴᴏʀO'Bʀɪᴇɴ Yes, though only up to log_5(n) matters, the rest gives 0. – xnor May 12 '16 at 3:36 # MATL, 9 bytes :"@Yf5=vs Try it online! This works for very large numbers, as it avoids computing the factorial. Like other answers, this exploits the fact that the number of times 2 appears as divisor of the factorial is greater or equal than the number of times 5 appears. : % Implicit input. Inclusive range from 1 to that " % For each @ % Push that value Yf % Array of prime factors 5= % True for 5, false otherwise v % Concatenate vertically all stack contents s % Sum ## 05AB1E, 5 bytes Would be 4 bytes if we could guarantee n>4 Code: Î!Ó7è Explanation: Î # push 0 then input ! # factorial of n: 10 -> 2628800 Ó # get primefactor exponents -> [8, 4, 2, 1] 7è # get list[7] (list is indexed as string) -> 2 # implicit output of number of 5s or 0 if n < 5 Alternate, much faster, 6 byte solution: Inspired by Luis Mendo's MATL answer LÒ€5QO Explanation: L # push range(1,n) inclusive, n=10 -> [1,2,3,4,5,6,7,8,9,10] Ò # push prime factors of each number in list -> [[], [2], [3], [2, 2], [5], [2, 3], [7], [2, 2, 2], [3, 3], [2, 5]] € # flatten list of lists to list [2, 3, 2, 2, 5, 2, 3, 7, 2, 2, 2, 3, 3, 2, 5] 5Q # and compare each number to 5 -> [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1] O # sum -> 2 Edit: removed solutions using ¢ (count) as all primes containing 5 would be counted as 5 e.g. 53. Edit 2: added a more efficient solution for higher input as comparison. • Yeah, instead of 5¢, 5Q should work. Nice answer though! :) May 12 '16 at 8:41 • I was going to test on larger inputs with the comment "wouldn't this fail if the output was > 9", but boy 05AB1E's implementation of Ó is slow May 12 '16 at 9:51 • Btw, the first code can also be Î!Ó2é. The bug was fixed yesterday. May 12 '16 at 10:15 • If you're using utf-8, Î!Ó7è is 8 bytes, and the "6 byte" solution is 10 bytes May 14 '16 at 2:11 • @Score_Under Yes that is correct. However, 05AB1E uses the CP-1252 encoding. May 14 '16 at 8:50 ## Matlab (59) (54)(39) Hey dawg !!!! we heard you like maths .... @(n)sum(fix(n./5.^(1:fix(log(n)/1.6)))) • This is based on my created answer in code review. • further than what is mentioned in my answer in code review, the formula for number of zeros in factorial(n) is Sum(n/(5^k)) where k varies between 1 and log_5(n) • The only trivial reason why it cant get golfier is that log5 isnt available in matlab as a builtin , thus I replaced log(5) by 1.6, doesnt matter because it will be anyways floored. Give it a try • A couple of questions. 1. How do you actually run this in Matlab? 2. What is the result for n=1? May 12 '16 at 14:46 • @StuartBruff to run this type ans(1) and it does return 0. May 12 '16 at 16:09 • OK. Thanks. Interesting. I haven't used function handles much in Matlab, so was a little puzzled as to how to run it ... why doesn't the ans() count towards the total? Neat answer though, I tried it in Mathcad but had to modify the upper limit of the sum as Mathcad autodecrements the summation variable if the "upper" is less than the "lower" limit (and hence my question about 0). 
May 13 '16 at 12:27 # Mathematica, 20 bytes IntegerExponent[#!]& IntegerExponent counts the zeros. For fun, here's a version that doesn't calculate the factorial: Tr[#~IntegerExponent~5&~Array~#]& • I think Array saves a byte on the second solution. May 12 '16 at 12:54 # Julia, 3431 30 bytes n->find(digits(prod(1:n)))[]-1 This is an anonymous function that accepts any signed integer type and returns an integer. To call it, assign it to a variable. The larger test cases require passing n as a larger type, such as a BigInt. We compute the factorial of n (manually using prod is shorter than the built-in factorial), get an array of its digits in reverse order, find the indices of the nonzero elements, get the first such index, and subtract 1. Try it online! (includes all but the last test case because the last takes too long) Saved a byte thanks to Dennis! # C, 28 bytes f(n){return(n/=5)?n+f(n):n;} ## Explanation The number of trailing zeros is equal to the number of fives that make up the factorial. Of all the 1..n, one-fifth of them contribute a five, so we start with n/5. Of these n/5, a fifth are multiples of 25, so contribute an extra five, and so on. We end up with f(n) = n/5 + n/25 + n/125 + ..., which is f(n) = n/5 + f(n/5). We do need to terminate the recursion when n reaches zero; also we take advantage of the sequence point at ?: to divide n before the addition. As a bonus, this code is much faster than that which visits each 1..n (and much, much faster than computing the factorial). ## Test program #include<stdio.h> #include<stdlib.h> int main(int argc, char **argv) { while(*++argv) { int i = atoi(*argv); printf("%d: %d\n",i,f(i)); } } ## Test output 1: 0 4: 0 5: 1 24: 4 25: 6 124: 28 125: 31 666: 165 2016: 502 2147483644: 536870901 2147483647: 536870902 • +1 for an excellent explanation Jun 5 '18 at 10:42 # JavaScript ES6, 20 bytes f=x=>x&&x/5+f(x/5)|0 Same tactic as in xnor's answer, but shorter. # C, 36 r;f(n){for(r=0;n/=5;)r+=n;return r;} Same method as @xnor's answer of counting 5s, but just using a simple for loop instead of recursion. • @TobySpeight there you go. May 13 '16 at 17:01 • suggestion: omit r=0, since globals are zeroed by default. Jul 29 at 17:16 # Retina, 33 bytes Takes input in unary. Returns output in unary. +^(?=1)(1{5})*1*$#1$*1;$#1$* ; (Note the trailing linefeed.) Try it online! ## How it works: ### The first stage: +^(?=1)(1{5})*1*$#1$*1;$#1$* Slightly ungolfed: +^(?=1)(11111)*1*\b$#1$*1;$#1$*1 What it does: • Firstly, find the greatest number of 11111 that can be matched. • Replace by that number • Effectively floor-divides by 5. • The lookahead (?=1) assures that the number is positive. • The + means repeat until idempotent. • So, the first stage is "repeated floor-division by 5" If the input is 100 (in unary), then the text is now: ;;1111;11111111111111111111 ### Second stage: ; Just removes all semi-colons. # Jelly, 3 bytes !ọ5 Try it online! ## How it works !ọ5 - Main link. Takes n on the left ! - Yield n! ọ5 - How many times is it divisible by 5? # Ruby, 22 bytes One of the few times where the Ruby 0 being truthy is a problem for byte count. f=->n{n>0?f[n/=5]+n:0} • wait why is 0 truthy? May 12 '16 at 4:00 • @CᴏɴᴏʀO'Bʀɪᴇɴ In Ruby, nil and false are falsey, and nothing else is. There are a lot of cases where helps out in golf, since having 0 be truthy means the index and regex index functions in Ruby return nil if there is no match instead of -1, and some where it is a problem, like empty strings still being truthy. 
May 12 '16 at 4:24 • @KevinLau-notKenny That does make sense. May 12 '16 at 4:25 # Perl 6, 23 bytes {[+] -$_,$_,*div 5…0} {sum -$_,$_,*div 5...0} I could get it shorter if ^... was added to Perl 6 {sum$_,*div 5^...0}.
It should be more memory efficient for larger numbers if you added a lazy modifier between sum and the sequence generator.
### Explanation:
{ # implicitly uses $_ as its parameter sum # produce a sequence -$_, # negate the next value
$_, # start of the sequence * div 5 # Whatever lambda that floor divides its input by 5 # the input being the previous value in the sequence, # and the result gets appended to the sequence ... # continue to do that until: 0 # it reaches 0 } ### Test: #! /usr/bin/env perl6 use v6.c; use Test; my @test = ( 1, 0, 5, 1, 100, 24, 666, 165, 2016, 502, 1234567891011121314151617181920, 308641972752780328537904295461, # [*] 5 xx 100 7888609052210118054117285652827862296732064351090230047702789306640625, 1972152263052529513529321413206965574183016087772557511925697326660156, ); plan @test / 2; # make it a postfix operator, because why not my &postfix:<!0> = {[+] -$_,$_,*div 5...0} for @test ->$input, $expected { is$input!0, $expected, "$input => $expected" } diag "runs in {now - INIT now} seconds" 1..7 ok 1 - 1 => 0 ok 2 - 5 => 1 ok 3 - 100 => 24 ok 4 - 666 => 165 ok 5 - 2016 => 502 ok 6 - 1234567891011121314151617181920 => 308641972752780328537904295461 ok 7 - 7888609052210118054117285652827862296732064351090230047702789306640625 => 1972152263052529513529321413206965574183016087772557511925697326660156 # runs in 0.0252692 seconds ( That last line is slightly misleading, as MoarVM has to start, load the Perl 6 compiler and runtime, compile the code, and run it. So it actually takes about a second and a half in total. That is still significantly faster than it was to check the result of the last test with WolframAlpha.com ) # Mathcad, [tbd] bytes Mathcad is sort of mathematical "whiteboard" that allows 2D entry of expressions, text and plots. It uses mathematical symbols for many operations, such as summation, differentiation and integration. Programming operators are special symbols, usually entered as single keyboard combinations of control and/or shift on a standard key. What you see above is exactly how the Mathcad worksheet looks as it is typed in and as Mathcad evaluates it. For example, changing n from 2016 to any other value will cause Mathcad to update the result from 502 to whatever the new value is. http://www.ptc.com/engineering-math-software/mathcad/free-download Mathcad's byte equivalence scoring method is yet to be determined. Taking a symbol equivalence, the solution takes about 24 "bytes" (the while operator can only be entered using the "ctl-]" key combination (or from a toolbar)). Agawa001's Matlab method takes about 37 bytes when translated into Mathcad (the summation operator is entered by ctl-shft-$).
• Sounds a stunning tool to handle, I wont spare a second downloading it ! May 13 '16 at 13:42
# Julia, 21 19 bytes
!n=n<5?0:!(n÷=5)+n
Uses the recursive formula from xnor's answer.
Try it online!
# dc, 12 bytes
[5/dd0<f+]sf
This defines a function f which consumes its input from top of stack, and leaves its output at top of stack. See my C answer for the mathematical basis. We repeatedly divide by 5, accumulating the values on the stack, then add all the results:
5/d # divide by 5, and leave a copy behind
d0< # still greater than zero?
f+ # if so, apply f to the new value and add
## Test program
# read input values
?
# print prefix
[ # for each value
# print prefix
[> ]ndn[ ==> ]n
# call f(n)
lfx
# print suffix
n[
]n
# repeat for each value on stack
z0<t
]
# define and run test function 't'
dstx
## Test output
./79762.dc <<<'1234567891011121314151617181920 2016 666 125 124 25 24 5 4 1'
1 ==> 0
4 ==> 0
5 ==> 1
24 ==> 4
25 ==> 6
124 ==> 28
125 ==> 31
666 ==> 165
2016 ==> 502
1234567891011121314151617181920 ==> 308641972752780328537904295461
# Vyxals, 5 bytes
ɾǐƛ5O # main program
ɾ # range over input
ǐ # take the prime factors of each number
ƛ5O # for each value, count the 5s
-s # sum top of stack
Try it Online!
# Vyxal, 3 bytes
¡5Ǒ
This one uses the same approach as caird coinheringaahing's answer
Try it Online!
# Vyxall, 3 bytes (for inputs > 4)
¡Ġt
Approach by Lyxal, takes the factorial of the input, groups by consecutive, then gets the length of the tail using the -l flag.
Try it Online!
• Alternate 3 bytes that isn't a caird port Jul 26 at 8:04
• @lyxal added it Jul 26 at 20:00
• The last one doesn't work for inputs <=5, where there are no trailing 0's
– ovs
Jul 28 at 21:25
• @ovs true, I'll add a comment about that Jul 28 at 23:46
# Jolf, 13 bytes
Ώmf?H+γ/H5ΏγH
Defines a recursive function which is called on the input. Try it here!
Ώmf?H+γ/H5ΏγH Ώ(H) = floor(H ? (γ = H/5) + Ώ(γ) : H)
Ώ Ώ(H) =
/H5 H/5
γ (γ = )
+ Ώγ + Ώ(γ)
?H H H ? : H
mf floor( )
// called implicitly with input
# J, 2817 16 bytes
<.@+/@(%5^>:@i.)
Pretty much the same as the non-recursive technique from xnor's answer.
Here's an older version I have kept here because I personally like it more, clocking in at 28 bytes:
+/@>@{:@(0<;._1@,'0'&=@":@!)
Whilst not needed, I have included x: in the test cases for extended precision.
tf0 =: +/@>@{:@(0<;._1@,'0'&=@":@!@x:)
tf0 5
1
tf0 100
24
tf0g =: tf0"0
tf0g 1 5 100 666 2016
0 1 24 165 502
The last number doesn't work with this function.
## Explanation
This works by calculating n!, converting it to a string, and checking each member for equality with '0'. For n = 15, this process would be:
15
15! => 1307674368000
": 1307674368000 => '1307674368000'
'0' = '1307674368000' => 0 0 1 0 0 0 0 0 0 0 1 1 1
Now, we use ;._1 to split the list on its first element (zero), boxing each split result, yielding a box filled with aces (a:) or runs of 1s, like so:
┌┬─┬┬┬┬┬┬┬─────┐
││1│││││││1 1 1│
└┴─┴┴┴┴┴┴┴─────┘
We simple obtain the last member ({:), unbox it (>), and perform a summation over it +/, yielding the number of zeroes.
Here is the more readable version:
split =: <;._1@,
tostr =: ":
is =: =
last =: {:
unbox =: >
sum =: +/
precision =: x:
n =: 15
NB. the function itself
tf0 =: sum unbox last 0 split '0' is tostr ! precision n
tf0 =: sum @ unbox @ last @ (0 split '0'&is @ tostr @ ! @ precision)
tf0 =: +/ @ > @ {: @ (0 <;._1@, '0'&= @ ": @ ! )
• >:@i. can be written 1+i. to save a byte. May 18 '16 at 21:02
• Your older version can be made into [:#.~'0'=":@! for 13 bytes by changing the method of counting the trailing 1s.
– cole
Dec 28 '17 at 1:11
## Python 3, 52 bytes
g=lambda x,y=1,z=0:z-x if y>x else g(x,y*5,z+x//y)
## Pyke, 5 bytes
SBP5/
Try it here!
S - range(1,input()+1)
B - product(^)
P - prime_factors(^)
5/ - count(^, 5)
Can't compete with @xnor, but it was fun and the result is a different approach:
f n=sum$[1..n]>>= \i->1<$[5^i,2*5^i..n]
# RETURN, 17 bytes
[$[5÷\%$F+][]?]=F
Try it here.
Recursive operator lambda. Usage:
[$[5÷\%$F+][]?]=F666F
# Explanation
[ ]=F Lambda -> Operator F
$Check if top of stack is truthy [ ][]? Conditional 5÷\%$F+ If so, do x/5+F(x/5)
## Perl, 24 22 + 1 (-p flag) = 23 bytes
$\+=$_=$_/5|0while$_}{
Using:
> echo 2016 | perl -pe '$\+=$_=$_/5|0while$_}{'
Full program:
while (<>) {
# code above added by -p
while ($_) {$\ += $_ = int($_ / 5);
}
} {
# code below added by -p
print; # prints $_ (undef here) and$\
}
# Java, 38 bytes
int z(int n){return n>0?n/5+z(n/5):0;}
## Full program, with ungolfed method:
import java.util.Scanner;
public class Q79762{
int zero_ungolfed(int number){
if(number == 0){
return 0;
}
return number/5 + zero_ungolfed(number/5);
}
int z(int n){return n>0?n/5+z(n/5):0;}
public static void main(String args[]){
Scanner sc = new Scanner(System.in);
int n = sc.nextInt();
sc.close();
System.out.println(new Q79762().zero_ungolfed(n));
System.out.println(new Q79762().z(n));
}
} | 2021-12-06 09:19:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18130119144916534, "perplexity": 10567.395811361057}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363290.59/warc/CC-MAIN-20211206072825-20211206102825-00123.warc.gz"} |
https://www.usgs.gov/center-news/continued-rumblings-2006-kiholo-bay-earthquake | # Continued Rumblings of the 2006 Kiholo Bay Earthquake
Release Date:
This past weekend, Kohala's famed Mauna Kea Beach Hotel celebrated a "soft reopening" following repairs and renovations in the aftermath of the October 15, 2006 Kiholo Bay earthquake. A formal "grand reopening" is scheduled to follow this Spring.
Plume of brown water at the base of the pali between Kaaha and Halape, on Kīlauea's south flank, marks the location of rock slides triggered by the earthquake. Halape is visible in the background.
(Public domain.)
Residents of and visitors to the island of Hawaii are reminded of the earthquake in this and other ways. Some, like the replacement bridge on the Mamalahoa Highway (Route 19) in Paauilo are quite visible.
For the most part, the October 2006 earthquake experiences are memories. We were fortunate that our community was not forced to endure more widespread and devastating consequences from the earthquake.
At the same time, as residents of an earthquake-prone region, we know that future large earthquakes are expected. The principal means of mitigating the effects of large earthquakes include developing and adopting appropriate building codes and use of appropriate earthquake-resistant design and building practice, as well as establishing community and personal earthquake response plans.
While current scientific capabilities do not afford the means to precisely predict the time, location, and magnitude of future large earthquakes, we are able to forecast the effects of future large earthquakes as the probabilities of strong earthquake shaking. The U. S. Geological Survey (USGS) features this information online, with explanations, as probabilistic seismic hazards maps at http://earthquake.usgs.gov/research/hazmaps/.
As is the case for any large earthquake, the 2006 Kiholo Bay earthquake sequence (including main and after shocks) provided important observations and data that will fuel research toward a better understanding of earthquakes and their effects. For Kiholo Bay, such data were recorded by a set of instruments installed and maintained by the USGS National Strong Motion Project (NSMP). Beginning in the year 2000 and only now recently completed, the NSMP has upgraded all of its strong motion instrumentation, some of which recorded on film in 2006, to current operational (digital) standards.
The NSMP instruments are referred to as "strong motion accelerographs" that record the strongest shaking expected from earthquakes without exceeding the maximum working range of the instruments. There are two-dozen NSMP instruments on the island of Hawaii, and a few more on Oahu and in Maui County.
The maximum shaking from the October 15 M6.7 mainshock was not recorded by the NSMP instrument nearest the earthquake epicenter as expected. Instead, the strongest shaking was recorded at the Waimea Fire Station (more than 32 km or 20 miles away), and the overall pattern of strong motion data suggested significant variations in response due to soil and geological conditions beneath the individual instrument locations.
Data collected from the NSMP sites since 2006 have been compiled into a new map of strong motion site conditions for the Big Island that was presented earlier this month at the Fall 2008 Meeting of the American Geophysical Union (AGU). While an earlier version of the map showed much of the island to be classified as "rock" sites, the recent work suggests that much of Hawaii Island should be considered soft rock or very dense soil. Shaking at soft rock or very dense soil sites would be amplified over shaking at hard rock sites. The differences must be incorporated into updated seismic hazard maps of Hawaii to properly estimate future strong earthquake shaking.
Also presented at the AGU meeting was an HVO study of the rupture process of the Kiholo Bay M6.7 mainshock. This study also used NSMP recordings from the Kiholo Bay sequence, including the M5.0 aftershock that occurred on Thanksgiving Day, 2006.
The October 15, 2006 M6.7 Kiholo Bay earthquake occurred on a deep fault, approximately 39 km (24 miles) below sea level. The slippage that caused the earthquake started at its hypocenter and continued over an area of the fault roughly 30 km x 20 km (18 miles x 12 miles) in size, in a westward direction away from the island, at a speed of 3.5 km/s (2.2 miles/s or 7,900 miles/hr). Maximum slippage was more than 1 m (3.3 ft).
Such studies will contribute to a better understanding of large earthquakes and how their effects are distributed across Hawaii.
————————————————————————————————————————————————————————————————
### Volcano Activity Update
Kīlauea Volcano continues to be active. A vent in Halemaumau Crater is erupting elevated amounts of sulfur dioxide gas and very small amounts of ash. Resulting high concentrations of sulfur dioxide in downwind air have closed the south part of Kīlauea caldera and produced occasional air quality alerts in more distant areas, such as Pahala and communities adjacent to Hawaii Volcanoes National Park, during kona wind periods.
Puu Ōō continues to produce sulfur dioxide at even higher rates than the vent in Halemaumau Crater. Trade winds tend to pool these emissions along the West Hawaii coast, while Kona winds blow these emissions into communities to the north, such as Mountain View, Volcano, and Hilo.
Lava erupting from the Thanksgiving Eve Breakout (TEB) vent at the eastern base of Puu Ōō continues to flow to the ocean at Waikupanaha through a well-established lava tube. Breakouts from the lava tube were active in the abandoned Royal Gardens subdivision and on the coastal plain in the past week. Active portions of the flow on the coastal plain were within 100 yards of the National Park boundary, as they have been during the last several weeks. Ocean entry activity has fluctuated in the past week, due to a deflation-inflation cycle that began on Sunday, Dec. 21. These cycles normally cause changes in lava supply to the flow field that can last a few days.
Be aware that active lava deltas can collapse at any time, potentially generating large explosions. This may be especially true during times of rapidly changing lava supply conditions. The Waikupanaha delta has collapsed many times over the last several months, with three of the collapses resulting in rock blasts that tossed television-sized rocks up onto the sea-cliff and threw fist-sized rocks more than 200 yards inland.
Do not approach the ocean entry or venture onto the lava deltas. Even the intervening beaches are susceptible to large waves generated during delta collapse; avoid these beaches. In addition, steam plumes rising from ocean entries are highly acidic and laced with glass particles. Call Hawaii County Civil Defense at 961-8093 for viewing hours.
Mauna Loa is not erupting. Two earthquakes were located beneath the summit this past week. Continuing extension between locations spanning the summit indicates slow inflation of the volcano, combined with slow eastward slippage of its east flank.
No earthquakes beneath Hawaii Island were reported felt within the past week.
The staff of the Hawaiian Volcano Observatory wishes you a Happy Holiday Season. Visit our Web site for daily Kīlauea eruption updates, a summary of volcanic events over the past year, and nearly real-time Hawaii earthquake information. Kīlauea daily update summaries are also available by phone at (808) 967-8862. Questions can be emailed to [email protected]. skip past bottom navigational bar | 2020-01-25 01:37:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20035870373249054, "perplexity": 3727.0248187707284}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250628549.43/warc/CC-MAIN-20200125011232-20200125040232-00470.warc.gz"} |
http://mathhelpforum.com/calculus/214641-finding-derivative-exponentnial-functions-print.html | # Finding the derivative of exponential functions.
• March 12th 2013, 09:21 AM
Chaim
Finding the derivative of exponential functions.
Hi, I'm having a little trouble of finding the derivative of something like this:
y = e^(-2x + x^2)
So I'm a bit confused.
Like my teacher shows me: the derivative of 4e^(sin x) = 4e^(sin x) * cos x
There, I tried the same method, doing y' = e^(-2x + x^2) * (e^(-2x) + 2x)
Though... in the back of the book, it had 2(x-1)e^(-2x + x^2)
I searched this up, and found that there are rules like this?
But after that... I end up with 2 'e'
I'm not really sure right now.
Can someone explain?
Thanks
• March 12th 2013, 09:45 AM
majamin
Re: Finding the derivative of exponential functions.
Recall that the derivative of the natural exponential function is itself. Also, it's all about the chain rule:
$\frac{d}{dx}4e^{\sin x} = 4e^{\sin x} \cdot \frac{d}{dx} \sin x = 4e^{\sin x} \cdot \cos x$
$\frac{d}{dx}e^{-2x+x^2}= e^{-2x+x^2} \frac{d}{dx} (-2x+x^2) = e^{-2x+x^2} (-2 + 2x) = 2(x-1)e^{-2x+x^2}$
• March 12th 2013, 09:58 AM
MichaelLitzky
Re: Finding the derivative of exponential functions.
Use the chain rule. The outer function is e^(that messy exponent). The inner function is the messy exponent, the -2x+(x^2). So you want derivative of the outer (with the messy exponent still in place) times derivative of the inner. Derivative of the outer: piece of cake, e is its own derivative, right? That's one reason mathematicians are so enamored of e. Derivative of the inner: I'm betting you're good at using the power rule by this point, right? You should wind up with 2(x-1) after factoring. If you look at the answer from your book, you'll see they've put deriv of the inner first, but there it is: deriv of the messy exponent times e^(that stuff).
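For anyone who wants to double-check the book's answer mechanically, here is a minimal sketch using the sympy library (assuming it is installed; variable names are arbitrary):

import sympy as sp

x = sp.symbols('x')
y = sp.exp(-2*x + x**2)

dy = sp.diff(y, x)
print(dy)             # (2*x - 2)*exp(x**2 - 2*x)
print(sp.factor(dy))  # 2*(x - 1)*exp(x**2 - 2*x)

Both printed forms are the same expression as the book's 2(x-1)e^(-2x + x^2).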
• March 12th 2013, 12:29 PM
Chaim
Re: Finding the derivative of exponential functions.
Oh I see thanks!
So basically it's just the regular function then multiply it by the derivative of the exponent, right?
• March 12th 2013, 12:34 PM
MichaelLitzky
Re: Finding the derivative of exponential functions.
Yeah, you got it. Of course, if the outer function hadn't been e (which is its own derivative), then you would have needed whatever the derivative of the outer function turned out to be, multiplied by the derivative of the inner. Good luck to you!
• March 12th 2013, 01:53 PM
Chaim
Re: Finding the derivative of exponential functions.
Quote:
Originally Posted by MichaelLitzky
Yeah, you got it. Of course, if the outer function hadn't been e (which is its own derivative), then you would have needed whatever the derivative of the outer function turned out to be, multiplied by the derivative of the inner. Good luck to you!
Oh now I get it, ok thanks! :)
https://statisfaction.wordpress.com/2010/12/01/ihp-seminar-november/ | # Statisfaction
## IHP seminar – November
Posted in Seminar/Conference by Julyan Arbel on 1 December 2010
Hi,
for those who missed this month's IHP seminar, organized by Ghislaine Gayraud and Karine Tribouley, below is a very quick summary/introduction of the talks. At closing time, it is not uncommon in that area to pass Fields medalists, as happened when we left.
Anne Philippe spoke about long memory time series. For a weakly (second-order) stationary time series $X_t$, define its autocovariance $\Gamma(h)=Cov(X_0,X_h)$. Then $X_t$ has long memory if $\sum_h\Gamma(h)=\infty$. The long memory parameter is $d$ such that $\Gamma(h)$ decreases as $h^{2d-1}$. The core of the talk was about tests on long memory parameters for different time series (e.g., tests of equality), involving fractional Brownian motion…
Then, Judith Rousseau presented results on the Bernstein – von Mises property. It states that the posterior distribution of the parameter is asymptotically Gaussian. A nice consequence of BvM property is that Bayesian credible regions are also frequentist confidence regions, with confidence levels that coincide asymptotically.
Last, Erwan Le Pennec gave a technically involved talk, on a nice subject: the question raised by a group of researchers is to know whether or not Stradivarius violins are varnished differently from other violins. The statisticians' contribution is to cluster the images of slices of violins, in order to help with their physical analysis. What is found is that the main difference in varnishes is the presence of a red pigment in Stradivarius violins, which does not really translate into a factor for exceptional sound and quality.
https://www.physics.uoguelph.ca/introductory-physics-life-sciences-phys1070-sample-exam-2 | # Introductory Physics for the Life Sciences, PHYS*1070 - Sample Exam 2
Note: Not all questions may be applicable in the current semester.
1. At fairly high velocities, the drag force (F) acting on a sphere of radius R moving through a fluid of density r at a velocity (V) is given by:
$F = (1/4)\pi R^2\rho v^2$
Which graph below correctly illustrates the relationship between F and v? A, B, C, D or E?
D
(Material from Appendix I in the Text)
Since $F$ is proportional to $v^2$, a graph of $F$ vs $v^2$ gives a straight line through 0,0.
Therefore D is correct.
2. At very low velocities, the drag force acting on a sphere of radius (R) moving through a fluid with a velocity (V) is given by:
$F = 6\pi \eta Rv$
where $\eta$ is a property of the fluid called its "viscosity''. According to this equation, the dimensions of viscosity ( $\eta$ ) are:
(A) $M L^{-1} T^{-1}$
(B) $M L^2 T^{-2}$
(C) $M^{-1} L^2 T^{-1}$
(D) $M^2 L^{-1} T^2$
(E) $M^2 L^{-1} T^{-2}$
A
(Material from Appendix I in the Text)
Since $\eta = F/(6\pi Rv) = \mathrm {(mass)(acceleration)}/(6\pi Rv)$
Dimensions of $\eta = (M L/T^2)/(L L/T) = M L^{-1} T^{-1}$
3. Mary and Jane decided to try making some wine. Each started with yeast culture containing an identical number of cells. The yeast thrived in Mary's wine and increased exponentially with a growth constant of 1.0 day-1. Unfortunately, Jane kept her wine at too low a temperature and her yeast population decayed exponentially with a decay constant of 2.0 day-1. How long will it be before Mary's wine contains four times as many yeast cells as Jane's?
(A) $\ln 3.0$ days
(B) $\ln 3.0$ days
(C) $(\ln 3)/4$ days
(D) $4 \ln 3.0$ days
(E) $(\ln 4)/3$ days
E
(Material from Appendix II in the Text)
For Mary: $N_m = N_0e^{+1.0t}$
For Jane: $N_j = N_0e^{-2.0t}$
Find $t$ for which $N_m = 4N_j$
or, $N_0e^{+1.0t} = 4 N_0e^{-2.0t}$
$e^{+1.0t}/e^{-2.0t} = 4$
take natural logs
$3.0t = \ln 4$
$t = (\ln 4)/3.0 \; \mathrm {days}$
4. A travelling sound wave has a frequency of 1000 Hz and a speed in air of 340 m s-1. Which equation below could describe this wave?
(A) $y = y_o \cos (1000\pi t)\sin(340\pi x)$
(B) $y = -2y_o \cos(2000\pi t)\sin(5.88\pi x)$
(C) $y = y_o\sin(2000\pi t + 5.88\pi x)$
(D) $y = y_o\sin(1000\pi t - 0.340\pi x)$
(E) $y = -y_o\sin(1000\pi t + 340\pi x)$
C
Answers A and B represent standing waves.
$\omega = 2\pi f = 2\pi \; 1000 = 2000\pi \; \mathrm {rad/s}$
Therefore the answer must be C
Check: $v = \lambda f, \lambda = v/f = 340/1000 = 0.34 \mathrm{m}$
$k = 2\pi /\lambda = 2\pi /0.34 = 5.88\pi \; \mathrm {rad/m}$
5. A wave on a string is described by the equation:
$y = 0.03\cos(150t)\sin(15x) \mathrm {(All\;quantities \; are \; S.I.)}$
Which one of the following statements is NOT correct?
(A) This represents a standing wave with a node at the origin (i.e., where $x=0$).
(B) This wave could be produced by two travelling waves each of amplitude $0.015 \mathrm {\; m}$.
(C) Each particle oscillates with a period of $0.042 \mathrm {\; s}$.
(D) The speed of travelling waves on this string is $5.00\mathrm {\; m/s}$.
(E) The wavelength of this wave is $0.42 \mathrm {\; m}$.
D
A is correct-a standing wave; $y = 0$ at $x = 0$ for all values of t (a node)
B is correct since $2\times 0.015 \mathrm {\;m} = 0.03 \mathrm {\;m}$
C is correct: $\omega = 2\pi /T = 150$. Therefore $T = 2\pi /150 = 0.042 \mathrm {\;s}$
D is not correct: $k = 2\pi /\lambda ,\; \lambda = 2\pi /15 = 0.42 \mathrm {\;m}. v = f\lambda = (0.42)(1/.042) = 10 \mathrm {\;m/s}$
E is correct: See value of $\lambda$ in previous line.
6. The diagram shows a tube which is closed at one end and open at the other. If the tube is in air, which of the following sound frequencies will induce a standing wave or resonance in the tube? (See Equation Sheet for required data.) (Answers given to 2 significant figures.)
(A) 60 Hz
(B) 140 Hz
(C) 280 Hz
(D) 12 Hz
(E) 340 Hz
B
For a tube closed at one end, the fundamental frequency is $f_1 = v/4\mathrm {L} = (340\; \mathrm {m/s})/(4\times 0.60\mathrm {\;m}) = 142 \mathrm {\;Hz}.$
Can have only odd harmonics: e.g. $3\times142 = 425 \mathrm {\;Hz}, 5\times142 = 710 \mathrm {\; Hz}$ etc
7. When John is shouting, the intensity level 1.0 m from his mouth is 80.0 dB. What is the intensity level due to this sound, 20.0 m from John? (Assume no loss of acoustic power in the air; ($I_o = 1.0 \times 10^{-12} W m^{-2}.$) (Answers given to 2 significant figures.)
(A) 80 dB
(B) 63 dB
(C) 54 dB
(D) 4.0 dB
(E) 0.20 dB
C
At $1.0 \mathrm {\; m}: \mathrm L = 80.0 = 10 \; \log \mathrm {(I_1/I_0)}$
$\mathrm {I_1/I_0} = 10^8$
$\mathrm {I_1} = \mathrm {10^8I_0} (\mathrm {when \; I_0 = 10^{-12} W/m^2})$
Inverse square law: $\mathrm {I = P/4\pi r^2}$
$\mathrm {I_1/I_2 = (r_2/r_1)^2}$
$\mathrm {I_2 = (I0\times10^8)(1.0/20.0)^2 = I0\times 10^8\times 20^{-2}}$
$\mathrm {IL_2 = 10 \log(I_2/I_0) = 10 \log [(I0\times 10^8\times 20^{-2})/I_0] = 80 - 26 = 54 \;dB}$
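Equivalently, because the intensity of a point source falls off as the inverse square of distance, the change in intensity level can be written in one step:
$\mathrm {IL_2 = IL_1 - 20\log_{10}(r_2/r_1) = 80.0 - 20\log_{10}(20.0) \approx 80 - 26 = 54 \;dB}$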
8. Referring to John's shouting in Question 7, the acoustical power being produced by John is:
(A) $\mathrm {4\pi \times 10^{-4} W}$
(B) $\mathrm{4\pi \; W}$
(C) $\mathrm {1.0 \times 10^{-4} W}$
(D) $\mathrm {320\pi \; W}$
(E) $\mathrm {\pi \;W}$
A
From question 7 $\mathrm {I \;at \;1 \;m = I0\times10^8 = 1\times 10^{-12}\times 10^8 = 1\times 10^{-4} W/m^2}$
Also $\mathrm {I = P/4\pi \; r^2}$
$\mathrm {P = I(4\pi \; r^2) = (1\times 10^{-4} W/m^2)(4\pi )(1.0 m)^2 = 4\pi \times 10^{-4} W}$
9. Which statement concerning the human ear is NOT correct?
(A) On average, the human ear is most sensitive to sounds of frequency about 3500 Hz.
(B) The tympanic membrane, ossicles and oval window form a piston-lever system which amplifies the pressure fluctuations in the sound wave,
(C) In adults, the air in the auditory canal has a natural resonance frequency of about 500 Hz.
(D) The basilar membrane contains the hair cells which change mechanical oscillations to electrical nerve signals.
(E) The region of maximum amplitude oscillation of the basilar membrane depends on the sound frequency; this may be one way we recognize sounds of different pitch.
C
The resonance of the auditory canal is at about 3500 Hz not 500 Hz.
10. The diagram illustrates a camera with a 50.0 mm focal length lens, taking a picture of a person. The image on the film is 34 mm high and the film plane is 51.0 mm behind the lens. Which row below correctly gives the lens-person distance and the person's height?
| Answer | Lens-Person Distance (m) | Height of Person (m) |
|--------|--------------------------|----------------------|
| A      | 3.5                      | 1.7                  |
| B      | 2.6                      | 1.7                  |
| C      | 3.5                      | 1.4                  |
| D      | 2.6                      | 1.4                  |
| E      | 3.5                      | 2.0                  |
B
$\mathrm {1/p + 1/q = 1/f}$
$\mathrm {1/p + 1/0.051 = 1/0.050}$
$\mathrm {p = 2.55\; m = 2.6 \; m}$
$\mathrm {m = y'/y = -q/p}$
$\mathrm {-0.034\; m/y = -(+0.051 \; m)/+2.55\; m}$
$\mathrm {y = 1.7\; m}$
11. John is myopic and he wears corrective eyeglasses with a power of -2.25 diopters for his left eye. The near point of his corrected vision for his left eye is 25 cm. Where are the near and far points of John's uncorrected left eye?
| Answer | Near Point (m) | Far Point (m) |
|--------|----------------|---------------|
| A      | 0.25           | infinity      |
| B      | infinity       | 0.25          |
| C      | 0.16           | 0.25          |
| D      | 0.16           | 0.44          |
| E      | 0.25           | 0.44          |
D
Far point: For an object at infinity, the corrective lens must produce a virtual image at the uncorrected far point.
$\mathrm {i.e.,\;1/p + 1/q = P}$
$\mathrm {1/inf + 1/q = -2.25 \;diopters}$
$\mathrm {q = -0.44 \;m}$
Therefore the uncorrected far point is at $\mathrm {0.44 \;m}$
Near Point: For an object at $\mathrm {0.25 \;m}$, the lens produces a virtual image at the uncorrected near point.
$\mathrm {i.e.,\; 1/p + 1/q = P}$
$\mathrm {1/0.25 + 1/q = -2.25 \;diopters}$
$\mathrm {q = -0.16\; m}$
Therefore the uncorrected near point is at $\mathrm {0.16\; m}$
12. The 2 bright wing lights of a Boeing 747 are 30 m apart. The pilot of another plane suddenly sees the two lights, which previously appeared as one, resolve into two separate lights. Assuming the pilot has a pupil diameter of 2.0 mm and the wavelength of the light is 550 nm, the distance from the pilot to the Boeing 747 is: (assume visual resolution is limited only by diffraction in the eye.)
(A) 100 m
(B) 1.2 km
(C) 20 km
(D) 90 km
(E) 250 km
D
$\mathrm {\alpha = (1.22\lambda )/na = 30/D,\; for \;a \;small \; angle.}$
$\mathrm {\alpha = (1.22\times 550\times10^{-9} m)/(1.00\times2.0\times10^{-3} m) = 30/D}$
$\mathrm {D =89\times10^3 m = 90\; km}$
13. Which line below correctly describes the electric field at point x, midway between the two ions. The ions are in water with a dielectric constant of 80.
(A) $\mathrm {13.5 \times 10^8 \;N/C\; toward \;the \; Cl^- \;ion}$
(B) $\mathrm {13.5 \times 10^8 \; N/C \; toward \; the\; Ca^{++} \; ion}$
(C) $\mathrm {4.5 \times 10^8 \; N/C \; toward \; the \; Cl^- \; ion}$
(D) $\mathrm {4.5 \times 10^8 \; N/C \; toward\; the \; Ca^{++} \;ion}$
(E) $\mathrm {9.0 \times 10^8 \;N/C \;toward\; the\; Cl^- \;ion}$
A
The field $E_1$ at $x$ due to the $Ca^{++}$ ion acts toward the $Cl^-$ ion (or away from the $+ve \;Ca^{++}$)
$\mathrm {E_1 = (kQ)/(\kappa \;r^2) = (9\times10^{9})(2\times1.6\times10^{-19})/(80\times(2\times10^{-10})^2) = 9.00\times10^8 \;N/C}$
The field $E_2$ at $x$ due to the $Cl^-$ ion acts toward it (or in the same direction as $E_1$)
$\mathrm {E_2 = (kQ)/(\kappa r^2) = (9\times10^{9})(1.6\times10^{-19})/(80\times(2\times10^{-10})^2) = 4.50\times 10^8 \; N/C}$
The total field is $\mathrm {13.5\times10^8\; N/C}$ toward the $Cl^-$ ion.
14. In the electrical circuit below, what must be the value of gx (in S) in order that I = 20 A?
(A) 0.17
(B) 2.1
(C) 3.3
(D) 4.0
(E) 5.9
D
The total equivalent conductance $\mathrm {G_T}$ is given by
$\mathrm {I = G_TV}$
$\mathrm {G_T = I/V = 20A/10V = 2 \;S}$
The circuit simplifies as shown in the figure:
$\mathrm {1/G_T = 1/G_x + 1/G_{12}}$
$\mathrm {1/2=1/G_x + 1/4}$
$\mathrm {G_x = 4 \;S}$
15. A light source is emitting 4.2 watts of light energy at a wavelength of 550 nm. How many photons are emitted by this source in one hour?
(A) $5.5 \times 10^{-3}$
(B) $4.2 \times 10^4$
(C) $5.5 \times 10^{11}$
(D) $1.7 \times 10^{14}$
(E) $4.2 \times 10^{22}$
E
$\mathrm{Energy \;of \;1\; photon= \\ E_{ph} = hc/\lambda = (6.63\times 10^{-34} Js)(3.00\times 10^8 m/s)/550\times 10^{-9} m = 3.62\times 10^{-19} J}$
$\mathrm {Power = 4.2 \;W = E/t = (N E_{ph})/t = N \times 3.62\times10^{-19} J/3600 s}$
$\mathrm {N = 4.2\times10^{22} \;photons}$
16. The graph of the probability density of a p -electron as a function of x in a linear, conjugated carbon chain is shown below:
Which one of the lines below is correct?
Answer value of a Wavelength of electron Probability that electron is between $x = \ell/2$ and $x = 3\ell/4$ Graph of wave function
A $1/\ell$ $\ell/4$ 4
B $2/\ell$ $\ell$ 3/4
C $3/\ell$ $\ell/2$ 1/4
D $1/\ell$ $\ell$ 1/4
E $1/\ell$ $\ell/4$ 4
C
$\mathrm {P_x = \phi ^2 = (2/\ell)\sin^2(4\pi \; x /\ell), for\; an\; n = 4 \;electron}$
$\mathrm {a = peak\; value \;of\; P_x = 2/\ell \\ (i.e., the\; value \;of\; P_x \;at \;those\; x \;for\;which\; \sin^2() = 1)}$
$\mathrm {The \;wavelength = the\; length \;of \;2 \;humps = \ell /2\; for \;an \;n = 4 \;electron.}$
Probability = area relative to the whole area under the curve which is 1
The area over the interval from $\mathrm {x = \ell /2}$ to $\mathrm {x = (3/4)\ell}$ (an interval of length $\ell/4$) is 1/4 of the total area.
Wave function or $\mathrm {\phi = P_x\;^{1/2}}$ is given in the figure.
17. The speed of a $\mathrm {\beta-particle}$ just after $\mathrm {\beta -decay}$ is determined to be one half the speed of light. The corresponding wavelength of this $\mathrm {\beta -particle}$ would be:
(A) $\mathrm {1.7 \times 10^{-8} m}$
(B) $\mathrm {4.9 \times 10^{-12} m}$
(C) $\mathrm {7.3 \times 10^{-7} m}$
(D) $\mathrm {9.4 \times 10^{-14} m}$
(E) $\mathrm {6.2 \times 10^{-5} m}$
B
For particles, $\mathrm {\lambda = h/mv = 6.63\times10^{-34} Js/(9.1\times10^{-31} kg)(1/2\times3.00\times 10^8 m/s)}$
(a $\beta$ particle is an electron)
$\mathrm {\lambda = 4.9\times10^{-12} m}$
18. What is the wavelength of the photon emitted during the transition shown in the diagram? Assume a linear conjugated p -system with carbon-carbon bond lengths of 0.15 nm.
(A) 670 nm
(B) 570 nm
(C) 470 nm
(D) 370 nm
(E) 270 nm
E
$\mathrm {E_{ph} = hc/\lambda = (4^2 -3^2)h^2/8ml^2}$
$\mathrm {\lambda = 8ml^2c/(4^2 - 3^2)h}$
There are 6 carbon atoms so there are 5 bonds making up l
$\mathrm {\lambda = \\ [8(9.1\times10^{-31})(5\times0.15\times10^{-9})^2(3.00\times10^8)]/[7(6.63\times10^{-34})] \\ = 2.65\times10^{-7} m \\ = 265 \;nm}$
19. If $\mathrm {E_r}$, $\mathrm {E_v}$ and $\mathrm {E_e}$ are, respectively, the energies required to bring about rotational, vibrational and electronic transitions in molecules, then:
(A) $\mathrm {\lambda _e > \lambda _ v \; and\; \lambda _ v =\lambda _r}$
(B) $\mathrm {\lambda _ v < \lambda _e < \lambda _ r}$
(C) $\mathrm {\lambda _e < \lambda _ v < \lambda_r}$
(D) $\mathrm {\lambda _v = \lambda_e \;and \; \lambda_ r <\lambda _v}$
(E) $\mathrm {\lambda_ r < \lambda_v < \lambda_ e}$
C
Since $\mathrm {E_e > E_v > E_r and E = hc/\lambda}$
$\mathrm {\lambda_ e < \lambda_ v <\lambda_ r}$
(See the Text p 48 - 53, Sections 4-4 to 4-5)
20. Which of the following radiations are listed in increasing order of relative biological effectiveness or quality factor? (i.e., increase from left to right)
(A) $\mathrm {\gamma -rays, \alpha -rays, \beta -rays}$
(B) $\mathrm {\gamma -rays, \beta -rays, \alpha -rays}$
(C) $\mathrm {\beta -rays, \gamma -rays, \alpha -rays}$
(D) $\mathrm {\alpha -rays, \beta -rays,\gamma -rays}$
(E) $\mathrm {\alpha -rays, \gamma -rays, \beta -rays}$
B
See discussion in the Text pp 77, 78.
21. $\mathrm {Thorium\;^{232}\;_{90}Th}$ undergoes $\mathrm {\alpha -decay}$ to produce a daughter nucleus which in turn undergoes $\mathrm {\beta \; decay}$ yielding $\mathrm {^{A}\;_{Z}Ac}$. Which of the following properly gives A and Z?
| Answer | A   | Z  |
|--------|-----|----|
| A      | 226 | 90 |
| B      | 227 | 88 |
| C      | 228 | 89 |
| D      | 228 | 87 |
| E      | 227 | 86 |
HINT an $\mathrm {\alpha -particle \; is\; ^4\;_2\alpha \; and \; a \; \beta -particle\; is \;^0\;_{-1}\beta}$
C

The $\mathrm {\alpha -decay}$ causes A to decrease by 4 and Z by 2.
($\mathrm {\beta\; decay}$ leaves A unchanged and Z increases by 1.)
https://pythonarray.com/pyside-pyqt-tutorial-the-qlistwidget/ | Qt has a couple of widgets that allow single-column list selector controls — for brevity and convenience, we’ll call them list boxes. The most flexible way is to use a QListView, which provides a UI view on a highly-flexible list model which must be defined by the programmer; a simpler way is to use a QListWidget, which has a pre-defined item-based model that allows it to handle common use-cases for a list box. We’ll start with the simpler QListWidget.
### The QListWidget
The constructor of a QListWidget is like that of many QWidget-descended objects, and takes only an optional parent argument:
self.list = QListWidget(self)
### Filling a QListWidget
Filling a QListWidget with items is easy. If your items are plain text, you can add them singly:
for i in range(10):
self.list.addItem('Item %s' % (i + 1))
Or in bulk:
items = ['Item %s' % (i + 1)
         for i in range(10)]
self.list.addItems(items)
You can also add slightly more complicated list items using the QListWidgetItem class. A QListWidgetItem can be created in isolation and added to a list later using the list’s addItem method:
item = QListWidgetItem()
list.addItem(item)
### More complex QListWidget items
Or it can be created with the list as a parent, in which case it is automatically added to the list:
item = QListWidgetItem(list)
An item can have text set via its setText method:
item.setText('I am an item')
And an icon set to an instance of QIcon using its setIcon method:
item.setIcon(some_QIcon)
You can also specify the text or an icon and text in the QListWidgetItem‘s constructor:
item = QListWidgetItem('A Text-Only Item')
item = QListWidgetItem(some_QIcon, 'An item with text and an icon')
Each of the above constructor signatures may optionally accept a parent as well.
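As a small illustration, the icon, text, and parent can all be supplied in one call (some_QIcon and list are the objects used in the snippets above):

item = QListWidgetItem(some_QIcon, 'An item with icon, text and a parent', list)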
# Using a QListWidget
The QListWidget offers several convenient signals that you can use to respond to user input. The most important is the currentItemChanged signal, which is emitted when the user changes the selected item; its slots receive two arguments, current and previous, which are the currently and previously selected QListWidgetItems. There are also signals for when a user clicks, double-clicks, activates, or presses an item, and when the set of selected items is changed.
To get the currently selected item, you can either use the arguments passed by the currentItemChanged signal or you can use the QListWidget’s currentItem method.
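As a minimal sketch, assuming self.list is the QListWidget and on_current_changed is a slot you have defined elsewhere:

self.list.currentItemChanged.connect(on_current_changed)

# Or query the selection directly at any time:
item = self.list.currentItem()
if item is not None:
    print(item.text())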
### A Note On QIcons
One of the few ways you can customize a QListWidgetItem is by adding an icon, so it is important that you gain some understanding of QIcons. There are many ways of constructing a QIcon; you can create them by:
• Providing a filename: icon = QIcon('/some/path/to/icon.png').
• Using a theme icon: icon = QIcon.fromTheme('document-open').
• From a QPixmap: icon = QIcon(some_pixmap).
And many others. A couple comments on the different methods: first, note that the file-based creation supports a wide but not unlimited set of file types; you can find out which are supported by your version and platform by running QImageReader().supportedImageFormats(). On my system, it returns:
[PySide.QtCore.QByteArray('bmp'),
PySide.QtCore.QByteArray('gif'),
PySide.QtCore.QByteArray('ico'),
PySide.QtCore.QByteArray('jpeg'),
PySide.QtCore.QByteArray('jpg'),
PySide.QtCore.QByteArray('mng'),
PySide.QtCore.QByteArray('pbm'),
PySide.QtCore.QByteArray('pgm'),
PySide.QtCore.QByteArray('png'),
PySide.QtCore.QByteArray('ppm'),
PySide.QtCore.QByteArray('svg'),
PySide.QtCore.QByteArray('svgz'),
PySide.QtCore.QByteArray('tga'),
PySide.QtCore.QByteArray('tif'),
PySide.QtCore.QByteArray('tiff'),
PySide.QtCore.QByteArray('xbm'),
PySide.QtCore.QByteArray('xpm')]
As I said, a pretty wide selection. Theme-based icon creation is problematic outside of well-established platforms; on Windows and OS X you should be fine, as well as if you’re on Linux using Gnome or KDE, but if you use a less common desktop environment, such as OpenBox or XFCE, Qt might not be able to find your icons; there are ways around that, but no good ones, so you may be stuck with text only.
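One partial workaround, if you can ship a fallback image with your application, is to pass a fallback icon as the second argument to fromTheme (the file path below is only a placeholder):

icon = QIcon.fromTheme('document-open',
                       QIcon('/path/to/bundled/document-open.png'))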
### A QListWidget Example
Let’s create a simple list widget that displays the file-name and a thumbnail icon for all the images in a directory. Since the items are simple enough to create as a QListWidgetItem, we’ll have it inherit from QListWidget.
First off, we’ll need to know what image formats are supported by your installation, so our list control can tell what’s a valid image. We can use the method mentioned above, QImageReader().supportedImageFormats(). We’ll convert them all to strings before we return them:
def supported_image_extensions():
    ''' Get the image file extensions that can be read. '''
    formats = QImageReader().supportedImageFormats()
    # Convert the QByteArrays to strings
    return [str(fmt) for fmt in formats]
Now that we have that, we can build our image-list widget; we’ll call it – intuitively enough – ImageFileWidget. It will inherit from QListWidget, and in addition to an optional parent argument, like all QWidgets, it will take a required dirpath:
class ImageFileList(QListWidget):
''' A specialized QListWidget that displays the list
of all image files in a given directory. '''
def __init__(self, dirpath, parent=None):
QListWidget.__init__(self, parent)
We’ll want it to have a way to determine what images are in a given directory. We’ll give it an _images method that will return the file-names of all valid images in the specified directory. It’ll employ the glob module’s glob function, which does shell-style pattern-matching of file and directory paths:
def _images(self):
''' Return a list of file-names of all
supported images in self._dirpath. '''
images = []
# Find the matching files for each valid
# extension and add them to the images list.
for extension in supported_image_extensions():
pattern = os.path.join(self._dirpath,
'*.%s' % extension)
images.extend(glob(pattern))
return images
Now that we have a way of figuring out what image files are in the directory, it’s a simple matter to add them to our QListWidget. For each file-name, we create a QListWidgetItem with the list as its parent, set its text to the file-name, and set its icon to a QIcon created from the file:
def _populate(self):
''' Fill the list with images from the
current directory in self._dirpath. '''
# In case we're repopulating, clear the list
self.clear()
# Create a list item for each image file,
# setting the text and icon appropriately
for image in self._images():
item = QListWidgetItem(self)
item.setText(image)
item.setIcon(QIcon(image))
Finally, we’ll add a method to set the directory path that repopulates the list every time it is called:
def setDirpath(self, dirpath):
''' Set the current image directory and refresh the list. '''
self._dirpath = dirpath
self._populate()
And we’ll add a line to the constructor to call the setDirpath method:
self.setDirpath(dirpath)
This, then, is our final code for our ImageFileList class:
class ImageFileList(QListWidget):
''' A specialized QListWidget that displays the
list of all image files in a given directory. '''
def __init__(self, dirpath, parent=None):
QListWidget.__init__(self, parent)
self.setDirpath(dirpath)
def setDirpath(self, dirpath):
''' Set the current image directory and refresh the list. '''
self._dirpath = dirpath
self._populate()
def _images(self):
''' Return a list of filenames of all
supported images in self._dirpath. '''
images = []
# Find the matching files for each valid
# extension and add them to the images list
for extension in supported_image_extensions():
pattern = os.path.join(self._dirpath,
'*.%s' % extension)
images.extend(glob(pattern))
return images
def _populate(self):
''' Fill the list with images from the
current directory in self._dirpath. '''
# In case we're repopulating, clear the list
self.clear()
# Create a list item for each image file,
# setting the text and icon appropriately
for image in self._images():
item = QListWidgetItem(self)
item.setText(image)
item.setIcon(QIcon(image))
So let’s put our ImageFileList in a simple window so we can see it in action. We’ll create a QWidget to serve as our window, stick a QVBoxLayout in it, and add the ImageFileList, along with an entry widget that will display the currently selected item. We’ll use the ImageFileList‘s currentItemChanged signal to keep them synchronized.
We’ll create a QApplication object, passing it an empty list so we can use sys.argv[1] to pass in the image directory:
app = QApplication([])
Then, we’ll create our window, setting a minimum size and adding a layout:
win = QWidget()
win.setWindowTitle('Image List')
win.setMinimumSize(600, 400)
layout = QVBoxLayout()
win.setLayout(layout)
Then, we’ll instantiate an ImageFileList, passing in the received image directory path and our window as its parent:
lst = ImageFileList(sys.argv[1], win)
entry = QLineEdit(win)
And add both widgets to our layout:
layout.addWidget(lst)
layout.addWidget(entry)
Then, we need to create a slot function to be called when the current item is changed; it has to take arguments, curr and prev, the currently and previously selected items, and should set the entry’s text to the text of the current item:
def on_item_changed(curr, prev):
entry.setText(curr.text())
Then, we’ll hook it up to the signal:
lst.currentItemChanged.connect(on_item_changed)
All that’s left is to show the window and run the app:
win.show()
app.exec_()
Our final section, wrapped in the standard if __name__ == '__main__' block, then, is:
if __name__ == '__main__':
# The app doesn't receive sys.argv, because we're using
# sys.argv[1] to receive the image directory
app = QApplication([])
# Create a window, set its size, and give it a layout
win = QWidget()
win.setWindowTitle('Image List')
win.setMinimumSize(600, 400)
layout = QVBoxLayout()
win.setLayout(layout)
# Create one of our ImageFileList objects using the image
# directory passed in from the command line
lst = ImageFileList(sys.argv[1], win)
entry = QLineEdit(win)
def on_item_changed(curr, prev):
entry.setText(curr.text())
lst.currentItemChanged.connect(on_item_changed)
win.show()
app.exec_()
Running our whole example requires that you have a directory full of images; I used one in my Linux distribution’s /usr/share/icons directory as an example:
python imagelist.py /usr/share/icons/nuoveXT2/48x48/devices
But you will have to find your own. Almost any images will do.
The QListWidget is obviously a very simple widget, and doesn't offer many options; there are a lot of use cases for which it will not suffice. For those cases, you will probably use a QListView, which we will discuss in the next installment.
https://pypi.org/project/capturer/ | Easily capture stdout/stderr of the current process and subprocesses
## Project description
The capturer package makes it easy to capture the stdout and stderr streams of the current process and subprocesses. Output can be relayed to the terminal in real time but is also available to the Python program for additional processing. It’s currently tested on cPython 2.7, 3.5+ and PyPy (2.7). It’s tested on Linux and Mac OS X and may work on other unixes but definitely won’t work on Windows (due to the use of the platform dependent pty module). For usage instructions please refer to the documentation.
## Status
The capturer package was developed as a proof of concept over the course of a weekend, because I was curious to see if it could be done (reliably). After a weekend of extensive testing it seems to work fairly well so I’m publishing the initial release as version 1.0, however I still consider this a proof of concept because I don’t have extensive “production” experience using it yet. Here’s hoping it works as well in practice as it did during my testing :-).
## Installation
The capturer package is available on PyPI which means installation should be as simple as:
$ pip install capturer
There’s actually a multitude of ways to install Python packages (e.g. the per user site-packages directory, virtual environments or just installing system wide) and I have no intention of getting into that discussion here, so if this intimidates you then read up on your options before returning to these instructions ;-).
## Getting started
The easiest way to capture output is to use a context manager:
import subprocess
from capturer import CaptureOutput
with CaptureOutput() as capturer:
# Generate some output from Python.
print "Output from Python"
# Generate output from a subprocess.
subprocess.call(["echo", "Output from a subprocess"])
# Get the output in each of the supported formats.
assert capturer.get_bytes() == b'Output from Python\r\nOutput from a subprocess\r\n'
assert capturer.get_lines() == [u'Output from Python', u'Output from a subprocess']
assert capturer.get_text() == u'Output from Python\nOutput from a subprocess'
The use of a context manager (the with statement) ensures that output capturing is enabled and disabled at the appropriate time, regardless of whether exceptions interrupt the normal flow of processing.
Note that the first call to get_bytes(), get_lines() or get_text() will stop the capturing of output by default. This is intended as a sane default to prevent partial reads (which can be confusing as hell when you don’t have experience with them). So we could have simply used print to show the results without causing a recursive “captured output is printed and then captured again” loop. There’s an optional partial=True keyword argument that can be used to disable this behavior (please refer to the documentation for details).
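A rough sketch of a partial read, using the same CaptureOutput class as above (variable names are arbitrary):

from capturer import CaptureOutput

with CaptureOutput() as capturer:
    print("Before the partial read")
    # partial=True returns what has been captured so far
    # without disabling further capturing.
    so_far = capturer.get_lines(partial=True)
    print("After the partial read")
    everything = capturer.get_lines()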
## Design choices
There are existing solutions out there to capture the stdout and stderr streams of (Python) processes. The capturer package was created for a very specific use case that wasn’t catered for by existing solutions (that I could find). This section documents the design choices that guided the development of the capturer package:
### Intercepts writes to low level file descriptors
Libraries like capture and iocapture change Python’s sys.stdout and sys.stderr file objects to fake file objects (using StringIO). This enables capturing of (most) output written to the stdout and stderr streams from the same Python process, however any output from subprocesses is unaffected by the redirection and not captured.
The capturer package instead intercepts writes to low level file descriptors (similar to and inspired by how pytest does it). This enables capturing of output written to the standard output and error streams from the same Python process as well as any subprocesses.
### Uses a pseudo terminal to emulate a real terminal
The capturer package uses a pseudo terminal created using pty.openpty() to capture output. This means subprocesses will use ANSI escape sequences because they think they’re connected to a terminal. In the current implementation you can’t opt out of this, but feel free to submit a feature request to change this :-). This does have some drawbacks:
• The use of pty.openpty() means you need to be running in a UNIX like environment for capturer to work (Windows definitely isn’t supported).
• All output captured is relayed on the stderr stream by default, so capturing changes the semantics of your programs. How much this matters obviously depends on your use case. For the use cases that triggered me to create capturer it doesn’t matter, which explains why this is the default mode.
There is experimental support for capturing stdout and stderr separately and relaying captured output to the appropriate original stream. Basically you call CaptureOutput(merged=False) and then you use the stdout and stderr attributes of the CaptureOutput object to get at the output captured on each stream.
I say experimental because this method of capturing can unintentionally change the order in which captured output is emitted, in order to avoid interleaving output emitted on the stdout and stderr streams (which would most likely result in incomprehensible output). Basically output is relayed on each stream separately after each line break. This means interactive prompts that block on reading from standard input without emitting a line break won’t show up (until it’s too late ;-).
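A sketch of that usage, assuming the per-stream objects expose the same get_text()/get_lines() helpers as the merged capturer:

import sys
from capturer import CaptureOutput

with CaptureOutput(merged=False) as capturer:
    print("written to stdout")
    sys.stderr.write("written to stderr\n")
    out_text = capturer.stdout.get_text()
    err_text = capturer.stderr.get_text()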
### Relays output to the terminal in real time
The main use case of capturer is to capture all output of a snippet of Python code (including any output by subprocesses) but also relay the output to the terminal in real time. This has a couple of useful properties:
• Long running operations can provide the operator with real time feedback by emitting output on the terminal. This sounds obvious (and it is!) but it is non-trivial to implement (an understatement :-) when you also want to capture the output.
• Programs like gpg and ssh that use interactive password prompts will render their password prompt on the terminal in real time. This avoids the awkward interaction where a password prompt is silenced but the program still hangs, waiting for input on stdin.
## Contact
The latest version of capturer is available on PyPI and GitHub. The documentation is hosted on Read the Docs and includes a changelog. For bug reports please create an issue on GitHub. If you have questions, suggestions, etc. feel free to send me an e-mail at [email protected].
A big thanks goes out to the pytest developers because pytest’s mechanism for capturing the output of subprocesses provided inspiration for the capturer package. No code was copied, but both projects are MIT licensed anyway, so it’s not like it’s very relevant :-).
http://docs.itascacg.com/flac3d700/pfc/docproject/source/manual/examples/tutorials/callbacks/callbacks.html | # Using FISH Callbacks
Introduction
Note
The project file for this example may be viewed/run in PFC.[1] The data files used are shown at the end of this example.
The fish callback command can be used to register user-defined FISH functions to be executed in response to specific callback events during a simulation. Callbacks can occur:
• at select positions in the cycle sequence; or
• in response to specific events.
This tutorial illustrates several situations where one can use callbacks to execute FISH functions.
Numerical Model
To reproduce the models discussed below, open the project file named “Callbacks.prj” available under the Help —> Examples… menu of PFC3D.
Position in the Calculation Cycle
The calculation cycle consists of a series of operations that are executed in a specific order. Each operation is associated with a floating point number milestone, also referred to as a cycle point (see below).
Table 1: Cycle Operations and Associated Cycle Points
Cycle Point Cycle Operation
-10.0 Validate data structures
0.0 Timestep determination
10.0 Law of motion (or update thermal bodies)
15.0 Body coupling between processes
30.0 Update spatial searching data structures
35.0 Create/delete contacts
40.0 Force-displacement law (or thermal contact update)
42.0 Accumulate deterministic quantities
45.0 Contact coupling between processes
60.0 Second pass of equations of motion (not used in PFC)
70.0 Thermal calculations (not used in PFC)
80.0 Fluid calculations (not used in PFC)
Interfering with calculations as they occur could be dangerous and is not permitted. For instance, if one was able to delete a ball while the timestep was being calculated on that ball, the code would crash. As a result, the cycle points are reserved, in that the user is not allowed to attach a FISH function to a callback event at those cycle points. For similar reasons, the user is not allowed to interfere between cycle points 40.0 (force-displacement calculations) and 42.0 (accumulation of deterministic quantities), and the creation and deletion of model components (balls, clumps, or pebbles and walls or facets) is only permitted before cycle point 0.0 (timestep evaluation).
Except for these limitations, the user may register FISH functions to be executed into the cycle sequence to operate on the model by registering those functions at user-defined cycle points (e.g., cycle points 10.1, 10.15, 10.3, etc.).
Creating Balls
The first example demonstrates the periodic insertion of balls into a model. The complete data file for this example is “callbacks1.dat (3D)”. Select lines are discussed below.
A simple system consisting of balls interacting in a box is modeled. Balls are created at a given frequency using the FISH function add_ball.
fish define add_ball
global tnext,freq
local tcurrent = mech.time.total
if tcurrent < tnext then
exit
endif
tnext = tcurrent + freq
local xvel = (math.random.uniform -0.5) * 2.0
local yvel = (math.random.uniform -0.5) * 2.0
local bp = ball.create(0.3,vector(0.0,0.0,1.75))
ball.vel(bp) = vector(xvel,yvel,-2.0)
ball.density(bp) = 1.1e3
ball.damp(bp) = 0.1
end
This function is registered (see the fish callback command) at the cycle point -11.0 before the data structures are validated. As add_ball is executed during each cycle, the current mechanical age is checked against the next insertion time to decide whether or not to create a ball.
[freq = 0.25]
[time_start = mech.time.total]
[tnext = time_start ]
fish callback add add_ball -11.0
The model is cycled for a given target time with the command below.
model solve time 10.0
While cycles are performed, balls are inserted in the model (Figure 1). Note that balls continue to be inserted in the model with additional cycling as the add_ball function remains registered at cycle point -11.0.
Figure 1: The system during cycling with the add_ball FISH function registered as a callback.
In this example, the add_ball function is removed from the callback list and the model is solved to equilibrium. Thus, no additional balls are inserted and the model reaches an equilibrium state (shown in Figure 2).
fish callback remove add_ball -11.0
model solve
Figure 2: Final state of the system.
Incorporating New Physics
The second example (“callbacks2.dat (3D)”) builds upon the example above. Two additional FISH functions, registered at different points in the cycle sequence, are introduced. These functions apply additional forces to the balls, modeling the effect of a fluid in the box.
First, a function named add_fluidforces is defined:
fish define add_fluidforces
global vf = 0.0
global zf_, etaf_, rhof_
loop foreach ball ball.list
local vi = 0.0
local d1 = ball.pos.z(ball) - ball.radius(ball)
if ball.pos.z(ball) - ball.radius(ball) >= zf_
; above water level
ball.force.app(ball) = vector(0.0,0.0,0.0)
else
local vbal = 4.0*math.pi*ball.radius(ball)^3 / 3.0
; drag force F_d = -6*pi*eta_f*R*v (see the formula below); it is scaled
; by the immersed fraction (vi/vbal) when the applied force is assembled
local fdrag = -6.0*math.pi*etaf_*ball.radius(ball)*ball.vel(ball)
if ball.pos.z(ball) + ball.radius(ball) <= zf_ then
; totally immerged
vi = vbal
else
; partially immerged
if ball.pos.z(ball) >= zf_ then
global h = ball.radius(ball) - (ball.pos.z(ball)-zf_)
global vcap = math.pi*h^2*(3*ball.radius(ball) - h) /3.0
vi = vcap
else
h = ball.radius(ball) - (zf_ - ball.pos.z(ball))
vcap = math.pi*h^2*(3*ball.radius(ball) - h) /3.0
vi = vbal - vcap
endif
endif
global fb = -1.0*rhof_*vi*global.gravity
ball.force.app(ball) = fb + (vi/vbal) *fdrag
endif
vf = vf + vi
endloop
end
This function loops over all the balls in the system using the ball.force.app FISH intrinsic to apply additional forces. The applied force is a combination of a buoyancy term, $$\mathbf{F_b}$$, and a drag term, $$\mathbf{F_d}$$, computed as
$\begin{split}\mathbf{F_b} &= - \rho_f V_i \mathbf{g} \\ \mathbf{F_d} &= - 6 \pi \eta_f R \alpha \mathbf{v}\end{split}$
where $$\rho_f$$ is the fluid density, $$V_i$$ is the immersed ball volume, $$\mathbf{g}$$ is the gravity vector, $$\eta_f$$ is the fluid dynamic viscosity, $$R$$ is the ball radius, $$\alpha$$ is the ratio of immersed ball volume to the total ball volume and $$\mathbf{v}$$ is the ball velocity vector. The expression chosen for $$\mathbf{F_d}$$ is typical for a single sphere completely immersed in a fluid in the laminar regime (e.g., large fluid viscosity). The factor $$\alpha$$ is introduced to scale down the drag force when the ball is only partially immersed.
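For reference, the cap volume computed in the listing above is the standard spherical-cap formula, with $$h$$ the height of the cap cut off by the fluid surface and $$R$$ the ball radius:

$V_{cap} = \frac{\pi h^{2}(3R - h)}{3}$

When the ball centre lies above the surface the immersed volume is the cap itself ($$V_i = V_{cap}$$); when it lies below, the cap is the emerged part and $$V_i = V_{ball} - V_{cap}$$.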
A second function, named move_surface, is also implemented:
fish define move_surface
global zf0_, gset_
zf_ = zf0_ + (vf/16.0)
loop foreach node geom.node.list(gset_)
geom.node.pos.z(node) = zf_
endloop
end
The purpose of this function is to adjust the fluid surface height, zf_, according to the sum of the immersed volume of balls accumulated by add_fluidforces. In turn, the modified value of zf_ is used in add_fluidforces to adjust buoyancy during the next cycle. The function move_surface also modifies the height of the nodes of a geometry object added to the model to visualize the fluid surface:
[rhof_ = 1.26e3]
[zf0_ = -1.0]
[zf_ = zf0_]
[etaf_ = 1.49]
geometry set 'surface' polygon create by-position -2.0 -2.0 [zf0_] ...
2.0 -2.0 [zf0_] ...
2.0 2.0 [zf0_] ...
-2.0 2.0 [zf0_]
[gset_ = geom.set.find('surface')]
The parameters are chosen to simulate a highly viscous fluid with a density larger than the ball density (the values are representative of glycerol).
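For reference, the surface update performed by move_surface amounts to

$z_f = z_{f0} + \frac{\sum_i V_i}{A_{box}}$

where $$A_{box} = 4 \times 4 = 16$$ is the horizontal cross-section of the box, which is why the accumulated immersed volume vf is divided by 16.0 in the listing above.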
Before cycles are executed, the two functions discussed above are registered as callbacks in the cycle sequence at cycle points with values 50.0 and 50.1:
fish callback add add_fluidforces 50.0
fish callback add move_surface 50.1
This ensures that both functions are executed after all built-in operations in the PFC cycle sequence and that move_surface is executed after the total non-immersed volume has been updated. Should additional computation be required between the two functions, an additional FISH function could be registered (with a cycle point with value 50.05, for instance).
Figure 3: Intermediate state of the system with fluid force active.
Figure 4: Final state of the system with fluid force active.
Figures 3 and 4 show the state of the system during cycling and in its final stable configuration, respectively. Balls are added periodically, and the balls are pushed toward the fluid surface due to the buoyancy force, with motion damped by the fluid drag force.
The model described here may seem oversimplified. Notably, the expression of the drag force does not account for fluid velocity. However, this simple example demonstrates how new physics can be added to PFC with just a few steps. Should a more sophisticated model including fluid-mechanical coupling be required, more elaborate algorithms could be devised, or coupling with a third party CFD solver could be envisioned (see the section “CFD module for PFC3D”).
Named Callback Events
To provide more flexibility, one may also register FISH functions to be executed in response to named events that may occur during the cycle sequence. A list of events is shown in below. This list may not be exhaustive because new events can be added by user-defined contact models.
Table 2: Named Callback Events
Event Type Event Name Argument(s)
contact model contact_activated FISH array, contact model-specific
contact model slip_change FISH array, contact model-specific
contact model bond_break FISH array, contact model-specific
create/delete contact_create contact pointer
create/delete contact_delete contact pointer
create/delete ball_create ball pointer
create/delete ball_delete ball pointer
create/delete clump_create clump pointer
create/delete clump_delete clump pointer
create/delete rblock_create rblock pointer
create/delete rblock_delete rblock pointer
create/delete facet_create wall facet pointer
create/delete facet_delete wall facet pointer
create/delete wall_create wall pointer
create/delete wall_delete wall pointer
create/delete ballthermal_create ballthermal pointer
create/delete ballthermal_delete ballthermal pointer
create/delete clumpthermal_create clumpthermal pointer
create/delete clumpthermal_delete clumpthermal pointer
create/delete wallthermal_create wallthermal pointer
create/delete wallthermal_delete wallthermal pointer
create/delete ballcfd_create ballcfd pointer
create/delete ballcfd_delete ballcfd pointer
create/delete clumpcfd_create clumpcfd pointer
create/delete clumpcfd_delete clumpcfd pointer
solve cfd_before_send
solve cfd_before_update
solve cfd_after_update
solve solve_complete
An important feature of named callback events is that arguments may be passed to the associated FISH function when it is executed. For instance, if the contact_create event is triggered, then a pointer to the new contact is passed as an argument to the registered FISH functions if they accept arguments. Contact model events always pass FISH arrays as arguments, with the content of the array depending on the implementation of the contact model.
The last example in this tutorial (see “callbacks3.dat (3D)”) illustrates how the contact_create and contact_activated events can be used. The model starts with the final configuration obtained with “callbacks1.dat (3D)” (i.e., balls settled under gravity in a box). In this case, the orientation of gravity is modified to be parallel with the $$x$$-direction, causing balls to flow toward the corresponding side of the box. A FISH function is registered with either the contact_create or the contact_activated event, whose purpose is to query whether the contact is between a ball and the top side of the box. In this circumstance, the ball is deleted. In this way, the top of the box is acting as a particle sink. Since these two events do not return the same list of arguments, the functions must differ slightly (as discussed below).
Contact Creation Event
The FISH function below can be registered with the contact_create event:
fish define catch_contacts(cp)
if type.pointer(cp) # 'ball-facet' then
exit
endif
wfp = contact.end2(cp)
if wall.id(wall.facet.wall(wfp)) # 2 then
exit
endif
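; the ball at contact.end1(cp) is recorded here in a list of balls to be
; deleted before the next timestep evaluation (balls cannot be deleted at
; this point in the cycle sequence)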
end
fish callback add catch_contacts event contact_create
In this case, a contact pointer is passed as an argument to catch_contacts. Note that this event is triggered whenever any contact is created in the system (i.e., including ball-ball contacts). Therefore catch_contacts must check whether the contact occurs between a ball and a wall facet, before checking the ID of the wall. If the contact is a ball-facet contact and it is with a facet of the top wall (i.e., wall ID 2), the ball is added to a list of balls to be deleted. The ball cannot be deleted automatically because the ball creation/deletion is limited to cycle points prior to 0.
The final state of the system after cycling is shown in Figure 5. Note that contacts in PFC are created at the discretion of the contact detection logic, as discussed in the section “Contacts and Contact Models.” It is guaranteed that a contact between two pieces will be created before the pieces actually interact, meaning that there is no exact control of the distance at which a contact will be created. A drawback of the approach given in this example is that balls that do not overlap the top wall may be deleted.
Figure 5: Final state of the system. Gravity is parallel to the x-axis, and balls are deleted as contacts with the top wall are created.
Contact Model Events
An alternative method to overcome ball deletion prior to overlap with a wall facet is to register a FISH function with the contact_activated event defined by the linear contact model:
fish define catch_contacts(arr)
local cp = arr(1)
if type.pointer(cp) # 'ball-facet' then
exit
endif
local wfp = contact.end2(cp)
if wall.id(wall.facet.wall(wfp)) # 2 then
exit
endif
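; as above, the ball at contact.end1(cp) is recorded here for deletion
; before the next timestep evaluation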
end
fish callback add catch_contacts event contact_activated
Here the argument passed to catch_contacts is a FISH array containing the contact pointer. This corresponds with the linear model implementation. Except for this difference, this version of catch_contacts operates as described above. However, since the contact_activated event of the linear model is triggered when the contact becomes active, the balls are deleted only when they physically overlap a facet of the upper wall. The final state of the system after cycling is shown in Figure 6.
Figure 6: Final state of the system. Gravity is parallel to the x-axis, and balls are deleted when they physically overlap the top wall.
Discussion
This tutorial demonstrates the use of the PFC callback mechanism to execute FISH functions in several situations. FISH functions can be registered to be executed at select points in the cycle sequence or in response to named events. Any user-defined FISH function can be registered as a callback with the fish callback command. There is no need for the function to accept arguments, though they may be useful. The list of potential arguments depends on the specific event.
Data Files
callbacks1.dat (3D)
; fname: callbacks1.dat
;
; Demonstrate usage of the set fish callback command to insert balls at a
; given rate while cycling
;
;=========================================================================
model new
model large-strain on
model random 10001
model title 'Using FISH Callbacks'
; Define the domain and the default contact model
model domain extent -3 3
contact cmat default model linear property kn 1.0e6 dp_nratio 0.5
; Generate a box and set gravity
wall generate box -2 2
model gravity 10.0
; Define the add_ball function, which will be registered as a fish callback
; and will insert balls at a given frequency in the model
; excerpt-jrow-start
fish define add_ball
global tnext,freq
local tcurrent = mech.time.total
if tcurrent < tnext then
exit
endif
tnext = tcurrent + freq
local xvel = (math.random.uniform -0.5) * 2.0
local yvel = (math.random.uniform -0.5) * 2.0
local bp = ball.create(0.3,vector(0.0,0.0,1.75))
ball.vel(bp) = vector(xvel,yvel,-2.0)
ball.density(bp) = 1.1e3
ball.damp(bp) = 0.1
end
; excerpt-jrow-end
; Set parameters and register the function add_ball with a fish callback at
; position -11.0 in the cycle sequence. Model components cannot be inserted
; in the model after the timestep has been evaluated, which corresponds to
; position 0.0 in the cycle sequence
; excerpt-emax-start
[freq = 0.25]
[time_start = mech.time.total]
[tnext = time_start ]
; excerpt-emax-end
fish callback add add_ball -11.0
; Solve to a target time of 10.0 time-units
; excerpt-omra-start
model solve time 10.0
; excerpt-omra-end
model save 'intermediate'
; Continue cycling to an additional target time of 15.0 time-units
; Note that the fish callback is still active
model solve time 15.0
; Deactivate the fish callback and solve to equilibrium (default limit
; corresponds to an average ratio of 1e-5)
fish callback remove add_ball -11.0
; excerpt-orms-start
model solve
; excerpt-orms-end
model save 'settled'
program return
;=========================================================================
; eof: callbacks1.dat
callbacks2.dat (3D)
; fname: callbacks2.dat
;
; Demonstrate usage of the set fish callback command to insert balls at a
; given rate while cycling and add applied forces to the balls
;=========================================================================
model new
model large-strain on
model random 10001
model title 'Using FISH Callbacks'
; Define the domain and the default contact model
model domain extent -3 3
contact cmat default model linear property kn 1.0e6 dp_nratio 0.5
; Generate a box and set gravity
wall generate box -2 2
model gravity 10.0
; Define the add_ball function, which will be registered as a fish callback
; and will insert balls at a given frequency in the model
fish define add_ball
global tnext, freq
local tcurrent = mech.time.total
if tcurrent < tnext then
exit
endif
tnext = tcurrent + freq
local xvel = (math.random.uniform -0.5) * 2.0
local yvel = (math.random.uniform -0.5) * 2.0
local bp = ball.create(0.3,vector(0.0,0.0,1.75))
ball.vel(bp) = vector(xvel,yvel,-2.0)
ball.density(bp) = 1.1e3
ball.damp(bp) = 0.1
end
; Set parameters and register the function add_ball with a fish callback at
; position -11.0 in the cycle sequence. Model components cannot be inserted
; in the model after the timestep has been evaluated, which corresponds to
; position 0.0 in the cycle sequence
[freq = 0.25]
[time_start = mech.time.total]
[tnext = time_start ]
fish callback add add_ball -11.0
; Define the add_fluidforces function, which will be registered as a fish
; callback and will apply fluid forces (buoyancy and drag) to the balls
; excerpt-hgha-start
fish define add_fluidforces
global vf = 0.0
global zf_, etaf_, rhof_
loop foreach ball ball.list
local vi = 0.0
local d1 = ball.pos.z(ball) - ball.radius(ball)
if ball.pos.z(ball) - ball.radius(ball) >= zf_
; above water level
ball.force.app(ball) = vector(0.0,0.0,0.0)
else
local vbal = 4.0*math.pi*ball.radius(ball)^3 / 3.0
if ball.pos.z(ball) + ball.radius(ball) <= zf_ then
; totally immersed
vi = vbal
else
; partially immersed
if ball.pos.z(ball) >= zf_ then
global h = ball.radius(ball) - (ball.pos.z(ball)-zf_)
global vcap = math.pi*h^2*(3*ball.radius(ball) - h) /3.0
vi = vcap
else
h = ball.radius(ball) - (zf_ - ball.pos.z(ball))
vcap = math.pi*h^2*(3*ball.radius(ball) - h) /3.0
vi = vbal - vcap
endif
endif
global fb = -1.0*rhof_*vi*global.gravity
ball.force.app(ball) = fb + (vi/vbal) *fdrag
endif
vf = vf + vi
endloop
end
; excerpt-hgha-end
; Define the move_surface function, which will be registered as a fish
; callback and will update the fluid surface position to account for
; immersed balls volume
; excerpt-ewza-start
fish define move_surface
global zf0_, gset_
zf_ = zf0_ + (vf/16.0)
loop foreach node geom.node.list(gset_)
geom.node.pos.z(node) = zf_
endloop
end
; excerpt-ewza-end
; Set parameters and create a polygon with the geometry logic to
; visualize the fluid surface. Store a pointer to the geometry set
; to be used by the move_surface function.
; excerpt-orre-start
[rhof_ = 1.26e3]
[zf0_ = -1.0]
[zf_ = zf0_]
[etaf_ = 1.49]
geometry set 'surface' polygon create by-position -2.0 -2.0 [zf0_] ...
2.0 -2.0 [zf0_] ...
2.0 2.0 [zf0_] ...
-2.0 2.0 [zf0_]
[gset_ = geom.set.find('surface')]
; excerpt-orre-end
; Register the add_fluidforces and move_surface functions
; at cycle points 50.0 and 50.1 in the cycle sequence respectively.
; The operations will occur after force-displacement calculation, and
; the applied forces will affect the balls at the next step.
; The move_surface function will be called after fluid forces have been
; applied to all the balls (and immersed volumes updated)
; excerpt-irzs-start
fish callback add add_fluidforces 50.0
fish callback add move_surface 50.1
; excerpt-irzs-end
; Solve to a target time of 10.0 time-units
model solve time 10.0
model save 'fluid-intermediate'
; Continue cycling to an additional target time of 15.0 time-units
; Note that the fish callback is still active
model solve time 15.0
; Deactivate the add_ball fish callback and solve to equilibrium
; (default limit corresponds to an average ratio of 1e-5)
fish callback remove add_ball -11.0
model solve
model save 'fluid-final'
program return
;=========================================================================
; eof: callbacks2.dat
callbacks3.dat (3D)
; fname: callbacks3.dat
;
; Demonstrate usage of the set fish callback command with contact events to
; delete balls as they come into contact with the top wall
;=========================================================================
model restore 'settled'
[todelete = map()]
fish define delete_balls
loop foreach key map.keys(todelete)
local ball = map.remove(todelete,key)
ball.delete(ball)
endloop
end
model gravity 10.0 0.0 0.0
model save 'sink-initial'
; Register a function with the contact_create event
fish define catch_contacts(cp)
if type.pointer(cp) # 'ball-facet' then
exit
endif
wfp = contact.end2(cp)
if wall.id(wall.facet.wall(wfp)) # 2 then
exit
endif
end
fish callback add catch_contacts event contact_create
model solve time 5.0
model save 'sink-final1'
; Register a function with the contact_activated event
model restore 'sink-initial'
fish define catch_contacts(arr)
local cp = arr(1)
if type.pointer(cp) # 'ball-facet' then
exit
endif
local wfp = contact.end2(cp)
if wall.id(wall.facet.wall(wfp)) # 2 then
exit
endif | 2022-07-03 08:28:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.334662526845932, "perplexity": 6578.229323544514}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104215805.66/warc/CC-MAIN-20220703073750-20220703103750-00343.warc.gz"} |
https://artofproblemsolving.com/wiki/index.php?title=2002_AMC_10A_Problems/Problem_20&diff=cur&oldid=73294
# 2002 AMC 10A Problems/Problem 20
## Problem
Points $A,B,C,D,E$ and $F$ lie, in that order, on $\overline{AF}$, dividing it into five segments, each of length 1. Point $G$ is not on line $AF$. Point $H$ lies on $\overline{GD}$, and point $J$ lies on $\overline{GF}$. The line segments $\overline{HC}, \overline{JE},$ and $\overline{AG}$ are parallel. Find $HC/JE$.
$\text{(A)}\ 5/4 \qquad \text{(B)}\ 4/3 \qquad \text{(C)}\ 3/2 \qquad \text{(D)}\ 5/3 \qquad \text{(E)}\ 2$
## Solution 1
First we can draw an image. $[asy] unitsize(0.8 cm); pair A, B, C, D, E, F, G, H, J; A = (0,0); B = (1,0); C = (2,0); D = (3,0); E = (4,0); F = (5,0); G = (-1.5,4); H = extension(D, G, C, C + G - A); J = extension(F, G, E, E + G - A); draw(A--F--G--cycle); draw(B--G); draw(C--G); draw(D--G); draw(E--G); draw(C--H); draw(E--J); label("A", A, SW); label("B", B, S); label("C", C, S); label("D", D, S); label("E", E, S); label("F", F, SE); label("G", G, NW); label("H", H, W); label("J", J, NE); [/asy]$
Since $\overline{AG}$ and $\overline{CH}$ are parallel, triangles $\triangle GAD$ and $\triangle HCD$ are similar. Hence, $\frac{CH}{AG} = \frac{CD}{AD} = \frac{1}{3}$.
Since $\overline{AG}$ and $\overline{JE}$ are parallel, triangles $\triangle GAF$ and $\triangle JEF$ are similar. Hence, $\frac{EJ}{AG} = \frac{EF}{AF} = \frac{1}{5}$. Therefore, $\frac{CH}{EJ} = \left(\frac{CH}{AG}\right)\div\left(\frac{EJ}{AG}\right) = \left(\frac{1}{3}\right)\div\left(\frac{1}{5}\right) = \boxed{\frac{5}{3}}$. The answer is $\boxed{(D) 5/3}$.
## Solution 2
As $\angle F$ is common to both triangles and $\overline{JE}\parallel\overline{AG}$ gives a pair of equal corresponding angles, AA similarity yields $\triangle AGF \sim \triangle EJF$; hence $\frac {AG}{JE} =5$. Similarly, $\triangle AGD \sim \triangle CHD$, so $\frac {AG}{HC} = 3$. Thus, $\frac {HC}{JE}=\left(\frac{AG}{JE}\right)\left(\frac{HC}{AG}\right) = \boxed{\frac {5}{3}\Rightarrow \text{(D)}}$.
http://imc-math.ddns.net/?show=prob&no=6&hint=1
International Mathematics Competition for University Students
July 31 – August 6 2017, Blagoevgrad, Bulgaria
Problem 6
6. Let $f:[0;+\infty)\to \mathbb R$ be a continuous function such that $\lim\limits_{x\to +\infty} f(x)=L$ exists (it may be finite or infinite). Prove that $$\lim\limits_{n\to\infty}\int\limits_0^{1}f(nx)\,\mathrm{d}x=L.$$
Proposed by: Alexandr Bolbot, Novosibirsk State University
Hint: Replace the integral by $\int_0^n f$ and split it into two parts.
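A sketch of how the hint can be carried out in the case of finite $L$ (the infinite case is handled similarly): substituting $t=nx$ gives $$\int\limits_0^{1}f(nx)\,\mathrm{d}x=\frac1n\int\limits_0^{n}f(t)\,\mathrm{d}t=\frac1n\int\limits_0^{A}f(t)\,\mathrm{d}t+\frac1n\int\limits_A^{n}f(t)\,\mathrm{d}t,$$ where, for a given $\varepsilon>0$, the threshold $A$ is chosen so that $|f(t)-L|<\varepsilon$ for all $t\ge A$. The first term tends to $0$ as $n\to\infty$, since $f$ is continuous and the integral over $[0,A]$ is a fixed finite number, while the second term differs from $\frac{n-A}{n}\,L\to L$ by at most $\frac{n-A}{n}\,\varepsilon\le\varepsilon$.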
https://t-salad.com/en/ssh-error-en/
What to do when you try to connect to SSH and get a very scary warning
This is a story about SSHing, after a long while, into an EC2 instance that had been running for about six months.
I got a very scary warning that threw me off, so I'm writing down the cause and the fix for the next time I run into it.
Spoiler: it turned out to be a warning there was no need to panic over.
A very scary warning
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
*******************************.
Add correct host key in /var/root/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /var/root/.ssh/known_hosts:5
ECDSA host key for ********.amazonaws.com has changed and you have requested strict checking.
Host key verification failed.
Picking out the scariest part of the message:
Someone could be doing something nasty!
Someone could be eavesdropping on you now (man-in-the-middle attack)!
Scary!!!
What caused it?
Above I quoted only the scary part, but further down the message also says:
It is also possible that the host key has just been changed.
~ (omitted) ~
To get rid of this message, add the correct host key to /var/root/.ssh/known_hosts.
The offending ECDSA key is at /var/root/.ssh/known_hosts:5.
The ECDSA host key for ********.amazonaws.com has changed and strict checking was requested.
Host key verification failed.
In short, the host key stored locally is different from the server's! That's what it is saying.
On the first connection, SSH stores the destination host's public key locally, and on later connections it compares keys to check that it is talking to the same host as before. So if the public key changes (for example because the IP address was reassigned or the OS was reinstalled), this kind of error message appears.
Thinking back, I had been playing around with the instance for various tests, such as trying auto-stop and auto-start and attaching an Elastic IP and then removing it, so the instance's IP address had most likely changed.
Being told that "someone could be eavesdropping on you right now" had me panicking, lol.
Solution
The locally stored key and the server's key differ, and that mismatch is what SSH is complaining about, so you can simply delete the local entry.
The host keys used for SSH connections are stored in ~/.ssh/known_hosts.
You could open that file in vi or similar and delete the relevant line, but deleting the wrong entry would just cause more trouble, so let's remove it with a command instead.
$ ssh-keygen -R <hostname to fix>
# Host example.com found: line 5 type RSA
Original contents retained as /Users/salad/.ssh/known_hosts.old
https://runestone.academy/ns/books/published/boelkins-ACS/sec-5-1-antid-graphs.html
## Section 5.1 Constructing Accurate Graphs of Antiderivatives
A recurring theme in our discussion of differential calculus has been the question “Given information about the derivative of an unknown function $$f\text{,}$$ how much information can we obtain about $$f$$ itself?” In Activity 1.8.3, the graph of $$y = f'(x)$$ was known (along with the value of $$f$$ at a single point) and we endeavored to sketch a possible graph of $$f$$ near the known point. In Example 3.1.7, we investigated how the first derivative test enables us to use information about $$f'$$ to determine where the original function $$f$$ is increasing and decreasing, as well as where $$f$$ has relative extreme values. If we know a formula or graph of $$f'\text{,}$$ by computing $$f''$$ we can find where the original function $$f$$ is concave up and concave down. Thus, knowing $$f'$$ and $$f''$$ enables us to understand the shape of the graph of $$f\text{.}$$
We returned to this question in even more detail in Section 4.1. In that setting, we knew the instantaneous velocity of a moving object and worked to determine as much as possible about the object's position function. We found connections between the net signed area under the velocity function and the corresponding change in position of the function, and the Total Change Theorem further illuminated these connections between $$f'$$ and $$f\text{,}$$ showing that the total change in the value of $$f$$ over an interval $$[a,b]$$ is determined by the net signed area bounded by $$f'$$ and the $$x$$-axis on the same interval.
In what follows, we explore the situation where we possess an accurate graph of the derivative function along with a single value of the function $$f\text{.}$$ From that information, we'd like to determine a graph of $$f$$ that shows where $$f$$ is increasing, decreasing, concave up, and concave down, and also provides an accurate function value at any point.
### Preview Activity 5.1.1.
Suppose that the following information is known about a function $$f\text{:}$$ the graph of its derivative, $$y = f'(x)\text{,}$$ is given in Figure 5.1.1. Further, assume that $$f'$$ is piecewise linear (as pictured) and that for $$x \le 0$$ and $$x \ge 6\text{,}$$ $$f'(x) = 0\text{.}$$ Finally, it is given that $$f(0) = 1\text{.}$$
1. On what interval(s) is $$f$$ an increasing function? On what intervals is $$f$$ decreasing?
2. On what interval(s) is $$f$$ concave up? concave down?
3. At what point(s) does $$f$$ have a relative minimum? a relative maximum?
4. Recall that the Total Change Theorem tells us that
\begin{equation*} f(1) - f(0) = \int_0^1 f'(x) \, dx\text{.} \end{equation*}
What is the exact value of $$f(1)\text{?}$$
5. Use the given information and similar reasoning to that in (d) to determine the exact value of $$f(2)\text{,}$$ $$f(3)\text{,}$$ $$f(4)\text{,}$$ $$f(5)\text{,}$$ and $$f(6)\text{.}$$
6. Based on your responses to all of the preceding questions, sketch a complete and accurate graph of $$y = f(x)$$ on the axes provided, being sure to indicate the behavior of $$f$$ for $$x \lt 0$$ and $$x \gt 6\text{.}$$
### Subsection 5.1.1 Constructing the graph of an antiderivative
Preview Activity 5.1.1 demonstrates that when we can find the exact area under the graph of a function on any given interval, it is possible to construct a graph of the function's antiderivative. That is, we can find a function whose derivative is given. We can now determine not only the overall shape of the antiderivative graph, but also the actual height of the graph at any point of interest.
This is a consequence of the Fundamental Theorem of Calculus: if we know a function $$f$$ and the value of the antiderivative $$F$$ at some starting point $$a\text{,}$$ we can determine the value of $$F(b)$$ via the definite integral. Since $$F(b) - F(a) = \int_a^b f(x) \, dx\text{,}$$ it follows that
$$F(b) = F(a) + \int_a^b f(x) \, dx\text{.}\tag{5.1.1}$$
We can also interpret the equation $$F(b) - F(a) = \int_a^b f(x) \, dx$$ in terms of the graphs of $$f$$ and $$F$$ as follows. On an interval $$[a,b]\text{,}$$
differences in heights on the graph of the antiderivative given by $$F(b) - F(a)$$ correspond to the net signed area bounded by the original function on the interval $$[a,b]\text{,}$$ which is given by $$\int_a^b f(x) \, dx\text{.}$$
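For readers who like to see this numerically, here is a small illustrative sketch of how Equation (5.1.1) lets us tabulate an antiderivative from a single known value. The sketch is in Python with a stand-in integrand; it is not the function pictured in the figures, and the text itself does not use any code here.

```python
# Illustrative sketch of Equation (5.1.1): tabulate an antiderivative F from one
# known value F(a) plus numerically computed definite integrals.
import numpy as np
from scipy.integrate import quad

def f(x):
    return np.sin(x)            # any continuous rate function would do here

a, F_a = 0.0, 1.0               # the single known value: F(0) = 1

for b in range(1, 7):
    integral, _ = quad(f, a, b)         # numerical value of the definite integral
    print(f"F({b}) = {F_a + integral:.4f}")
```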
#### Activity 5.1.2.
Suppose that the function $$y = f(x)$$ is given by the graph shown in Figure 5.1.2, and that the pieces of $$f$$ are either portions of lines or portions of circles. In addition, let $$F$$ be an antiderivative of $$f$$ and say that $$F(0) = -1\text{.}$$ Finally, assume that for $$x \le 0$$ and $$x \ge 7\text{,}$$ $$f(x) = 0\text{.}$$
1. On what interval(s) is $$F$$ an increasing function? On what intervals is $$F$$ decreasing?
2. On what interval(s) is $$F$$ concave up? concave down? neither?
3. At what point(s) does $$F$$ have a relative minimum? a relative maximum?
4. Use the given information to determine the exact value of $$F(x)$$ for $$x = 1, 2, \ldots, 7\text{.}$$ In addition, what are the values of $$F(-1)$$ and $$F(8)\text{?}$$
5. Based on your responses to all of the preceding questions, sketch a complete and accurate graph of $$y = F(x)$$ on the axes provided, being sure to indicate the behavior of $$F$$ for $$x \lt 0$$ and $$x \gt 7\text{.}$$ Clearly indicate the scale on the vertical and horizontal axes of your graph.
6. What happens if we change one key piece of information: in particular, say that $$G$$ is an antiderivative of $$f$$ and $$G(0) = 0\text{.}$$ How (if at all) would your answers to the preceding questions change? Sketch a graph of $$G$$ on the same axes as the graph of $$F$$ you constructed in (e).
Hint.
1. Consider the sign of $$F' = f\text{.}$$
2. Consider the sign of $$F'' = f'\text{.}$$
3. Where does $$F' = f$$ change sign?
4. Recall that $$F(1) = F(0) + \int_0^1 f(t) \, dt\text{.}$$
5. Use the function values found in (d) and the earlier information regarding the shape of $$F\text{.}$$
6. Note that $$G(1) = G(0) + \int_0^1 f(t) \, dt\text{.}$$
### Subsection 5.1.2 Multiple antiderivatives of a single function
In the final question of Activity 5.1.2, we encountered a very important idea: a function $$f$$ has more than one antiderivative. Each antiderivative of $$f$$ is determined uniquely by its value at a single point. For example, suppose that $$f$$ is the function given at left in Figure 5.1.3, and suppose further that $$F$$ is an antiderivative of $$f$$ that satisfies $$F(0) = 1\text{.}$$
Then, using Equation (5.1.1), we can compute
\begin{align*} F(1) &= F(0) + \int_0^1 f(x) \, dx\\ &= 1 + 0.5\\ &= 1.5\text{.} \end{align*}
Similarly, $$F(2) = 1.5\text{,}$$ $$F(3) = -0.5\text{,}$$ $$F(4) = -2\text{,}$$ $$F(5) = -0.5\text{,}$$ and $$F(6) = 1\text{.}$$ In addition, we can use the fact that $$F' = f$$ to ascertain where $$F$$ is increasing and decreasing, concave up and concave down, and has relative extremes and inflection points. We ultimately find that the graph of $$F$$ is the one given in blue in Figure 5.1.3.
If we want an antiderivative $$G$$ for which $$G(0) = 3\text{,}$$ then $$G$$ will have the exact same shape as $$F$$ (since both share the derivative $$f$$), but $$G$$ will be shifted vertically from the graph of $$F\text{,}$$ as pictured in red in Figure 5.1.3. Note that $$G(1) - G(0) = \int_0^1 f(x) \, dx = 0.5\text{,}$$ just as $$F(1) - F(0) = 0.5\text{,}$$ but since $$G(0) = 3\text{,}$$ $$G(1) = G(0) + 0.5 = 3.5\text{,}$$ whereas $$F(1) = 1.5\text{.}$$ In the same way, if we assigned a different initial value to the antiderivative, say $$H(0) = -1\text{,}$$ we would get still another antiderivative, as shown in magenta in Figure 5.1.3.
This example demonstrates an important fact that holds more generally:
If $$G$$ and $$H$$ are both antiderivatives of a function $$f\text{,}$$ then the function $$G - H$$ must be constant.
To see why this result holds, observe that if $$G$$ and $$H$$ are both antiderivatives of $$f\text{,}$$ then $$G' = f$$ and $$H' = f\text{.}$$ Hence,
\begin{equation*} \frac{d}{dx}[ G(x) - H(x) ] = G'(x) - H'(x) = f(x) - f(x) = 0\text{.} \end{equation*}
Since the only way a function can have derivative zero is by being a constant function, it follows that the function $$G - H$$ must be constant.
We now see that if a function has at least one antiderivative, it must have infinitely many: we can add any constant of our choice to the antiderivative and get another antiderivative. For this reason, we sometimes refer to the general antiderivative of a function $$f\text{.}$$
To identify a particular antiderivative of $$f\text{,}$$ we must know a single value of the antiderivative $$F$$ (this value is often called an initial condition). For example, if $$f(x) = x^2\text{,}$$ its general antiderivative is $$F(x) = \frac{1}{3}x^3 + C\text{,}$$ where we include the “$$+C$$” to indicate that $$F$$ includes all of the possible antiderivatives of $$f\text{.}$$ If we know that $$F(2) = 3\text{,}$$ we substitute 2 for $$x$$ in $$F(x) = \frac{1}{3}x^3 + C\text{,}$$ and find that
\begin{equation*} 3 = \frac{1}{3}(2)^3 + C\text{,} \end{equation*}
or $$C = 3 - \frac{8}{3} = \frac{1}{3}\text{.}$$ Therefore, the particular antiderivative in this case is $$F(x) = \frac{1}{3}x^3 + \frac{1}{3}\text{.}$$
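If a computer algebra system is at hand, the same computation can be checked symbolically. Here is a minimal sketch in Python with SymPy; the use of SymPy is an assumption of this sketch, not something the text relies on.

```python
# Symbolic check of the particular antiderivative computed above.
import sympy as sp

x, C = sp.symbols("x C")
f = x**2
F_general = sp.integrate(f, x) + C            # general antiderivative x**3/3 + C
C_value = sp.solve(sp.Eq(F_general.subs(x, 2), 3), C)[0]   # impose F(2) = 3
print(F_general.subs(C, C_value))             # x**3/3 + 1/3
```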
#### Activity 5.1.3.
For each of the following functions, sketch an accurate graph of the antiderivative that satisfies the given initial condition. In addition, sketch the graph of two additional antiderivatives of the given function, and state the corresponding initial conditions that each of them satisfy. If possible, find an algebraic formula for the antiderivative that satisfies the initial condition.
1. original function: $$g(x) = \left| x \right| - 1\text{;}$$ initial condition: $$G(-1) = 0\text{;}$$ interval for sketch: $$[-2,2]$$
2. original function: $$h(x) = \sin(x)\text{;}$$ initial condition: $$H(0) = 1\text{;}$$ interval for sketch: $$[0,4\pi]$$
3. original function: $$p(x) = \begin{cases}x^2, & \text{ if } 0 \lt x \lt 1 \\ -(x-2)^2, & \text{ if } 1 \lt x \lt 2 \\ 0 & \text{ otherwise } \end{cases}\text{;}$$ initial condition: $$P(0) = 1\text{;}$$ interval for sketch: $$[-1,3]$$
### Subsection 5.1.3 Functions defined by integrals
Equation (5.1.1) allows us to compute the value of the antiderivative $$F$$ at a point $$b\text{,}$$ provided that we know $$F(a)$$ and can evaluate the definite integral from $$a$$ to $$b$$ of $$f\text{.}$$ That is,
\begin{equation*} F(b) = F(a) + \int_a^b f(x) \, dx\text{.} \end{equation*}
In several situations, we have used this formula to compute $$F(b)$$ for several different values of $$b\text{,}$$ and then plotted the points $$(b,F(b))$$ to help us draw an accurate graph of $$F\text{.}$$ This suggests that we may want to think of $$b\text{,}$$ the upper limit of integration, as a variable itself. To that end, we introduce the idea of an integral function, a function whose formula involves a definite integral.
#### Definition 5.1.4.
If $$f$$ is a continuous function, we define the corresponding integral function $$A$$ according to the rule
$$A(x) = \int_a^x f(t) \, dt\text{.}\tag{5.1.2}$$
Note that because $$x$$ is the independent variable in the function $$A\text{,}$$ and determines the endpoint of the interval of integration, we need to use a different variable as the variable of integration. A standard choice is $$t\text{,}$$ but any variable other than $$x$$ is acceptable.
One way to think of the function $$A$$ is as the “net signed area from $$a$$ up to $$x$$” function, where we consider the region bounded by $$y = f(t)\text{.}$$ For example, in Figure 5.1.5, we see a function $$f$$ pictured at left, and its corresponding area function (choosing $$a = 0$$), $$A(x) = \int_0^x f(t) \, dt$$ shown at right.
The function $$A$$ measures the net signed area from $$t = 0$$ to $$t = x$$ bounded by the curve $$y = f(t)\text{;}$$ this value is then reported as the corresponding height on the graph of $$y = A(x)\text{.}$$ This applet (gvsu.edu/s/cz) brings the static picture in Figure 5.1.5 to life. There, the user can move the red point on the function $$f$$ and see how the corresponding height changes at the light blue point on the graph of $$A\text{.}$$
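A rough numerical analogue of this "net signed area so far" picture can be sketched as follows (Python, with a stand-in integrand rather than the function in Figure 5.1.5):

```python
# Sketch of the integral function A(x) = ∫_0^x f(t) dt from sampled values,
# using a cumulative trapezoidal rule.
import numpy as np
from scipy.integrate import cumulative_trapezoid

t = np.linspace(0, 6, 601)
f_vals = np.cos(t)                                  # stand-in integrand
A = cumulative_trapezoid(f_vals, t, initial=0.0)    # A[i] approximates ∫_0^{t[i]} f

print(A[t.searchsorted(3.0)])                       # ≈ sin(3), the net signed area from 0 to 3
```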
The choice of $$a$$ is somewhat arbitrary. In the activity that follows, we explore how the value of $$a$$ affects the graph of the integral function.
#### Activity 5.1.4.
Suppose that $$g$$ is given by the graph at left in Figure 5.1.6 and that $$A$$ is the corresponding integral function defined by $$A(x) = \int_1^x g(t) \, dt\text{.}$$
1. On what interval(s) is $$A$$ an increasing function? On what intervals is $$A$$ decreasing? Why?
2. On what interval(s) do you think $$A$$ is concave up? concave down? Why?
3. At what point(s) does $$A$$ have a relative minimum? a relative maximum?
4. Use the given information to determine the exact values of $$A(0)\text{,}$$ $$A(1)\text{,}$$ $$A(2)\text{,}$$ $$A(3)\text{,}$$ $$A(4)\text{,}$$ $$A(5)\text{,}$$ and $$A(6)\text{.}$$
5. Based on your responses to all of the preceding questions, sketch a complete and accurate graph of $$y = A(x)$$ on the axes provided, being sure to indicate the behavior of $$A$$ for $$x \lt 0$$ and $$x \gt 6\text{.}$$
6. How does the graph of $$B$$ compare to $$A$$ if $$B$$ is instead defined by $$B(x) = \int_0^x g(t) \, dt\text{?}$$
Hint.
1. Where is $$A$$ accumulating positive signed area?
2. As $$A$$ accumulates positive or negative signed area, where is the rate at which such area is accumulated increasing?
3. Where does $$A$$ change from accumulating positive signed area to accumulating negative signed area?
4. Note, for instance, that $$A(2) = \int_1^2 g(t) \, dt\text{.}$$
5. Use your work in (a)-(d) appropriately.
6. What is the value of $$B(0)\text{?}$$ How does this compare to $$A(0)\text{?}$$
### Subsection 5.1.4 Summary
• Given the graph of a function $$f\text{,}$$ we can construct the graph of its antiderivative $$F$$ provided that (a) we know a starting value of $$F\text{,}$$ say $$F(a)\text{,}$$ and (b) we can evaluate the integral $$\int_a^b f(x) \, dx$$ exactly for relevant choices of $$a$$ and $$b\text{.}$$ For instance, if we wish to know $$F(3)\text{,}$$ we can compute $$F(3) = F(a) + \int_a^3 f(x) \, dx\text{.}$$ When we combine this information about the function values of $$F$$ together with our understanding of how the behavior of $$F' = f$$ affects the overall shape of $$F\text{,}$$ we can develop a completely accurate graph of the antiderivative $$F\text{.}$$
• Because the derivative of a constant is zero, if $$F$$ is an antiderivative of $$f\text{,}$$ it follows that $$G(x) = F(x) + C$$ will also be an antiderivative of $$f\text{.}$$ Moreover, any two antiderivatives of a function $$f$$ differ precisely by a constant. Thus, any function with at least one antiderivative in fact has infinitely many, and the graphs of any two antiderivatives will differ only by a vertical translation.
• Given a function $$f\text{,}$$ the rule $$A(x) = \int_a^x f(t) \, dt$$ defines a new function $$A$$ that measures the net-signed area bounded by $$f$$ on the interval $$[a,x]\text{.}$$ We call the function $$A$$ the integral function corresponding to $$f\text{.}$$
### Exercises 5.1.5
#### 1. Definite integral of a piecewise linear function.
Use the graph of $$f(x)$$ shown below to find the following integrals.
A. $$\int_{-5}^0 f(x) dx =$$
B. If the vertical red shaded area in the graph has area $$A\text{,}$$ estimate: $$\int_{-5}^{7} f(x) dx =$$
(Your estimate may be written in terms of $$A\text{.}$$)
#### 2. A smooth function that starts out at 0.
Consider the graph of the function $$f(x)$$ shown below.
A. Estimate the integral
$$\int_0^7{f(x)dx} \approx$$
B. If $$F$$ is an antiderivative of the same function $$f$$ and $$F(0) = 30\text{,}$$ estimate $$F(7)\text{:}$$
$$F(7) \approx$$
#### 3. A piecewise constant function.
Assume $$f'$$ is given by the graph below. Suppose $$f$$ is continuous and that $$f(3)=0\text{.}$$
Sketch, on a sheet of work paper, an accurate graph of $$f\text{,}$$ and use it to find each of
$$f(0) =$$
and
$$f(7) =$$
Then find the value of the integral:
$$\int_0^7 f'(x)\,dx =$$
(Note that you can do this in two different ways!)
#### 4. Another piecewise linear function.
The figure below shows $$f\text{.}$$
If $$F'=f$$ and $$F(0)=0\text{,}$$ find $$F(b)$$ for $$b=$$1, 2, 3, 4, 5, 6, and fill these values in the following table.
| $$b$$ | 1 | 2 | 3 | 4 | 5 | 6 |
| --- | --- | --- | --- | --- | --- | --- |
| $$F(b)$$ | | | | | | |
#### 5.
A moving particle has its velocity given by the quadratic function $$v$$ pictured in Figure 5.1.7. In addition, it is given that $$A_1 = \frac{7}{6}$$ and $$A_2 = \frac{8}{3}\text{,}$$ as well as that for the corresponding position function $$s\text{,}$$ $$s(0) = 0.5\text{.}$$
1. Use the given information to determine $$s(1)\text{,}$$ $$s(3)\text{,}$$ $$s(5)\text{,}$$ and $$s(6)\text{.}$$ What do these values mean in the context of the moving particle?
2. On what interval(s) is $$s$$ increasing? That is, when is the particle moving forward? On what interval(s) is $$s$$ decreasing? That is, when is the particle moving backward?
3. On what interval(s) is $$s$$ concave up? That is, when is the particle accelerating? On what interval(s) is $$s$$ concave down? That is, when is the particle decelerating?
4. Sketch an accurate, labeled graph of $$s$$ on the axes at right in Figure 5.1.7.
5. Note that $$v(t) = -2 + \frac{1}{2}(t-3)^2\text{.}$$ Find a formula for $$s\text{.}$$
#### 6.
A person exercising on a treadmill experiences different levels of resistance and thus burns calories at different rates, depending on the treadmill's setting. In a particular workout, the rate at which a person is burning calories is given by the piecewise constant function $$c$$ pictured in Figure 5.1.8. Note that the units on $$c$$ are “calories per minute.”
1. Let $$C$$ be an antiderivative of $$c\text{.}$$ What does the function $$C$$ measure? What are its units?
2. Assume that $$C(0) = 0\text{.}$$ Determine the exact value of $$C(t)$$ at the values $$t = 5, 10, 15, 20, 25, 30\text{.}$$
3. Sketch an accurate graph of $$C$$ on the axes provided at right in Figure 5.1.8. Be certain to label the scale on the vertical axis.
4. Determine a formula for $$C$$ that does not involve an integral and is valid for $$5 \le t \le 10\text{.}$$
#### 7.
Consider the piecewise linear function $$f$$ given in Figure 5.1.9. Let the functions $$A\text{,}$$ $$B\text{,}$$ and $$C$$ be defined by the rules $$A(x) = \int_{-1}^{x} f(t) \, dt\text{,}$$ $$B(x) = \int_{0}^{x} f(t) \, dt\text{,}$$ and $$C(x) = \int_{1}^{x} f(t) \, dt\text{.}$$
1. For the values $$x = -1, 0, 1, \ldots, 6\text{,}$$ make a table that lists corresponding values of $$A(x)\text{,}$$ $$B(x)\text{,}$$ and $$C(x)\text{.}$$
2. On the axes provided in Figure 5.1.9, sketch the graphs of $$A\text{,}$$ $$B\text{,}$$ and $$C\text{.}$$
3. How are the graphs of $$A\text{,}$$ $$B\text{,}$$ and $$C$$ related?
4. How would you best describe the relationship between the function $$A$$ and the function $$f\text{?}$$
http://mathhelpforum.com/differential-geometry/103996-fubini-tonelli-complete-measure-space.html
Thread: Fubini-Tonelli with complete measure space
1. Fubini-Tonelli with complete measure space
So let $(X,M,\mu), (Y,N,\nu)$ be complete $\sigma$-finite measure spaces. Then consider $(X\times Y,L,\lambda)$, the completion of $(X\times Y,M\times N,\mu\times\nu)$.
This is the basic set up. I have to show that if $f$ is $L$-measurable and $f=0$ $\lambda$-almost everywhere, then $f_x,f^y$ are integrable and $\int f_x\,d\nu=\int f^y\,d\mu=0$ almost everywhere.
Note: $f_x(y)=f(x,y)$ for fixed $x$, and $f^y(x)=f(x,y)$ for fixed $y$.
I was told that for this part I need to use the fact that $\mu,\nu$ are both complete, but I don't see how.
I am starting out by assuming that $f=\chi_E$ (characteristic function). Then $f_x=\chi_{E_x}$, right? I assume that it is at this point I need to make use of completeness somehow.
2. Originally Posted by putnam120
So let $(X,M,\mu), (Y,N,\nu)$ be complete $\sigma$-finite measure spaces. Then consider $(X\times Y,L,\lambda)$, the completion of $(X\times Y,M\times N,\mu\times\nu)$.
This is the basic set up. I have to show that if $f$ is $L$-measurable and $f=0$ $\lambda$-almost everywhere, then $f_x,f^y$ are integrable and $\int f_x\,d\nu=\int f^y\,d\mu=0$ almost everywhere.
Note: $f_x(y)=f(x,y)$ for fixed $x$, and $f^y(x)=f(x,y)$ for fixed $y$.
I was told that for this part I need to use the fact that $\mu,\nu$ are both complete, but I don't see how.
I am starting out by assuming that $f=\chi_E$ (characteristic function). Then $f_x=\chi_{E_x}$, right? I assume that it is at this point I need to make use of completeness somehow.
I don't see that completeness should be needed for this. According to Halmos, the definition of $\lambda(E)$ is $\lambda(E) = \textstyle\int\nu(E_x)\,d\mu(x) = \int\mu(E^y)\,d\nu(y)$ (of course, he has to show that those two integrals are equal). So the result for $f = \chi_E$ follows straight from that definition. See §36 of Halmos's book, pp.145–148. His results for product measures assume throughout that the measures on the component spaces are σ-finite, but not that they are complete.
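For reference, here is one sketch (indicator case only) of where completeness of the factor measures can enter when $E$ lies in the completed $\sigma$-algebra $L$ rather than in $M\times N$: take $E\in L$ with $\lambda(E)=0$ and choose $F\in M\times N$ with $E\subset F$ and $(\mu\times\nu)(F)=0$. By Tonelli applied to $\chi_F$, $\int \nu(F_x)\,d\mu(x)=(\mu\times\nu)(F)=0$, so $\nu(F_x)=0$ for $\mu$-almost every $x$. For such $x$ we have $E_x\subset F_x$ with $\nu(F_x)=0$, and completeness of $\nu$ gives $E_x\in N$ with $\nu(E_x)=0$; hence $(\chi_E)_x=\chi_{E_x}=0$ $\nu$-a.e. and $\int (\chi_E)_x\,d\nu=0$.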
http://mathhelpforum.com/business-math/174727-compound-interest-new-capital-each-year.html
# Thread: Compound interest + New capital each year
1. ## Compound interest + New capital each year
Hi, I'm not sure if this formula needs calculus, but I think so. So here is the problem.
You want to invest some money. Let's say the first year you invest 1000$, and you will always reinvest the money you gained, plus every year you will add a new 1000$. You earn interest of 10% each year.
Here is what I have been able to do up to now
x' = ( x + 1000 ) * 1.1 e.g., after the first year ( 0 + 1000 ) * 1.1 = 1100
x'' = ( x' + 1000 ) * 1.1 second year ( 1100 + 1000 ) * 1.1 = 2310
x ''' = ( x'' + 1000) * 1.1 and so on and so on.
How can I put it in one formula that I can compute, where I only have to enter the number of years I invested?
You get a general formula by looking at what you are doing in general. The first year you invested "A" and at the end of the year had $1.1A$. You added another "A" to that to make $1.1A+ A$ and, at the end of that year, had $1.1(1.1A+ A)= 1.1^2A+ 1.1A$. You added another "A" and so had $1.1^2A+ 1.1A+ A$ and, at the end of that year, had $1.1(1.1^2A+ 1.1A+ A)= 1.1^3A+ 1.1^2A+ 1.1A$. Now, it should be easy to see that at the beginning of the nth year, just after the new deposit, you have $(1.1^{n-1}+ \cdots+ 1.1^2+ 1.1+ 1)A$. The part in parentheses is a geometric sum, of the form $a+ ab+ ab^2+ \cdots+ ab^{m}$ with $a= 1$, $b= 1.1$ and $m= n-1$. It is well known that this sum is equal to $\frac{a(1- b^{m+1})}{1- b}$. Here, that would be $\frac{1- 1.1^{n}}{1- 1.1}= 10(1.1^n- 1)$.
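A quick numerical sanity check of the recursion and the closed form (a Python sketch, assuming a deposit of 1000 at the start of each year and 10% interest credited at the end of the year):

```python
# Illustrative check: A deposited at the start of each year, 10% interest at year end.
A, b = 1000.0, 1.1

def start_of_year_balance(n):
    """Balance at the start of year n, just after the n-th deposit of A."""
    x = 0.0
    for _ in range(n):
        x = x * b + A          # last year's balance grows, then the new A is added
    return x

def closed_form(n):
    # A * (1 + b + ... + b^(n-1)) = A * (b^n - 1)/(b - 1) = 10 * A * (1.1^n - 1)
    return A * (b**n - 1) / (b - 1)

for n in (1, 2, 3, 10):
    print(n, round(start_of_year_balance(n), 2), round(closed_form(n), 2))

# The original poster's x', x'', ... are the end-of-year balances,
# i.e. 1.1 times these values: (0 + 1000)*1.1 = 1100, (1100 + 1000)*1.1 = 2310, ...
```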
http://mathhelpforum.com/advanced-statistics/169118-exponential-uniform-distributions-print.html
# Exponential and Uniform distributions
• January 23rd 2011, 10:42 AM
morganfor
Exponential and Uniform distributions
I'm stuck on this question:
If RV X has an exponential distribution does Y = ln(X) have a uniform
distribution? Derive the cumulative distribution function and density of Y.
Thanks!
• January 23rd 2011, 10:49 AM
CaptainBlack
Quote:
Originally Posted by morganfor
I'm stuck on this question:
If RV X has an exponential distribution does Y = ln(X) have a uniform
distribution? Derive the cumulative distribution function and density of Y.
Thanks!
No. Use the cumulative distribution function of X to find that of Y; you will find that the CDF of Y is not proportional to y.
CB
• January 27th 2011, 05:34 PM
matheagle
First of all.
If X is an exponential, then it's support is on $(0,\infty)$
If $Y=\ln X$, then Y's support is $(-\infty,\infty)$
and you can't have a uniform distribution on the real line.
Use your CDF of X, $1-e^{-\lambda x}$ or $1-e^{-x/\lambda}$
and make the appropriate substitution.
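For completeness, carrying out that substitution with the rate parameterization $F_X(x)=1-e^{-\lambda x}$ for $x>0$ (a sketch): for any real $y$, $F_Y(y)=P(\ln X\le y)=P(X\le e^{y})=1-e^{-\lambda e^{y}}$, and differentiating gives the density $f_Y(y)=\lambda e^{y}e^{-\lambda e^{y}}$ for $y\in\mathbb{R}$. The support is the whole real line and the density is not constant, so $Y$ is not uniform.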
https://www.huber.embl.de/msmb/Chap-Clustering.html
# 5 Clustering
Finding categories of cells, illnesses, organisms and then naming them is a core activity in the natural sciences. In Chapter 4 we’ve seen that some data can be modeled as mixtures from different groups or populations with a clear parametric generative model. We saw how in those examples we could use the EM algorithm to disentangle the components. We are going to extend the idea of unraveling different groups to cases where the clusters do not necessarily have nice elliptic shapes (mixture modeling with multivariate normal distributions implies elliptic cluster boundaries).
Clustering takes data (continuous or quasi-continuous) and adds to them a new categorical group variable that can often simplify decision making; even if this sometimes comes at a cost of ignoring intermediate states. For instance, medical decisions are simplified by replacing possibly complex, high-dimensional diagnostic measurements by simple groupings: a full report of numbers associated with fasting glucose, glycated hemoglobin and plasma glucose two hours after intake is replaced by assigning the patient to a diabetes mellitus “group”.
In this chapter, we will study how to find meaningful clusters or groups in both low-dimensional and high-dimensional nonparametric settings. However, there is a caveat: clustering algorithms are designed to find clusters, so they will find clusters, even where there are none (this is reminiscent of humans: we like to see patterns, even in randomness). So, cluster validation is an essential component of our process, especially if there is no prior domain knowledge that supports the existence of clusters.
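A small illustration of this caveat, sketched in Python with scikit-learn purely for demonstration (the code in this book is in R):

```python
# A clustering algorithm happily reports clusters even in structureless, uniform noise.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
noise = rng.uniform(size=(500, 2))                 # no group structure at all

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(noise)
print(np.bincount(km.labels_))                     # four "clusters" are reported anyway
```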
## 5.1 Goals for this chapter
In this chapter we will
• Study the different types of data that can be beneficially clustered.
• See measures of (dis)similarity and distances that help us define clusters.
• Uncover hidden or latent clustering by partitioning the data into tighter sets.
• Use clustering when given biomarkers on each of hundreds of thousands cells. We’ll see that for instance immune cells can be naturally grouped into tight subpopulations.
• Run nonparametric algorithms such as $$k$$-means, $$k$$-medoids on real single cell data.
• Experiment with recursive approaches to clustering that combine observations and groups into a hierarchy of sets; these methods are known as hierarchical clustering.
• Study how to validate clusters through resampling-based bootstrap approaches, which we will demonstrate on a single-cell dataset.
## 5.2 What are the data and why do we cluster them?
### 5.2.1 Clustering can sometimes lead to discoveries.
John Snow made a map of cholera cases and identified clusters of cases. He then collected additional information about the situation of the pumps. The proximity of dense clusters of cases to the Broad Street pump pointed to the water as a possible culprit. Combining these separate sources of information enabled him to infer the source of the cholera outbreak.
Figure 5.1: John Snow’s map of cholera cases: small barcharts at each house indicate a clustering of diagnosed cases.
David Freedman has a wonderful detailed account of all the steps that led to this discovery (Freedman 1991).
Now, let’s look at another map of London, shown in Figure 5.2. The red dots designate locations that were bombed during World War II. Many theories were put forward during the war by the analytical teams. They attempted to find a rational explanation for the bombing patterns (proximity to utility plants, arsenals, $$...$$). In fact, after the war it was revealed that the bombings were randomly distributed without any attempt at hitting particular targets.
Clustering is a useful technique for understanding complex multivariate data; it is an unsupervised method, so named because all variables have the same status: we are not trying to predict or learn the value of one variable (the supervisory response) based on the information from the explanatory variables. Exploratory techniques show groupings that can be important in interpreting the data.
For instance, clustering has enabled researchers to enhance their understanding of cancer biology. Tumors that appeared to be the same based on their anatomical location and histopathology fell into multiple clusters based on their molecular signatures, such as gene expression data (Hallett et al. 2012). Eventually, such clusterings might lead to the definition of new, more relevant disease types. Relevance is evidenced, e.g., by the fact that they are associated with different patient outcomes. What we aim to do in this chapter is understand how pictures like Figure 5.3 are constructed and how to interpret them.
In Chapter 4, we have already studied one technique, the EM algorithm, for uncovering groups. The techniques we explore in this chapter are more general and can be applied to more complex data. Many of them are based on distances between pairs of observations (this can be all versus all, or sometimes only all versus some), and they make no explicit assumptions about the generative mechanism of the data involving particular families of distributions, such as normal, gamma-Poisson, etc. There is a proliferation of clustering algorithms in the literature and in the scientific software landscape; this can be intimidating. In fact it is linked to the diversity of the types of data and the objectives pursued in different domains.
Look up the BiocViews Clustering or the Cluster view on CRAN and count the number of packages providing clustering tools.
## 5.3 How do we measure similarity?
Of a feather: how the distances are measured and how similarities between observations are defined have a strong impact on the clustering result. Our first step is to decide what we mean by similar. There are multiple ways of comparing birds: for instance, a distance using size and weight will give a different clustering than one using diet or habitat. Once we have chosen the relevant features, we have to decide how we combine differences between the multiple features into a single number. Here is a selection of choices, some of them are illustrated in Figure 5.5.
Figure 5.5: Equal-distance contour plots according to four different distances: points on any one curve are all the same distance from the center point.
Euclidean The Euclidean distance between two points $$A=(a_1,...,a_p)$$ and $$B= (b_1,...,b_p)$$ in a $$p$$-dimensional space (for the $$p$$ features) is the square root of the sum of squares of the differences in all $$p$$ coordinate directions:
$\begin{equation*} d(A,B)=\sqrt{(a_1-b_1)^2+(a_2-b_2)^2+... +(a_p-b_p)^2}. \end{equation*}$
Manhattan The Manhattan, City Block, Taxicab or $$L_1$$ distance takes the sum of the absolute differences in all coordinates.
$\begin{equation*} d(A,B)=|a_1-b_1|+|a_2-b_2|+... +|a_p-b_p|. \end{equation*}$
Maximum The maximum of the absolute differences between coordinates is also called the $$L_\infty$$ distance:
$\begin{equation*} d_\infty(A,B)= \max_{i}|a_i-b_i|. \end{equation*}$
Weighted Euclidean distance is a generalization of the ordinary Euclidean distance, by giving different directions in feature space different weights. We have already encountered one example of a weighted Euclidean distance in Chapter 2, the $$\chi^2$$ distance. It is used to compare rows in contingency tables, and the weight of each feature is the inverse of the expected value. The Mahalanobis distance is another weighted Euclidean distance that takes into account the fact that different features may have a different dynamic range, and that some features may be positively or negatively correlated with each other. The weights in this case are derived from the covariance matrix of the features. See also Question 5.1.
Minkowski Allowing the exponent to be $$m$$ instead of $$2$$, as in the Euclidean distance, gives the Minkowski distance
$$$d(A,B) = \left( |a_1-b_1|^m+|a_2-b_2|^m+... +|a_p-b_p|^m \right)^\frac{1}{m}. \tag{5.1}$$$
Edit, Hamming This distance is the simplest way to compare character sequences. It simply counts the number of differences between two character strings. This could be applied to nucleotide or amino acid sequences – although in that case, the different character substitutions are usually associated with different contributions to the distance (to account for physical or evolutionary similarity), and deletions and insertions may also be allowed.
Binary When the two vectors have binary bits as coordinates, we can think of the non-zero elements as ‘on’ and the zero elements as ‘off’. The binary distance is the proportion of features having only one bit on amongst those features that have at least one bit on.
Jaccard Distance Occurrence of traits or features in ecological or mutation data can be translated into presence and absence and encoded as 1’s and 0’s. In such situations, co-occurrence is often more informative than co-absence. For instance, when comparing mutation patterns in HIV, the co-existence in two different strains of a mutation tends to be a more important observation than its co-absence. For this reason, biologists use the Jaccard index. Let’s call our two observation vectors $$S$$ and $$T$$, $$f_{11}$$ the number of times a feature co-occurs in $$S$$ and $$T$$, $$f_{10}$$ (and $$f_{01}$$) the number of times a feature occurs in $$S$$ but not in $$T$$ (and vice versa), and $$f_{00}$$ the number of times a feature is co-absent. The Jaccard index is
$$$J(S,T) = \frac{f_{11}}{f_{01}+f_{10}+f_{11}}, \tag{5.2}$$$
(i.e., it ignores $$f_{00}$$), and the Jaccard dissimilarity is
$$$d_J(S,T) = 1-J(S,T) = \frac{f_{01}+f_{10}}{f_{01}+f_{10}+f_{11}}. \tag{5.3}$$$
Correlation based distance
$\begin{equation*} d(A,B)=\sqrt{2(1-\text{cor}(A,B))}. \end{equation*}$
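To make these definitions concrete, here is a small numerical sketch of several of the distances above. It is written in Python with SciPy purely for illustration; the book’s own examples use the R functions listed in the caption of Figure 5.7.

```python
# Numerical examples of a few of the distances defined above.
import numpy as np
from scipy.spatial import distance

A = np.array([1.0, 0.0, 2.0])
B = np.array([0.0, 1.0, 3.0])

print(distance.euclidean(A, B))          # sqrt of sum of squared differences
print(distance.cityblock(A, B))          # Manhattan / L1
print(distance.chebyshev(A, B))          # maximum / L_infinity
print(distance.minkowski(A, B, p=3))     # Minkowski with exponent m = 3

S = np.array([1, 1, 0, 1, 0], dtype=bool)
T = np.array([1, 0, 0, 1, 1], dtype=bool)
print(distance.jaccard(S, T))            # (f01 + f10) / (f01 + f10 + f11)

# Mahalanobis distance of a point to a cluster center, given the cluster covariance.
center = np.array([0.0, 0.0])
cov = np.array([[2.0, 0.8], [0.8, 1.0]])
point = np.array([1.0, 1.0])
print(distance.mahalanobis(point, center, np.linalg.inv(cov)))
```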
Figure 5.6: An example for the use of Mahalanobis distances to measure the distance of a new data point (red) from two cluster centers.
► Question 5.1
Which of the two cluster centers in Figure 5.6 is the red point closest to?
► Solution
Figure 5.7: The lower triangle of distances can be computed by any of a hundred different functions in various R packages (vegdist in vegan, daisy in cluster, genetic_distance in gstudio, dist.dna in ape, Dist in amap, distance in ecodist, dist.multiPhylo in distory, shortestPath in gdistance, dudi.dist and dist.genet in ade4).
## 5.4 Nonparametric mixture detection
### 5.4.1 $$k$$-methods: $$k$$-means, $$k$$-medoids and PAM
The centers of the groups are sometimes called medoids, thus the name PAM (partitioning around medoids). Partitioning or iterative relocation methods work well in high-dimensional settings, where we cannot easily use probability densities, the EM algorithm and parametric mixture modeling in the way we did in Chapter 4 (this is due to the so-called curse of dimensionality, which we will discuss in more detail in Chapter 12). Besides the distance measure, the main choice to be made is the number of clusters $$k$$. The PAM (partitioning around medoids, Kaufman and Rousseeuw (2009)) method is as follows:
1. Start from a matrix of $$p$$ features measured on a set of $$n$$ observations.
2. Randomly pick $$k$$ distinct cluster centers out of the $$n$$ observations (“seeds”).
3. Assign each of the remaining observation to the group to whose center it is the closest.
4. For each group, choose a new center from the observations in the group, such that the sum of the distances of group members to the center is minimal; this is called the medoid.
5. Repeat Steps 3 and 4 until the groups stabilize.
Each time the algorithm is run, different initial seeds will be picked in Step 2, and in general, this can lead to different final results. A popular implementation is the pam function in the package cluster.
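A deliberately naive sketch of the PAM iteration described above is given below. It is written in Python purely to show the mechanics; in practice one would call pam from the R package cluster (or an equivalent implementation).

```python
# Minimal, illustrative PAM (k-medoids) iteration; not an optimized implementation.
import numpy as np

def pam(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # all-vs-all distances
    medoids = rng.choice(len(X), size=k, replace=False)         # step 2: random seeds
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)               # step 3: assign to closest medoid
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.flatnonzero(labels == j)
            within = D[np.ix_(members, members)].sum(axis=0)    # step 4: best new center per group
            new_medoids[j] = members[np.argmin(within)]
        if np.array_equal(new_medoids, medoids):                # step 5: stop when stable
            break
        medoids = new_medoids
    return medoids, labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.3, size=(50, 2)) for m in (0.0, 3.0)])
medoids, labels = pam(X, k=2)
print(X[medoids])       # the two medoid observations
```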
A slight variation of the method replaces the medoids by the arithmetic means (centers of gravity) of the clusters and is called $$k$$-means. While in PAM, the centers are observations, this is not, in general, the case with $$k$$-means. The function kmeans comes with every installation of R in the stats package; an example run is shown in Figure 5.9.
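As a quick, self-contained illustration (a sketch, not the book’s Figure 5.9 code; the data are simulated just for this example), we can run both pam and kmeans on the same toy data and compare their group assignments:
library("cluster")
set.seed(123)
toy = rbind(matrix(rnorm(60, mean = 0), ncol = 2),
            matrix(rnorm(60, mean = 4), ncol = 2))
toypam = pam(toy, k = 2)            # medoids: cluster centers are actual observations
toykm  = kmeans(toy, centers = 2)   # centers are averages, in general not observations
table(pam = toypam$clustering, kmeans = toykm$cluster)
On such well-separated data the two methods give (up to label switching) the same partition; they differ mainly in how the centers are defined and in their robustness to outliers.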
These so-called $$k$$-methods are the most common off-the-shelf methods for clustering; they work particularly well when the clusters are of comparable size and convex (blob-shaped). On the other hand, if the true clusters are very different in size, the larger ones will tend to be broken up; the same is true for groups that have pronounced non-spherical or non-elliptic shapes.
► Question 5.3
The $$k$$-means algorithm alternates between computing the average point and assigning the points to clusters. How does this alternating, iterative method differ from an EM-algorithm?
► Solution
### 5.4.2 Tight clusters with resampling
There are clever schemes that repeat the process many times using different initial centers or resampled datasets. Observations that are almost always grouped together when the clustering procedure is repeated on the same data with different starting points are called strong forms (Diday and Brito 1989). Repeated subsampling of the dataset and applying a clustering method will also result in groups of observations that are “almost always” grouped together; these are called tight clusters (Tseng and Wong 2005). The study of strong forms or tight clusters facilitates the choice of the number of clusters. A recent package developed to combine and compare the output from many different clusterings is clusterExperiment. Here we give an example from its vignette. Single-cell RNA-Seq experiments provide counts of reads, representing gene transcripts, from individual cells. The single-cell resolution enables scientists, among other things, to follow cell lineage dynamics. Clustering has proved very useful for analysing such data.
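Before turning to clusterExperiment, here is a small generic sketch (not the package’s workflow; data and parameters are made up) of the resampling idea: we record how often each pair of observations ends up in the same k-means cluster across repeated subsamples. Pairs with co-clustering frequency near 1 form tight groups.
set.seed(7)
xm = rbind(matrix(rnorm(60, mean = 0), ncol = 2),
           matrix(rnorm(60, mean = 4), ncol = 2))
B = 100
n = nrow(xm)
cofreq = matrix(0, n, n)   # how often a pair was put in the same cluster
counts = matrix(0, n, n)   # how often a pair was jointly subsampled
for (b in seq_len(B)) {
  s = sample(n, round(0.8 * n))
  cl = kmeans(xm[s, ], centers = 2)$cluster
  same = outer(cl, cl, "==")
  cofreq[s, s] = cofreq[s, s] + same
  counts[s, s] = counts[s, s] + 1
}
summary(as.vector(cofreq / counts))   # values near 1: tightly co-clustered pairs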
► Question 5.4
Follow the vignette of the package clusterExperiment. Call the ensemble clustering function clusterMany, using pam for the individual clustering efforts. Set the choice of genes to include at either the 60, 100 or 150 most variable genes. Plot the clustering results for $$k$$ varying between 4 and 9. What do you notice?
► Solution
## 5.5 Clustering examples: flow cytometry and mass cytometry
You can find reviews of bioinformatics methods for flow cytometry in (O’Neill et al. 2013) and a well-kept wikipedia article.
Studying measurements on single cells improves both the focus and resolution with which we can analyze cell types and dynamics. Flow cytometry enables the simultaneous measurement of about 10 different cell markers. Mass cytometry expands the collection of measurements to as many as 80 proteins per cell. A particularly promising application of this technology is the study of immune cell dynamics.
### 5.5.1 Flow cytometry and mass cytometry
At different stages of their development, immune cells express unique combinations of proteins on their surfaces. These protein markers are called CDs (clusters of differentiation) and are collected by flow cytometry (using fluorescence, see Hulett et al. (1969)) or mass cytometry (using single-cell atomic mass spectrometry of heavy element reporters, see Bendall et al. (2012)). An example of a commonly used CD is CD4; this protein is expressed by helper T cells that are referred to as being “CD4+”. Note however that some cells express CD4 (thus are CD4+), but are not actually helper T cells. We start by loading some useful Bioconductor packages for cytometry data, flowCore and flowViz, and read in an exemplary data object fcsB as follows:
library("flowCore")
library("flowViz")
slotNames(fcsB)
## [1] "exprs" "parameters" "description"
Figure 5.11 shows a scatterplot of two of the variables available in the fcsB data. (We will see how to make such plots below.) We can see clear bimodality and clustering in these two dimensions.
► Question 5.5
1. Look at the structure of the fcsB object (hint: the colnames function). How many variables were measured?
2. Subset the data to look at the first few rows (hint: use Biobase::exprs(fcsB)). How many cells were measured?
### 5.5.2 Data preprocessing
First we load the table data that reports the mapping between isotopes and markers (antibodies), and then we replace the isotope names in the column names of fcsB with the marker names. This makes the subsequent analysis and plotting code more readable:
markersB = readr::read_csv("../data/Bendall_2011_markers.csv")
mt = match(markersB$isotope, colnames(fcsB))
stopifnot(!any(is.na(mt)))
colnames(fcsB)[mt] = markersB$marker
Now we are ready to generate Figure 5.11.
Figure 5.11: Cell measurements that show clear clustering in two dimensions.
flowPlot(fcsB, plotParameters = colnames(fcsB)[2:3], logy = TRUE)
Plotting the data in two dimensions as in Figure 5.11 already shows that the cells can be grouped into subpopulations. Sometimes just one marker can be used to define populations on its own; in that case, simple rectangular gating is used to separate the populations. For instance, CD4+ cells can be gated by taking the subpopulation with high values for the CD4 marker. Cell clustering can be improved by carefully choosing transformations of the data. The left part of Figure 5.12 shows a simple one-dimensional histogram before transformation; the right part shows the distribution after transformation: it reveals a bimodality and the existence of two cell populations.
Data Transformation: hyperbolic arcsin (asinh). It is standard to transform both flow and mass cytometry data using one of several special functions. We take the example of the inverse hyperbolic sine (asinh):
$\begin{equation*} \operatorname{asinh}(x) = \log{(x + \sqrt{x^2 + 1})}. \end{equation*}$
From this we can see that for large values of $$x$$, $$\operatorname{asinh}(x)$$ behaves like the $$\log$$ and is practically equal to $$\log(x)+\log(2)$$; for small $$x$$ the function is close to linear in $$x$$.
Try running the following code to see the two main regimes of the transformation: small values and large values.
v1 = seq(0, 1, length.out = 100)
plot(log(v1), asinh(v1), type = 'l')   # small values: asinh is not at all like log
plot(v1, asinh(v1), type = 'l')        # small values: asinh is close to linear in x
v3 = seq(30, 3000, length = 100)
plot(log(v3), asinh(v3), type = 'l')   # large values: asinh runs parallel to log
This is another example of a variance-stabilizing transformation, also mentioned in Chapters 4 and 8. Figure 5.12 is produced by the following code, which uses the flowCore package.
asinhtrsf = arcsinhTransform(a = 0.1, b = 1)
fcsBT = transform(fcsB,
transformList(colnames(fcsB)[-c(1, 2, 41)], asinhtrsf))
densityplot( ~CD3all, fcsB)
densityplot( ~CD3all, fcsBT)
► Question 5.6
How many dimensions does the following code use to split the data into 2 groups using $$k$$-means ?
kf = kmeansFilter("CD3all" = c("Pop1","Pop2"), filterId="myKmFilter")
fres = flowCore::filter(fcsBT, kf)
summary(fres)
## Pop1: 33429 of 91392 events (36.58%)
## Pop2: 57963 of 91392 events (63.42%)
fcsBT1 = flowCore::split(fcsBT, fres, population = "Pop1")
fcsBT2 = flowCore::split(fcsBT, fres, population = "Pop2")
Figure 5.13, generated by the following code, shows a naïve projection of the data into the two dimensions spanned by the CD3 and CD56 markers:
Figure 5.13: After transformation these cells were clustered using kmeans.
library("flowPeaks")
fp = flowPeaks(Biobase::exprs(fcsBT)[, c("CD3all", "CD56")])
plot(fp)
When plotting points that densely populate an area we should try to avoid overplotting. We saw some of the preferred techniques in Chapter 3; here we use contours and shading. This is done as follows:
Figure 5.14: Like Figure 5.13, using contours.
flowPlot(fcsBT, plotParameters = c("CD3all", "CD56"), logy = FALSE)
contour(fcsBT[, c(40, 19)], add = TRUE)
A more recent Bioconductor package, ggcyto, has been designed to enable the plotting of each patient in a different facet using ggplot.
Try comparing the output using this approach to what we did above using the following:
library("ggcyto")
library("labeling")
ggcd4cd8=ggcyto(fcsB,aes(x=CD4,y=CD8))
ggcd4=ggcyto(fcsB,aes(x=CD4))
ggcd8=ggcyto(fcsB,aes(x=CD8))
p1=ggcd4+geom_histogram(bins=60)
p1b=ggcd8+geom_histogram(bins=60)
asinhT = arcsinhTransform(a=0,b=1)
transl = transformList(colnames(fcsB)[-c(1,2,41)], asinhT)
fcsBT = transform(fcsB, transl)
p1t=ggcyto(fcsBT,aes(x=CD4))+geom_histogram(bins=90)
p2t=ggcyto(fcsBT,aes(x=CD4,y=CD8))+geom_density2d(colour="black")
p3t=ggcyto(fcsBT,aes(x=CD45RA,y=CD20))+geom_density2d(colour="black")
### 5.5.3 Density-based clustering
Data sets such as flow cytometry, that contain only a few markers and a large number of cells, are amenable to density-based clustering. This method looks for regions of high density separated by sparser regions. It has the advantage of being able to cope with clusters that are not necessarily convex. One implementation of such a method is called dbscan. Let’s look at an example by running the following code.
library("dbscan")
mc5 = Biobase::exprs(fcsBT)[, c(15,16,19,40,33)]
res5 = dbscan::dbscan(mc5, eps = 0.65, minPts = 30)
mc5df = data.frame(mc5, cluster = as.factor(res5$cluster))
table(mc5df$cluster)
##
## 0 1 2 3 4 5 6 7 8
## 75954 4031 5450 5310 259 257 63 25 43
ggplot(mc5df, aes(x=CD4, y=CD8, col=cluster))+geom_density2d()
ggplot(mc5df, aes(x=CD3all, y=CD20, col=cluster))+geom_density2d()
The output is shown in Figure 5.15. The overlaps of the clusters in the 2D projections enable us to appreciate the multidimensional nature of the clustering.
► Question 5.7
Try increasing the dimension to 6 by adding one more CD marker variable from the input data.
Then vary eps, and try to find four clusters such that at least two of them have more than 100 points.
Repeat this with 7 CD marker variables; what do you notice?
► Solution
#### How does density-based clustering (dbscan) work ?
The dbscan method clusters points in dense regions according to the density-connectedness criterion. It looks at small neighborhood spheres of radius $$\epsilon$$ to see if points are connected.
The building block of dbscan is the concept of density-reachability: a point $$q$$ is directly density-reachable from a point $$p$$ if it is not further away than a given threshold $$\epsilon$$, and if $$p$$ is surrounded by sufficiently many points such that one may consider $$p$$ (and $$q$$) to be part of a dense region. We say that $$q$$ is density-reachable from $$p$$ if there is a sequence of points $$p_1,...,p_n$$ with $$p_1 = p$$ and $$p_n = q$$, so that each $$p_{i + 1}$$ is directly density-reachable from $$p_i$$.
A cluster is then a subset of points that satisfy the following properties:
1. All points within the cluster are mutually density-connected.
2. If a point is density-connected to any point of the cluster, it is part of the cluster as well.
3. Groups of points must have at least MinPts points to count as a cluster.
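To make these definitions concrete, here is a minimal sketch (not from the book; the data and parameter values are made up) that counts, for each point of a simulated two-group dataset, the neighbors within radius eps, flags the core points, and then runs dbscan with the same parameters:
library("dbscan")
set.seed(1)
toydb = rbind(matrix(rnorm(200, mean = 0, sd = 0.3), ncol = 2),
              matrix(rnorm(200, mean = 3, sd = 0.3), ncol = 2))
eps = 0.4
minPts = 10
nn = frNN(toydb, eps = eps)                  # the eps-neighborhood of every point
ncore = sum(lengths(nn$id) + 1 >= minPts)    # "+ 1": frNN does not count the point itself
ncore                                        # number of core points
res = dbscan(toydb, eps = eps, minPts = minPts)
table(res$cluster)                           # cluster 0 collects the noise points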
It is important that the method looks for a high density of points in a neighborhood. Other methods exist that try to define clusters by a void, or “missing points”, between clusters; but these are vulnerable to the curse of dimensionality, which can create spurious “voids”.
## 5.6 Hierarchical clustering
Figure 5.16: A snippet of Linnæus’ taxonomy that clusters organisms according to feature similarities.
Hierarchical clustering is a bottom-up approach, where similar observations and subclasses are assembled iteratively. Figure 5.16 shows how Linnæus made nested clusters of organisms according to specific characteristics. Such hierarchical organization has been useful in many fields and goes back to Aristotle who postulated a ladder of nature.
Dendrogram ordering. As you can see in the example of Figure 5.17, the order of the labels does not matter within sibling pairs. Horizontal distances are usually meaningless, while the vertical distances do encode some information. These properties are important to remember when making interpretations about neighbors that are not monophyletic (i.e., not in the same subtree or clade), but appear as neighbors in the plot (for instance B and D in the right hand tree are non-monophyletic neighbors).
Top-down hierarchies. An alternative, top-down, approach takes all the objects and splits them sequentially according to a chosen criterion. Such so-called recursive partitioning methods are often used to make decision trees. They can be useful for prediction (say, survival time, given a medical diagnosis): we are hoping in those instances to split heterogeneous populations into more homogeneous subgroups by partitioning. In this chapter, we concentrate on the bottom-up approaches. We will return to partitioning when we talk about supervised learning and classification in Chapter 12.
### 5.6.1 How to compute (dis)similarities between aggregated clusters?
Figure 5.18: In the single linkage method, the distance between groups $$C_1$$ and $$C_2$$ is defined as the distance between the closest two points from the groups.
Figure 5.19: In the complete linkage method, the distance between groups $$C_1$$ and $$C_2$$ is defined as the maximum distance between pairs of points from the two groups.
When creating a hierarchical clustering by aggregation, we will need more than just the distances between all pairs of individual objects: we also need a way to calculate distances between the aggregates. There are different choices of how to define them, based on the object-object distances, and each choice results in a different type of hierarchical clustering.
A hierarchical clustering algorithm is easy enough to get started, by grouping the most similar observations together. But once an aggregation has occurred, one needs to specify how the distance between a newly formed cluster and all other points, or between two clusters, is computed.
• The minimal jump method, also called single linkage or nearest neighbor method, computes the distance between clusters as the smallest distance between any two points in the two clusters (as shown in Figure 5.18):
$\begin{equation*} d_{12} = \min_{i \in C_1,\, j \in C_2} d_{ij}. \end{equation*}$
This method tends to create clusters that look like contiguous strings of points. The cluster tree often looks like a comb.
• The maximum jump (or complete linkage) method defines the distance between clusters as the largest distance between any two objects in the two clusters, as represented in Figure 5.19:
$\begin{equation*} d_{12} = \max_{i \in C_1,\, j \in C_2} d_{ij}. \end{equation*}$
Figure 5.20: The Ward method maximizes the between group sum of squares (red edges), while minimizing the sums of squares within groups (black edges).
• The average linkage method is halfway between the two above:
$\begin{equation*} d_{12} = \frac{1}{|C_1| |C_2|}\sum_{i \in C_1,\, j \in C_2} d_{ij} \end{equation*}$
• Ward’s method takes an analysis of variance approach, where the goal is to minimize the variance within clusters. This method is very efficient; however, it tends to break the clusters up into ones of smaller size.
Advantages and disadvantages of the various distances between aggregates (Chakerian and Holmes 2012):

| Method | Pros | Cons |
|---|---|---|
| Single linkage | number of clusters | comb-like trees |
| Complete linkage | compact classes | one obs. can alter groups |
| Average linkage | similar size and variance | not robust |
| Centroid | robust to outliers | smaller number of clusters |
| Ward | minimizing an inertia | classes small if high variability |
Figure 5.21: Hierarchical clustering output has similar properties to a mobile: the branches can rotate freely from their suspension points.
These are the choices we have to make when building hierarchical clustering trees. An advantage of hierarchical clustering compared to the partitioning methods is that it offers a graphical diagnostic of the strength of groupings: the length of the inner edges in the tree.
When we have prior knowledge that the clusters are about the same size, using average linkage or Ward’s method of minimizing the within class variance is the best tactic.
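As a small illustration (a sketch with simulated data, not from the book), the same distance matrix can be fed to hclust with different linkage methods; note the comb-like tree produced by single linkage:
set.seed(2)
xm = rbind(matrix(rnorm(40, mean = 0), ncol = 2),
           matrix(rnorm(40, mean = 3), ncol = 2))
d = dist(xm)
par(mfrow = c(1, 4))
for (m in c("single", "complete", "average", "ward.D2"))
  plot(hclust(d, method = m), main = m, labels = FALSE)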
► Question 5.8
Hierarchical clustering for cell populations The Morder data are gene expression measurements for 156 genes on T cells of 3 types (naïve, effector, memory) from 10 patients (Holmes et al. 2005). Using the pheatmap package, make two simple heatmaps, without dendrogram or reordering, for Euclidean and Manhattan distances of these data.
► Question 5.9
Now, look at the differences in orderings in the hierarchical clustering trees with these two distances. What differences are noticeable?
Figure 5.23: This tree can be drawn in many different ways. The ordering of the leaves as it is appears here is $$(8,11,9,10,7,5,6,1,4,2,3)$$.
► Question 5.10
A hierarchical clustering tree is like the Calder mobile in Figure 5.21 that can swing around many internal pivot points, giving many orderings of the tips consistent with a given tree. Look at the tree in Figure 5.23. How many ways are there of ordering the tip labels while still maintaining consistency with that tree?
It is common to see heatmaps whose rows and/or columns are ordered based on a hierarchical clustering tree. Sometimes this makes some clusters look very strong – stronger than what the tree really implies. There are alternative ways of ordering the rows and columns in heatmaps; for instance, the package NeatMap uses ordination methods (these will be explained in Chapter 9) to find orderings.
## 5.7 Validating and choosing the number of clusters
The clustering methods we have described are tailored to deliver good groupings of the data under various constraints. However, keep in mind that clustering methods will always deliver groups, even if there are none. If, in fact, there are no real clusters in the data, a hierarchical clustering tree may show relatively short inner branches; but it is difficult to quantify this. In general it is important to validate your choice of clusters with more objective criteria.
One criterion to assess the quality of a clustering result is to ask to what extent it maximizes the between group differences while keeping the within-group distances small (maximizing the lengths of red lines and minimizing those of the black lines in Figure 5.20). We formalize this with the within-groups sum of squared distances (WSS):
$$$\text{WSS}_k=\sum_{\ell=1}^k \sum_{x_i \in C_\ell} d^2(x_i, \bar{x}_{\ell}) \tag{5.4}$$$
Here, $$k$$ is the number of clusters, $$C_\ell$$ is the set of objects in the $$\ell$$-th cluster, and $$\bar{x}_\ell$$ is the center of mass (the average point) in the $$\ell$$-th cluster. We state the dependence on $$k$$ of the WSS in Equation (5.4) as we are interested in comparing this quantity across different values of $$k$$, for the same cluster algorithm. Stated as it is, the WSS is however not a sufficient criterion: the smallest value of WSS would simply be obtained by making each point its own cluster. The WSS is a useful building block, but we need more sophisticated ideas than just looking at this number alone.
One idea is to look at $$\text{WSS}_k$$ as a function of $$k$$. This will always be a decreasing function, but if there is a pronounced region where it decreases sharply and then flattens out, we call this an elbow and might take this as a potential sweet spot for the number of clusters.
► Question 5.11
An alternative expression for $$\text{WSS}_k$$. Use R to compute the sum of distances between all pairs of points in a cluster and compare it to $$\text{WSS}_k$$. Can you see how $$\text{WSS}_k$$ can also be written:
$$$\text{WSS}_k=\sum_{\ell=1}^k \frac{1}{2 n_\ell} \sum_{x_i \in C_\ell} \sum_{x_j \in C_\ell} d^2(x_i,x_j), \tag{5.5}$$$
where $$n_\ell$$ is the size of the $$\ell$$-th cluster.
Question 5.11 shows us that the within-cluster sum of squares $$\text{WSS}_k$$ measures both the distances of all points in a cluster to its center and the average distance between all pairs of points in the cluster.
When looking at the behavior of various indices and statistics that help us decide how many clusters are appropriate for the data, it can be useful to look at cases where we actually know the right answer.
To start, we simulate data coming from four groups. We use the pipe (%>%) operator and the bind_rows function from dplyr to concatenate the four tibbles corresponding to each cluster into one big tibble. (The pipe operator passes the value on its left into the function on its right, which can make the flow of data easier to follow in code: f(x) %>% g(y) is equivalent to g(f(x), y).)
library("dplyr")
simdat = lapply(c(0, 8), function(mx) {
lapply(c(0,8), function(my) {
tibble(x = rnorm(100, mean = mx, sd = 2),
y = rnorm(100, mean = my, sd = 2),
class = paste(mx, my, sep = ":"))
}) %>% bind_rows
}) %>% bind_rows
simdat
## # A tibble: 400 x 3
## x y class
## <dbl> <dbl> <chr>
## 1 -2.42 -4.59 0:0
## 2 1.89 -1.56 0:0
## 3 0.558 2.17 0:0
## 4 2.51 -0.873 0:0
## 5 -2.52 -0.766 0:0
## 6 3.62 0.953 0:0
## 7 0.774 2.43 0:0
## 8 -1.71 -2.63 0:0
## 9 2.01 1.28 0:0
## 10 2.03 -1.25 0:0
## # … with 390 more rows
simdatxy = simdat[, c("x", "y")] # without class label
Figure 5.24: The simdat data colored by the class labels. Here, we know the labels since we generated the data – usually we do not know them.
ggplot(simdat, aes(x = x, y = y, col = class)) + geom_point() +
coord_fixed()
We compute the within-groups sum of squares for the clusters obtained from the $$k$$-means method:
Figure 5.25: The barchart of the WSS statistic as a function of $$k$$ shows that the last substantial jump is just before $$k=4$$. This indicates that the best choice for these data is $$k=4$$.
wss = tibble(k = 1:8, value = NA_real_)
wss$value[1] = sum(scale(simdatxy, scale = FALSE)^2)
for (i in 2:nrow(wss)) {
  km = kmeans(simdatxy, centers = wss$k[i])
  wss$value[i] = sum(km$withinss)
}
ggplot(wss, aes(x = k, y = value)) + geom_col()
► Question 5.12
1. Run the code above several times and compare the wss values for different runs. Why are they different?
2. Create a set of data with uniform instead of normal distributions, with the same range and dimensions as simdat. Compute the WSS values for these data. What do you conclude?
► Question 5.13
The so-called Calinski-Harabasz index uses the WSS and BSS (between group sums of squares). It is inspired by the $$F$$ statistic used in analysis of variance (the $$F$$ statistic is the ratio of the mean sum of squares explained by a factor to the mean residual sum of squares):
$\begin{equation*} \text{CH}(k)=\frac{\text{BSS}_k}{\text{WSS}_k}\times\frac{N-k}{k-1} \qquad \text{where} \quad \text{BSS}_k = \sum_{\ell=1}^k n_\ell(\bar{x}_{\ell}-\bar{x})^2, \end{equation*}$
where $$\bar{x}$$ is the overall center of mass (average point). Plot the Calinski-Harabasz index for the simdat data.
► Solution
### 5.7.1 Using the gap statistic
Taking the logarithm of the within-sum-of-squares ($$\log(\text{WSS}_k)$$) and comparing it to averages from simulated data with less structure can be a good way of choosing $$k$$. This is the basic idea of the gap statistic introduced by Tibshirani, Walther, and Hastie (2001). We compute $$\log(\text{WSS}_k)$$ for a range of values of $$k$$, the number of clusters, and compare it to that obtained on reference data of similar dimensions with various possible ‘non-clustered’ distributions. We can use uniformly distributed data as we did above or data simulated with the same covariance structure as our original data.
This algorithm is a Monte Carlo method that compares $$\log(\text{WSS}_k)$$ for the observed data to its average over simulations of data with similar, but unclustered, structure.
Algorithm for computing the gap statistic (Tibshirani, Walther, and Hastie 2001):
1. Cluster the data with $$k$$ clusters and compute $$\text{WSS}_k$$ for the various choices of $$k$$.
2. Generate $$B$$ plausible reference data sets, using Monte Carlo sampling from a homogeneous distribution and redo Step 1 above for these new simulated data. This results in $$B$$ new within-sum-of-squares for simulated data $$W_{kb}^*$$, for $$b=1,...,B$$.
3. Compute the $$\text{gap}(k)$$-statistic:
$\begin{equation*} \text{gap}(k) = \overline{l}_k - \log \text{WSS}_k \quad\text{with}\quad \overline{l}_k =\frac{1}{B}\sum_{b=1}^B \log W^*_{kb} \end{equation*}$
Note that the first term is expected to be bigger than the second one if the clustering is good (i.e., the WSS is smaller); thus the gap statistic will be mostly positive and we are looking for its highest value.
4. We can use the standard deviation
$\begin{equation*} \text{sd}_k^2 = \frac{1}{B-1}\sum_{b=1}^B\left(\log(W^*_{kb})-\overline{l}_k\right)^2 \end{equation*}$
to help choose the best $$k$$. Several choices are available, for instance, to choose the smallest $$k$$ such that
$\begin{equation*} \text{gap}(k) \geq \text{gap}(k+1) - s'_{k+1}\qquad \text{where } s'_{k+1}=\text{sd}_{k+1}\sqrt{1+1/B}. \end{equation*}$
The packages cluster and clusterCrit provide implementations.
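As a brief illustration of this selection rule (a sketch, assuming the simdatxy object created above; any small data matrix would do), the gap statistic can be computed with clusGap from the cluster package and the rule applied with maxSE:
library("cluster")
set.seed(10)
gss = clusGap(simdatxy, FUN = kmeans, K.max = 8, B = 50, nstart = 20)
maxSE(gss$Tab[, "gap"], gss$Tab[, "SE.sim"], method = "Tibs2001SEmax")
The extra argument nstart is passed through to kmeans, which makes the within-sum-of-squares less dependent on the random initialization.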
► Question 5.14
Make a function that plots the gap statistic as in Figure 5.27. Show the output for the simdat example dataset clustered with the pam function.
► Solution
Let’s now use the method on a real example. We load the Hiiragi data that we already explored in Chapter 3 (Ohnishi et al. 2014) and will see how the cells cluster.
library("Hiiragi2013")
data("x")
We start by choosing the 50 most variable genes (features). The intention behind this step is to reduce the influence of technical (or batch) effects: although individually small, when accumulated over all the 45101 features in x, many of which match genes that are weakly or not expressed, such effects are prone to suppress the biological signal unless we perform this feature selection step.
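The call to clusGap below uses a clustering function pamfun, whose definition is part of the (collapsed) solution to Question 5.14. A minimal sketch of such a function, wrapping cluster::pam in the interface that clusGap expects (a list with a cluster component), could look like this:
library("cluster")
pamfun = function(x, k)
  list(cluster = pam(x, k, cluster.only = TRUE))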
selFeats = order(rowVars(Biobase::exprs(x)), decreasing = TRUE)[1:50]
embmat = t(Biobase::exprs(x)[selFeats, ])
embgap = clusGap(embmat, FUN = pamfun, K.max = 24, verbose = FALSE)
k1 = maxSE(embgap$Tab[, "gap"], embgap$Tab[, "SE.sim"])
k2 = maxSE(embgap$Tab[, "gap"], embgap$Tab[, "SE.sim"],
method = "Tibs2001SEmax")
c(k1, k2)
## [1] 9 7
The default choice for the number of clusters, k1, is the first value of $$k$$ for which the gap is not larger than the first local maximum minus a standard error $$s$$ (see the manual page of the clusGap function). This gives a number of clusters $$k = 9$$, whereas the choice recommended by Tibshirani, Walther, and Hastie (2001) is the smallest $$k$$ such that $$\text{gap}(k) \geq \text{gap}(k+1) - s_{k+1}'$$, this gives $$k = 7$$. Let’s plot the gap statistic (Figure 5.28).
Figure 5.28: The gap statistic for the Hiiragi2013 data.
plot(embgap, main = "")
cl = pamfun(embmat, k = k1)$cluster
table(pData(x)[names(cl), "sampleGroup"], cl)
##                   cl
##                     1  2  3  4  5  6  7  8  9
##   E3.25            23 11  1  1  0  0  0  0  0
##   E3.25 (FGF4-KO)   0  0  1 16  0  0  0  0  0
##   E3.5 (EPI)        2  1  0  0  0  8  0  0  0
##   E3.5 (FGF4-KO)    0  0  8  0  0  0  0  0  0
##   E3.5 (PE)         0  0  0  0  9  2  0  0  0
##   E4.5 (EPI)        0  0  0  0  0  0  0  4  0
##   E4.5 (FGF4-KO)    0  0  0  0  0  0  0  0 10
##   E4.5 (PE)         0  0  0  0  0  0  4  0  0
Above we see the comparison of the clustering that we got from pamfun with the sample labels in the annotation of the data.
► Question 5.15
How do the results change if you use all the features in x, rather than subsetting the top 50 most variable genes?
### 5.7.2 Cluster validation using the bootstrap
We saw the bootstrap principle in Chapter 4: ideally, we would like to use many new samples (sets of data) from the underlying data generating process, apply our clustering method to each of them, and then see how stable the clusterings are, or how much they change, using an index such as those we used above to compare clusterings. Of course, we don’t have these additional samples. So we are, in fact, going to create new datasets simply by taking different random subsamples of the data, look at the different clusterings we get each time, and compare them. Tibshirani, Walther, and Hastie (2001) recommend using bootstrap resampling to infer the number of clusters using the gap statistic.
We will continue using the Hiiragi2013 data. Here we follow the investigation of the hypothesis that the inner cell mass (ICM) of the mouse blastocyst in embryonic day 3.5 (E3.5) falls “naturally” into two clusters, corresponding to pluripotent epiblast (EPI) versus primitive endoderm (PE), while the data for embryonic day 3.25 (E3.25) do not yet show this symmetry breaking. We will not use the true group labels in our clustering and only use them in the final interpretation of the results. We will apply the bootstrap to the two different data sets (E3.5 and E3.25) separately. Each step of the bootstrap will generate a clustering of a random subset of the data, and we will need to compare these through a consensus of an ensemble of clusters. There is a useful framework for this in the clue package (Hornik 2005). The function clusterResampling, taken from the supplement of Ohnishi et al. (2014), implements this approach:
clusterResampling = function(x, ngenes = 50, k = 2, B = 250, prob = 0.67) {
  mat = Biobase::exprs(x)
  ce = cl_ensemble(list = lapply(seq_len(B), function(b) {
    selSamps = sample(ncol(mat), size = round(prob * ncol(mat)),
                      replace = FALSE)
    submat = mat[, selSamps, drop = FALSE]
    sel = order(rowVars(submat), decreasing = TRUE)[seq_len(ngenes)]
    submat = submat[sel, , drop = FALSE]
    pamres = pam(t(submat), k = k)
    pred = cl_predict(pamres, t(mat[sel, ]), "memberships")
    as.cl_partition(pred)
  }))
  cons = cl_consensus(ce)
  ag = sapply(ce, cl_agreement, y = cons)
  list(agreements = ag, consensus = cons)
}
The function clusterResampling performs the following steps:
1. Draw a random subset of the data (the data are either all E3.25 or all E3.5 samples) by selecting 67% of the samples without replacement.
2. Select the top ngenes features by overall variance (in the subset).
3. Apply partitioning around medoids (pam) clustering to the subset and predict the cluster memberships of the samples that were not in the subset with the cl_predict method from the clue package, through their proximity to the cluster centers.
4. Repeat Steps 1-3 B times.
5. Apply consensus clustering (cl_consensus).
6. For each of the B clusterings, measure the agreement with the consensus through the function cl_agreement. Here a good agreement is indicated by a value of 1, and less agreement by smaller values.
If the agreement is generally high, then the clustering into $$k$$ classes can be considered stable and reproducible; inversely, if it is low, then no stable partition of the samples into $$k$$ clusters is evident. As a measure of between-cluster distance for the consensus clustering, the Euclidean dissimilarity of the memberships is used, i.e., the square root of the minimal sum of the squared differences of $$\mathbf{u}$$ and all column permutations of $$\mathbf{v}$$, where $$\mathbf{u}$$ and $$\mathbf{v}$$ are the cluster membership matrices. As agreement measure for Step 6, the quantity $$1 - d/m$$ is used, where $$d$$ is the Euclidean dissimilarity, and $$m$$ is an upper bound for the maximal Euclidean dissimilarity.
iswt = (x$genotype == "WT")
cr1 = clusterResampling(x[, x$Embryonic.day == "E3.25" & iswt])
cr2 = clusterResampling(x[, x$Embryonic.day == "E3.5" & iswt])
The results are shown in Figure 5.30. They confirm the hypothesis that the E3.5 data fall into two clusters.
ag1 = tibble(agreements = cr1$agreements, day = "E3.25")
ag2 = tibble(agreements = cr2$agreements, day = "E3.5")
p1 <- ggplot(bind_rows(ag1, ag2), aes(x = day, y = agreements)) +
geom_boxplot() +
ggbeeswarm::geom_beeswarm(cex = 1.5, col = "#0000ff40")
mem1 = tibble(y = sort(cl_membership(cr1$consensus)[, 1]),
              x = seq(along = y), day = "E3.25")
mem2 = tibble(y = sort(cl_membership(cr2$consensus)[, 1]),
              x = seq(along = y), day = "E3.5")
p2 <- ggplot(bind_rows(mem1, mem2), aes(x = x, y = y, col = day)) +
geom_point() + facet_grid(~ day, scales = "free_x")
gridExtra::grid.arrange(p1, p2, widths = c(2.4,4.0))
Computational complexity. An algorithm is said to be $$O(n^k)$$, if, as $$n$$ gets larger, the resource consumption (CPU time or memory) grows proportionally to $$n^k$$. There may be other (sometimes considerable) baseline costs, or costs that grow proportionally to lower powers of $$n$$, but these always become negligible compared to the leading term as $$n\to\infty$$.
#### Computational and memory issues
It is important to remember that the computation of all versus all distances of $$n$$ objects is an $$O(n^2)$$ operation (in time and memory). Classic hierarchical clustering approaches (such as hclust in the stats package) are even $$O(n^3)$$ in time. For large $$n$$, this may become impractical: for example, the distance matrix for one million objects, stored as 8-byte floating point numbers, would take up about 4 terabytes, and an hclust-like algorithm would run 30 years even under the optimistic assumption that each of its iterations only takes a nanosecond. We can avoid the complete computation of the all-vs-all distance matrix. For instance, $$k$$-means has the advantage of only requiring $$O(n)$$ computations, since it only keeps track of the distances between each object and the cluster centers, whose number remains the same even if $$n$$ increases.
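A quick back-of-the-envelope check of these numbers in R (simple arithmetic, not a benchmark):
n = 1e6
n * (n - 1) / 2 * 8 / 1e12        # lower triangle of the distance matrix, in terabytes
n^3 * 1e-9 / (3600 * 24 * 365)    # n^3 iterations at one nanosecond each, in years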
Fast implementations such as fastcluster (Müllner 2013) and dbscan have been carefully optimized to deal with a large number of observations.
## 5.8 Clustering as a means for denoising
Consider a set of measurements that reflect some underlying true values (say, species represented by DNA sequences from their genomes), but have been degraded by technical noise. Clustering can be used to remove such noise.
### 5.8.1 Noisy observations with different baseline frequencies
Suppose that we have a bivariate distribution of observations made with the same error variances. However, the sampling is from two groups that have very different baseline frequencies. Suppose, further, that the errors are independent and bivariate normally distributed. We have $$10^{3}$$ instances of seq1 and $$10^{5}$$ instances of seq2, as generated for instance by the code:
Figure 5.31: Although both groups have noise distributions with the same variances, the apparent radii of the groups are very different. The $$10^{5}$$ instances in seq2 have many more opportunities for errors than what we see in seq1, of which there are only $$10^{3}$$. Thus we see that frequencies are important in clustering the data.
library("mixtools")
library("ggplot2")
seq1 = rmvnorm(n = 1e3, mu = -c(1, 1), sigma = 0.5 * diag(c(1, 1)))
seq2 = rmvnorm(n = 1e5, mu = c(1, 1), sigma = 0.5 * diag(c(1, 1)))
twogr = data.frame(
rbind(seq1, seq2),
seq = factor(c(rep(1, nrow(seq1)),
rep(2, nrow(seq2))))
)
colnames(twogr)[1:2] = c("x", "y")
ggplot(twogr, aes(x = x, y = y, colour = seq,fill = seq)) +
geom_hex(alpha = 0.5, bins = 50) + coord_fixed()
The observed values would look as in Figure 5.31.
► Question 5.16
Take the data seq1 and seq2 and cluster them into two groups according to distance from group center. Do you think the results should depend on the frequencies of each of the two sequence types?
► Solution
See Kahneman (2011) for a book-length treatment of our natural heuristics and the ways in which they can mislead us when we make probability calculations (we recommend especially Chapters 14 and 15).
Simulate n = 2000 binary vectors of length len = 200 that indicate the quality of n sequencing reads of length len. For simplicity, let us assume that sequencing errors occur independently and uniformly with probability perr = 0.001. That is, we only care whether a base was called correctly (TRUE) or not (FALSE).
n = 2000
len = 200
perr = 0.001
seqs = matrix(runif(n * len) >= perr, nrow = n, ncol = len)
Now, compute all pairwise distances between reads.
dists = as.matrix(dist(seqs, method = "manhattan"))
For various values of the number of reads k (from 2 to n), the maximum distance within a random subset of k reads is computed by the code below and shown in Figure 5.32.
Figure 5.32: The diameter of a set of sequences as a function of the number of sequences.
library("tibble")
dfseqs = tibble(
k = 10 ^ seq(log10(2), log10(n), length.out = 20),
diameter = vapply(k, function(i) {
s = sample(n, i)
max(dists[s, s])
}, numeric(1)))
ggplot(dfseqs, aes(x = k, y = diameter)) + geom_point()+geom_smooth()
We will now improve the 16SrRNA-read clustering using a denoising mechanism that incorporates error probabilities.
### 5.8.2 Denoising 16S rRNA sequences
What are the data? In the bacterial 16SrRNA gene there are so-called variable regions that are taxa-specific. These provide fingerprints that enable taxon identification (calling different groups of bacteria taxa rather than species highlights the approximate nature of the concept, as the notion of species is more fluid in bacteria than, say, in animals). The raw data are FASTQ-files with quality scored sequences of PCR-amplified DNA regions. We use an iterative alternating approach, similar to the EM algorithm we saw in Chapter 4, to build a probabilistic noise model from the data. We call this a de novo method, because we use clustering, and we use the cluster centers as our denoised sequence variants (a.k.a. Amplicon Sequence Variants, ASVs, see (Callahan, McMurdie, and Holmes 2017)). After finding all the denoised variants, we create contingency tables of their counts across the different samples. We will show in Chapter 10 how these tables can be used to infer properties of the underlying bacterial communities using networks and graphs.
In order to improve data quality, one often has to start with the raw data and model all the sources of variation carefully. One can think of this as an example of cooking from scratch (see the gruesome details in Ben J Callahan et al. (2016) and Exercise 5.5).
► Question 5.17
Suppose that we have two sequences of length 200 (seq1 and seq2) present in our sample at very different abundances. We are told that the technological sequencing errors occur as independent Bernoulli(0.0005) random events for each nucleotide.
What is the distribution of the number of errors per sequence?
► Solution
Figure 5.33 shows how close this distribution is to a Poisson distribution.
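As a quick numeric check (a sketch, not the book’s solution code), we can compare the Binomial(200, 0.0005) probabilities with those of a Poisson distribution with the same mean, $$200 \times 0.0005 = 0.1$$:
round(rbind(binomial = dbinom(0:3, size = 200, prob = 5e-4),
            poisson  = dpois(0:3, lambda = 0.1)), 6)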
### 5.8.3 Infer sequence variants
The DADA method (Divisive Amplicon Denoising Algorithm, Rosen et al. (2012)) uses a parameterized model of substitution errors that distinguishes sequencing errors from real biological variation. The model computes the probabilities of base substitutions, such as seeing an $${\tt A}$$ instead of a $${\tt C}$$. It assumes that these probabilities are independent of the position along the sequence. Because error rates vary substantially between sequencing runs and PCR protocols, the model parameters are estimated from the data themselves using an EM-type approach. A read is classified as noisy or exact given the current parameters, and the noise model parameters are updated accordingly. (In the case of a large data set, the noise model estimation step does not have to be done on the complete set; see https://benjjneb.github.io/dada2/bigdata.html for tricks and tools when dealing with large data sets.)
The dereplicated sequences (F stands for forward strand and R for reverse strand) are read in, and then divisive denoising and estimation is run with the dada function as in the following code:
derepFs = readRDS(file = "../data/derepFs.rds")
derepRs = readRDS(file = "../data/derepRs.rds")
ddF = dada(derepFs, err = NULL, selfConsist = TRUE)
ddR = dada(derepRs, err = NULL, selfConsist = TRUE)
In order to verify that the error transition rates have been reasonably well estimated, we inspect the fit between the observed error rates (black points) and the fitted error rates (black lines) (Figure 5.34).
plotErrors(ddF)
Once the errors have been estimated, the algorithm is rerun on the data to find the sequence variants:
dadaFs = dada(derepFs, err = ddF[[1]]$err_out, pool = TRUE)
dadaRs = dada(derepRs, err = ddR[[1]]$err_out, pool = TRUE)
Note: The sequence inference function can run in two different modes: independent inference by sample (pool = FALSE), and pooled inference from the sequencing reads combined from all samples. Independent inference has two advantages: computation time is linear as a function of the number of samples, and memory requirements are constant. Pooled inference is more computationally taxing; however, it can improve the detection of rare variants that occur just once or twice in an individual sample but more often across all samples. As this dataset is not particularly large, we performed pooled inference.
Sequence inference removes nearly all substitution and indel errors from the data. (The term indel stands for insertion-deletion: when comparing two sequences that differ by a small stretch of characters, it is a matter of viewpoint whether this is an insertion or a deletion, thus the name.) We merge the inferred forward and reverse sequences, while removing paired sequences that do not perfectly overlap, as a final control against residual errors.
mergers = mergePairs(dadaFs, derepFs, dadaRs, derepRs)
We produce a contingency table of counts of ASVs. This is a higher-resolution analogue of the “OTU table” (OTU: operational taxonomic unit), i.e., a samples by features table whose cells contain the number of times each sequence variant was observed in each sample.
seqtab.all = makeSequenceTable(mergers[!grepl("Mock",names(mergers))])
► Question 5.18
Explore the components of the objects dadaRs and mergers.
► Solution
Chimeras are sequences that are artificially created during PCR amplification by the melding of two (in rare cases, more) of the original sequences. To complete our denoising workflow, we remove them with a call to the function removeBimeraDenovo, leaving us with a clean contingency table that we will use later on.
seqtab = removeBimeraDenovo(seqtab.all)
► Question 5.19
Why do you think the chimeras are quite easy to recognize?
What proportion of the reads were chimeric in the seqtab.all data?
What proportion of unique sequence variants are chimeric?
► Solution
## 5.9 Summary of this chapter
Of a feather: how to compare observations We saw at the start of the chapter how finding the right distance is an essential first step in a clustering analysis; this is a case where the garbage in, garbage out motto is in full force. Always choose a distance that is scientifically meaningful and compare output from as many distances as possible; sometimes the same data require different distances when different scientific objectives are pursued.
Two ways of clustering We saw there are two approaches to clustering:
• iterative partitioning approaches such as $$k$$-means and $$k$$-medoids (PAM) that alternate between estimating the cluster centers and assigning points to them;
• hierarchical clustering approaches that first agglomerate points, and subsequently the growing clusters, into nested sequences of sets that can be represented by hierarchical clustering trees.
Biological examples Clustering is an important tool for finding latent classes in single cell measurements, especially in immunology and single cell data analyses. We saw how density-based clustering is useful for lower-dimensional data where sparsity is not an issue.
Validating the clusters Clustering algorithms always deliver clusters, so we need to assess their quality and choose the number of clusters carefully. These validation steps are performed using visualization tools and by repeating the clustering on many resamples of the data. We saw how statistics such as WSS/BSS or $$\log(\text{WSS})$$ can be calibrated using simulations on data where we understand the group structure, and can provide useful benchmarks for choosing the number of clusters on new data. Of course, the use of biologically relevant information to inform and confirm the meaning of clusters is always the best validation approach.
Distances and probabilities Finally: distances are not everything. We showed how important it is to take into account baseline frequencies and local densities when clustering. This is essential in cases such as clustering to denoise 16S rRNA sequence reads, where the true classes or taxa groups occur at very different frequencies.
## 5.10 Further reading
For a complete book on finding groups in data, see Kaufman and Rousseeuw (2009). The vignette of the clusterExperiment package contains a complete workflow for generating clusters using many different techniques, including a preliminary dimension reduction (PCA) that we will cover in Chapter 7. There is no consensus on methods for deciding how many clusters are needed to describe data in the absence of contiguous biological information. However, making hierarchical clusters of the strong forms is a method that has the advantage of allowing the user to decide how far down to cut the hierarchical tree, while being careful not to cut in places where the inner branches are short. See the vignette of clusterExperiment for an application to single cell RNA experimental data.
In analyzing the Hiiragi data, we used cluster probabilities, a concept already mentioned in Chapter 4, where the EM algorithm used them as weights to compute expected value statistics. The notion of probabilistic clustering is well-developed in the Bayesian nonparametric mixture framework, which enriches the mixture models we covered in Chapter 4 to more general settings. See Dundar et al. (2014) for a real example using this framework for flow cytometry. In the denoising and assignment of high-throughput sequencing reads to specific strains of bacteria or viruses, clustering is essential. In the presence of noise, clustering into groups of true strains of very unequal sizes can be challenging. Using the data to create a noise model enables both denoising and cluster assignment concurrently. Denoising algorithms such as those by Rosen et al. (2012) or Benjamin J Callahan et al. (2016) use an iterative workflow inspired by the EM method (McLachlan and Krishnan 2007).
## 5.11 Exercises
► Exercise 5.1
We can define the average dissimilarity of a point $$x_i$$ to a cluster $$C_k$$ as the average of the distances from $$x_i$$ to all points in $$C_k$$. Let $$A(i)$$ be the average dissimilarity of $$x_i$$ to all other points in the cluster that $$x_i$$ belongs to. Let $$B(i)$$ be the lowest average dissimilarity of $$x_i$$ to any other cluster of which $$x_i$$ is not a member. The cluster with this lowest average dissimilarity is said to be the neighboring cluster of $$x_i$$, because it is the next best fit cluster for point $$x_i$$. The silhouette index is
$\begin{equation*} S(i)=\frac{B(i)-A(i)}{\max\left(A(i),B(i)\right)}. \end{equation*}$
5.1.a Compute the silhouette index for the simdat data we simulated in Section 5.7.
library("cluster")
pam4 = pam(simdatxy, 4)
sil = silhouette(pam4, 4)
plot(sil, col=c("red","green","blue","purple"), main="Silhouette")
5.1.b Change the number of clusters $$k$$ and assess which $$k$$ gives the best silhouette index.
5.1.c Now, repeat this for groups that have uniform (unclustered) data distributions over a whole range of values.
► Exercise 5.2
5.2.a Make a “character” representation of the distance between the 20 locations in the dune data from the vegan package using the function symnum.
5.2.b Make a heatmap plot of these distances.
► Exercise 5.3
5.3.a Load the spirals data from the kernlab package. Plot the results of using $$k$$-means on the data. This should give you something similar to Figure 5.35.
5.3.b You’ll notice that the clustering in Figure 5.35 seems unsatisfactory. Show how a different method, such as specc or dbscan, could cluster spirals data in a more useful manner.
5.3.c Repeat the dbscan clustering with different parameters. How robust is the number of groups?
► Exercise 5.4
Looking at graphical representations in simple two-dimensional maps can often reveal important clumping patterns. We saw an example for this with the map that enabled Snow to discover the source of the London cholera outbreak. Such clusterings can often indicate important information about hidden variables acting on the observations. Look at a map for breast cancer incidence in the US at:
http://www.huffingtonpost.com/bill-davenhall/post_1663_b_817254.html (Mandal et al. 2009); the areas of high incidence seem spatially clustered. Can you guess the reason(s) for this clustering and high incidence rates on the West and East coasts and around Chicago?
► Exercise 5.5
Amplicon bioinformatics: from raw reads to dereplicated sequences. As a supplementary exercise, we provide the intermediate steps necessary to a full data preprocessing workflow for denoising 16S rRNA sequences. We start by setting the directories and loading the downloaded data:
base_dir = "../data"
miseq_path = file.path(base_dir, "MiSeq_SOP")
filt_path = file.path(miseq_path, "filtered")
fnFs = sort(list.files(miseq_path, pattern="_R1_001.fastq"))
fnRs = sort(list.files(miseq_path, pattern="_R2_001.fastq"))
sampleNames = sapply(strsplit(fnFs, "_"), `[`, 1)
if (!file_test("-d", filt_path)) dir.create(filt_path)
filtFs = file.path(filt_path, paste0(sampleNames, "_F_filt.fastq.gz"))
filtRs = file.path(filt_path, paste0(sampleNames, "_R_filt.fastq.gz"))
fnFs = file.path(miseq_path, fnFs)
fnRs = file.path(miseq_path, fnRs)
print(length(fnFs))
## [1] 20
The data are highly-overlapping Illumina Miseq $$2\times 250$$ amplicon sequences from the V4 region of the 16S rRNA gene (Kozich et al. 2013). There were originally 360 fecal samples collected longitudinally from 12 mice over the first year of life. These were collected by Schloss et al. (2012) to investigate the development and stabilization of the murine microbiome. We have selected 20 samples to illustrate how to preprocess the data.
We will need to filter out low-quality reads and trim them to a consistent length. While generally recommended filtering and trimming parameters serve as a starting point, no two datasets are identical and therefore it is always worth inspecting the quality of the data before proceeding. We show the sequence quality plots for the two first samples in Figure 5.36. They are generated by:
plotQualityProfile(fnFs[1:2]) + ggtitle("Forward")
plotQualityProfile(fnRs[1:2]) + ggtitle("Reverse")
Note that we also see the background distribution of quality scores at each position in Figure 5.36 as a grey-scale heat map. The dark colors correspond to higher frequency.
► Exercise 5.6
Generate similar plots for four randomly selected sets of forward and reverse reads. Compare forward and reverse read qualities; what do you notice?
► Exercise 5.7
Here, the forward reads maintain high quality throughout, while the quality of the reverse reads drops significantly at about position 160. Therefore, we truncate the forward reads at position 240 and trim their first 10 nucleotides, as these positions are of lower quality. The reverse reads are truncated at position 160. Combine these trimming parameters with standard filtering parameters, and remember to enforce a maximum of 2 expected errors per read. (Hint: Trim and filter on paired reads jointly, i.e., both reads must pass the filter for the pair to pass. The input arguments should be chosen following the dada2 vignette carefully. We recommend filtering out all reads with any ambiguous nucleotides.)
► Exercise 5.8
Use R to create a map like the one shown in Figure 5.2. Hint: go to the website of the British National Archives and download street addresses of hits, use an address resolution service to convert these into geographic coordinates, and display these as points on a map of London.
### References
Aure, Miriam Ragle, Valeria Vitelli, Sandra Jernström, Surendra Kumar, Marit Krohn, Eldri U Due, Tonje Husby Haukaas, et al. 2017. “Integrative Clustering Reveals a Novel Split in the Luminal A Subtype of Breast Cancer with Impact on Outcome.” Breast Cancer Research 19 (1). BioMed Central: 44.
Bendall, Sean C, Garry P Nolan, Mario Roederer, and Pratip K Chattopadhyay. 2012. “A Deep Profiler’s Guide to Cytometry.” Trends in Immunology 33 (7). Elsevier: 323–32.
Callahan, Benjamin J, Paul J McMurdie, and Susan P Holmes. 2017. “Exact Sequence Variants Should Replace Operational Taxonomic Units in Marker Gene Data Analysis.” ISME Journal. Nature publishing, 1–5.
Callahan, Benjamin J, Paul J McMurdie, Michael J Rosen, Andrew W Han, Amy J Johnson, and Susan P Holmes. 2016. “DADA2: High Resolution Sample Inference from Amplicon Data.” Nature Methods. Nature Publishing, 1–4.
Callahan, Ben J, Kris Sankaran, Julia A Fukuyama, Paul J McMurdie, and Susan P Holmes. 2016. “Bioconductor Workflow for Microbiome Data Analysis: From Raw Reads to Community Analyses.” F1000Research 5. Faculty of 1000 Ltd.
Caporaso, J.G., J. Kuczynski, J. Stombaugh, K. Bittinger, F.D. Bushman, E.K. Costello, N. Fierer, et al. 2010. “QIIME Allows Analysis of High-Throughput Community Sequencing Data.” Nature Methods 7 (5). Nature Publishing Group: 335–36.
Chakerian, John, and Susan Holmes. 2012. “Computational Tools for Evaluating Phylogenetic and Hierarchical Clustering Trees.” Journal of Computational and Graphical Statistics 21 (3). Taylor & Francis Group: 581–99.
Diday, Edwin, and M Paula Brito. 1989. “Symbolic Cluster Analysis.” In Conceptual and Numerical Analysis of Data, 45–84. Springer.
https://www.yaclass.in/p/science-state-board/class-8/light-6185/re-80069884-1b2d-44ce-a8df-e08ef2f17e06 | ### Theory:
The refraction of light rays as they travel from one medium to another obeys two laws, known as Snell's laws of refraction.
They are given below:
• The incident ray, the refracted ray and the normal at the point of incidence all lie in the same plane.
• The ratio of the sine of the angle of incidence (i) to the sine of the angle of refraction (r) is a constant for a given pair of media; this constant is the refractive index of the second medium with respect to the first.
$\dfrac{\sin i}{\sin r} = \text{constant} = \mu$
Snell's law can also be derived from Fermat's principle.
In the refraction diagram (see the reference below):
$\theta_1$ - Angle of incidence
$\theta_2$ - Angle of refraction
$n_1$ - Refractive index of Medium 1 (Air)
$n_2$ - Refractive index of Medium 2 (Liquid)
$n_1 \sin\theta_1 = n_2 \sin\theta_2 \quad\Rightarrow\quad \dfrac{\sin\theta_1}{\sin\theta_2} = \dfrac{n_2}{n_1} = \text{constant}$
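As a quick numerical check (the numbers here are illustrative additions, not from the original lesson): for light passing from air ($n_1 \approx 1.00$) into water ($n_2 \approx 1.33$) with an angle of incidence $\theta_1 = 30^\circ$,
$\sin\theta_2 = \dfrac{n_1}{n_2}\sin\theta_1 = \dfrac{1.00}{1.33}\times 0.50 \approx 0.376, \qquad \theta_2 \approx 22^\circ,$
so the ray bends toward the normal as it enters the denser medium.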
Reference:
https://commons.wikimedia.org/wiki/File:Snell%27s_Law.svg
https://docs.mosek.com/portfolio-cookbook/transaction.html | # 6 Transaction costs¶
Rebalancing a portfolio generates turnover, i.e., buying and selling of securities to change the portfolio composition. The basic Markowitz model assumes that there are no costs associated with trading, but in reality, turnover incurs expenses. In this chapter we extend the basic model to take this into account in the form of transaction cost constraints. We also show some practical constraints, which can limit turnover by limiting position sizes.
We can classify transaction costs into two types [WO11]:
• Fixed costs are independent of transaction volume. These include brokerage commissions and transfer fees.
• Variable costs depend on the transaction volume. These comprise execution costs such as market impact, bid/ask spread, or slippage; and opportunity costs of failed or incomplete execution.
Note that to be able to compare transaction costs with returns and risk, we need to aggregate them over the length of the investment time period.
In the optimization problem, let $$\tilde{\mathbf{x}} = \mathbf{x} - \mathbf{x}_0$$ denote the change in the portfolio with respect to the initial holdings $$\mathbf{x}_0$$. Then in general we can take into account transaction costs with the function $$C$$, where $$C(\tilde{\mathbf{x}})$$ is the total transaction cost incurred by the change $$\tilde{\mathbf{x}}$$ in the portfolio. Here we assume that transaction costs are separable, i.e., the total cost is the sum of the costs associated with each security: $$C(\tilde{\mathbf{x}}) = \sum_{i=1}^{n} C_i(\tilde{x}_i)$$, where the function $$C_i(\tilde{x}_i)$$ specifies the transaction costs incurred for the change in the holdings of security $$i$$. We can then write the MVO model with transaction cost in the following way:
(6.1)$\begin{split}\begin{array}{lrcl} \mbox{maximize} & \EMean^\mathsf{T}\mathbf{x} & & \\ \mbox{subject to} & \mathbf{1}^\mathsf{T}\mathbf{x} + \sum_{i=1}^{n} C_i(\tilde{x}_i) & = & \mathbf{1}^\mathsf{T}\mathbf{x}_0,\\ & \mathbf{x}^\mathsf{T}\ECov\mathbf{x} & \leq & \gamma^2,\\ & \mathbf{x} & \in & \mathcal{F}. \end{array}\end{split}$
The constraint $$\mathbf{1}^\mathsf{T}\mathbf{x} + \sum_{i=1}^{n} C_i(\tilde{x}_i) = \mathbf{1}^\mathsf{T}\mathbf{x}_0$$ expresses the self-financing property of the portfolio. This means that no external cash is put into or taken out of the portfolio; the costs are paid from the existing portfolio components. We can, for example, designate one of the securities as a cash account.
## 6.1 Variable transaction costs¶
The simplest model that handles variable costs makes the assumption that costs grow linearly with the trading volume [BBD+17, LMFB07]. We can use linear costs, for example, to model the cost related to the bid/ask spread, slippage, borrowing or shorting cost, or fund management fees. Let the transaction cost function for security $$i$$ be given by
$\begin{split}C_i(\tilde{x}_i) = \left\lbrace \begin{array}{lll} v_i^+ \tilde{x}_i, & \tilde{x}_i \geq 0, & \\ -v_i^- \tilde{x}_i, & \tilde{x}_i < 0, & \end{array}\right.\end{split}$
where $$v_i^+$$ and $$v_i^-$$ are the cost rates associated with buying and selling security $$i$$. By introducing positive and negative part variables $$\tilde{x}_i^+ = \mathrm{max}(\tilde{x}_i, 0)$$ and $$\tilde{x}_i^- = \mathrm{max}(-\tilde{x}_i, 0)$$ we can linearize this constraint to $$C_i(\tilde{x}_i) = v_i^+\tilde{x}_i^+ + v_i^-\tilde{x}_i^-$$. We can handle any piecewise linear convex transaction cost function in a similar way. After modeling the variables $$\tilde{x}_i^+$$ and $$\tilde{x}_i^-$$ as in Sec. 10.1.1.1 (Maximum function), the optimization problem will then become
(6.2)$\begin{split}\begin{array}{lrcl} \mbox{maximize} & \EMean^\mathsf{T}\mathbf{x} & & \\ \mbox{subject to} & \mathbf{1}^\mathsf{T}\mathbf{x} + \langle\mathbf{v}^+,\tilde{\mathbf{x}}^+\rangle + \langle\mathbf{v}^-,\tilde{\mathbf{x}}^-\rangle & = & 1,\\ & \tilde{\mathbf{x}} & = & \tilde{\mathbf{x}}^+ - \tilde{\mathbf{x}}^-,\\ & \tilde{\mathbf{x}}^+, \tilde{\mathbf{x}}^- & \geq & 0,\\ & \mathbf{x}^\mathsf{T}\ECov\mathbf{x} & \leq & \gamma^2,\\ & \mathbf{x} & \in & \mathcal{F}. \end{array}\end{split}$
In this model the budget constraint ensures that the variables $$\tilde{\mathbf{x}}^+$$ and $$\tilde{\mathbf{x}}^-$$ will not both become positive in any optimal solution.
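For concreteness, a minimal MOSEK Fusion sketch of the budget and cost constraints in (6.2) could look like the following. This is an illustrative sketch only, not the cookbook's own code, written in the risk-penalty form used in the examples later in this chapter; `N`, `x0`, `vp`, `vm`, `m`, `G` and the risk-aversion number `delta` are assumed to be given.

```python
import numpy as np
from mosek.fusion import Model, Expr, Domain, ObjectiveSense

# Sketch only: N, x0, vp, vm, m, G, delta are assumed to exist already.
with Model("variable-tcost") as M:
    x  = M.variable("x",  N, Domain.unbounded())
    xp = M.variable("xp", N, Domain.greaterThan(0.0))   # buys,  (x - x0)^+
    xm = M.variable("xm", N, Domain.greaterThan(0.0))   # sells, (x - x0)^-
    s  = M.variable("s", 1, Domain.unbounded())          # variance term

    # x - x0 = xp - xm
    M.constraint(Expr.sub(Expr.sub(x, x0), Expr.sub(xp, xm)), Domain.equalsTo(0.0))
    # Self-financing budget with linear costs: 1'x + vp'xp + vm'xm = 1
    M.constraint(Expr.add([Expr.sum(x), Expr.dot(vp, xp), Expr.dot(vm, xm)]),
                 Domain.equalsTo(1.0))
    # Portfolio variance via the factor G (Sigma = G G'), as in earlier chapters
    M.constraint(Expr.vstack(s, 1.0, Expr.mul(G.transpose(), x)),
                 Domain.inRotatedQCone())

    M.objective(ObjectiveSense.Maximize,
                Expr.sub(Expr.dot(m, x), Expr.mul(delta, s)))
    M.solve()
```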
## 6.2 Fixed transaction costs¶
We can extend the previous model with fixed transaction costs. Considering fixed costs is a way to discourage trading very small amounts, thus obtaining a sparse portfolio vector, i.e., one that has many zero entries.
Let $$f_i^+$$ and $$f_i^-$$ be the fixed costs associated with buying and selling security $$i$$. The extended transaction cost function is given by
$\begin{split}C_i(\tilde{x}_i) = \left\lbrace \begin{array}{lll} 0, & \tilde{x}_i = 0, & \\ f_i^+ + v_i^+ \tilde{x}_i, & \tilde{x}_i > 0, & \\ f_i^- - v_i^- \tilde{x}_i, & \tilde{x}_i < 0. & \end{array}\right.\end{split}$
This function is not convex, but we can still formulate a mixed integer optimization problem based on Sec. 10.2.1.4 (Positive and negative part) by introducing new variables. Let $$\mathbf{y}^+$$ and $$\mathbf{y}^-$$ be binary vectors. Then the optimization problem with transaction costs will become
(6.3)$\begin{split}\begin{array}{lrcl} \mbox{maximize} & \EMean^\mathsf{T}\mathbf{x} & & \\ \mbox{subject to} & \mathbf{1}^\mathsf{T}\mathbf{x} + \langle\mathbf{f}^+,\mathbf{y}^+\rangle + \langle\mathbf{f}^-,\mathbf{y}^-\rangle + & & \\ & + \langle\mathbf{v}^+,\tilde{\mathbf{x}}^+\rangle + \langle\mathbf{v}^-,\tilde{\mathbf{x}}^-\rangle & = & 1,\\ & \tilde{\mathbf{x}} & = & \tilde{\mathbf{x}}^+ - \tilde{\mathbf{x}}^-,\\ & \tilde{\mathbf{x}}^+, \tilde{\mathbf{x}}^- & \geq & 0,\\ & \tilde{\mathbf{x}}^+ & \leq & \mathbf{u}^+\circ\mathbf{y}^+,\\ & \tilde{\mathbf{x}}^- & \leq & \mathbf{u}^-\circ\mathbf{y}^-,\\ & \mathbf{y}^+ + \mathbf{y}^- & \leq & \mathbf{1},\\ & \mathbf{y}^+, \mathbf{y}^- & \in & \{0, 1\}^N,\\ & \mathbf{x}^\mathsf{T}\ECov\mathbf{x} & \leq & \gamma^2,\\ & \mathbf{x} & \in & \mathcal{F}, \end{array}\end{split}$
where $$\mathbf{u}^+$$ and $$\mathbf{u}^-$$ are vectors of upper bounds on the amounts of buying and selling in each security and $$\circ$$ is the elementwise product. The products $$u_i^+y_i^+$$ and $$u_i^-y_i^-$$ ensure that if security $$i$$ is traded ($$y_i^+=1$$ or $$y_i^-=1$$), then both fixed and variable costs are incurred, otherwise ($$y_i^+=y_i^-=0$$) the transaction cost is zero. Finally, the constraint $$\mathbf{y}^+ + \mathbf{y}^- \leq \mathbf{1}$$ ensures that the transaction for each security is either a buy or a sell, and never both.
## 6.3 Market impact costs¶
In reality, each trade alters the price of the security. This effect is called market impact. If the traded quantity is small, the impact is negligible and we can assume that the security prices are independent of the amounts traded. However, for large traded volumes we should take market impact into account.
While there is no standard model for market impact, in practice an empirical power law is applied [GK00] [p. 452]. Let $$\tilde{d}_i = d_i - d_{0,i}$$ be the traded dollar amount for security $$i$$. Then the average relative price change is
(6.4)$\frac{\Delta p_i}{p_i} = \pm c_i\sigma_i\left(\frac{|\tilde{d}_i|}{q_i}\right)^{\beta-1},$
where $$\sigma_i$$ is the volatility of security $$i$$ for a unit time period, $$q_i$$ is the average dollar volume in a unit time period, and the sign depends on the direction of the trade. The number $$c_i$$ has to be calibrated, but it is usually around one. Equation (6.4) is called the “square-root” law, because $$\beta-1$$ is empirically shown to be around $$1/2$$ [TLD+11].
The relative price difference (6.4) is the impact cost rate, assuming $$\tilde{d}_i$$ dollar amount is traded. After actually trading this amount, we get the total market impact cost as
(6.5)$C_i(\tilde{d}_i) = \frac{\Delta p_i}{p_i}\tilde{d}_i = a_i|\tilde{d}_i|^{\beta},$
where $$a_i = \pm c_i\sigma_i/q_i^{\beta-1}$$. Thus if $$\beta-1=1/2$$, the market impact cost increases with $$\beta=3/2$$ power of the traded dollar amount.
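To get a feel for the magnitudes, here is a tiny numerical illustration (all numbers below are made up for illustration, not taken from the cookbook's data set):

```python
# Illustrative numbers only.
c, sigma, q, beta = 1.0, 0.02, 1e8, 1.5   # calibration const., daily vol, avg daily dollar volume
d = 1e7                                    # traded dollar amount

impact_rate = c * sigma * (abs(d) / q) ** (beta - 1)   # relative price change, eq. (6.4), about 0.63%
impact_cost = impact_rate * abs(d)                     # total cost, eq. (6.5), roughly 63,000 dollars
```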
We can also express the market impact cost in terms of portfolio fraction $$\tilde{x}_i$$ instead of $$\tilde{d}_i$$ by normalizing $$q_i$$ with the total portfolio value $$\mathbf{v}^\mathsf{T}\mathbf{p}_0$$.
Using Sec. 10.1.1.10 (Power) we can model $$t_i \geq |\tilde{x}_i|^{\beta}$$ with the power cone as $$(t_i,1,\tilde{x}_i) \in \POW_3^{1/\beta,(\beta-1)/\beta}$$. Hence, it follows that the total market impact cost term $$\sum_{i=1}^N a_i|\tilde{x}_i|^{\beta}$$ can be modeled by $$\sum_{i=1}^N a_it_i$$ under the constraint $$(t_i,1,\tilde{x}_i) \in \POW_3^{1/\beta,(\beta-1)/\beta}$$.
Note, however, that in this model nothing forces $$t_i$$ to be as small as possible to ensure that $$t_i = |\tilde{x}_i|^{\beta}$$ holds at the optimal solution. This freedom allows the optimizer to try reducing portfolio risk by incorrectly treating $$a_it_i$$ as a risk-free security. Then it would allocate more weight to $$a_it_i$$ while reducing the weight allocated to risky securities, basically throwing away money.
There are two solutions, which can prevent this unwanted behavior:
• Adding a penalty term $$-\delta^\mathsf{T}\mathbf{t}$$ to the objective function to prevent excess growth of the variables $$t_i$$. We have to calibrate the hyper-parameter vector $$\delta$$ so that the penalty would not become too dominant.
• Adding a risk-free security to the model. In this case the optimizer will prefer to allocate to the risk-free security, which has positive return (the risk-free rate), instead of allocating to $$a_it_i$$.
Let us denote the weight of the risk-free security by $$x^\mathrm{f}$$ and the risk-free rate of return by $$r^\mathrm{f}$$. Then the portfolio optimization problem accounting for market impact costs will be
(6.6)$\begin{split}\begin{array}{lrcll} \mbox{maximize} & \EMean^\mathsf{T}\mathbf{x} + r^\mathrm{f}x^\mathrm{f} & & \\ \mbox{subject to} & \mathbf{1}^\mathsf{T}\mathbf{x} + \mathbf{a}^\mathsf{T}\mathbf{t} + x^\mathrm{f} & = & 1,\\ & \mathbf{x}^\mathsf{T}\ECov\mathbf{x} & \leq & \gamma^2,\\ & (t_i,1,\tilde{x}_i) & \in & \POW_3^{1/\beta,(\beta-1)/\beta},\ i=1,\dots,N,\\ & \mathbf{x}, x^\mathrm{f} & \in & \mathcal{F}. \end{array}\end{split}$
Note that if we model using the quadratic cone instead of the rotated quadratic cone and a risk free security is present, then there will be no optimal portfolios for which $$0 < x^\mathrm{f} < 1$$. The solutions will be either $$x^\mathrm{f} = 1$$ or some risky portfolio with $$x^\mathrm{f} = 0$$. See a detailed discussion about this in Sec. 10.3 (Quadratic cones and riskless solution).
## 6.4 Cardinality constraints¶
Investors often prefer portfolios with a limited number of securities. We do not need to use all of the $$N$$ securities to achieve good diversification, and this way we can also reduce costs significantly. We can create explicit limits to constrain the number of securities.
Suppose that we allow at most $$K$$ coordinates of the difference vector $$\tilde{\mathbf{x}}=\mathbf{x} - \mathbf{x}_0$$ to be non-zero, where $$K$$ is (much) smaller than the total number of securities $$N$$.
We can again model this type of constraint based on Sec. 10.2.1.3 (Cardinality) by introducing a binary vector $$\mathbf{y}$$ to indicate $$|\tilde{\mathbf{x}}|\neq \mathbf{0}$$, and by bounding the sum of $$\mathbf{y}$$. The basic Markowitz model then gets updated as follows:
(6.7)$\begin{split}\begin{array}{lrcl} \mbox{maximize} & \EMean^\mathsf{T}\mathbf{x} & & \\ \mbox{subject to} & \mathbf{1}^\mathsf{T}\mathbf{x} & = & 1,\\ & \tilde{\mathbf{x}} & = & \mathbf{x} - \mathbf{x}_0,\\ & \tilde{\mathbf{x}} & \leq & \mathbf{u}\circ\mathbf{y},\\ & \tilde{\mathbf{x}} & \geq & -\mathbf{u}\circ\mathbf{y},\\ & \mathbf{1}^\mathsf{T}\mathbf{y} & \leq & K,\\ & \mathbf{y} & \in & \{0, 1\}^N,\\ & \mathbf{x}^\mathsf{T}\ECov\mathbf{x} & \leq & \gamma^2,\\ & \mathbf{x} & \in & \mathcal{F}, \end{array}\end{split}$
where the vector $$\mathbf{u}$$ is some a priori chosen upper bound on the amount of trading in each security.
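A minimal sketch of just the cardinality block of (6.7) in Fusion could read as follows (illustrative only; `M`, `x`, `x0`, the bound vector `u` and the limit `K` are assumed to be defined as in the surrounding models):

```python
# Sketch: allow at most K names to be traded.
y  = M.variable("y", N, Domain.binary())
xt = Expr.sub(x, x0)                                                     # trade vector x - x0
M.constraint(Expr.sub(Expr.mulElm(u, y), xt), Domain.greaterThan(0.0))   # xt <=  u o y
M.constraint(Expr.add(Expr.mulElm(u, y), xt), Domain.greaterThan(0.0))   # xt >= -u o y
M.constraint(Expr.sum(y), Domain.lessThan(K))                            # at most K trades
```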
## 6.5 Buy-in threshold¶
In the above examples we assumed that trades can be arbitrarily small. In reality, however, it can be meaningful to place lower bounds on traded amounts to avoid unrealistically small trades and to control the transaction cost. These constraints are called the buy-in threshold.
Let $$\tilde{\mathbf{x}} = \mathbf{x} - \mathbf{x}_0$$ be the traded amount. Let also $$\tilde{\mathbf{x}}^+ = \mathrm{max}(\tilde{\mathbf{x}}, 0)$$ and $$\tilde{\mathbf{x}}^- = \mathrm{max}(-\tilde{\mathbf{x}}, 0)$$ be the positive and negative part of $$\tilde{\mathbf{x}}$$. These we model according to Sec. 10.2.1.4 (Positive and negative part). Then the buy-in threshold basically means that $$\tilde{\mathbf{x}}^\pm \in \{0\} \cup [\ell^\pm, \mathbf{u}^\pm]$$, where $$\ell^\pm$$ and $$\mathbf{u}^\pm$$ are vectors of lower and upper bounds on $$\tilde{\mathbf{x}}^+$$ and $$\tilde{\mathbf{x}}^-$$ respectively.
This is a semi-continuous variable, which we can model based on Sec. 10.2.1.2 (Semi-continuous variable). We introduce binary variables $$\mathbf{y}^\pm$$ and constraints $$\ell^\pm\circ\mathbf{y}^\pm\leq\tilde{\mathbf{x}}^\pm\leq\mathbf{u}^\pm\circ\mathbf{y}^\pm$$. The optimization problem would then become a mixed integer problem of the form
(6.8)$\begin{split}\begin{array}{lrcl} \mbox{maximize} & \EMean^\mathsf{T}\mathbf{x} & & \\ \mbox{subject to} & \mathbf{1}^\mathsf{T}\mathbf{x} & = & 1,\\ & \mathbf{x} - \mathbf{x}_0 & = & \tilde{\mathbf{x}}^+ - \tilde{\mathbf{x}}^-,\\ & \tilde{\mathbf{x}}^+, \tilde{\mathbf{x}}^- & \geq & 0,\\ & \tilde{\mathbf{x}}^+ & \leq & \mathbf{u}^+\circ\mathbf{y}^+,\\ & \tilde{\mathbf{x}}^+ & \geq & \ell^+\circ\mathbf{y}^+,\\ & \tilde{\mathbf{x}}^- & \leq & \mathbf{u}^-\circ\mathbf{y}^-,\\ & \tilde{\mathbf{x}}^- & \geq & \ell^-\circ\mathbf{y}^-,\\ & \mathbf{y}^+ + \mathbf{y}^- & \leq & \mathbf{1},\\ & \mathbf{y}^+, \mathbf{y}^- & \in & \{0, 1\}^N,\\ & \mathbf{x}^\mathsf{T}\ECov\mathbf{x} & \leq & \gamma^2,\\ & \mathbf{x} & \in & \mathcal{F}. \end{array}\end{split}$
This model is of course compatible with the fixed plus linear transaction cost model discussed in Sec. 6.2 (Fixed transaction costs).
## 6.6 Example¶
In this chapter we show two examples. The first demonstrates the modeling of market impact through the use of the power cone, while the second example presents fixed and variable transaction costs and the buy-in threshold.
### 6.6.1 Market impact model¶
As a starting point, we refer back to problem (2.13). We will extend this problem with the market impact cost model. To compute the coefficients $$a_i$$ in formula (6.5), we assume that daily volume data is also available in the dataframe df_volumes. We also compute the mean of the daily volumes, and the daily volatility for each security as the standard deviation of daily linear returns:
# Compute average daily volume and daily volatility (std. dev.)
df_lin_returns = df_prices.pct_change()
vty = df_lin_returns.std()
vol = (df_volumes * df_prices).mean()
According to the data, the average daily dollar volumes are $$10^8 \cdot [3.9883, 4.2416, 6.0054, 4.2584, 30.4647, 34.5619, 5.0077, 8.4950]$$, and the daily volatilities are $$[0.0164, 0.0154, 0.0146, 0.0155, 0.0191, 0.0173, 0.0186, 0.0169]$$. Thus in this example we will choose the size of our portfolio to be $$10$$ billion dollars so that we can see a significant market impact.
Then we update the Fusion model introduced in Sec. 2.4.2 (Efficient frontier) with new variables and constraints:
def EfficientFrontier(N, m, G, deltas, a, beta, rf):
    with Model("Case study") as M:
        # Settings
        M.setLogHandler(sys.stdout)
        # Variables
        # The variable x is the fraction of holdings in each security.
        # x must be positive, this imposes the no short-selling constraint.
        x = M.variable("x", N, Domain.greaterThan(0.0))
        # Variable for risk-free security (cash account)
        xf = M.variable("xf", 1, Domain.greaterThan(0.0))
        # The variable s models the portfolio variance term in the objective.
        s = M.variable("s", 1, Domain.unbounded())
        # Auxiliary variable to model market impact
        t = M.variable("t", N, Domain.unbounded())
        # Budget constraint with transaction cost terms
        terms = Expr.hstack(Expr.sum(x), xf, Expr.dot(a, t))
        M.constraint('budget', Expr.sum(terms), Domain.equalsTo(1))
        # Power cone to model market impact
        M.constraint('mkt_impact', Expr.hstack(t, Expr.constTerm(N, 1.0), x),
                     Domain.inPPowerCone(1.0 / beta))
        delta = M.parameter()
        pf_return = Expr.add(Expr.dot(m, x), Expr.mul(rf, xf))
        pf_risk = Expr.mul(delta, s)
        M.objective('obj', ObjectiveSense.Maximize,
                    Expr.sub(pf_return, pf_risk))
        # Conic constraint for the portfolio variance
        M.constraint('risk', Expr.vstack(s, 1, Expr.mul(G.transpose(), x)),
                     Domain.inRotatedQCone())

        columns = ["delta", "obj", "return", "risk", "t_resid",
                   "x_sum", "xf", "tcost"] + df_prices.columns.tolist()
        df_result = pd.DataFrame(columns=columns)
        for d in deltas:
            # Update parameter
            delta.setValue(d)
            # Solve optimization
            M.solve()
            # Save results
            portfolio_return = m @ x.level() + np.array([rf]) @ xf.level()
            portfolio_risk = np.sqrt(2 * s.level()[0])
            t_resid = t.level() - np.abs(x.level())**beta
            row = pd.Series([d, M.primalObjValue(), portfolio_return,
                             portfolio_risk, sum(t_resid), sum(x.level()),
                             sum(xf.level()), t.level() @ a]
                            + list(x.level()), index=columns)
            df_result = df_result.append(row, ignore_index=True)

        return df_result
The new rows are:
• The row for the variable $$x^\mathrm{f}$$, which represents the weight allocated to the cash account. The annual return on it is assumed to be $$r^\mathrm{f} = 1\%$$. We constrain $$x^\mathrm{f}$$ to be positive, meaning that borrowing money is not allowed.
• The row for the auxiliary variable $$\mathbf{t}$$.
• The row for the market impact constraint modeled using the power cone.
We modified the budget constraints to include $$x^\mathrm{f}$$ and the market impact cost $$\mathbf{a}^\mathsf{T}\mathbf{t}$$. The objective also contains the risk-free part of portfolio return $$r^\mathrm{f}x^\mathrm{f}$$.
In this example, we start with $$100\%$$ cash, meaning that $$x^\mathrm{f}_0 = 1$$ and $$\mathbf{x}_0 =\mathbf{0}$$. Transaction cost is thus incurred for the total weight $$\mathbf{x}$$.
Next, we compute the efficient frontier with and without market impact costs. We select $$\beta = 3/2$$ and $$c_i = 1$$. The following code produces the results:
deltas = np.logspace(start=-0.5, stop=2, num=20)[::-1]
portfolio_value = 10**10
rel_vol = vol / portfolio_value
a1 = np.zeros(N)
a2 = (c * vty / rel_vol**(beta - 1)).to_numpy()
ax = plt.gca()
for a in [a1, a2]:
    df_result = EfficientFrontier(N, m, G, deltas, a, beta, rf)
    df_result.plot(ax=ax, x="risk", y="return", style="-o",
                   xlabel="portfolio risk (std. dev.)",
                   ylabel="portfolio return", grid=True)
ax.legend(["return without price impact", "return with price impact"])
In Fig. 6.1 we can see the return-reducing effect of market impact costs. The left part of the efficient frontier (up to the so-called tangency portfolio) is linear because a risk-free security was included. However, in this case borrowing is not allowed, so the right part keeps the usual parabolic shape.
### 6.6.2 Transaction cost models¶
In this example we show a problem that models fixed and variable transaction costs and the buy-in threshold. Note that we do not model the market impact here.
We will assume now that $$\mathbf{x}$$ can take negative values too (short-selling is allowed), up to the limit of $$30\%$$ portfolio size. This way we can see how to apply different costs to buy and sell trades. We also assume that $$\mathbf{x}_0 =\mathbf{0}$$, so $$\tilde{\mathbf{x}} =\mathbf{x}$$.
The following code defines variables used as the positive and negative part variables of $$\mathbf{x}$$ and the binary variables $$\mathbf{y}^+, \mathbf{y}^-$$ indicating whether there is buying or selling in a security:
# Real variables
xp = M.variable("xp", N, Domain.greaterThan(0.0))
xm = M.variable("xm", N, Domain.greaterThan(0.0))
# Binary variables
yp = M.variable("yp", N, Domain.binary())
ym = M.variable("ym", N, Domain.binary())
Next we add two constraints. The first links xp and xm to x, so that they represent the positive and negative parts. The second ensures that for each coordinate of yp and ym only one of the values can be $$1$$.
# Constraint assigning xp and xm to the positive and negative part of x.
M.constraint('pos-neg-part', Expr.sub(x, Expr.sub(xp, xm)),
             Domain.equalsTo(0.0))
# At most one of yp, ym can be 1 for each security
# (constraint name assumed; this line did not survive extraction).
M.constraint('trade-dir', Expr.add(yp, ym), Domain.lessThan(1.0))
We update the budget constraint with the variable and fixed transaction cost terms. The fixed costs of buy and sell trades are held in the variables fp and fm. These are typically given in dollars, and have to be divided by the total portfolio value. The variable cost coefficients are vp and vm. If these are given as percentages, then we do not have to modify them.
# Budget constraint with transaction cost terms
fixcost_terms = Expr.add([Expr.dot(fp, yp), Expr.dot(fm, ym)])
varcost_terms = Expr.add([Expr.dot(vp, xp), Expr.dot(vm, xm)])
# Reconstructed: 1'x plus both cost terms, as in equation (6.3).
budget_terms = Expr.add([Expr.sum(x), fixcost_terms, varcost_terms])
M.constraint('budget', budget_terms, Domain.equalsTo(1.0))
Next, the 130/30 leverage constraint is added. Note that the transaction cost terms from the budget constraint should also appear here, otherwise the two constraints combined would allow a little more leverage than intended. (The sum of $$\mathbf{x}$$ would not reach $$1$$ because of the cost terms, leaving more space in the leverage constraint for negative positions.)
# Auxiliary variable for 130/30 leverage constraint
z = M.variable("z", N, Domain.unbounded())
# 130/30 leverage constraint: z bounds |x| from above, and the gross exposure
# plus the cost terms adds up to 1.6. (The 'leverage-lt' row and the summed
# expression are reconstructions; they did not survive extraction.)
M.constraint('leverage-gt', Expr.sub(z, x), Domain.greaterThan(0.0))
M.constraint('leverage-lt', Expr.add(z, x), Domain.greaterThan(0.0))
M.constraint('leverage-sum',
             Expr.add([Expr.sum(z), fixcost_terms, varcost_terms]),
             Domain.equalsTo(1.6))
Finally, to be able to differentiate between zero allocation (not incurring fixed cost) and nonzero allocation (incurring fixed cost), and to implement the buy-in threshold, we need bound constraints involving the binary variables:
# Bound constraints
M.constraint('ubound-p', Expr.sub(Expr.mul(up, yp), xp),
Domain.greaterThan(0.0))
M.constraint('ubound-m', Expr.sub(Expr.mul(um, ym), xm),
Domain.greaterThan(0.0))
M.constraint('lbound-p', Expr.sub(xp, Expr.mul(lp, yp)),
Domain.greaterThan(0.0))
M.constraint('lbound-m', Expr.sub(xm, Expr.mul(lm, ym)),
Domain.greaterThan(0.0))
The full updated model will then look like the following:
def EfficientFrontier(N, m, G, deltas, vp, vm, fp, fm, up, um,
                      lp, lm, pcoef):
    with Model("Case study") as M:
        # Settings
        M.setLogHandler(sys.stdout)
        # Real variables
        # The variable x is the fraction of holdings in each security.
        x = M.variable("x", N, Domain.unbounded())
        xp = M.variable("xp", N, Domain.greaterThan(0.0))
        xm = M.variable("xm", N, Domain.greaterThan(0.0))
        # Binary variables
        yp = M.variable("yp", N, Domain.binary())
        ym = M.variable("ym", N, Domain.binary())
        # Constraint assigning xp and xm to the pos. and neg. part of x.
        M.constraint('pos-neg-part', Expr.sub(x, Expr.sub(xp, xm)),
                     Domain.equalsTo(0.0))
        # At most one of yp, ym can be 1 per security
        # (name assumed; this line did not survive extraction).
        M.constraint('trade-dir', Expr.add(yp, ym), Domain.lessThan(1.0))
        # s models the portfolio variance term in the objective.
        s = M.variable("s", 1, Domain.unbounded())
        # Auxiliary variable for 130/30 leverage constraint
        z = M.variable("z", N, Domain.unbounded())
        # Bound constraints
        M.constraint('ubound-p', Expr.sub(Expr.mul(up, yp), xp),
                     Domain.greaterThan(0.0))
        M.constraint('ubound-m', Expr.sub(Expr.mul(um, ym), xm),
                     Domain.greaterThan(0.0))
        M.constraint('lbound-p', Expr.sub(xp, Expr.mul(lp, yp)),
                     Domain.greaterThan(0.0))
        M.constraint('lbound-m', Expr.sub(xm, Expr.mul(lm, ym)),
                     Domain.greaterThan(0.0))
        # Budget constraint with transaction cost terms
        fixcost_terms = Expr.add([Expr.dot(fp, yp), Expr.dot(fm, ym)])
        varcost_terms = Expr.add([Expr.dot(vp, xp), Expr.dot(vm, xm)])
        # Reconstructed: 1'x plus both cost terms, as in equation (6.3).
        budget_terms = Expr.add([Expr.sum(x), fixcost_terms, varcost_terms])
        M.constraint('budget', budget_terms, Domain.equalsTo(1.0))
        # 130/30 leverage constraint
        # (the 'leverage-lt' row and the summed expression are reconstructions;
        #  they did not survive extraction).
        M.constraint('leverage-gt', Expr.sub(z, x), Domain.greaterThan(0.0))
        M.constraint('leverage-lt', Expr.add(z, x), Domain.greaterThan(0.0))
        M.constraint('leverage-sum',
                     Expr.add([Expr.sum(z), fixcost_terms, varcost_terms]),
                     Domain.equalsTo(1.6))
        delta = M.parameter()
        # Penalty discouraging excess growth of xp and xm
        # (definition reconstructed from the discussion below; weight pcoef).
        penalty = Expr.mul(pcoef, Expr.add(Expr.sum(xp), Expr.sum(xm)))
        M.objective('obj', ObjectiveSense.Maximize,
                    Expr.sub(Expr.sub(Expr.dot(m, x), penalty),
                             Expr.mul(delta, s)))
        # Conic constraint for the portfolio variance
        M.constraint('risk', Expr.vstack(s, 1, Expr.mul(G.transpose(), x)),
                     Domain.inRotatedQCone())

        columns = ["delta", "obj", "return", "risk", "x_sum", "tcost"] \
            + df_prices.columns.tolist()
        df_result = pd.DataFrame(columns=columns)
        for idx, d in enumerate(deltas):
            # Update parameter
            delta.setValue(d)
            # Solve optimization
            M.solve()
            # Save results
            portfolio_return = m @ x.level()
            portfolio_risk = np.sqrt(2 * s.level()[0])
            tcost = np.dot(vp, xp.level()) + np.dot(vm, xm.level()) \
                + np.dot(fp, yp.level()) + np.dot(fm, ym.level())
            row = pd.Series([d, M.primalObjValue(), portfolio_return,
                             portfolio_risk, sum(x.level()), tcost]
                            + list(x.level()), index=columns)
            df_result = df_result.append(row, ignore_index=True)

        return df_result
Here we also used a penalty term in the objective to prevent excess growth of the positive part and negative part variables. The coefficient of the penalty has to be calibrated so that we do not overpenalize.
We also have to mention that because of the binary variables, we can only solve this as a mixed integer optimization (MIO) problem. The solution of such a problem might not be as efficient as the solution of a problem with only continuous variables. See Sec. 10.2 (Mixed-integer models) for details regarding MIO problems.
We compute the efficient frontier with and without transaction costs. The following code produces the results:
deltas = np.logspace(start=-0.5, stop=2, num=20)[::-1]
ax = plt.gca()
for a in [0, 1]:
    pcoef = a * 0.03
    fp = a * 0.005 * np.ones(N)  # Depends on portfolio value
    fm = a * 0.01 * np.ones(N)   # Depends on portfolio value
    vp = a * 0.01 * np.ones(N)
    vm = a * 0.02 * np.ones(N)
    up = 2.0
    um = 2.0
    lp = a * 0.05
    lm = a * 0.05
    df_result = EfficientFrontier(N, m, G, deltas, vp, vm, fp, fm, up, um,
                                  lp, lm, pcoef)
    df_result.plot(ax=ax, x="risk", y="return", style="-o",
                   xlabel="portfolio risk (std. dev.)",
                   ylabel="portfolio return", grid=True)
ax.legend(["return without transaction cost",
           "return with transaction cost"])
In Fig. 6.2 we can see the return-reducing effect of transaction costs. The overall return is higher because of the leverage.
https://cs.stackexchange.com/questions/131117/how-do-you-have-a-type-typed-type-when-implementing-a-programming-language | # How do you have a type typed “Type” when implementing a programming language?
I am working on the base of a language model, and am wondering how to represent the base type, which is a type Type. I have heard of an "infinite chain of types", but (a) I can't seem to find it on the internet while searching anymore, and (b) I am not sure if that's what I need or what it really means in practice.
Basically, I have a system in the language like this:
type User
type String
type X
...
Internally these get compiled to something like this:
[
{
type: 'Type',
name: 'User'
},
{
type: 'Type',
name: 'String'
},
...
]
But actually, the type: 'Type' gets further compiled not pointing to the string 'Type', but to the actual Type object:
[
{
type: theTypeObject,
name: 'User'
},
{
type: theTypeObject,
name: 'String'
},
...
]
So then the problem is, I need to now define or specify the "type type" itself:
type Type
which I try to represent in a similar way, so now we have:
[
{
type: 'Type',
name: 'Type'
},
...
]
which is:
let theTypeObject = { name: 'Type' }
theTypeObject.type = theTypeObject
Is that correct? What is this really saying? It is a circular structure, does this even make sense conceptually?
What would be better to do in this situation? Or is this perfectly acceptable? Basically I would like to understand how to explain what this circular structure even means, because it just makes me confused.
The type "Type" is typed "Type". It is an element of itself...
That doesn't seem logically possible. So what should I do?
• You are saying your types are "compiled down to" something that looks like JSON. Are you working in a specific context or Programming Language (or paradigm)? Do you have subtyping or a class hierarchy? Is this a functional language? Are there dependent types? – jmite Oct 12 at 7:56
• The "infinite chain of types" is something that usually comes up in dependent type theory, and it's a way to avoid the fact that if you have Type:Type then you can write an infinite loop. This is a problem if you're trying to make a type theory that is consistent as a logic, but it's not usually a problem in other contexts. So I doubt you need a chain (usually called a Universe Hierarchy), having type: type will be fine. – jmite Oct 12 at 8:02
• There is nothing wrong with having a programming language with Type : Type, as long as you are aware of the fact that it might allow you to write down a non-terminating program (which you can anyhow in a Turing-complete programming langauge). The more pressing issue is: are you implementing your language in Javascript or some such? Why? – Andrej Bauer Oct 12 at 9:04
A root type typically is a type with very few properties, because (by definition) it carries only the properties common to all types. Yet a Type type, which needs to handle the complexity of your type system, will have quite a few properties.
Having said that, for a language model it does not really matter that your Type has a type that is itself Type. That's just how you model it. Models are just descriptions. But you say "compile", and that means you're actually going to do things. And that means you're dealing with such practical matters as code and data.
In particular, type systems are generally used to represent object types, where objects are combinations of code and data. Sure, there's a data property name which itself has type String and can have the value "Type". But that data is mostly meaningless without code.
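A small illustration (added here, not part of the original thread): a self-typed type is not only representable, it is exactly how Python's own object model works, and a hand-rolled circular record behaves fine at runtime.

```python
# Python's metaclass `type` is an instance of itself:
print(type(type) is type)                      # True

# A hand-rolled version of the question's structure:
the_type_object = {"name": "Type"}
the_type_object["type"] = the_type_object      # circular, and perfectly legal
user_type = {"name": "User", "type": the_type_object}
print(user_type["type"]["type"]["name"])       # "Type"; follow the cycle as far as you like
```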
https://logtalk.org/symbolic_ai_examples.html | # Classical symbolic AI examples
The Logtalk distribution includes some classical symbolic AI examples, most of them adapted from literature and other logic programming systems (see the example/port notes for credits). These include:
## Reasoning
• many_worlds - Design pattern for reasoning about different worlds, where a world can be e.g. a dataset, a knowledge base, a set of examples
http://mathhelpforum.com/discrete-math/20043-sets-functions.html | 1. ## Sets/Functions
I have sets A = {1,2,3,4,5}, B = {3,4,5,6,7}, C = {5,6,7,8,9}
A union B = {1,2,3,4,5,6,7}
A intersection B = {3,4,5}
Define an onto (surjective) function f: A union B -> A intersection B
Define a 1-1 (injective) function f: A union B -> A intersection B
2nd part--
Let U = A union B union C
Let D = {x | (x, y, z in U) ^ (there exists y)(there exists z)(z = x * y)}
U = {1,2,3,4,5,6,7,8,9}
I am not sure how to get set D.
Do I choose values for y and z from set U, and if there is a value for x inside set U satisfying the equation, does that value of x go into set D?
2. Originally Posted by darken4life
2nd part--
Let U = A union B union C
Let D = {x|x,y,z in U) ^ (there exits y)(there exists z)(z = x * y)
U = {1,2,3,4,5,6,7,8,9}
I am not sure how to get set D.
Do I choose a value for y and z from set U and if there is an value for x inside Set U satstify the equation? and if it does, the value for x goes in set D?
Basically. I'd choose an x, say x = 1. Then try to find a y such that $\displaystyle z = x \cdot y$ such that $\displaystyle z \in U$.
For example: y = 1. Thus we ask the question is $\displaystyle z = 1 \cdot 1 \in U$? Yes. So $\displaystyle 1 \in D$.
etc for all values of y that create an appropriate z.
-Dan
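(Editor's aside, not part of the original thread: the procedure Dan describes can be checked with a couple of lines of Python.)

```python
U = {1, 2, 3, 4, 5, 6, 7, 8, 9}
# x goes into D if there exist y, z in U with z = x * y
D = {x for x in U if any(x * y in U for y in U)}
print(D == U)   # True: taking y = 1 works for every x, so D is all of U
```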
3. Thanks! I got it clearly.
Can you help me on the first part?
It's not that I want you guys to answer it for me; I just need help understanding the question.
Define an onto(surjective) function f: A union B -> A intersection B
-if A is union B then A is intersection B
How do I define a function out of that?
Not sure on how to approach this.
4. Originally Posted by darken4life
Thanks! I got it clearly.
Can you help me on the first part?
Is not that I want you guys to answer it for me but simply the question.
Define an onto(surjective) function f: A union B -> A intersection B
-if A is union B then A is intersection B
How I define an function out of that?
Not sure on how to approach this.
The surjective function is easy: Just make up any function you like. For example, consider:
$\displaystyle f: \{1, 2, 3, 4, 5, 6, 7 \} \to \{3, 4, 5 \}$:
$\displaystyle f(1) = 3$
$\displaystyle f(2) = 3$
$\displaystyle f(3) = 4$
$\displaystyle f(4) = 4$
$\displaystyle f(5) = 5$
$\displaystyle f(6) = 5$
$\displaystyle f(7) = 4$
For each element in the codomain there is at least one element of the domain that maps to it.
The one to one function is rather harder: in fact it is impossible. A one to one function cannot map a larger set into a smaller one, and here the domain has 7 elements while the codomain has only 3.
Though if we are allowed to restrict the domain to three elements, for example, we can do it:
$\displaystyle f: \{2, 3, 4 \} \to \{3, 4, 5 \}$:
$\displaystyle f(2) = 4$
$\displaystyle f(3) = 5$
$\displaystyle f(4) = 3$
is one example.
-Dan
http://buzzard.ups.edu/scla2021/section-nilpotent-linear-transformations.html | ## Section3.2Nilpotent Linear Transformations
We will discover that nilpotent linear transformations are the essential obstacle in a non-diagonalizable linear transformation. So we will study them carefully, both as objects of inherent mathematical interest and as the objects at the heart of the argument that leads to a pleasing canonical form for any linear transformation. Once we understand these linear transformations thoroughly, we will be able to easily analyze the structure of any linear transformation.
### Subsection 3.2.1 Nilpotent Linear Transformations
###### Definition 3.2.1. Nilpotent Linear Transformation.
Suppose that $\ltdefn{T}{V}{V}$ is a linear transformation such that there is an integer $p\gt 0$ such that $\lteval{T^p}{\vect{v}}=\zerovector$ for every $\vect{v}\in V\text{.}$ Then we say $T$ is nilpotent. The smallest $p$ for which this condition is met is called the index of $T\text{.}$
Of course, the linear transformation $T$ defined by $\lteval{T}{\vect{v}}=\zerovector$ will qualify as nilpotent of index $1\text{.}$ But are there others? Yes, of course.
Recall that our definitions and theorems are being stated for linear transformations on abstract vector spaces, while our examples will work with square matrices (and use the same terms interchangeably). In this case, to demonstrate the existence of nontrivial nilpotent linear transformations, we desire a matrix such that some power of the matrix is the zero matrix. Consider powers of a $6\times 6$ matrix $A\text{,}$
\begin{align*} A&=\begin{bmatrix} -3 & 3 & -2 & 5 & 0 & -5 \\ -3 & 5 & -3 & 4 & 3 & -9 \\ -3 & 4 & -2 & 6 & -4 & -3 \\ -3 & 3 & -2 & 5 & 0 & -5 \\ -3 & 3 & -2 & 4 & 2 & -6 \\ -2 & 3 & -2 & 2 & 4 & -7 \end{bmatrix}\\ \end{align*}
and compute powers of $A\text{,}$
\begin{align*} A^2&=\begin{bmatrix} 1 & -2 & 1 & 0 & -3 & 4 \\ 0 & -2 & 1 & 1 & -3 & 4 \\ 3 & 0 & 0 & -3 & 0 & 0 \\ 1 & -2 & 1 & 0 & -3 & 4 \\ 0 & -2 & 1 & 1 & -3 & 4 \\ -1 & -2 & 1 & 2 & -3 & 4 \end{bmatrix}\\ A^3&=\begin{bmatrix} 1 & 0 & 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & -1 & 0 & 0 \end{bmatrix}\\ A^4&=\begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} \end{align*}
Thus we can say that $A$ is nilpotent of index 4.
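This is easy to confirm computationally; the following brief sketch (using NumPy, an addition here, not part of the text) checks the powers of $A\text{:}$

```python
import numpy as np

# The matrix A from the example above.
A = np.array([[-3, 3, -2, 5,  0, -5],
              [-3, 5, -3, 4,  3, -9],
              [-3, 4, -2, 6, -4, -3],
              [-3, 3, -2, 5,  0, -5],
              [-3, 3, -2, 4,  2, -6],
              [-2, 3, -2, 2,  4, -7]])

for p in range(1, 5):
    is_zero = not np.any(np.linalg.matrix_power(A, p))
    print(p, is_zero)   # expect False for p = 1, 2, 3 and True for p = 4
```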
Because it will presage some upcoming theorems, we will record some extra information about the eigenvalues and eigenvectors of $A$ here. $A$ has just one eigenvalue, $\lambda=0\text{,}$ with algebraic multiplicity $6$ and geometric multiplicity $2\text{.}$ The eigenspace for this eigenvalue is
\begin{equation*} \eigenspace{A}{0}= \spn{ \colvector{2 \\ 2 \\ 5 \\ 2 \\ 1 \\ 0},\, \colvector{-1 \\ -1 \\ -5 \\ -1 \\ 0 \\ 1} } \end{equation*}
If there were degrees of singularity, we might say this matrix was very singular, since zero is an eigenvalue with maximum algebraic multiplicity (Theorem SMZE, Theorem ME). Notice too that $A$ is “far” from being diagonalizable (Theorem DMFE).
With the existence of nontrivial nilpotent matrices settled, let's look at another example.
Consider the matrix
\begin{align*} B&= \begin{bmatrix} -1 & 1 & -1 & 4 & -3 & -1 \\ 1 & 1 & -1 & 2 & -3 & -1 \\ -9 & 10 & -5 & 9 & 5 & -15 \\ -1 & 1 & -1 & 4 & -3 & -1 \\ 1 & -1 & 0 & 2 & -4 & 2 \\ 4 & -3 & 1 & -1 & -5 & 5 \end{bmatrix}\\ \end{align*}
and compute the second power of $B\text{,}$
\begin{align*} B^2&=\begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} \end{align*}
So $B$ is nilpotent of index 2.
Again, the only eigenvalue of $B$ is zero, with algebraic multiplicity $6\text{.}$ The geometric multiplicity of the eigenvalue is $3\text{,}$ as seen in the eigenspace,
\begin{equation*} \eigenspace{B}{0}=\spn{ \colvector{1 \\ 3 \\ 6 \\ 1 \\ 0 \\ 0},\, \colvector{0 \\ -4 \\ -7 \\ 0 \\ 1 \\ 0},\, \colvector{0 \\ 2 \\ 1 \\ 0 \\ 0 \\ 1} } \end{equation*}
Again, Theorem DMFE tells us that $B$ is far from being diagonalizable.
On a first encounter with the definition of a nilpotent matrix, you might wonder if such a thing was possible at all. That a high power of a nonzero object could be zero is so very different from our experience with scalars that it seems very unnatural. Hopefully the two previous examples were somewhat surprising. But we have seen that matrix algebra does not always behave the way we expect (Example MMNC), and we also now recognize matrix products not just as arithmetic, but as function composition (Theorem MRCLT). With a couple examples completed, we turn to some general properties.
Let $\vect{x}$ be an eigenvector of $T$ for the eigenvalue $\lambda\text{,}$ and suppose that $T$ is nilpotent with index $p\text{.}$ Then
\begin{equation*} \zerovector=\lteval{T^p}{\vect{x}}=\lambda^p\vect{x} \end{equation*}
Because $\vect{x}$ is an eigenvector, it is nonzero, and therefore Theorem SMEZV tells us that $\lambda^p=0$ and so $\lambda=0\text{.}$
Paraphrasing, all of the eigenvalues of a nilpotent linear transformation are zero. So in particular, the characteristic polynomial of a nilpotent linear transformation, $T\text{,}$ on a vector space of dimension $n\text{,}$ is simply $\charpoly{T}{x}=(x-0)^n=x^n\text{.}$
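For instance, the smallest nontrivial example (added here for illustration) is
\begin{equation*}
N=\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \qquad N^2=\begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}, \qquad \charpoly{N}{x}=x^2,
\end{equation*}
a nilpotent matrix of index 2 whose only eigenvalue is zero.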
The next theorem is not critical for what follows, but it will explain our interest in nilpotent linear transformations. More specifically, it is the first step in backing up the assertion that nilpotent linear transformations are the essential obstacle in a non-diagonalizable linear transformation. While it is not obvious from the statement of the theorem, it says that a nilpotent linear transformation is not diagonalizable, unless it is trivially so.
(⇐) We start with the easy direction. Let $n=\dimension{V}\text{.}$ The linear transformation $\ltdefn{Z}{V}{V}$ defined by $\lteval{Z}{\vect{v}}=\zerovector$ for all $\vect{v}\in V$ is nilpotent of index $p=1$ and a matrix representation relative to any basis of $V$ is the $n\times n$ zero matrix, $\zeromatrix\text{.}$ Quite obviously, the zero matrix is a diagonal matrix (Definition DIM) and hence $Z$ is diagonalizable (Definition DZM).
(⇒) Assume now that $T$ is diagonalizable, so $\geomult{T}{\lambda}=\algmult{T}{\lambda}$ for every eigenvalue $\lambda$ (Theorem DMFE). By Theorem 3.2.4, $T$ has only one eigenvalue (zero), which therefore must have algebraic multiplicity $n$ (Theorem NEM). So the geometric multiplicity of zero will be $n$ as well, $\geomult{T}{0}=n\text{.}$
Let $B$ be a basis for the eigenspace $\eigenspace{T}{0}\text{.}$ Then $B$ is a linearly independent subset of $V$ of size $n\text{,}$ and thus a basis of $V\text{.}$ For any $\vect{x}\in B$ we have
\begin{equation*} \lteval{T}{\vect{x}}=0\vect{x}=\zerovector \end{equation*}
So $T$ is identically zero on the basis $B$ of $V\text{,}$ and since the action of a linear transformation on a basis determines all of the values of the linear transformation (Theorem LTDB), it is easy to see that $\lteval{T}{\vect{v}}=\zerovector$ for every $\vect{v}\in V\text{.}$
So, other than one trivial case (the zero linear transformation), every nilpotent linear transformation is not diagonalizable. It remains to see what is so “essential” about this broad class of non-diagonalizable linear transformations.
### Subsection 3.2.2 Powers of Kernels of Nilpotent Linear Transformations
We return to our discussion of kernels of powers of linear transformations, now specializing to nilpotent linear transformations. We reprise Theorem 3.1.1, gaining just a little more precision in the conclusion.
Since $T^p=0$ it follows that $T^{p+j}=0$ for all $j\geq 0$ and thus $\krn{T^{p+j}}=V$ for $j\geq 0\text{.}$ So the value of $m$ guaranteed by Theorem KPLT is at most $p\text{.}$ The only remaining aspect of our conclusion that does not follow from Theorem 3.1.1 is that $m=p\text{.}$ To see this, we must show that $\krn{T^k} \subsetneq\krn{T^{k+1}}$ for $0\leq k\leq p-1\text{.}$ If $\krn{T^k}=\krn{T^{k+1}}$ for some $k\lt p\text{,}$ then $\krn{T^k}=\krn{T^p}=V\text{.}$ This implies that $T^k=0\text{,}$ violating the fact that $T$ has index $p\text{.}$ So the smallest value of $m$ is indeed $p\text{,}$ and we learn that $p\leq n\text{.}$
The structure of the kernels of powers of nilpotent linear transformations will be crucial to what follows. But immediately we can see a practical benefit. Suppose we are confronted with the question of whether or not an $n\times n$ matrix, $A\text{,}$ is nilpotent. If we don't quickly find a low power that equals the zero matrix, when do we stop trying higher and higher powers? Theorem 3.2.6 gives us the answer: if we don't see a zero matrix by the time we finish computing $A^n\text{,}$ then it is never going to happen. We will now take a look at one example of Theorem 3.2.6 in action.
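In computational terms, a minimal sketch of such a test (an addition here, assuming a square NumPy array with exact integer entries) might look like:

```python
import numpy as np

def nilpotency_index(A):
    """Return the index p if A is nilpotent, or None if it is not.
    By Theorem 3.2.6 we never need to look past the n-th power."""
    n = A.shape[0]
    P = np.eye(n, dtype=A.dtype)
    for p in range(1, n + 1):
        P = P @ A
        if not np.any(P):
            return p
    return None
```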
We will recycle the nilpotent matrix $A$ of index 4 from Example 3.2.2. We now know that we would only have needed to look at the first 6 powers of $A$ to decide the question, had the matrix not been nilpotent. We list bases for the null spaces of the powers of $A\text{.}$ (Notice how we are using null spaces for matrices interchangeably with kernels of linear transformations, see Theorem KNSI for justification.)
\begin{align*} \nsp{A}&=\nsp{ \begin{bmatrix} -3 & 3 & -2 & 5 & 0 & -5 \\ -3 & 5 & -3 & 4 & 3 & -9 \\ -3 & 4 & -2 & 6 & -4 & -3 \\ -3 & 3 & -2 & 5 & 0 & -5 \\ -3 & 3 & -2 & 4 & 2 & -6 \\ -2 & 3 & -2 & 2 & 4 & -7 \end{bmatrix}} =\spn{\set{ \colvector{2 \\ 2 \\ 5 \\ 2 \\ 1 \\ 0},\, \colvector{-1 \\ -1 \\ -5 \\ -1 \\ 0 \\ 1} }}\\ \nsp{A^2}&=\nsp{ \begin{bmatrix} 1 & -2 & 1 & 0 & -3 & 4 \\ 0 & -2 & 1 & 1 & -3 & 4 \\ 3 & 0 & 0 & -3 & 0 & 0 \\ 1 & -2 & 1 & 0 & -3 & 4 \\ 0 & -2 & 1 & 1 & -3 & 4 \\ -1 & -2 & 1 & 2 & -3 & 4 \end{bmatrix}} =\spn{\set{ \colvector{0 \\ 1 \\ 2 \\ 0 \\ 0 \\ 0},\, \colvector{2 \\ 1 \\ 0 \\ 2 \\ 0 \\ 0},\, \colvector{0 \\ -3 \\ 0 \\ 0 \\ 2 \\ 0},\, \colvector{0 \\ 2 \\ 0 \\ 0 \\ 0 \\ 1} }}\\ \nsp{A^3}&= \nsp{ \begin{bmatrix} 1 & 0 & 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & -1 & 0 & 0 \end{bmatrix}} =\spn{\set{ \colvector{0 \\ 1 \\ 0 \\ 0 \\ 0 \\ 0},\, \colvector{0 \\ 0 \\ 1 \\ 0 \\ 0 \\ 0},\, \colvector{1 \\ 0 \\ 0 \\ 1 \\ 0 \\ 0},\, \colvector{0 \\ 0 \\ 0 \\ 0 \\ 1 \\ 0},\, \colvector{0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1} }}\\ \nsp{A^4}&= \nsp{ \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}} =\spn{\set{ \colvector{1 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0},\, \colvector{0 \\ 1 \\ 0 \\ 0 \\ 0 \\ 0},\, \colvector{0 \\ 0 \\ 1 \\ 0 \\ 0 \\ 0},\, \colvector{0 \\ 0 \\ 0 \\ 1 \\ 0 \\ 0},\, \colvector{0 \\ 0 \\ 0 \\ 0 \\ 1 \\ 0},\, \colvector{0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1} }} \end{align*}
With the exception of some convenience scaling of the basis vectors in $\nsp{A^2}$ these are exactly the basis vectors described in Theorem BNS. We can see that the dimension of $\nsp{A}$ equals the geometric multiplicity of the zero eigenvalue. Why is this not an accident? We can see the dimensions of the kernels consistently increasing, and we can see that $\nsp{A^4}=\complex{6}\text{.}$ But Theorem 3.2.6 says a little more. Each successive kernel should be a superset of the previous one. We ought to be able to begin with a basis of $\nsp{A}$ and extend it to a basis of $\nsp{A^2}\text{.}$ Then we should be able to extend a basis of $\nsp{A^2}$ into a basis of $\nsp{A^3}\text{,}$ all with repeated applications of Theorem ELIS. Verify the following,
\begin{align*} \nsp{A}&= \spn{\set{ \colvector{2 \\ 2 \\ 5 \\ 2 \\ 1 \\ 0},\, \colvector{-1 \\ -1 \\ -5 \\ -1 \\ 0 \\ 1} }}\\ \nsp{A^2}&=\spn{\set{ \colvector{2 \\ 2 \\ 5 \\ 2 \\ 1 \\ 0},\, \colvector{-1 \\ -1 \\ -5 \\ -1 \\ 0 \\ 1},\, \colvector{0 \\ -3 \\ 0 \\ 0 \\ 2 \\ 0},\, \colvector{0 \\ 2 \\ 0 \\ 0 \\ 0 \\ 1} }}\\ \nsp{A^3}&= \spn{\set{ \colvector{2 \\ 2 \\ 5 \\ 2 \\ 1 \\ 0},\, \colvector{-1 \\ -1 \\ -5 \\ -1 \\ 0 \\ 1},\, \colvector{0 \\ -3 \\ 0 \\ 0 \\ 2 \\ 0},\, \colvector{0 \\ 2 \\ 0 \\ 0 \\ 0 \\ 1},\, \colvector{0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1} }}\\ \nsp{A^4}&= \spn{\set{ \colvector{2 \\ 2 \\ 5 \\ 2 \\ 1 \\ 0},\, \colvector{-1 \\ -1 \\ -5 \\ -1 \\ 0 \\ 1},\, \colvector{0 \\ -3 \\ 0 \\ 0 \\ 2 \\ 0},\, \colvector{0 \\ 2 \\ 0 \\ 0 \\ 0 \\ 1},\, \colvector{0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1},\, \colvector{0 \\ 0 \\ 0 \\ 1 \\ 0 \\ 0} }} \end{align*}
Do not be concerned at the moment about how these bases were constructed since we are not describing the applications of Theorem ELIS here. Do verify carefully for each alleged basis that (1) it is a superset of the basis for the previous kernel, (2) the basis vectors really are members of the kernel of the associated power of $A\text{,}$ (3) the basis is a linearly independent set, (4) the size of the basis is equal to the size of the basis found previously for each kernel. With these verifications, you will know that we have successfully demonstrated what Theorem 3.2.6 guarantees.
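One more computational cross-check (again an addition, reusing the matrix `A` defined in the earlier sketch): each kernel dimension is 6 minus the rank of $A^p\text{,}$ and these should come out as 2, 4, 5, 6, matching the sizes of the bases listed above.

```python
for p in range(1, 5):
    Ap = np.linalg.matrix_power(A, p)
    print(p, 6 - np.linalg.matrix_rank(Ap))   # expect 2, 4, 5, 6
```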
### Subsection 3.2.3 Restrictions to Generalized Eigenspaces
We have seen that we can decompose the domain of a linear transformation into a direct sum of generalized eigenspaces (Theorem 3.1.10). And we know that we can then easily obtain a basis that leads to a block diagonal matrix representation. The blocks of this matrix representation are matrix representations of restrictions to the generalized eigenspaces (for example, Example 3.1.12). And the next theorem tells us that these restrictions, adjusted slightly, provide us with a broad class of nilpotent linear transformations.
Notice first that every subspace of $V$ is invariant with respect to $I_V\text{,}$ so $I_{\geneigenspace{T}{\lambda}}=\restrict{I_V}{\geneigenspace{T}{\lambda}}\text{.}$ Let $n=\dimension{V}$ and choose $\vect{v}\in\geneigenspace{T}{\lambda}\text{.}$ Then with an application of Theorem 3.1.6,
\begin{equation*} \lteval{\left(\restrict{T}{\geneigenspace{T}{\lambda}}-\lambda I_{\geneigenspace{T}{\lambda}}\right)^n}{\vect{v}} =\lteval{\left(T-\lambda I_V\right)^n}{\vect{v}} =\zerovector \end{equation*}
So by Definition NLT, $\restrict{T}{\geneigenspace{T}{\lambda}}-\lambda I_{\geneigenspace{T}{\lambda}}$ is nilpotent.
The proof of Theorem 3.2.8 shows that the index of the linear transformation $\restrict{T}{\geneigenspace{T}{\lambda}}-\lambda I_{\geneigenspace{T}{\lambda}}$ is less than or equal to the dimension of $V\text{.}$ In practice, it must be less than or equal to the dimension of the domain, $\geneigenspace{T}{\lambda}\text{.}$ In any event, the exact value of this index will be of some interest, so we define it now. Notice that this is a property of the eigenvalue $\lambda\text{.}$ In many ways it is similar to the algebraic and geometric multiplicities of an eigenvalue (Definition AME, Definition GME).
###### Definition 3.2.9. Index of an Eigenvalue.
Suppose $\ltdefn{T}{V}{V}$ is a linear transformation with eigenvalue $\lambda\text{.}$ Then the index of $\lambda\text{,}$ $\indx{T}{\lambda}\text{,}$ is the index of the nilpotent linear transformation $\restrict{T}{\geneigenspace{T}{\lambda}}-\lambda I_{\geneigenspace{T}{\lambda}}\text{.}$
In Example 3.1.9 we computed the generalized eigenspaces of the linear transformation $\ltdefn{S}{\complex{6}}{\complex{6}}$ defined by $\lteval{S}{\vect{x}}=B\vect{x}$ where
\begin{equation*} B=\begin{bmatrix} 2 & -4 & 25 & -54 & 90 & -37 \\ 2 & -3 & 4 & -16 & 26 & -8 \\ 2 & -3 & 4 & -15 & 24 & -7 \\ 10 & -18 & 6 & -36 & 51 & -2 \\ 8 & -14 & 0 & -21 & 28 & 4 \\ 5 & -7 & -6 & -7 & 8 & 7 \end{bmatrix} \end{equation*}
The generalized eigenspace $\geneigenspace{S}{3}$ has dimension $2\text{,}$ while $\geneigenspace{S}{-1}$ has dimension $4\text{.}$ We will investigate each thoroughly in turn, with the intent being to illustrate Theorem 3.2.8. Many of our computations will be repeats of those done in Example 3.1.12.
For $U=\geneigenspace{S}{3}$ we compute a matrix representation of $\restrict{S}{U}$ using the basis found in Example 3.1.9,
\begin{equation*} D=\set{\vect{u}_1,\,\vect{u}_2}=\set{\colvector{4\\1\\1\\2\\1\\0},\,\colvector{-5\\-1\\-1\\-1\\0\\1}} \end{equation*}
Since $D$ has size 2, we obtain a $2\times 2$ matrix representation from
\begin{align*} \vectrep{D}{\lteval{\restrict{S}{U}}{\vect{u}_1}} &=\vectrep{D}{\colvector{11\\3\\3\\7\\4\\1}} =\vectrep{D}{4\vect{u}_1+\vect{u}_2} =\colvector{4\\1}\\ \vectrep{D}{\lteval{\restrict{S}{U}}{\vect{u}_2}} &=\vectrep{D}{\colvector{-14\\-3\\-3\\-4\\-1\\2}} =\vectrep{D}{(-1)\vect{u}_1+2\vect{u}_2} =\colvector{-1\\2} \end{align*}
Thus
\begin{equation*} M=\matrixrep{\restrict{S}{U}}{D}{D}=\begin{bmatrix} 4 & -1 \\ 1 & 2 \end{bmatrix} \end{equation*}
Now we can illustrate Theorem 3.2.8 with powers of the matrix representation (rather than the restriction itself),
\begin{align*} M-3I_2&= \begin{bmatrix}1 & -1 \\ 1 & -1\end{bmatrix}\\ \left(M-3I_2\right)^2&= \begin{bmatrix}0 & 0 \\ 0 & 0\end{bmatrix} \end{align*}
So $M-3I_2$ is a nilpotent matrix of index 2 (meaning that $\restrict{S}{U}-3I_U$ is a nilpotent linear transformation of index 2) and according to Definition 3.2.9 we say $\indx{S}{3}=2\text{.}$
For $W=\geneigenspace{S}{-1}$ we compute a matrix representation of $\restrict{S}{W}$ using the basis found in Example 3.1.9,
\begin{equation*} E=\set{\vect{w}_1,\,\vect{w}_2,\,\vect{w}_3,\,\vect{w}_4} =\set{ \colvector{5\\3\\1\\0\\0\\0},\, \colvector{-2\\-3\\0\\1\\0\\0},\, \colvector{4\\5\\0\\0\\1\\0},\, \colvector{-5\\-3\\0\\0\\0\\1} } \end{equation*}
Since $E$ has size 4, we obtain a $4\times 4$ matrix representation (Definition MR) from
\begin{align*} \vectrep{E}{\lteval{\restrict{S}{W}}{\vect{w}_1}} &=\vectrep{E}{\colvector{23\\5\\5\\2\\-2\\-2}} =\vectrep{E}{ 5\vect{w}_1+ 2\vect{w}_2+ (-2)\vect{w}_3+ (-2)\vect{w}_4 } =\colvector{5\\2\\-2\\-2}\\ \vectrep{E}{\lteval{\restrict{S}{W}}{\vect{w}_2}} &=\vectrep{E}{\colvector{-46\\-11\\-10\\-2\\5\\4}} =\vectrep{E}{ (-10)\vect{w}_1+ (-2)\vect{w}_2+ 5\vect{w}_3+ 4\vect{w}_4 } =\colvector{-10\\-2\\5\\4}\\ \vectrep{E}{\lteval{\restrict{S}{W}}{\vect{w}_3}} &=\vectrep{E}{\colvector{78\\19\\17\\1\\-10\\-7}} =\vectrep{E}{ 17\vect{w}_1+ \vect{w}_2+ (-10)\vect{w}_3+ (-7)\vect{w}_4 } =\colvector{17\\1\\-10\\-7}\\ \vectrep{E}{\lteval{\restrict{S}{W}}{\vect{w}_4}} &=\vectrep{E}{\colvector{-35\\-9\\-8\\2\\6\\3}} =\vectrep{E}{ (-8)\vect{w}_1+ 2\vect{w}_2+ 6\vect{w}_3+ 3\vect{w}_4 } =\colvector{-8\\2\\6\\3} \end{align*}
Thus
\begin{equation*} N=\matrixrep{\restrict{S}{W}}{E}{E} = \begin{bmatrix} 5 & -10 & 17 & -8 \\ 2 & -2 & 1 & 2 \\ -2 & 5 & -10 & 6 \\ -2 & 4 & -7 & 3 \end{bmatrix} \end{equation*}
Now we can illustrate Theorem 3.2.8 with powers of the matrix representation (rather than the restriction itself),
\begin{align*} N-(-1)I_4&=\begin{bmatrix} 6 & -10 & 17 & -8 \\ 2 & -1 & 1 & 2 \\ -2 & 5 & -9 & 6 \\ -2 & 4 & -7 & 4 \end{bmatrix}\\ \left(N-(-1)I_4\right)^2&=\begin{bmatrix} -2 & 3 & -5 & 2 \\ 4 & -6 & 10 & -4 \\ 4 & -6 & 10 & -4 \\ 2 & -3 & 5 & -2 \end{bmatrix}\\ \left(N-(-1)I_4\right)^3&=\begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \end{align*}
So $N-(-1)I_4$ is a nilpotent matrix of index 3 (meaning that $\restrict{S}{W}-(-1)I_W$ is a nilpotent linear transformation of index 3) and according to Definition 3.2.9 we say $\indx{S}{-1}=3\text{.}$
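As a quick numerical cross-check of these two computations, the NumPy sketch below (my own illustration, not part of the original example) forms $M-3I_2$ and $N-(-1)I_4$ and reports the smallest power of each that vanishes.

```python
import numpy as np

def nilpotency_index(B, tol=1e-12):
    """Return the smallest p with B^p = 0, or None if no such p <= size exists."""
    P = np.eye(B.shape[0])
    for p in range(1, B.shape[0] + 1):
        P = P @ B
        if np.max(np.abs(P)) < tol:
            return p
    return None

M = np.array([[4, -1],
              [1,  2]], dtype=float)
N = np.array([[ 5, -10,  17, -8],
              [ 2,  -2,   1,  2],
              [-2,   5, -10,  6],
              [-2,   4,  -7,  3]], dtype=float)

print(nilpotency_index(M - 3 * np.eye(2)))     # 2, so the index of the eigenvalue 3 is 2
print(nilpotency_index(N - (-1) * np.eye(4)))  # 3, so the index of the eigenvalue -1 is 3
```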
Notice that if we were to take the union of the two bases of the generalized eigenspaces, we would have a basis for $\complex{6}\text{.}$ Then a matrix representation of $S$ relative to this basis would be the same block diagonal matrix we found in Example 3.1.12, only we now understand each of these blocks as being very close to being a nilpotent matrix.
### Subsection 3.2.4 Jordan Blocks
We conclude this section about nilpotent linear transformations with an infinite family of nilpotent matrices and a doubly-infinite family of nearly nilpotent matrices.
###### Definition 3.2.11. Jordan Block.
Given the scalar $\lambda\in\complexes\text{,}$ the Jordan block $\jordan{n}{\lambda}$ is the $n\times n$ matrix defined by
\begin{equation*} \matrixentry{\jordan{n}{\lambda}}{ij}=\begin{cases} \lambda & i=j\\ 1 & j=i+1\\ 0 & \text{otherwise} \end{cases} \end{equation*}
A simple example of a Jordan block,
\begin{equation*} \jordan{4}{5}=\begin{bmatrix} 5 & 1 & 0 & 0\\ 0 & 5 & 1 & 0\\ 0 & 0 & 5 & 1\\ 0 & 0 & 0 & 5 \end{bmatrix} \end{equation*}
We will return to general Jordan blocks later, but in this section we are only interested in Jordan blocks where $\lambda=0\text{.}$ (But notice that $\jordan{n}{\lambda}-\lambda I_n=\jordan{n}{0}\text{.}$) Here is an example of why we are specializing in the $\lambda=0$ case now.
Consider
\begin{align*} \jordan{5}{0}&=\begin{bmatrix} 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 1\\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}\\ \end{align*}
and compute powers,
\begin{align*} \left(\jordan{5}{0}\right)^2&=\begin{bmatrix} 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}\\ \left(\jordan{5}{0}\right)^3&=\begin{bmatrix} 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}\\ \left(\jordan{5}{0}\right)^4&=\begin{bmatrix} 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}\\ \left(\jordan{5}{0}\right)^5&=\begin{bmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix} \end{align*}
So $\jordan{5}{0}$ is nilpotent of index $5\text{.}$ As before, we record some information about the eigenvalues and eigenvectors of this matrix. The only eigenvalue is zero, with algebraic multiplicity 5, the maximum possible (Theorem ME). The geometric multiplicity of this eigenvalue is just 1, the minimum possible (Theorem ME), as seen in the eigenspace,
\begin{equation*} \eigenspace{\jordan{5}{0}}{0}=\spn{\colvector{1 \\ 0 \\ 0 \\ 0 \\ 0}} \end{equation*}
There should not be any real surprises in this example. We can watch the ones in the powers of $\jordan{5}{0}$ slowly march off to the upper-right hand corner of the powers. Or we can watch the columns of the identity matrix march right, falling off the edge as they go. In some vague way, the eigenvalues and eigenvectors of this matrix are equally extreme.
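If you want to experiment with other sizes, a Jordan block is simple to build numerically. The NumPy sketch below (my own illustration, not part of the original text) constructs $\jordan{n}{\lambda}$ directly from Definition 3.2.11 and reproduces the behavior of the powers of $\jordan{5}{0}$ seen above.

```python
import numpy as np

def jordan_block(n, lam):
    """n x n Jordan block: lam on the diagonal, ones on the superdiagonal."""
    return lam * np.eye(n) + np.diag(np.ones(n - 1), k=1)

J = jordan_block(5, 0)
for k in range(1, 6):
    Jk = np.linalg.matrix_power(J, k)
    print(k, np.all(Jk == 0))   # False for k = 1, 2, 3, 4 and True for k = 5
```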
We can form combinations of Jordan blocks to build a variety of nilpotent matrices. Simply create a block diagonal matrix, where each block is a Jordan block.
Consider the matrix
\begin{align*} C&=\begin{bmatrix} \jordan{3}{0} & \zeromatrix & \zeromatrix \\ \zeromatrix & \jordan{3}{0} & \zeromatrix \\ \zeromatrix & \zeromatrix & \jordan{2}{0} \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}\\ \end{align*}
and compute powers,
\begin{align*} C^2&=\begin{bmatrix} 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}\\ C^3&=\begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} \end{align*}
So $C$ is nilpotent of index 3. You should notice how block diagonal matrices behave in products (much like diagonal matrices) and that it was the largest Jordan block that determined the index of this combination. All eight eigenvalues are zero, and each of the three Jordan blocks contributes one eigenvector to a basis for the eigenspace, resulting in zero having a geometric multiplicity of 3.
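The same experiment works for the block diagonal combination $C\text{.}$ The sketch below (NumPy and SciPy, my own illustration) assembles $C$ from its three Jordan blocks, confirms that the index is 3, the size of the largest block, and recovers the geometric multiplicity of the eigenvalue zero as 8 minus the rank of $C\text{,}$ which is 3.

```python
import numpy as np
from scipy.linalg import block_diag

def jordan_block(n, lam):
    return lam * np.eye(n) + np.diag(np.ones(n - 1), k=1)

C = block_diag(jordan_block(3, 0), jordan_block(3, 0), jordan_block(2, 0))

print(np.all(np.linalg.matrix_power(C, 2) == 0))  # False
print(np.all(np.linalg.matrix_power(C, 3) == 0))  # True: nilpotent of index 3
print(8 - np.linalg.matrix_rank(C))               # 3: geometric multiplicity of zero
```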
Since nilpotent matrices only have zero as an eigenvalue (Theorem 3.2.4), the algebraic multiplicity will be the maximum possible. However, by creating block diagonal matrices with Jordan blocks on the diagonal you should be able to attain any desired geometric multiplicity for this lone eigenvalue. Likewise, the size of the largest Jordan block employed will determine the index of the matrix. So nilpotent matrices with various combinations of index, geometric multiplicity and algebraic multiplicity are easy to manufacture. The predictable properties of block diagonal matrices in matrix products and eigenvector computations, along with the next theorem, make this possible. You might find Example NJB5 a useful companion to this proof.
We need to establish that a specific matrix is nilpotent of a specified index. The first column of $\jordan{n}{0}$ is the zero vector, and the remaining $n-1$ columns are the standard unit vectors $\vect{e}_i\text{,}$ $1\leq i\leq n-1$ (Definition SUV), which are also the first $n-1$ columns of the size $n$ identity matrix $I_n\text{.}$ As shorthand, write $J=\jordan{n}{0}\text{.}$
\begin{equation*} J=\left[\zerovector\left|\vect{e}_1\right.\left|\vect{e}_2\right.\left|\vect{e}_3\right.\left|\dots\right.\left|\vect{e}_{n-1}\right.\right] \end{equation*}
We will use the definition of matrix multiplication (Definition MM), together with a proof by induction, to study the powers of $J\text{.}$ Our claim is that
\begin{equation*} J^k= \left[\zerovector\left|\zerovector\right.\left|\dots\right.\left|\zerovector\right.\left|\vect{e}_1\right.\left|\vect{e}_2\right.\left|\dots\right.\left|\vect{e}_{n-k}\right.\right]\text{ for }0\leq k\leq n \end{equation*}
For the base case, $k=0\text{,}$ the definition $J^0=I_n$ establishes the claim.
For the induction step, first note that $J\vect{e}_1=\zerovector$ and $J\vect{e}_i=\vect{e}_{i-1}$ for $2\leq i\leq n\text{.}$ Then, assuming the claim is true for $k\text{,}$ we examine the $k+1$ case,
\begin{align*} J^{k+1}&=JJ^k\\ &=J\left[\zerovector\left|\zerovector\right.\left|\dots\right.\left|\zerovector\right.\left|\vect{e}_1\right.\left|\vect{e}_2\right.\left|\dots\right.\left|\vect{e}_{n-k}\right.\right]\\ &=\left[J\zerovector\left|J\zerovector\right.\left|\dots\right.\left|J\zerovector\right.\left|J\vect{e}_1\right.\left|J\vect{e}_2\right.\left|\dots\right.\left|J\vect{e}_{n-k}\right.\right]\\ &=\left[\zerovector\left|\zerovector\right.\left|\dots\right.\left|\zerovector\right.\left|\zerovector\right.\left|\vect{e}_1\right.\left|\vect{e}_2\right.\left|\dots\right.\left|\vect{e}_{n-k-1}\right.\right]\\ &=\left[\zerovector\left|\zerovector\right.\left|\dots\right.\left|\zerovector\right.\left|\vect{e}_1\right.\left|\vect{e}_2\right.\left|\dots\right.\left|\vect{e}_{n-(k+1)}\right.\right] \end{align*}
This concludes the induction.
So $J^k$ has a nonzero entry (a one) in row $n-k$ and column $n\text{,}$ for $0\leq k\leq n-1\text{,}$ and is therefore a nonzero matrix. However,
\begin{equation*} J^n=\left[\zerovector\left|\zerovector\right.\left|\dots\right.\left|\zerovector\right.\right]=\zeromatrix \end{equation*}
Thus, by Definition 3.2.1, $J$ is nilpotent of index $n\text{.}$
https://socratic.org/questions/what-is-the-empirical-formula-of-a-compound-containing-c-h-and-o-if-combustion-o | # What is the empirical formula of a compound containing C, H, and O if combustion of 3.69 g of the compound yields 5.40 g of CO_2 and 2.22 g of H_2O?
Sep 9, 2016
We finally get an empirical formula of $CH_2O$; I think the question is suspect.
#### Explanation:
ONLY the $C$ and the $H$ of the combustion can be presumed to derive from the unknown. (Why? Because the analysis is performed in air and typically an oxidant is added to the combustion.)
$\text{Moles (i) and mass (ii) of carbon}=\frac{5.40\cdot g}{44.01\cdot g\cdot mol^{-1}}=0.123\cdot mol\equiv 1.47\cdot g\cdot C$
$\text{Moles (i) and mass (ii) of hydrogen}=2\times\frac{2.22\cdot g}{18.01\cdot g\cdot mol^{-1}}=0.247\cdot mol\equiv 0.249\cdot g\cdot H$. Note that the hydrogen in the compound was combusted to water; this is why we multiply the molar quantity by 2.
And now, finally, we work out the percentage composition of $C , H , O$ with respect to the original sample:
$\%C=\frac{1.47\cdot g}{3.69\cdot g}\times 100\%=39.84\%$
$\%H=\frac{0.249\cdot g}{3.69\cdot g}\times 100\%=6.75\%$
$\%O=(100-39.84-6.75)\%=53.41\%$
Note that we cannot (usually) measure the percentage of oxygen in a microanalysis as extra oxidant is typically added. Thus O% is the percentage balance.
After all this work we start again. We assume that there were $100\cdot g$ of compound. And from this we work out the empirical formula.
$\text{Moles of carbon}=\frac{39.84\cdot g}{12.011\cdot g\cdot mol^{-1}}=3.32\cdot mol\cdot C$.
$\text{Moles of hydrogen}=\frac{6.75\cdot g}{1.00794\cdot g\cdot mol^{-1}}=6.70\cdot mol\cdot H$.
$\text{Moles of oxygen}=\frac{53.41\cdot g}{15.999\cdot g\cdot mol^{-1}}=3.34\cdot mol\cdot O$.
If we divide through by the smallest molar quantity, we get an empirical formula of $CH_2O$. I am not terribly satisfied with this question. A molecular mass should have been quoted. This is a lot of work for a simple sugar.
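The whole calculation is short enough to script. The Python sketch below is my own recap of the arithmetic above (it is not part of the original answer); the atomic and molecular masses are standard values, and the final step rounds the mole ratio to convenient figures.

```python
# Combustion analysis: 3.69 g of sample gives 5.40 g CO2 and 2.22 g H2O.
m_sample, m_co2, m_h2o = 3.69, 5.40, 2.22
M_C, M_H, M_O = 12.011, 1.008, 15.999
M_CO2, M_H2O = 44.01, 18.015

mol_C = m_co2 / M_CO2                          # one C per CO2
mol_H = 2 * m_h2o / M_H2O                      # two H per H2O
mass_O = m_sample - mol_C * M_C - mol_H * M_H  # oxygen taken as the balance
mol_O = mass_O / M_O

smallest = min(mol_C, mol_H, mol_O)
print([round(x / smallest, 2) for x in (mol_C, mol_H, mol_O)])
# -> approximately [1.0, 2.0, 1.0], i.e. the empirical formula CH2O
```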
https://statistical-engineering.com/weibull-2/

# Weibull Analysis
(Software for constructing these plots is available.)
Unfortunately the Weibull model is often used, not because it is the best tool for the job, but because it can approximate many other probability densities, like the normal and LogNormal. It has been compared to a mechanic’s crescent wrench, which isn’t as effective as having a complete set of wrenches, but is more likely to be available. It is always preferable, however, to use the best tool for the job, not just the easiest.
• Weibull is not location-scale. That means that a good Weibull model for data in the 100 to 1000 range may be less effective for data having a similar shape, but in the 1000 to 10,000 range. Modeling data farther removed from $$x = 0$$ may require a $$t_0$$ “correction.” This is dangerous – sometimes even doubleplusungood. Why? Because often we are interested in the early occurrences (e.g. failures), those with a low probability but very high consequence. Using a $$t_0$$ “correction” defines the probability to be ZERO for ALL occurrences less than $$t_0$$. In other words you have defined away your problem! Dumb!
• Compare a 3-parameter Weibull fit to a 2-parameter lognormal fit.
• Beware of assigning probability zero to the very occurrences you are really interested in.
• Beware of over-fitting (i.e. – talking yourself into thinking you know more than the data told you).
• You can have both left and right censoring with, say, a signal response, limited at the left by background noise and on the right by maximum signal output; fatigue failure data, however, will be right censored only, caused by parts removed from testing before they fail.
• “Weibayes” isn’t Bayesian. It’s opportunistic marketing gone awry that treats engineers as statistical rubes, who will believe anything some “expert” tells them. The unfortunate cobbling-together of two surnames is insulting to both men, and the ad hoc “methodology” is specious. “Weibayes” simply assumes that you already know the exact value of the Weibull shape parameter, beta (“slope”), and uses the data to estimate the scale parameter, eta.
• Real Bayesian analysis assumes that you know only approximate values for the shape and scale and also have some idea about their variabilities. These “priors” are then updated in light of the data to provide more accurate estimates of both eta and beta. “Weibayes” leaves the unsuspecting user with the mistaken notion that he can compute what he needs to know (an early failure percentile, for example) based on a guess. While that might work if you are interested in, say, the 0.1 percentile, it is laughable for anything smaller. Are you betting your company’s future on a guess?
• A promise of “straightforward explanations free of confusing statistical jargon” is sometimes also free of statistical validity, viz. the quoted “probabilities” are improbable.
• Simultaneous confidence bounds: It is well known that Weibull model parameters are highly correlated. Computing confidence bounds for them individually is therefore misleading. The method suggested by Cheng and Iles (“Confidence Bands for Cumulative Distribution Functions of Continuous Random Variables,” Technometrics, vol. 25, no. 1, pp. 77–86 (1983)) computes their joint influence and produces simultaneous confidence bounds on the cumulative distribution function.
• Weibull analysis is NOT a regression of plotting positions:
• plotting positions are arbitrary. There are several in common usage.
• the regression treats X as known and the percentile as random. In truth, the percentile is known (after considering possible censored observations) and the X value is a random variable. (Think of S–N curves where $$N=f(stress)$$, not $$stress = f(N)$$.)
• The correct method is MLE (maximum likelihood estimation); anything that disagrees with that is therefore wrong. Plotting the three common regression fits alongside the MLE fit makes the difference plain (see the sketch after this list).
• There are two valid types of confidence bounds, binomial and log-likelihood ratio. Regression bounds are wrong because they are based on the observations’ plotting positions and treat the fit as an OLS (Ordinary Least Squares) regression.
• Situations where XXX software is not appropriate: multiple censoring, interval censoring. Contact an expert: me.
• If Time has a Weibull distribution, then log(Time) has a SEV (smallest extreme value) distribution.
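To make the MLE point and the closing SEV remark concrete, here is a minimal SciPy sketch (my own illustration, not taken from this site). It simulates complete, uncensored failure times, fits a two-parameter Weibull by maximum likelihood, and checks that the log of the data is well described by a smallest-extreme-value (Gumbel-minimum) distribution with location log(eta) and scale 1/beta. With censored observations the likelihood must also include survival terms, which is exactly where plotting-position regressions fall apart.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
beta, eta = 2.5, 1000.0                    # shape ("slope") and scale (characteristic life)
t = eta * rng.weibull(beta, size=500)      # complete (uncensored) failure times

# Maximum likelihood fit of a 2-parameter Weibull (location pinned at zero).
beta_hat, _, eta_hat = stats.weibull_min.fit(t, floc=0)
print(beta_hat, eta_hat)                   # close to 2.5 and 1000

# log(T) should follow a smallest-extreme-value (Gumbel-min) distribution.
loc_hat, scale_hat = stats.gumbel_l.fit(np.log(t))
print(loc_hat, np.log(eta))                # location ~ log(eta)
print(scale_hat, 1 / beta)                 # scale ~ 1/beta
```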
https://astroautomata.com/PySR/api/

# PySRRegressor Reference
High-performance symbolic regression algorithm.
This is the scikit-learn interface for SymbolicRegression.jl. This model will automatically search for equations which fit a given dataset subject to a particular loss and set of constraints.
Most default parameters have been tuned over several example equations, but you should adjust niterations, binary_operators, unary_operators to your requirements. You can view more detailed explanations of the options on the options page of the documentation.
Parameters:
Name Type Description Default
model_selection str
Model selection criterion when selecting a final expression from the list of best expression at each complexity. Can be 'accuracy', 'best', or 'score'. Default is 'best'. 'accuracy' selects the candidate model with the lowest loss (highest accuracy). 'score' selects the candidate model with the highest score. Score is defined as the negated derivative of the log-loss with respect to complexity - if an expression has a much better loss at a slightly higher complexity, it is preferred. 'best' selects the candidate model with the highest score among expressions with a loss better than at least 1.5x the most accurate model.
'best'
binary_operators list[str]
List of strings for binary operators used in the search. See the operators page for more details. Default is ["+", "-", "*", "/"].
None
unary_operators list[str]
Operators which only take a single scalar as input. For example, "cos" or "exp". Default is None.
None
niterations int
Number of iterations of the algorithm to run. The best equations are printed and migrate between populations at the end of each iteration. Default is 40.
40
populations int
Number of populations running. Default is 15.
15
population_size int
Number of individuals in each population. Default is 33.
33
max_evals int
Limits the total number of evaluations of expressions to this number. Default is None.
None
maxsize int
Max complexity of an equation. Default is 20.
20
maxdepth int
Max depth of an equation. You can use both maxsize and maxdepth. maxdepth is by default not used. Default is None.
None
warmup_maxsize_by float
Whether to slowly increase max size from a small number up to the maxsize (if greater than 0). If greater than 0, says the fraction of training time at which the current maxsize will reach the user-passed maxsize. Default is 0.0.
0.0
timeout_in_seconds float
Make the search return early once this many seconds have passed. Default is None.
None
constraints dict[str, int | tuple[int, int]]
Dictionary of int (unary) or 2-tuples (binary), this enforces maxsize constraints on the individual arguments of operators. E.g., 'pow': (-1, 1) says that power laws can have any complexity left argument, but only 1 complexity in the right argument. Use this to force more interpretable solutions. Default is None.
None
nested_constraints dict[str, dict]
Specifies how many times a combination of operators can be nested. For example, {"sin": {"cos": 0}, "cos": {"cos": 2}} specifies that cos may never appear within a sin, but sin can be nested with itself an unlimited number of times. The second term specifies that cos can be nested up to 2 times within a cos, so that cos(cos(cos(x))) is allowed (as well as any combination of + or - within it), but cos(cos(cos(cos(x)))) is not allowed. When an operator is not specified, it is assumed that it can be nested an unlimited number of times. This requires that there is no operator which is used both in the unary operators and the binary operators (e.g., - could be both subtract, and negation). For binary operators, you only need to provide a single number: both arguments are treated the same way, and the max of each argument is constrained. Default is None. (See the configuration sketch after this parameter table for an example.)
None
loss str
String of Julia code specifying the loss function. Can either be a loss from LossFunctions.jl, or your own loss written as a function. Examples of custom written losses include: myloss(x, y) = abs(x-y) for non-weighted, or myloss(x, y, w) = w*abs(x-y) for weighted. The included losses include: Regression: LPDistLoss{P}(), L1DistLoss(), L2DistLoss() (mean square), LogitDistLoss(), HuberLoss(d), L1EpsilonInsLoss(ϵ), L2EpsilonInsLoss(ϵ), PeriodicLoss(c), QuantileLoss(τ). Classification: ZeroOneLoss(), PerceptronLoss(), L1HingeLoss(), SmoothedL1HingeLoss(γ), ModifiedHuberLoss(), L2MarginLoss(), ExpLoss(), SigmoidLoss(), DWDMarginLoss(q). Default is "L2DistLoss()".
'L2DistLoss()'
complexity_of_operators dict[str, float]
If you would like to use a complexity other than 1 for an operator, specify the complexity here. For example, {"sin": 2, "+": 1} would give a complexity of 2 for each use of the sin operator, and a complexity of 1 for each use of the + operator (which is the default). You may specify real numbers for a complexity, and the total complexity of a tree will be rounded to the nearest integer after computing. Default is None.
None
complexity_of_constants float
Complexity of constants. Default is 1.
1
complexity_of_variables float
Complexity of variables. Default is 1.
1
parsimony float
Multiplicative factor for how much to punish complexity. Default is 0.0032.
0.0032
use_frequency bool
Whether to measure the frequency of complexities, and use that instead of parsimony to explore equation space. Will naturally find equations of all complexities. Default is True.
True
use_frequency_in_tournament bool
Whether to use the frequency mentioned above in the tournament, rather than just the simulated annealing. Default is True.
True
alpha float
Initial temperature for simulated annealing (requires annealing to be True). Default is 0.1.
0.1
annealing bool
Whether to use annealing. Default is False.
False
early_stop_condition float | str
Stop the search early if this loss is reached. You may also pass a string containing a Julia function which takes a loss and complexity as input, for example: "f(loss, complexity) = (loss < 0.1) && (complexity < 10)". Default is None.
None
ncyclesperiteration int
Number of total mutations to run, per 10 samples of the population, per iteration. Default is 550.
550
fraction_replaced float
How much of population to replace with migrating equations from other populations. Default is 0.000364.
0.000364
fraction_replaced_hof float
How much of population to replace with migrating equations from hall of fame. Default is 0.035.
0.035
weight_add_node float
Relative likelihood for mutation to add a node. Default is 0.79.
0.79
weight_insert_node float
Relative likelihood for mutation to insert a node. Default is 5.1.
5.1
weight_delete_node float
Relative likelihood for mutation to delete a node. Default is 1.7.
1.7
weight_do_nothing float
Relative likelihood for mutation to leave the individual. Default is 0.21.
0.21
weight_mutate_constant float
Relative likelihood for mutation to change the constant slightly in a random direction. Default is 0.048.
0.048
weight_mutate_operator float
Relative likelihood for mutation to swap an operator. Default is 0.47.
0.47
weight_randomize float
Relative likelihood for mutation to completely delete and then randomly generate the equation Default is 0.00023.
0.00023
weight_simplify float
Relative likelihood for mutation to simplify constant parts by evaluation Default is 0.0020.
0.002
crossover_probability float
Absolute probability of crossover-type genetic operation, instead of a mutation. Default is 0.066.
0.066
skip_mutation_failures bool
Whether to skip mutation and crossover failures, rather than simply re-sampling the current member. Default is True.
True
migration bool
Whether to migrate. Default is True.
True
hof_migration bool
Whether to have the hall of fame migrate. Default is True.
True
topn int
How many top individuals migrate from each population. Default is 12.
12
should_optimize_constants bool
Whether to numerically optimize constants (Nelder-Mead/Newton) at the end of each iteration. Default is True.
True
optimizer_algorithm str
Optimization scheme to use for optimizing constants. Can currently be NelderMead or BFGS. Default is "BFGS".
'BFGS'
optimizer_nrestarts int
Number of time to restart the constants optimization process with different initial conditions. Default is 2.
2
optimize_probability float
Probability of optimizing the constants during a single iteration of the evolutionary algorithm. Default is 0.14.
0.14
optimizer_iterations int
Number of iterations that the constants optimizer can take. Default is 8.
8
perturbation_factor float
Constants are perturbed by a max factor of (perturbation_factor*T + 1). Either multiplied by this or divided by this. Default is 0.076.
0.076
tournament_selection_n int
Number of expressions to consider in each tournament. Default is 10.
10
tournament_selection_p float
Probability of selecting the best expression in each tournament. The probability will decay as p*(1-p)^n for other expressions, sorted by loss. Default is 0.86.
0.86
procs int
Number of processes (=number of populations running). Default is cpu_count().
cpu_count()
multithreading bool
Use multithreading instead of distributed backend. Using procs=0 will turn off both. Default is True.
None
cluster_manager str
For distributed computing, this sets the job queue system. Set to one of "slurm", "pbs", "lsf", "sge", "qrsh", "scyld", or "htc". If set to one of these, PySR will run in distributed mode, and use procs to figure out how many processes to launch. Default is None.
None
batching bool
Whether to compare population members on small batches during evolution. Still uses full dataset for comparing against hall of fame. Default is False.
False
batch_size int
The amount of data to use if doing batching. Default is 50.
50
fast_cycle bool
Batch over population subsamples. This is a slightly different algorithm than regularized evolution, but does cycles 15% faster. May be algorithmically less efficient. Default is False.
False
precision int
What precision to use for the data. By default this is 32 (float32), but you can select 64 or 16 as well, giving you 64 or 16 bits of floating point precision, respectively. Default is 32.
32
random_state int, Numpy RandomState instance or None
Pass an int for reproducible results across multiple function calls. See the scikit-learn glossary entry for random_state. Default is None.
None
deterministic bool
Make a PySR search give the same result every run. To use this, you must turn off parallelism (with procs=0, multithreading=False), and set random_state to a fixed seed. Default is False.
False
warm_start bool
Tells fit to continue from where the last call to fit finished. If false, each call to fit will be fresh, overwriting previous results. Default is False.
False
verbosity int
What verbosity level to use. 0 means minimal print statements. Default is 1e9.
1000000000.0
update_verbosity int
What verbosity level to use for package updates. Will take value of verbosity if not given. Default is None.
None
progress bool
Whether to use a progress bar instead of printing to stdout. Default is True.
True
equation_file str
Where to save the files (.csv extension). Default is None.
None
temp_equation_file bool
Whether to put the hall of fame file in the temp directory. Deletion is then controlled with the delete_tempfiles parameter. Default is False.
False
tempdir str
directory for the temporary files. Default is None.
None
delete_tempfiles bool
Whether to delete the temporary files after finishing. Default is True.
True
julia_project str
A Julia environment location containing a Project.toml (and potentially the source code for SymbolicRegression.jl). Default gives the Python package directory, where a Project.toml file should be present from the install.
None
update bool
Whether to automatically update Julia packages. Default is True.
True
output_jax_format bool
Whether to create a 'jax_format' column in the output, containing jax-callable functions and the default parameters in a jax array. Default is False.
False
output_torch_format bool
Whether to create a 'torch_format' column in the output, containing a torch module with trainable parameters. Default is False.
False
extra_sympy_mappings dict[str, Callable]
Provides mappings between custom binary_operators or unary_operators defined in julia strings, to those same operators defined in sympy. E.G if unary_operators=["inv(x)=1/x"], then for the fitted model to be export to sympy, extra_sympy_mappings would be {"inv": lambda x: 1/x}. Default is None.
None
extra_jax_mappings dict[Callable, str]
Similar to extra_sympy_mappings but for model export to jax. The dictionary maps sympy functions to jax functions. For example: extra_jax_mappings={sympy.sin: "jnp.sin"} maps the sympy.sin function to the equivalent jax expression jnp.sin. Default is None.
None
extra_torch_mappings dict[Callable, Callable]
The same as extra_jax_mappings but for model export to pytorch. Note that the dictionary keys should be callable pytorch expressions. For example: extra_torch_mappings={sympy.sin: torch.sin}. Default is None.
None
denoise bool
Whether to use a Gaussian Process to denoise the data before inputting to PySR. Can help PySR fit noisy data. Default is False.
False
select_k_features int
whether to run feature selection in Python using random forests, before passing to the symbolic regression code. None means no feature selection; an int means select that many features. Default is None.
None
**kwargs dict
Supports deprecated keyword arguments. Other arguments will result in an error.
{}
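To make a few of these options concrete, here is a short configuration sketch (my own illustration; the particular values are arbitrary, but every keyword used is documented in the table above):

```python
from pysr import PySRRegressor

model = PySRRegressor(
    niterations=100,
    binary_operators=["+", "-", "*", "/"],
    unary_operators=["cos", "exp"],
    maxsize=25,
    # cos may nest inside cos at most once, and may never appear inside exp
    nested_constraints={"cos": {"cos": 1}, "exp": {"cos": 0}},
    # stop early once a simple, accurate expression is found (Julia syntax)
    early_stop_condition="f(loss, complexity) = (loss < 1e-6) && (complexity < 12)",
    timeout_in_seconds=60 * 10,
    model_selection="best",
)
```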
Attributes:
Name Type Description
equations_ pandas.DataFrame | list[pandas.DataFrame]
Processed DataFrame containing the results of model fitting.
n_features_in_ int
Number of features seen during fit.
feature_names_in_ ndarray of shape (n_features_in_,)
Names of features seen during fit. Defined only when X has feature names that are all strings.
nout_ int
Number of output dimensions.
selection_mask_ list[int] of length select_k_features
List of indices for input features that are selected when select_k_features is set.
tempdir_ Path
Path to the temporary equations directory.
equation_file_ str
Output equation file name produced by the julia backend.
raw_julia_state_ tuple[list[PyCall.jlwrap], PyCall.jlwrap]
The state for the julia SymbolicRegression.jl backend post fitting.
equation_file_contents_ list[pandas.DataFrame]
Contents of the equation file output by the Julia backend.
show_pickle_warnings_ bool
Whether to show warnings about what attributes can be pickled.
Examples:
>>> import numpy as np
>>> from pysr import PySRRegressor
>>> randstate = np.random.RandomState(0)
>>> X = 2 * randstate.randn(100, 5)
>>> # y = 2.5382 * cos(x_3) + x_0^2 - 0.5
>>> y = 2.5382 * np.cos(X[:, 3]) + X[:, 0] ** 2 - 0.5
>>> model = PySRRegressor(
... niterations=40,
... binary_operators=["+", "*"],
... unary_operators=[
... "cos",
... "exp",
... "sin",
... "inv(x) = 1/x", # Custom operator (julia syntax)
... ],
... model_selection="best",
... loss="loss(x, y) = (x - y)^2", # Custom loss function (julia syntax)
... )
>>> model.fit(X, y)
>>> model
PySRRegressor.equations_ = [
0 0.000000 3.8552167 3.360272e+01 1
1 1.189847 (x0 * x0) 3.110905e+00 3
2 0.010626 ((x0 * x0) + -0.25573406) 3.045491e+00 5
3 0.896632 (cos(x3) + (x0 * x0)) 1.242382e+00 6
4 0.811362 ((x0 * x0) + (cos(x3) * 2.4384754)) 2.451971e-01 8
5 >>>> 13.733371 (((cos(x3) * 2.5382) + (x0 * x0)) + -0.5) 2.889755e-13 10
6 0.194695 ((x0 * x0) + (((cos(x3) + -0.063180044) * 2.53... 1.957723e-13 12
7 0.006988 ((x0 * x0) + (((cos(x3) + -0.32505524) * 1.538... 1.944089e-13 13
8 0.000955 (((((x0 * x0) + cos(x3)) + -0.8251649) + (cos(... 1.940381e-13 15
]
>>> model.score(X, y)
1.0
>>> model.predict(np.array([1,2,3,4,5]))
array([-1.15907818, -1.15907818, -1.15907818, -1.15907818, -1.15907818])
Source code in pysr/sr.py, lines 628–820. (The collapsed source listing of __init__, which stores each documented keyword argument as an attribute of the same name and then handles deprecated keyword arguments, is omitted here.)
## pysr.sr.PySRRegressor.fit(X, y, Xresampled=None, weights=None, variable_names=None)
Search for equations to fit the dataset and store them in self.equations_.
Parameters:
Name Type Description Default
X ndarray | pandas.DataFrame
Training data of shape (n_samples, n_features).
required
y ndarray | pandas.DataFrame
Target values of shape (n_samples,) or (n_samples, n_targets). Will be cast to X's dtype if necessary.
required
Xresampled ndarray | pandas.DataFrame
Resampled training data, of shape (n_resampled, n_features), at which to generate the denoised data. This will be used as the training data, rather than X.
None
weights ndarray | pandas.DataFrame
Weight array of the same shape as y. Each element is how to weight the mean-square-error loss for that particular element of y. Alternatively, if a custom loss was set, it can be used in arbitrary ways.
None
variable_names list[str]
A list of names for the variables, rather than "x0", "x1", etc. If X is a pandas dataframe, the column names will be used instead of variable_names. Cannot contain spaces or special characters. Avoid variable names which are also function names in sympy, such as "N".
None
Returns:
Name Type Description
self object
Fitted estimator.
Source code in pysr/sr.py, lines 1628–1752. (The collapsed source listing of fit, whose docstring repeats the parameter descriptions above, is omitted here.)
## pysr.sr.PySRRegressor.predict(X, index=None)
Predict y from input X using the equation chosen by model_selection.
You may see what equation is used by printing this object. X should have the same columns as the training data.
Parameters:
Name Type Description Default
X ndarray | pandas.DataFrame
Training data of shape (n_samples, n_features).
required
index int | list[int]
If you want to compute the output of an expression using a particular row of self.equations_, you may specify the index here. For multiple output equations, you must pass a list of indices in the same order.
None
Returns:
Name Type Description
y_predicted ndarray of shape (n_samples, nout_)
Values predicted by substituting X into the fitted symbolic regression model.
Raises:
Type Description
ValueError
Raises if the best_equation cannot be evaluated.
Source code in pysr/sr.py, lines 1773–1844. (The collapsed source listing of predict is omitted here.)
## pysr.sr.PySRRegressor.from_file(equation_file, *, binary_operators=None, unary_operators=None, n_features_in=None, feature_names_in=None, selection_mask=None, nout=1, **pysr_kwargs) classmethod
Create a model from a saved model checkpoint or equation file.
Parameters:
Name Type Description Default
equation_file str
Path to a pickle file containing a saved model, or a csv file containing equations.
required
binary_operators list[str]
The same binary operators used when creating the model. Not needed if loading from a pickle file.
None
unary_operators list[str]
The same unary operators used when creating the model. Not needed if loading from a pickle file.
None
n_features_in int
Number of features passed to the model. Not needed if loading from a pickle file.
None
feature_names_in list[str]
Names of the features passed to the model. Not needed if loading from a pickle file.
None
selection_mask list[bool]
If using select_k_features, you must pass model.selection_mask_ here. Not needed if loading from a pickle file.
None
nout int
Number of outputs of the model. Not needed if loading from a pickle file. Default is 1.
1
**pysr_kwargs dict
Any other keyword arguments to initialize the PySRRegressor object. These will overwrite those stored in the pickle file. Not needed if loading from a pickle file.
{}
Returns:
Name Type Description
model PySRRegressor
The model with fitted equations.
Source code in pysr/sr.py, lines 822–932. (The collapsed source listing of from_file is omitted here.)
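A brief usage sketch (my own illustration; the checkpoint file name is hypothetical and should point at whatever your own run produced):

```python
from pysr import PySRRegressor

# Load a finished search from its pickle checkpoint (file name is hypothetical).
model = PySRRegressor.from_file("hall_of_fame_example.pkl")
print(model)           # prints the recovered equations_
print(model.sympy())   # best expression under the stored model_selection
```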
## pysr.sr.PySRRegressor.sympy(index=None)
Return sympy representation of the equation(s) chosen by model_selection.
Parameters:

| Name | Type | Description | Default |
| ---- | ---- | ----------- | ------- |
| index | int \| list[int] | If you wish to select a particular equation from self.equations_, give the index number here. This overrides the model_selection parameter. If there are multiple output features, then pass a list of indices with the order the same as the output feature. | None |

Returns:

| Name | Type | Description |
| ---- | ---- | ----------- |
| best_equation | str, list[str] of length nout_ | SymPy representation of the best equation. |
Source code in pysr/sr.py
```python
def sympy(self, index=None):
    """
    Return sympy representation of the equation(s) chosen by model_selection.

    Parameters
    ----------
    index : int | list[int]
        If you wish to select a particular equation from self.equations_,
        give the index number here. This overrides the model_selection
        parameter. If there are multiple output features, then pass a list
        of indices with the order the same as the output feature.

    Returns
    -------
    best_equation : str, list[str] of length nout_
        SymPy representation of the best equation.
    """
    self.refresh()
    best_equation = self.get_best(index=index)
    if self.nout_ > 1:
        return [eq["sympy_format"] for eq in best_equation]
    return best_equation["sympy_format"]
```
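A short, hypothetical example of how sympy() might be called after fitting (the data and operator choices below are made up for illustration):

```python
import numpy as np
from pysr import PySRRegressor

# Toy data: y = 2.5 * cos(x3) + x0^2 - 1 (purely illustrative).
X = np.random.randn(100, 5)
y = 2.5 * np.cos(X[:, 3]) + X[:, 0] ** 2 - 1

model = PySRRegressor(
    niterations=5,
    binary_operators=["+", "*"],
    unary_operators=["cos"],
)
model.fit(X, y)

# SymPy expression of the equation selected by model_selection.
print(model.sympy())
```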
## pysr.sr.PySRRegressor.latex(index=None, precision=3)
Return latex representation of the equation(s) chosen by model_selection.
Parameters:

| Name | Type | Description | Default |
| ---- | ---- | ----------- | ------- |
| index | int \| list[int] | If you wish to select a particular equation from self.equations_, give the index number here. This overrides the model_selection parameter. If there are multiple output features, then pass a list of indices with the order the same as the output feature. | None |
| precision | int | The number of significant figures shown in the LaTeX representation. Default is 3. | 3 |

Returns:

| Name | Type | Description |
| ---- | ---- | ----------- |
| best_equation | str or list[str] of length nout_ | LaTeX expression of the best equation. |
Source code in pysr/sr.py
```python
def latex(self, index=None, precision=3):
    """
    Return latex representation of the equation(s) chosen by model_selection.

    Parameters
    ----------
    index : int | list[int]
        If you wish to select a particular equation from self.equations_,
        give the index number here. This overrides the model_selection
        parameter. If there are multiple output features, then pass a list
        of indices with the order the same as the output feature.
    precision : int
        The number of significant figures shown in the LaTeX representation.
        Default is 3.

    Returns
    -------
    best_equation : str or list[str] of length nout_
        LaTeX expression of the best equation.
    """
    self.refresh()
    sympy_representation = self.sympy(index=index)
    if self.nout_ > 1:
        output = []
        for s in sympy_representation:
            latex = to_latex(s, prec=precision)
            output.append(latex)
        return output
    return to_latex(sympy_representation, prec=precision)
```
## pysr.sr.PySRRegressor.pytorch(index=None)
Return pytorch representation of the equation(s) chosen by model_selection.
Each equation (multiple given if there are multiple outputs) is a PyTorch module containing the parameters as trainable attributes. You can use the module like any other PyTorch module: module(X), where X is a tensor with the same column ordering as trained with.
Parameters:

| Name | Type | Description | Default |
| ---- | ---- | ----------- | ------- |
| index | int \| list[int] | If you wish to select a particular equation from self.equations_, give the index number here. This overrides the model_selection parameter. If there are multiple output features, then pass a list of indices with the order the same as the output feature. | None |

Returns:

| Name | Type | Description |
| ---- | ---- | ----------- |
| best_equation | torch.nn.Module | PyTorch module representing the expression. |
Source code in pysr/sr.py
```python
def pytorch(self, index=None):
    """
    Return pytorch representation of the equation(s) chosen by model_selection.

    Each equation (multiple given if there are multiple outputs) is a PyTorch
    module containing the parameters as trainable attributes. You can use the
    module like any other PyTorch module: module(X), where X is a tensor with
    the same column ordering as trained with.

    Parameters
    ----------
    index : int | list[int]
        If you wish to select a particular equation from self.equations_,
        give the index number here. This overrides the model_selection
        parameter. If there are multiple output features, then pass a list
        of indices with the order the same as the output feature.

    Returns
    -------
    best_equation : torch.nn.Module
        PyTorch module representing the expression.
    """
    self.set_params(output_torch_format=True)
    self.refresh()
    best_equation = self.get_best(index=index)
    if self.nout_ > 1:
        return [eq["torch_format"] for eq in best_equation]
    return best_equation["torch_format"]
```
## pysr.sr.PySRRegressor.jax(index=None)
Return jax representation of the equation(s) chosen by model_selection.
Each equation (multiple given if there are multiple outputs) is a dictionary containing {"callable": func, "parameters": params}. To call func, pass func(X, params). This function is differentiable using jax.grad.
Parameters:

| Name | Type | Description | Default |
| ---- | ---- | ----------- | ------- |
| index | int \| list[int] | If you wish to select a particular equation from self.equations_, give the index number here. This overrides the model_selection parameter. If there are multiple output features, then pass a list of indices with the order the same as the output feature. | None |

Returns:

| Name | Type | Description |
| ---- | ---- | ----------- |
| best_equation | dict[str, Any] | Dictionary of callable jax function in "callable" key, and jax array of parameters as "parameters" key. |
Source code in pysr/sr.py
```python
def jax(self, index=None):
    """
    Return jax representation of the equation(s) chosen by model_selection.

    Each equation (multiple given if there are multiple outputs) is a
    dictionary containing {"callable": func, "parameters": params}. To call
    func, pass func(X, params). This function is differentiable using
    jax.grad.

    Parameters
    ----------
    index : int | list[int]
        If you wish to select a particular equation from self.equations_,
        give the index number here. This overrides the model_selection
        parameter. If there are multiple output features, then pass a list
        of indices with the order the same as the output feature.

    Returns
    -------
    best_equation : dict[str, Any]
        Dictionary of callable jax function in "callable" key, and jax array
        of parameters as "parameters" key.
    """
    self.set_params(output_jax_format=True)
    self.refresh()
    best_equation = self.get_best(index=index)
    if self.nout_ > 1:
        return [eq["jax_format"] for eq in best_equation]
    return best_equation["jax_format"]
```
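A sketch of how the jax() export described above could be used, assuming a model has already been fitted; the input shape and names below are illustrative only:

```python
import jax
import jax.numpy as jnp

# Assuming `model` is an already-fitted PySRRegressor (see the earlier example).
jax_eq = model.jax()
f, params = jax_eq["callable"], jax_eq["parameters"]

X = jnp.ones((10, 5))          # placeholder input, same column order as training
preds = f(X, params)           # evaluate the selected equation
grads = jax.grad(lambda p: f(X, p).sum())(params)  # differentiable in the parameters
```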
## pysr.sr.PySRRegressor.latex_table(indices=None, precision=3, columns=['equation', 'complexity', 'loss', 'score'])
Create a LaTeX/booktabs table for all, or some, of the equations.
Parameters:

| Name | Type | Description | Default |
| ---- | ---- | ----------- | ------- |
| indices | list[int] \| list[list[int]] | If you wish to select a particular subset of equations from self.equations_, give the row numbers here. By default, all equations will be used. If there are multiple output features, then pass a list of lists. | None |
| precision | int | The number of significant figures shown in the LaTeX representations. Default is 3. | 3 |
| columns | list[str] | Which columns to include in the table. Default is ["equation", "complexity", "loss", "score"]. | ['equation', 'complexity', 'loss', 'score'] |

Returns:

| Name | Type | Description |
| ---- | ---- | ----------- |
| latex_table_str | str | A string that will render a table in LaTeX of the equations. |
Source code in pysr/sr.py
```python
def latex_table(
    self,
    indices=None,
    precision=3,
    columns=["equation", "complexity", "loss", "score"],
):
    """Create a LaTeX/booktabs table for all, or some, of the equations.

    Parameters
    ----------
    indices : list[int] | list[list[int]]
        If you wish to select a particular subset of equations from
        self.equations_, give the row numbers here. By default, all
        equations will be used. If there are multiple output features,
        then pass a list of lists.
    precision : int
        The number of significant figures shown in the LaTeX representations.
        Default is 3.
    columns : list[str]
        Which columns to include in the table.
        Default is ["equation", "complexity", "loss", "score"].

    Returns
    -------
    latex_table_str : str
        A string that will render a table in LaTeX of the equations.
    """
    self.refresh()

    if self.nout_ > 1:
        if indices is not None:
            assert isinstance(indices, list)
            assert isinstance(indices[0], list)
            assert isinstance(len(indices), self.nout_)

        generator_fnc = generate_multiple_tables
    else:
        if indices is not None:
            assert isinstance(indices, list)
            assert isinstance(indices[0], int)

        generator_fnc = generate_single_table

    table_string = generator_fnc(
        self.equations_, indices=indices, precision=precision, columns=columns
    )

    preamble_string = [
        r"\usepackage{breqn}",
        r"\usepackage{booktabs}",
        "",
        "...",
        "",
    ]
    return "\n".join(preamble_string + [table_string])
```
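For instance (a hypothetical call, assuming a fitted model named `model`), the generated booktabs table can be written straight to a .tex file:

```python
# Assuming `model` is an already-fitted PySRRegressor.
table = model.latex_table(precision=2, columns=["equation", "complexity", "loss"])
with open("equations_table.tex", "w") as f:  # output filename is a placeholder
    f.write(table)
```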
## pysr.sr.PySRRegressor.refresh(checkpoint_file=None)
Update self.equations_ with any new options passed.
For example, updating extra_sympy_mappings will require a .refresh() to update the equations.
Parameters:

| Name | Type | Description | Default |
| ---- | ---- | ----------- | ------- |
| checkpoint_file | str | Path to checkpoint hall of fame file to be loaded. The default will use the set equation_file_. | None |
Source code in pysr/sr.py
```python
def refresh(self, checkpoint_file=None):
    """
    Update self.equations_ with any new options passed.

    For example, updating extra_sympy_mappings will require a .refresh()
    to update the equations.

    Parameters
    ----------
    checkpoint_file : str
        Path to checkpoint hall of fame file to be loaded.
        The default will use the set equation_file_.
    """
    if checkpoint_file:
        self.equation_file_ = checkpoint_file
        self.equation_file_contents_ = None
    check_is_fitted(self, attributes=["equation_file_"])
    self.equations_ = self.get_hof()
```
| 2022-09-24 18:50:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21298164129257202, "perplexity": 4473.779067853836}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00255.warc.gz"}
https://proxies-free.com/tag/probability/ | ## combinatorics – Probability of consecutive coin flips
There are 15 coins in a bag. 5 of the 15 are fair coins and the rest are biased (80% H, 20% T). When a coin is chosen randomly from the bag and flipped twice, what is the probability that both of them are heads?
I tried to solve this using two different ways and I get two different answers.
Method 1:
$$P_{fair}(HH) = \frac{1}{2^2}$$
Considering 80% is $$\frac{4}{5}$$ and 20% is $$\frac{1}{5}$$
$$P_{biased}(HH) = \frac{4^2}{5^2}$$
$$P(HH) = \frac{1}{3}P_{fair}(HH) + \frac{2}{3}P_{biased}(HH)$$
$$P(HH) = \frac{51}{100}$$
Method 2:
Constructing from all possible outcomes, consider 1 fair coin for every 2 biased coins.
$$P(HH) = \frac{\text{number of outcomes with two H}}{\text{total outcomes}}$$
$$\text{total outcomes} = (\text{outcomes for fair coin}) + 2(\text{outcomes for biased coin})$$
$$\text{outcomes for fair coin} = \text{\{H, T\} tossed twice}$$
$$\text{outcomes for biased coin} = \text{\{H, H, H, H, T\} tossed twice}$$
$$\text{total outcomes}=2^2+2(5^2)$$
$$=54$$
$$\text{number of outcomes with two H} = \text{HH for fair coin} + 2(\text{HH for biased coin})$$
$$=1+2(4^2)$$
$$=33$$
$$P(HH) = \frac{33}{54}$$
Have I done a mistake in either of the methods or maybe both? It’s not like I didn’t understand conditional probability (or did I?). For instance, I can find the probability of drawing two red cards from a deck of playing cards using both those methods.
$$P(RR) = P(Red_1) P(Red_2|Red_1)$$
$$=\frac{26}{52} \cdot \frac{25}{51}$$
Also using just combinatorics,
$$P(RR) = \frac{^{26}P_2}{^{52}P_2}$$
$$=\frac{26 \times 25}{52 \times 51}$$
So definitely my methods aren’t incorrect. Coming back to my original question, where did I go wrong?
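A quick brute-force check of the setup (added here for illustration, not part of the original post) keeps the actual probabilities of the coin choice and of each flip rather than treating outcomes as equally likely:

```python
from fractions import Fraction

# P(HH) = sum over coin types of P(coin) * P(H per flip)^2
p_fair_coin = Fraction(5, 15)
p_biased_coin = Fraction(10, 15)
p_hh = p_fair_coin * Fraction(1, 2) ** 2 + p_biased_coin * Fraction(4, 5) ** 2
print(p_hh)  # Fraction(51, 100), matching Method 1
```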
## probability – Expectation of the minimum of two continuous random variables – using the joint pdf
Define $$Z = \min(X, Y)$$ and the joint pdf of $$X$$ and $$Y$$ as $$f_{XY}(x,y)$$.
I saw an approach that said
$$E(Z) = \int \int \min(x,y) f_{XY}(x,y) \,dy\,dx$$
Is this readily obvious, or do you need to convert the following:
$$E(Z) = \int \min(x,y)f_Z(z)\, dz$$
to the above?
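As a numerical illustration (added here, not in the original question): for independent exponentials with rates 1 and 2, E[min(X,Y)] should equal 1/(1+2) = 1/3, and the Monte Carlo average of min(X,Y) agrees with the joint-pdf double integral:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(1.0, 1_000_000)        # Exp with rate 1 (scale = 1)
y = rng.exponential(1.0 / 2.0, 1_000_000)  # Exp with rate 2 (scale = 1/2)

print(np.minimum(x, y).mean())  # ≈ 1/3 = E[min(X, Y)]
```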
## probability – With what frequency should points in a 2D grid be chosen in order to have roughly $$n$$ points left after a time $$t$$
Say I have a 2D array of $$x$$ by $$y$$ points with some default value, for generalisation we’ll just say “0”.
I randomly select a pair of coordinates with a frequency of $$f$$ per second and, if the point selected is `0`: flip it to `1`. However, if the point is already $$1$$ then do not change it.
How then, given a known $$x$$ and $$y$$ (thus known total points as $$xy$$) can I calculate a frequency $$f$$ that will leave me with approximately (as this is random) $$n$$ `0` points remaining, after a set time $$t$$? Where $$t$$ is also seconds.
For some context I am attempting some simplistic approximation of nuclear half-life but I’m not sure how to make use of the half-life equation for this, nor do I think that it is strictly correct to apply it given my implementation isn’t exactly true to life, picking single points at a time.
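A small simulation of the process described above (added for illustration; all numbers are arbitrary) can be used to sanity-check any candidate formula for $$f$$:

```python
import numpy as np

def remaining_zeros(x, y, f, t, seed=0):
    """Simulate f*t random picks on an x-by-y grid and count cells still 0."""
    rng = np.random.default_rng(seed)
    grid = np.zeros((x, y), dtype=bool)
    picks = int(f * t)
    rows = rng.integers(0, x, picks)
    cols = rng.integers(0, y, picks)
    grid[rows, cols] = True  # picking an already-flipped cell changes nothing
    return grid.size - grid.sum()

print(remaining_zeros(x=100, y=100, f=50, t=600))  # arbitrary example values
```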
## probability theory – CDF of $$S_{N_{t}}$$ where $$S_{N_{t}}$$ is the time of the last arrival in $$[0, t]$$
I am confused on this problem. My professor gave this as the solution:
$$S_{N_{t}}$$ is the time of the last arrival in $$(0, t)$$. For $$0 < x \leq t$$, $$P(S_{N_{t}} \leq x) = \sum_{k=0}^{\infty} P(S_{N_{t}} \leq x \mid N_{t}=k)P(N_{t}=k) = \sum_{k=0}^{\infty} P(S_{N_{t}} \leq x \mid N_{t}=k) \cdot \frac{e^{-\lambda t}(\lambda t)^k}{k!}$$.
Let $$M=\max(S_1, S_2, \dots, S_k)$$ where the $$S_i$$ are i.i.d. for $$i = 1,2,\dots, k$$ and $$S_i \sim$$ Uniform$$(0,t)$$.
So, $$P(S_{N_{t}} \leq x) = \sum_{k=0}^{\infty} P(M \leq x)\frac{e^{-\lambda t}(\lambda t)^k}{k!} = \sum_{k=0}^{\infty} \left(\frac{x}{t}\right)^k \frac{e^{-\lambda t}(\lambda t)^k}{k!} = e^{-\lambda t} \sum_{k=0}^{\infty} \frac{(\lambda x)^k}{k!} = e^{-\lambda t}e^{\lambda x} = e^{\lambda(x-t)}$$
If $$N_t = 0$$, then $$S_{N_{t}} = S_0 =0$$. This occurs with probability $$P(N_t = 0) = e^{-\lambda t}$$.
Therefore, the cdf of $$S_{N_{t}}$$ is:
$$P(S_{N_{t}} \leq x) = \begin{cases} 0 & x < 0 \\ e^{\lambda (x-t)} & 0 \leq x < t \\ 1 & x \geq t \end{cases}$$
## probability – “First principles” proof of the limit of the expected excess of the uniform renewal function
The closed form of the expected number of samples for $$\sum_r X_r \geqslant t,\ X_r \sim \text{U}(0,1)$$ is given by:
$$m(t) = \sum_{k=0}^{\lfloor t \rfloor} \frac{(k-t)^k}{k!}e^{t-k}$$
From this we can deduce the expected amount by which this sum exceeds $$t$$, namely:
$$\varepsilon(t) = \frac{m(t)}{2} - t$$
From knowing that $$m(t) \to 2t+\dfrac{2}{3}$$, we can easily see that $$\varepsilon(t) \to \dfrac{1}{3}$$.
Is there a simple (“low tech”) way of proving that $$\varepsilon(t) \to \dfrac{1}{3}$$ without first passing through proving $$m(t) \to 2t+\dfrac{2}{3}$$ ?
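Though not a proof, the closed form and the limiting excess quoted above are easy to check numerically (a sanity-check sketch added here, not part of the original question):

```python
import math
import numpy as np

def m(t):
    # Closed form quoted in the question.
    return sum((k - t) ** k / math.factorial(k) * math.exp(t - k)
               for k in range(int(math.floor(t)) + 1))

rng = np.random.default_rng(0)
t = 5.0
counts = []
for _ in range(100_000):
    total, n = 0.0, 0
    while total < t:          # add U(0,1) samples until the sum reaches t
        total += rng.random()
        n += 1
    counts.append(n)

print(m(t), np.mean(counts))  # both ≈ 2t + 2/3 for moderate t
print(m(t) / 2 - t)           # ≈ 1/3, the limiting expected excess
```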
## Probability of dice roll between values
Context: In calculating the optimal policy for MDP’s an algorithm called Value Iteration is used. I am using this algorithm to calculate the optimal policy for a small game and test my knowledge in the field.
In the game, $$d$$ normal dice (1-6) are rolled simultaneously, and you can either pick all dice with the largest value, or all dice with the smallest value. To avoid having to compute all $$6^d$$ possible dice rolls, I limit it to $$x$$ dice getting the smallest value and $$y$$ dice getting the highest value, where $$x \leq d$$ and $$y \leq d - x$$.
Now my question is: with $$d$$ dice, what is the probability that $$x$$ dice fall on a minimum value $$v_x$$, $$y$$ dice fall on a maximum value $$v_y$$, and $$z = d - (x+y)$$ dice lie strictly between $$v_x$$ and $$v_y$$?
I have the feeling that the $$z$$ in-between dice can be modeled with a binomial distribution $$\mathrm{Binom}\!\left(z,\, d,\, \frac{v_y - v_x - 1}{6}\right)$$, but I am not sure how to reconcile this with the probabilities of $$x$$ and $$y$$.
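Whatever closed form is proposed, it can be checked against a brute-force enumeration for small $$d$$ (an illustrative sketch added here, not a derivation):

```python
from itertools import product

def prob(d, x, y, v_x, v_y):
    """P(exactly x dice show the minimum v_x, y dice show the maximum v_y,
    and the remaining d-x-y dice are strictly between v_x and v_y)."""
    hits = 0
    for roll in product(range(1, 7), repeat=d):
        lo = sum(1 for r in roll if r == v_x)
        hi = sum(1 for r in roll if r == v_y)
        mid = sum(1 for r in roll if v_x < r < v_y)
        # lo + hi + mid == d forces every die into one of the three groups.
        if lo == x and hi == y and mid == d - x - y:
            hits += 1
    return hits / 6 ** d

print(prob(d=4, x=1, y=2, v_x=2, v_y=5))  # small example
```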
## probability theory – Why is there a sum in the definition of a simple function?
Simple functions assume finitely many values in their image, and can be written as
$$f(\omega)= \sum_{i=1}^n a_i \mathbb{I}_{A_i}(\omega), \quad \forall \omega \in \Omega$$
where $$a_i \geq 0, \forall i \in \{1,2,3, \dots, n\},$$ and $$A_i \in \mathcal{F}, \forall i.$$
So this is how I process it in “human”: For each outcome in the sample space (i.e. $$\omega$$), one must check whether or not it belongs to a measurable set $$A_i$$ in the sigma algebra $$\mathcal{F}.$$ If the Boolean operation (characteristic function $$\mathbb{I}_{A_i}$$) is $$1,$$ the result of the function will be some value $$a_i$$ which will be exactly the same for all outcomes in $$A_i.$$ This could be symbolically plotted as a step function, with each step corresponding to one of the $$A_i$$‘s.
So far, clear as day.
Now, when you introduce the $$\sum$$ at the beginning of the definition, it looks like you are integrating: in other words, the function $$f(\omega)$$ with the sum in front doesn’t seem to “spit out” the corresponding step of that particular $$\omega,$$ but rather all the steps for all omegas – all at once. And that “all at once” seems like a contradiction: after all, in a truly simple function, such as $$f(x)=2x+2,$$ you don’t get a line because you sum the results of the function across the real line, but because you collect as a set the results for each and every value of the real line entered into the function as an independent variable.
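A tiny concrete example (added here for illustration) of how the sum collapses for each individual $$\omega$$ when the $$A_i$$ are disjoint: only one indicator is non-zero at a time, so nothing is being summed "across all omegas":

```python
# f = 2*1_[0,1) + 5*1_[1,3): for any single omega, at most one indicator equals 1.
def f(omega):
    return 2 * (0 <= omega < 1) + 5 * (1 <= omega < 3)

print(f(0.5), f(2.0), f(7.0))  # 2 5 0 — one "step" per omega, not a sum over omegas
```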
## How to write the probability $$P(X<Z<Y)$$ in the form of an expected value of an indicator function?
Suppose we have three independent exponential random variables $$X, Z$$ and $$Y$$ with rates $$\lambda_x, \lambda_z$$ and $$\lambda_y$$, respectively. How can $$P(X<Z<Y)$$ be calculated as an expected value of an indicator function?
$$E(1_{X<Z<Y})$$?
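As an illustration of the identity $$P(A) = E(1_A)$$ (a sketch added here, with arbitrary rates), the indicator can be averaged directly by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)
lam_x, lam_z, lam_y = 1.0, 2.0, 3.0   # arbitrary rates
n = 1_000_000

x = rng.exponential(1 / lam_x, n)
z = rng.exponential(1 / lam_z, n)
y = rng.exponential(1 / lam_y, n)

indicator = (x < z) & (z < y)   # 1_{X<Z<Y} as a 0/1 array
print(indicator.mean())         # Monte Carlo estimate of P(X<Z<Y) = E(1_{X<Z<Y})
```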
## bessel functions – Compute the conditional probability distribution of a noncentral $$\chi$$ variable given the range of an Erlang distributed non-centrality parameter
I need to compute a conditional probability distribution as described below for my research.
In $$(\mathbb{R}^2,\|\cdot\|_2)$$, I have a random vector $$\underline{z}$$ with uniformly distributed angle and $$Z=\|\underline{z}\|$$ following an Erlang distribution with $$k=2$$ and scale parameter $$\mu$$, i.e. with the density function $$f_Z(z)=\frac{z}{\mu^2}e^{-\frac{z}{\mu}}$$. I have another normal random vector $$\underline{y}$$ independent of $$\underline{z}$$. I’m interested in the resultant vector $$\underline{x}=\underline{y}+\underline{z}$$ and want to compute the conditional distribution of $$X=\|\underline{x}\|$$ given $$a\leq\|\underline{z}\|\leq b$$, $$0\leq a<b$$; to be specific, the complementary cumulative distribution function $$\overline{F}_{X|Z}(x|(a,b))=P(X>x|a\leq Z\leq b)$$. Solutions for the special cases where $$Z\leq c$$ or $$Z\geq c$$ for any $$c>0$$ would be sufficient for my research if they are easier to solve.
Following is my attempt. Given a fixed $$Z=z$$, since $$\underline{y}$$ is normal, $$X$$ follows the noncentral $$\chi$$ distribution with $$k=2$$ and non-centrality parameter $$\lambda=z$$, i.e. $$f_{X|Z}(x|z)=xe^{-\frac{x^2+z^2}{2}}I_0(xz)$$, where $$I_0(x)=\frac{1}{\pi}\int_0^\pi e^{x\cos\alpha}\,d\alpha$$ is a modified Bessel function of the first kind. Then the density function of the conditional distribution is
$$f_{X|Z}(x|(a,b))=\frac{\int_a^b f_Z(z)f_{X|Z}(x|z)\,dz}{\int_a^b f_Z(z)\,dz}$$
The denominator is $$\int_a^b f_Z(z)\,dz=\gamma(2,\frac{b}{\mu})-\gamma(2,\frac{a}{\mu})$$, where $$\gamma$$ is the lower incomplete gamma function.
Changing the order of integration, the numerator is
\begin{align} \int_a^b f_Z(z)f_{X|Z}(x|z)\,dz & = \frac{1}{\pi}\int_a^b\frac{z}{\mu^2}e^{-\frac{z}{\mu}}xe^{-\frac{x^2+z^2}{2}}\int_0^\pi e^{xz\cos\alpha}\,d\alpha\, dz \\ & = \frac{x}{\pi\mu^2}e^{-\frac{x^2}{2}}\int_0^\pi e^{\frac{1}{2}(\frac{1}{\mu}-x\cos\alpha)^2}\int_a^b ze^{-\frac{1}{2}(z+\frac{1}{\mu}-x\cos\alpha)^2}\,dz\,d\alpha \\ & = \frac{x}{\pi\mu^2}e^{-\frac{x^2}{2}}\int_0^\pi e^{\frac{\beta^2}{2}}\left(e^{-\bar{a}^2}-e^{-\bar{b}^2}+\sqrt{\frac{\pi}{2}}\,\beta\left(\operatorname{erf}\bar{a}-\operatorname{erf}\bar{b}\right)\right)d\alpha \end{align}
where $$\beta=\frac{1}{\mu}-x\cos\alpha$$, $$\bar{a}=\frac{a+\beta}{\sqrt{2}},\ \bar{b}=\frac{b+\beta}{\sqrt{2}}$$, and $$\operatorname{erf}$$ is the error function.
Then I got stuck at the second integral. I am looking for an analytical expression of $$f_{X|Z}(x|(a,b))$$. I tried numerical integration and compared it to a simulation using MATLAB. The results are as expected.
Finally, what I want is an analytical expression of $$\overline{F}_{X|Z}(x|(0,c))=P(X>t|Z\leq c)=\int_t^\infty f_{X|Z}(x|(0,c))\,dx$$ and $$\overline{F}_{X|Z}(x|(c,\infty))=P(X>t|Z\geq c)=\int_t^\infty f_{X|Z}(x|(c,\infty))\,dx$$.
Is it possible?
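Not an analytical answer, but the conditional survival function can be cross-checked by simulation in Python as well (mirroring the MATLAB check mentioned above; all parameter values below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, a, b, x0 = 1.0, 0.5, 2.0, 1.5   # arbitrary parameters
n = 2_000_000

# Erlang(k=2, scale=mu) radius with uniform angle, conditioned on a <= Z <= b.
z = rng.gamma(shape=2.0, scale=mu, size=n)
z = z[(z >= a) & (z <= b)]
theta = rng.uniform(0, 2 * np.pi, z.size)
zx, zy = z * np.cos(theta), z * np.sin(theta)

# Add the independent standard normal vector y and take the Euclidean norm.
x = np.hypot(zx + rng.standard_normal(z.size), zy + rng.standard_normal(z.size))
print((x > x0).mean())   # Monte Carlo estimate of P(X > x0 | a <= Z <= b)
```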
## probability or statistics – Linear regression: confidence interval of the average of 10 slopes, error of propagation and error of the mean
As each slope has a different standard error, you should calculate a weighted mean with the weights taken as the inverse of the variance (= square of standard deviation).
Let's assume we have n slope values sl((i)) with var((i)) ( = (standard error i)^2), then the mean and variance of slopes are:
mean= Sum(sl((i))/ var((i)),{i,n}) / Sum(1/ var((i)), {i,n})
variance= 1/Sum(1/var((i)), {i,n})
std error= Sqrt(variance) | 2021-04-21 01:02:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 130, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8951876163482666, "perplexity": 309.5690714103224}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039503725.80/warc/CC-MAIN-20210421004512-20210421034512-00430.warc.gz"} |
https://mimoamimascota.com/avocado-plant-bhgow/archive.php?e1b118=domain-math-example | # domain math example
They may also have been called the input and output of the function.) Before, getting into the topic of domain and range, let’s briefly describe what a function is. The set of values of the independent variable(s) for which a function or relation is defined. We now look at a few examples of domain and range for each type of function below – linear, absolute, parabola, hyperbolic, cubic, circle, exponential, top half of a circle, top half of a parabola, etc. Note: Usually domain means domain of definition, but sometimes domain refers to a restricted domain. We are thankful to be welcome on these lands in friendship. Hence the domain, in inequality notation, is written as - 4 ≤ x < 2. ... Algebra Examples. Algebra. Domain of a function – this is the set of input values for the function. An example of a range with a union is [-3,2)U(5,9). Solution to Example 7 The graph starts at x = - 4 and ends x < 2. The domain does not include x = 2 because of the open circle at x = 2. Here, 3 is not included and is the function's lower limit, and 4 is included and is the functions upper limit. Finding Domain and Range of a Function using a Graph To find the domain form a graph, list all the x-values that correspond to points on the graph. In the example … (In grammar school, you probably called the domain the replacement set and the range the solution set. In the example above, the domain of $$f\left( x \right)$$ is set A. The lands we are situated on are covered by the Williams Treaties and are the traditional territory of the Mississaugas, a branch of the greater Anishinaabeg Nation, including Algonquin, Ojibway, Odawa and Pottawatomi. Domain and Range of a Function – Explanation & Examples In this article, we will learn what a domain and range of a function mean and how to calculate the two quantities. Typically, this is the set of x-values that give rise to real y-values. In mathematics… An example of a function's domain in interval notation is (3,4]. Examples: Using interval notation, state the domain and range of each given graph. Functions. Step-by-Step Examples. To find the range, list all the y values. Free math problem solver answers your algebra, geometry, trigonometry, calculus, and statistics homework questions with step-by-step explanations, just like a math tutor. Khan Academy is a 501(c)(3) nonprofit organization. And so we could say that the domain, the domain here is all real values of x, such that x minus two does not equal zero. Find the Domain and Range. Now typically, people would not want to just see that such that x minus two does not equal zero, and so we can simplify this a little bit so that we just have an x on the left hand side. Show Step-by-step Solutions Worked example: determining domain word problem (real numbers) Our mission is to provide a free, world-class education to anyone, anywhere. This lesson describes one of the most well-recognized models and provides tangible examples. Domain. y = 4x + 8 Domain : {all real x} Range: {all real y} This is a linear function. Range of a function – this is the set of output values generated by the function (based on the input values from the domain set). Domain and Range The domain of a function f ( x ) is the set of all values for which the function is defined, and the range of the function is the set of all values that f takes. Psychomotor Domain Definition Let's say that you teach a class about learning and development. In this function, -3 is included, as demonstrated by the use of the closed parenthesis, and 2 is not. 
| 2021-05-08 13:30:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7924523949623108, "perplexity": 676.3002777645912}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988882.7/warc/CC-MAIN-20210508121446-20210508151446-00235.warc.gz"}
https://trac.edgewall.org/wiki/1.3/TracRepositoryAdmin | This page documents the 1.3 release. Documentation for other releases can be found here.
## Quick start
• Enable the repository connector(s) for the version control system(s) that you will use.
• Add repositories through the Repositories admin page, using trac-admin or by editing the [repositories] section of trac.ini.
• Synchronize the repositories with the cache, if using cached repositories.
• Configure your repository hooks to synchronize the repository. Alternatively you can synchronize on every request or disable the use of cached repositories, both of which have performance drawbacks and reduced functionality, but are easier to configure.
## Enabling the components
Support for version control systems is provided by optional components distributed with Trac, which are disabled by default (since 1.0). Subversion and Git must be explicitly enabled if you wish to use them.
The version control systems can be enabled by adding the following to the [components] section of your trac.ini, or enabling the components through the Plugins admin page.
tracopt.versioncontrol.svn.* = enabled
tracopt.versioncontrol.git.* = enabled
## Specifying repositories
Trac supports multiple repositories per environment, and the repositories may be for different version control systems. Each repository must be defined in a repository configuration provider. Repository providers included with Trac are the database store, the trac.ini configuration file and the GitWeb configuration file. Additional providers are available as plugins.
You can define your repositories through a mix of providers, but each repository should only be defined in a single provider. The repository names must be unique across all providers and duplicate names are discarded.
It is possible to define aliases of repositories that act as "pointers" to real repositories. This can be useful when renaming a repository, to avoid breaking links to the old name.
### Default Repository
Trac's repositories are listed in the Repository Index when navigating to Browse Source. The default repository is displayed first, followed by the Repository Index. TracLinks without a repository name in the path specification (e.g. [1] rather than [1/repos1]) refer to the default repository. TracLinks for repositories other than the default must include the repository name in the path specification.
From the Repository Admin page, the default repository is specified by leaving the Name attribute empty. From the command line, the default repository is specified using the string (default) or "". In TracIni, the default repository is specified by leaving the {name} empty for each {name}.{attribute} option (e.g. .dir = /path/to/dir).
### Repository Attributes
There are a number of attributes that can be specified for each repository, and additional attributes may be available through plugins. A repository name and one of the alias or dir attributes are mandatory. All others are optional.
The following attributes are supported:
Attribute Description
alias Defines an alias to a real repository. All TracLinks referencing the alias resolve to the aliased repository. Note that multiple indirection is not supported, so an alias must always point to a real repository. The alias and dir attributes are mutually exclusive.
cached For a version control system that support caching, specifies that repository caching should be used. Defaults to true for version control systems that support caching.
description The text specified in the description attribute is displayed below the top-level entry for the repository in the source browser. It supports WikiFormatting.
dir The dir attribute specifies the location of the repository in the filesystem. The alias and dir attributes are mutually exclusive.
hidden When set to true, the repository is hidden from the repository index page in the source browser. Browsing the repository is still possible, and links referencing the repository remain valid.
sync_per_request When set to true the repository will be synchronized on every request (implicit synchronization). This is generally not recommended. See repository synchronization for a comparison of explicit and implicit synchronization. The attribute defaults to false.
type The type attribute specifies the version control system used by the repository. Trac provides support for Subversion and Git, and plugins add support for several other systems. If type is not specified, it defaults to the value of the [versioncontrol] default_repository_type option.
url The url attribute specifies the root URL to be used for checking out from the repository. When specified, a "Repository URL" link is added to the context navigation links in the source browser, that can be copied into the tool used for creating the working copy.
### Scoped Repository
For some version control systems, it is possible to specify not only the path to the repository in the dir attribute, but also a scope within the repository. Trac will then only show information related to the files and changesets below that scope. The scope is specified by appending a path that is relative to the repository root. The Subversion backend for Trac supports this.
For example, assume a repository at filesystem path /var/svn/repos1 with several directories at the root of the repository: /proj1, /proj2, etc. The following configuration would scope the repository to /proj1:
proj1.dir = /var/svn/repos1/proj1
proj1.type = svn
For other repository types, check the corresponding plugin's documentation.
### In the Database
Repositories can also be specified in the database, using either the Repositories admin page under Version Control, or the trac-admin $ENV repository commands.
The admin panel shows the list of all repositories defined in the Trac environment. It allows adding repositories and aliases, editing repository attributes and removing repositories. Note that repositories defined in trac.ini are displayed but cannot be edited.
The following trac-admin commands can be used to perform repository operations from the command line.
repository add <repos> <dir> [type]
Add a repository <repos> located at <dir>, and optionally specify its type.
repository alias <name> <target>
Create an alias <name> for the repository <target>.
repository remove <repos>
Remove the repository <repos>.
repository set <repos> <key> <value>
Set the attribute <key> to <value> for the repository <repos>.
Note that the default repository has an empty name, so it will need to be quoted when running trac-admin from a shell. Alternatively, the name (default) can be used instead, for example when running trac-admin in interactive mode.
### In trac.ini
Repositories and repository attributes can be specified in the [repositories] section of trac.ini. Every attribute consists of a key structured as {name}.{attribute} and the corresponding value separated with an equal sign (=). The name of the default repository is empty.
The main advantage of specifying repositories in trac.ini is that they can be inherited from a global configuration. Cached repositories defined in trac.ini at the time of environment initialization will be automatically synchronized if the repository's connector is enabled. One drawback is that due to limitations in the ConfigParser class used to parse trac.ini, the repository name is always lowercase.
The following example defines two Subversion repositories named project and lib, and an alias to project as the default repository. This is a typical use case where a Trac environment previously had a single repository (the project repository), and was converted to multiple repositories. The alias ensures that links predating the change continue to resolve to the project repository.
[repositories]
project.dir = /var/repos/project
project.description = This is the ''main'' project repository.
project.type = svn
project.url = http://example.com/svn/project
project.hidden = true
lib.dir = /var/repos/lib
lib.description = This is the secondary library code.
lib.type = svn
lib.url = http://example.com/svn/lib
.alias = project
Note that name.alias = target makes name an alias for the target repo, not the other way around.
### In GitWeb
GitWeb is a CGI script that comes with Git for web-based visualization of repositories. Trac can read the gitweb-formatted project.lists file. The configuration is done through the [gitweb-repositories] section of trac.ini.
## Repository caching
Caching improves the performance of browsing the repository, viewing logs and viewing changesets. Cached repositories must be synchronized, using either explicit or implicit synchronization. When searching changesets, only cached repositories are searched.
Repositories that support caching are cached by default. The Subversion and Git backends support caching. The Mercurial plugin does not yet support caching (#8417). To disable caching, set the cached attribute to false.
After adding a cached repository, the cache must be populated with the trac-admin $ENV repository resync command.
repository resync <repos>
Re-synchronize Trac with a repository.
## Repository synchronization
Either explicit or implicit synchronization can be used. Implicit synchronization is easier to configure, but may result in noticeably worse performance. The changeset added and modified events can't be triggered with implicit synchronization, so the commit ticket updater won't be available.
### Explicit synchronization
This is the preferred method of repository synchronization. It requires adding a call to trac-admin in the post-commit hook of each repository. Additionally, if a repository allows changing revision metadata, a call to trac-admin must be added to the post-revprop-change hook as well.
changeset added <repos> <rev> […]
Notify Trac that one or more changesets have been added to a repository.
changeset modified <repos> <rev> […]
Notify Trac that metadata on one or more changesets in a repository has been modified.
The <repos> argument can be either a repository name (use "(default)" for the default repository) or the path to the repository.
Note that you may have to set the environment variable PYTHON_EGG_CACHE to the same value as was used for the web server configuration before calling trac-admin, if you changed it from its default location. See TracPlugins for more information.
#### Subversion
##### Using trac-svn-hook
In a Unix environment, the simplest way to configure explicit synchronization is by using the contrib/trac-svn-hook script. trac-svn-hook starts trac-admin asynchronously to avoid slowing the commit and log editing operations. The script comes with a number of safety checks and usage advice. Output is written to a log file with prefix svn-hooks- in the environment log directory, which can make configuration issues easier to debug.
There's no equivalent trac-svn-hook.bat for Windows yet, but the script can be run by Cygwin's bash. The documentation header of trac-svn-hook contains a Cygwin configuration example.
Follow the help in the documentation header of the script to configure trac-svn-hook. You'll need to minimally set the TRAC_ENV variable, and may also need to set TRAC_PATH and TRAC_LD_LIBRARY_PATH for a non-standard installation or a virtual environment.
Configuring the hook environment variables is even easier in Subversion 1.8 and later using the hook script environment configuration. Rather than directly editing trac-svn-hook to set the environment variables, or exporting them from the hook that invokes trac-svn-hook, they can be configured through the repository conf/hooks-env file.
Here is an example, using a Python virtual environment at /usr/local/venv:
[default]
TRAC_ENV=/var/trac/project-1
TRAC_PATH=/usr/local/venv/bin
##### Writing Your Own Hook Script
The following examples are complete post-commit and post-revprop-change scripts for Subversion. They should be edited for the specific environment, marked executable (where applicable) and placed in the hooks directory of each repository. On Unix (post-commit):
#!/bin/sh
export PYTHON_EGG_CACHE="/path/to/dir"
/usr/bin/trac-admin /path/to/env changeset added "$1" "$2"
Adapt the path to the actual location of trac-admin. On Windows (post-commit.cmd):
@C:\Python26\Scripts\trac-admin.exe C:\path\to\env changeset added "%1" "%2"
The post-revprop-change hook for Subversion is very similar. On Unix (post-revprop-change):
#!/bin/sh
export PYTHON_EGG_CACHE="/path/to/dir"
/usr/bin/trac-admin /path/to/env changeset modified "$1" "$2"
On Windows (post-revprop-change.cmd):
@C:\Python26\Scripts\trac-admin.exe C:\path\to\env changeset modified "%1" "%2"
The Unix variants above assume that the user running the Subversion commit has write access to the Trac environment, which is the case in the standard configuration where both the repository and Trac are served by the web server. If you access the repository through another means, for example svn+ssh://, you may have to run trac-admin with different privileges, for example by using sudo.
See the section about hooks in the Subversion book for more information. Other repository types will require different hook configuration.
#### Git
Git hooks can be used in the same way for explicit syncing of Git repositories.
If your repository is one that only gets pushed to, add the following to the hooks/post-receive file in the repo:
#!/bin/sh
tracenv=/path/to/env # set to your Trac environment's path
repos= # set to your repository's name
while read oldrev newrev refname; do
if [ "$oldrev" = 0000000000000000000000000000000000000000 ]; then git rev-list --reverse "$newrev" --
else
git rev-list --reverse "$newrev" "^$oldrev" --
fi | xargs trac-admin "$tracenv" changeset added "$repos"
done
The repos variable is the repository name (use "(default)" for the default repository).
Alternatively, if your git repository is one that gets committed to directly on the machine that hosts Trac, add the following to the hooks/post-commit file in your Git repository:
#!/bin/sh
tracenv=/path/to/env # set to your Trac environment's path
repos= # set to your repository's name
REV=$(git rev-parse HEAD)
trac-admin "$tracenv" changeset added "$repos" $REV
The post-commit hook will do nothing if you only update the repository by pushing to it.
Be sure to set the hook scripts as executable.
#### Mercurial
For Mercurial, add the following entries to the .hgrc file of each repository accessed by Trac (if TracMercurial is installed in a Trac plugins directory, download hooks.py and place it somewhere accessible):
[hooks]
; If mercurial-plugin is installed globally
; If mercurial-plugin is installed in a Trac plugins directory
[trac]
env = /path/to/env
### Per-request synchronization
If the post-commit hooks are not available, the environment can be set up for per-request synchronization. The sync_per_request attribute for each repository in the database and in trac.ini must be set to true.
Note that in this case, the changeset listener extension point is not called, and therefore plugins that depend on the changeset added and modified events won't work correctly. For example, automatic changeset references cannot be used with implicit synchronization.
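As an illustration of a trac.ini entry with per-request synchronization enabled (the repository name project below is just an example), the sync_per_request attribute is set like any other repository attribute:
[repositories]
project.dir = /var/repos/project
project.sync_per_request = true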
## Automatic changeset references in tickets
You can automatically add a reference to the changeset as a ticket comment whenever changes are committed to the repository. The description of the commit needs to contain one of the following patterns:
• Refs #123 - to reference this changeset in #123 ticket
• Fixes #123 - to reference this changeset and close #123 ticket with the default status fixed
This functionality requires installing a post-commit hook as described in explicit synchronization, and enabling the optional commit updater components through the Plugins admin panel or by adding the following line to the [components] section of your trac.ini:
tracopt.ticket.commit_updater.* = enabled
For more information, see the documentation of the CommitTicketUpdater component in the Plugins admin panel and the CommitTicketUpdater page.
## Troubleshooting
### My trac-post-commit-hook doesn't work anymore
You must now use the optional components from tracopt.ticket.commit_updater.*, which you can activate through the Plugins admin page, or by directly modifying the [components] section in the trac.ini. Be sure to use explicit synchronization.
See CommitTicketUpdater#Troubleshooting for more troubleshooting tips.
### Git control files missing
If your repository is not browseable and you find a message in the log that looks like:
2017-08-08 10:49:17,339 Trac[PyGIT] ERROR: GIT control files missing in '/path/to/git-repository'
2017-08-08 10:49:17,339 Trac[git_fs] ERROR: GitError: GIT control files not found, maybe wrong directory?
First check that the path to your repository is correct. If the path is correct, you may have a permission problem whereby the web server cannot access the repository. You can use Git to verify the repository. On a Debian-like Linux OS, the following command should help:
\$ sudo -u www-data git --git-dir=/path/to/git-repository fsck
On other platforms you may need to modify the command to use the user under which the webserver runs. | 2022-09-28 22:01:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5975714325904846, "perplexity": 6530.035524177747}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00577.warc.gz"} |
https://www.effortlessmath.com/math-topics/how-to-solve-a-quadratic-equation-by-completing-the-square/ | # How to Solve a Quadratic Equation by Completing the Square?
Completing the Square is a way used to solve a quadratic equation by changing the form of the equation. In this step-by-step guide, you learn more about the method of completing the square.
When we want to convert a quadratic expression of the form $$ax^2+ bx+c$$ to the vertex form, we use the completing the square method.
## Step by Step guide to completing the square
Completing a square is a method used to convert a quadratic expression of the form $$ax^2+ bx+c$$ to the vertex form $$a(x-h)^2+k$$. The most common application of completing the square is in solving a quadratic equation. This can be done by rearranging the expression obtained after completing the square: $$a(x + m)^2+n$$, so that the left side is a perfect square trinomial.
Completing the square method is useful in the following cases:
• Converting a quadratic expression into vertex form.
• Analyzing at which point the quadratic expression has minimum or maximum value.
### Completing the square method
The most common application of completing the square method is factorizing a quadratic equation, and henceforth finding the roots and zeros of a quadratic polynomial or a quadratic equation. We know that a quadratic equation in the form of $$ax^2+bx+c=0$$ can be solved by the factorization method. But sometimes, factorization of the quadratic expression $$ax^2+bx+c$$ is complex or impossible.
### Completing the square formula
The completing-the-square formula converts a quadratic polynomial or equation into a perfect square plus a constant. Any quadratic expression in the variable $$x$$, $$ax^2+ bx+ c$$, where $$a, b$$ and $$c$$ are real numbers and $$a≠0$$, can be rewritten in this way.
The same formula can also be used to find the roots of the quadratic equation $$ax^2+bx+c=0$$.
The formula for completing the square is:
$$\color{blue}{ax^2 + bx + c ⇒ a(x + m)^2+ n}$$
where:
• $$m$$ is any real number
• $$n$$ is a constant term
Instead of working through the full step-by-step procedure, we can use the simple formulas below. To complete the square in the expression $$ax^2+bx + c$$, first find:
$$\color{blue}{m= \frac{b}{2a}}$$ , $$\color{blue}{n=c – (\frac{b^2}{4a})}$$
Substitute these values in: $$ax^2+bx +c = a(x + m)^2+n$$. These formulas are derived geometrically.
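For example, to complete the square in $$2x^2+8x+3$$, we have $$a=2$$, $$b=8$$, and $$c=3$$, so $$m=\frac{8}{2(2)}=2$$ and $$n=3-\frac{8^2}{4(2)}=3-8=-5$$. Therefore $$2x^2+8x+3=2(x+2)^2-5$$, which can be verified by expanding the right-hand side.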
### Solving quadratic equations using completing the square method
Let’s complete the square in the expression $$ax^2+bx+c$$ using squares and rectangles from geometry. The coefficient of $$x^2$$ must be made $$1$$ by taking $$a$$ out as a common factor. We get,
$$ax^2+bx+c= a\left(x^2+\frac{b}{a}x+\frac{c}{a}\right) \quad (1)$$
Now, consider the first two terms, $$x^2$$ and $$(\frac{b}{a})x$$. Consider a square of side $$x$$ (whose area is $$x^2$$) and a rectangle of length $$\frac{b}{a}$$ and breadth $$x$$ (whose area is $$(\frac{b}{a})x$$).
Now, divide the rectangle into two equal parts. The length of each rectangle will be $$\frac{b}{2a}$$.
Attach half of this rectangle to the right side of the square and the remaining half to the bottom of the square.
To complete a geometric square, a piece is missing: a small square of side $$\frac{b}{2a}$$. A square of area $$(\frac{b}{2a})^2$$ should therefore be added to $$x^2+ (\frac{b}{a})x$$ to complete the square.
However, we cannot simply add this term; we must also subtract it so that the value of the expression is unchanged. Therefore, to complete the square:
$$x^2+ (\frac{b}{a})x= x^2+ (\frac{b}{a})x + (\frac{b}{2a})^2 – (\frac{b}{2a})^2$$
$$=x^2+ (\frac{b}{a})x+(\frac{b}{2a})^2 – \frac{b^2}{4a^2}$$
Writing $$(\frac{b}{a})x$$ as $$\frac{2\cdot x\cdot b}{2a}$$ (multiplying and dividing by $$2$$) gives:
$$x^2 + (\frac{2⋅x⋅b}{2a}) + (\frac{b}{2a})^2 – \frac{b^2}{4a^2}$$
Using the identity $$x^2+ 2xy + y^2 = (x + y)^2$$, the above expression can be written as:
$$x^2+ \frac{b}{a}x = \left(x + \frac{b}{2a}\right)^2 - \frac{b^2}{4a^2}$$
By substituting this in $$(1)$$:
$$ax^2+bx+c = a((x + \frac{b}{2a})^2 – \frac{b^2}{4a^2}+\frac{c}{a})= a(x + \frac{b}{2a})^2 – \frac{b^2}{4a }+c= a(x +\frac{b}{2a})^2+ (c- \frac{b^2}{4a})$$
This is of the form $$a(x+m)^2+n$$, where,
$$m= \frac{b}{2a}$$, $$n=c – (\frac{b^2}{4a})$$
### How to apply completing the square method?
Let’s learn how to apply the completing the square method using an example.
Example: Complete the square in the expression $$-4x^2- 8x-12$$.
First, we should make sure that the coefficient of $$x^2$$ is $$1$$. If it is not, we factor that number out as a common factor. We get:
$$-4x^2- 8x – 12 = -4(x^2 + 2x + 3)$$
Now, the coefficient of $$x^2$$ is $$1$$.
• Step 1: Find half of the coefficient of $$x$$. Here, the coefficient of $$x$$ is $$2$$, and half of $$2$$ is $$1$$.
• Step 2: Find the square of the number above. $$1^2=1$$
• Step 3: Add and subtract the above number after the $$x$$ term in the expression whose coefficient of $$x^2$$ is $$1$$. This means, $$-4(x^2+2x+3) = -4(x^2+2x+1-1+3)$$.
• Step 4: Factorize the perfect square trinomial formed using the first $$3$$ terms using the identity $$x^2+2xy+ y^2 = (x + y)^2$$. In this case, $$x^2+ 2x+1 = (x + 1)^2$$. The above expression from Step $$3$$ becomes: $$-4(x^2+ 2x + 1-1+3)= -4((x + 1)^2- 1+3)$$
• Step 5: Simplify the last two numbers. Here, $$-1+3=2$$. Thus, the above expression is: $$-4x^2- 8x – 12 = -4(x + 1)^2-8$$. This is of the form $$a(x + m)^2+ n$$. Hence, we have completed the square. Thus, $$-4x^2- 8x- 12 = -4(x + 1)^2-8$$
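As a quick check, expanding the result recovers the original expression: $$-4(x+1)^2-8=-4(x^2+2x+1)-8=-4x^2-8x-4-8=-4x^2-8x-12$$.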
Note: To complete the square in an expression $$ax^2+ bx + c$$:
• Make sure the coefficient of $$x^2$$ is $$1$$.
• Add and subtract $$(\frac{b}{2})^2$$ after the $$x$$ term and simplify.
### Completing the Square – Example 1:
Use completing the square method to solve $$x^2-4x-5=0$$.
Solution:
First, transpose the constant term to the other side of the equation:
$$x^2- 4x = 5$$
Then take half of the coefficient of the $$x$$-term ($$-4$$), including its sign, which gives $$-2$$. Square $$-2$$ to get $$+4$$, and add this squared value to both sides of the equation:
$$x^2- 4x+ 4= 5 + 4 ⇒ x^2- 4x + 4 = 9$$
This process produces a perfect-square trinomial on the left-hand side of the equation, which we can replace with its squared-binomial form:
$$(x – 2)^2= 9$$
Now that we have completed the expression to create a perfect-square binomial, let us solve:
$$(x-2)^2= 9$$
$$(x – 2) = ±\sqrt{9}$$
$$x-2=±3$$
$$x=2+3=5$$ , $$x=2-3=-1$$
$$x = 5, -1$$
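Both values satisfy the original equation: $$5^2-4(5)-5=25-20-5=0$$ and $$(-1)^2-4(-1)-5=1+4-5=0$$.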
## Exercises for Completing the Square
### Solve each equation by completing the square.
1. $$\color{blue}{x^2+12x+32=0}$$
2. $$\color{blue}{x^2-6x-3=0}$$
3. $$\color{blue}{x^2-10x+16=0}$$
4. $$\color{blue}{2x^2+7x+6=0}$$
Answers:

1. $$\color{blue}{x=-4, -8}$$
2. $$\color{blue}{x=3+2\sqrt{3},\:3-2\sqrt{3}}$$
3. $$\color{blue}{x=2,8}$$
4. $$\color{blue}{x=-\frac{3}{2}, -2}$$
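For instance, the second exercise can be solved by moving the constant to get $$x^2-6x=3$$, adding $$\left(\frac{-6}{2}\right)^2=9$$ to both sides to get $$(x-3)^2=12$$, and taking square roots to obtain $$x=3\pm 2\sqrt{3}$$.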
https://datascience.stackexchange.com/questions/39024/implemented-early-stopping-but-came-across-the-error-sgdclassifier-not-fitted-e | # Implemented early stopping but came across the error SGDClassifier: Not fitted error in sklearn
Below is a simple implementation of early stopping that I came across in a book and wanted to try.
# Implement SGD Classifier
from sklearn.base import clone
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import mean_squared_error

sgd_clf = SGDClassifier(random_state=42,
                        warm_start=True,   # each fit() call continues from the previous weights
                        n_iter=1,
                        learning_rate='constant',
                        eta0=0.0005)

minimum_val_error = float('inf')
best_epoch = None
best_model = None

for epoch in range(1000):
    sgd_clf.fit(X_train_scaled, y_train)
    predictions = sgd_clf.predict(X_val_scaled)
    error = mean_squared_error(y_val, predictions)
    if error < minimum_val_error:
        minimum_val_error = error
        best_epoch = epoch
        best_model = clone(sgd_clf)
Once the above snippet is executed, the best model and best epoch are stored in the variables best_model and best_epoch. So, to test the best model, I ran the statement below.
y_test_predictions = best_model.predict(X_test)
But then I came across the error: This SGDClassifier instance is not fitted yet.
Any hints on how to solve this would be greatly appreciated. Thanks.
This is because clone only copies the estimator's parameters, not its fitted state. So it produces a new estimator that has not been fit on the data; hence, you cannot use it to make predictions.
Instead of clone, you can use either pickle or joblib.
1. pickle
import pickle
...
for epoch in range(1000):
    ...
    if error < minimum_val_error:
        best_model = pickle.dumps(sgd_clf)
Later if you want to use the stored model:
sgd_clf2 = pickle.loads(best_model)
y_test_predictions = sgd_clf2.predict(X_test)
2. joblib
You can also use joblib and store the model to disk. (In recent scikit-learn versions, sklearn.externals.joblib has been removed, so you would import joblib directly.)
from sklearn.externals import joblib
...
joblib.dump(sgd_clf, 'filename.joblib')
To use the stored model
clf = joblib.load('filename.joblib')
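Another option, if you prefer to keep everything in memory without serializing, is to deep-copy the fitted estimator. Unlike clone, copy.deepcopy also copies the learned attributes (such as coef_ and intercept_), so the copy can predict directly. A minimal sketch, using the same variables as in the question:

import copy
...
for epoch in range(1000):
    ...
    if error < minimum_val_error:
        minimum_val_error = error
        best_epoch = epoch
        best_model = copy.deepcopy(sgd_clf)  # copies the fitted coefficients as well

y_test_predictions = best_model.predict(X_test)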
https://labs.tib.eu/arxiv/?author=Erez%20Berg | • ### Signatures of fractionalization in spin liquids from interlayer thermal transport(1708.02584)
Oct. 11, 2018 cond-mat.str-el
Quantum spin liquids (QSLs) are intriguing phases of matter possessing fractionalized excitations. Several quasi-two dimensional materials have been proposed as candidate QSLs, but direct evidence for fractionalization in these systems is still lacking. In this paper, we show that the inter-plane thermal conductivity in layered QSLs carries a unique signature of fractionalization. We examine several types of gapless QSL phases - a $Z_2$ QSL with either a Dirac spectrum or a spinon Fermi surface, and a $U(1)$ QSL with a Fermi surface. In all cases, the in-plane and $c-$axis thermal conductivities have a different power law dependence on temperature, due to the different mechanisms of transport in the two directions: in the planes, the thermal current is carried by fractionalized excitations, whereas the inter-plane current is carried by integer (non-fractional) excitations. In layered $Z_2$ and $U(1)$ QSLs with a Fermi surface, the $c-$axis thermal conductivity is parametrically smaller than the in-plane one, but parametrically larger than the phonon contribution at low temperatures.
• ### Monte Carlo Studies of Quantum Critical Metals(1804.01988)
Metallic quantum critical phenomena are believed to play a key role in many strongly correlated materials, including high temperature superconductors. Theoretically, the problem of quantum criticality in the presence of a Fermi surface has proven to be highly challenging. However, it has recently been realized that many models used to describe such systems are amenable to numerically exact solution by quantum Monte Carlo (QMC) techniques, without suffering from the fermion sign problem. In this article, we review the status of the understanding of metallic quantum criticality, and the recent progress made by QMC simulations. We focus on the cases of spin density wave and Ising nematic criticality. We describe the results obtained so far, and their implications for superconductivity, non-Fermi liquid behavior, and transport in the vicinity of metallic quantum critical points. Some of the outstanding puzzles and future directions are highlighted.
• ### From one-dimensional charge conserving superconductors to the gapless Haldane phase(1802.02316)
We develop a framework to analyze one-dimensional topological superconductors with charge conservation. In particular, we consider models with $N$ flavors of fermions and $(\mathbb{Z}_2)^N$ symmetry, associated with the conservation of the fermionic parity of each flavor. For a single flavor, we recover the result that a distinct topological phase with exponentially localized zero modes does not exist due to absence of a gap to single particles in the bulk. For $N>1$, however, we show that the ends of the system can host low-energy, exponentially-localized modes. The analysis can readily be generalized to systems in other symmetry classes. To illustrate these ideas, we focus on lattice models with $SO\left(N\right)$ symmetric interactions, and study the phase transition between the trivial and the topological gapless phases using bosonization and a weak-coupling renormalization group analysis. As a concrete example, we study in detail the case of $N=3$. We show that in this case, the topologically non-trivial superconducting phase corresponds to a gapless analogue of the Haldane phase in spin-1 chains. In this phase, although the bulk is gapless to single particle excitations, the ends host spin-$1/2$ degrees of freedom which are exponentially localized and protected by the spin gap in the bulk. We obtain the full phase diagram of the model numerically, using density matrix renormalization group calculations. Within this model, we identify the self-dual line studied by Andrei and Destri [Nucl. Phys. B, 231(3), 445-480 (1984)], as a first-order transition line between the gapless Haldane phase and a trivial gapless phase. This allows us to identify the propagating spin-$1/2$ kinks in the Andrei-Destri model as the topological end-modes present at the domain walls between the two phases.
• ### Translationally invariant non-Fermi liquid metals with critical Fermi-surfaces: Solvable models(1801.06178)
We construct examples of translationally invariant solvable models of strongly-correlated metals, composed of lattices of Sachdev-Ye-Kitaev dots with identical local interactions. These models display crossovers as a function of temperature into regimes with local quantum criticality and marginal-Fermi liquid behavior. In the marginal Fermi liquid regime, the dc resistivity increases linearly with temperature over a broad range of temperatures. By generalizing the form of interactions, we also construct examples of non-Fermi liquids with critical Fermi-surfaces. The self energy has a singular frequency dependence, but lacks momentum dependence, reminiscent of a dynamical mean field theory-like behavior but in dimensions $d<\infty$. In the low temperature and strong-coupling limit, a heavy Fermi liquid is formed. The critical Fermi-surface in the non-Fermi liquid regime gives rise to quantum oscillations in the magnetization as a function of an external magnetic field in the absence of quasiparticle excitations. We discuss the implications of these results for local quantum criticality and for fundamental bounds on relaxation rates. Drawing on the lessons from these models, we formulate conjectures on coarse grained descriptions of a class of intermediate scale non-fermi liquid behavior in generic correlated metals.
• ### Is charge order induced near an antiferromagnetic quantum critical point?(1710.02158)
We investigate the interplay between charge order and superconductivity near an antiferromagnetic quantum critical point using sign-problem-free Quantum Monte Carlo simulations. We establish that, when the electronic dispersion is particle-hole symmetric, the system has an emergent SU(2) symmetry that implies a degeneracy between $d$-wave superconductivity and charge order with $d$-wave form factor. Deviations from particle-hole symmetry, however, rapidly lift this degeneracy, despite the fact that the SU(2) symmetry is preserved at low energies. As a result, we find a strong suppression of charge order caused by the competing, leading superconducting instability. Across the antiferromagnetic phase transition, we also observe a shift in the charge order wave-vector from diagonal to axial. We discuss the implications of our results to the universal phase diagram of antiferromagnetic quantum-critical metals and to the elucidation of the charge order experimentally observed in the cuprates.
• ### Topological transitions and fractional charges induced by strain and magnetic field in carbon nanotubes(1608.05976)
Aug. 28, 2017 cond-mat.mes-hall
We show that carbon nanotubes (CNT) can be driven through a topological phase transition using either strain or a magnetic field. This can naturally lead to Jackiw-Rebbi soliton states carrying fractionalized charges, similar to those found in a domain wall in the Su-Schrieffer-Heeger model, in a setup with a spatially inhomogeneous strain and an axial field. Two types of fractionalized states can be formed at the interface between regions with different strain: a spin-charge separated state with integer charge and spin zero (or zero charge and spin $\pm \hbar/2$), and a state with charge $\pm e/2$ and spin $\pm \hbar/4$. The latter state requires spin-orbit coupling in the CNT. We show that in our setup, the precise quantization of the fractionalized interface charges is a consequence of the symmetry of the CNT under a combination of a spatial rotation by $\pi$ and time reversal. Finally, we comment on the effects of many-body interactions on these phenomena.
• ### Dynamical susceptibility near a long-wavelength critical point with a nonconserved order parameter(1708.05308)
We study the dynamic response of a two-dimensional system of itinerant fermions in the vicinity of a uniform ($\mathbf{Q}=0$) Ising nematic quantum critical point of $d-$wave symmetry. The nematic order parameter is not a conserved quantity, and this permits a nonzero value of the fermionic polarization in the $d-$wave channel even for vanishing momentum and finite frequency: $\Pi(\mathbf{q} = 0,\Omega_m) \neq 0$. For weak coupling between the fermions and the nematic order parameter (i.e. the coupling is small compared to the Fermi energy), we perturbatively compute $\Pi (\mathbf{q} = 0,\Omega_m) \neq 0$ over a parametrically broad range of frequencies where the fermionic self-energy $\Sigma (\omega)$ is irrelevant, and use Eliashberg theory to compute $\Pi (\mathbf{q} = 0,\Omega_m)$ in the non-Fermi liquid regime at smaller frequencies, where $\Sigma (\omega) > \omega$. We find that $\Pi(\mathbf{q}=0,\Omega)$ is a constant, plus a frequency dependent correction that goes as $|\Omega|$ at high frequencies, crossing over to $|\Omega|^{1/3}$ at lower frequencies. The $|\Omega|^{1/3}$ scaling holds also in a non-Fermi liquid regime. The non-vanishing of $\Pi (\mathbf{q}=0, \Omega)$ gives rise to additional structure in the imaginary part of the nematic susceptibility $\chi^{''} (\mathbf{q}, \Omega)$ at $\Omega > v_F q$, in marked contrast to the behavior of the susceptibility for a conserved order parameter. This additional structure may be detected in Raman scattering experiments in the $d-$wave geometry.
• ### Quantized large-bias current in the anomalous Floquet-Anderson insulator(1708.05023)
Dec. 10, 2019 cond-mat.mes-hall
We study two-terminal transport through two-dimensional periodically driven systems in which all bulk Floquet eigenstates are localized by disorder. We focus on the Anomalous Floquet-Anderson Insulator (AFAI) phase, a topologically-nontrivial phase within this class, which hosts topologically protected chiral edge modes coexisting with its fully localized bulk. We show that the unique properties of the AFAI yield remarkable far-from-equilibrium transport signatures: for a large bias between leads, a quantized amount of charge is transported through the system each driving period. Upon increasing the bias, the chiral Floquet edge mode connecting source to drain becomes fully occupied and the current rapidly approaches its quantized value.
• ### Non-Fermi-liquid at (2+1)d ferromagnetic quantum critical point(1612.06075)
July 28, 2017 cond-mat.str-el
We construct a two-dimensional lattice model of fermions coupled to Ising ferromagnetic critical fluctuations. Using extensive sign-problem-free quantum Monte Carlo simulations, we show that the model realizes a continuous itinerant quantum phase transition. In comparison with other similar itinerant quantum critical points (QCPs), our QCP shows much weaker superconductivity tendency with no superconducting state down to the lowest temperature investigated, hence making the system a good platform for the exploration of quantum critical fluctuations. Remarkably, clear signatures of non-Fermi-liquid behavior in the fermion propagators are observed at the QCP. The critical fluctuations at the QCP partially resemble Hertz-Millis-Moriya behavior. However, careful scaling analysis reveals that the QCP belongs to a different universality class, deviating from both (2+1)d Ising and Hertz-Millis-Moriya predictions.
• ### Fractional chiral superconductors(1707.06654)
Two-dimensional $p_x+ip_y$ topological superconductors host gapless Majorana edge modes, as well as Majorana bound states at the core of $h/2e$ vortices. Here we construct a model realizing the fractional counterpart of this phase: a fractional chiral superconductor. Our model is composed of an array of coupled Rashba wires in the presence of strong interactions, Zeeman field, and proximity coupling to an $s$-wave superconductor. We define the filling factor as $\nu=l_{\text{so}}n/4$, where $n$ is the electronic density and $l_{\text{so}}$ is the spin-orbit length. Focusing on filling $\nu=1/m$, with $m$ being an odd integer, we obtain a tractable model which allows us to study the properties of the bulk and the edge. Using an $\epsilon$-expansion with $m=2+\epsilon$, we show that the bulk Hamiltonian is gapped and that the edge of the sample hosts a chiral $\mathbb{Z}_{2m}$ parafermion theory with central charge $c=\frac{2m-1}{m+1}$. The tunneling density of states associated with this edge theory exhibits an anomalous energy dependence of the form $\omega^{m-1}$. Additionally, we show that $\mathbb{Z}_{2m}$ parafermionic bound states reside at the cores of $h/2e$ vortices. Upon constructing an appropriate Josephson junction in our system, we find that the current-phase relation displays a $4\pi m$ periodicity, reflecting the underlying non-abelian excitations.
• ### Superconductivity mediated by quantum critical antiferromagnetic fluctuations: The rise and fall of hot spots(1609.09568)
In several unconventional superconductors, the highest superconducting transition temperature $T_{c}$ is found in a region of the phase diagram where the antiferromagnetic transition temperature extrapolates to zero, signaling a putative quantum critical point. The elucidation of the interplay between these two phenomena - high-$T_{c}$ superconductivity and magnetic quantum criticality - remains an important piece of the complex puzzle of unconventional superconductivity. In this paper, we combine sign-problem-free Quantum Monte Carlo simulations and field-theoretical analytical calculations to unveil the microscopic mechanism responsible for the superconducting instability of a general low-energy model, called spin-fermion model. In this approach, low-energy electronic states interact with each other via the exchange of quantum critical magnetic fluctuations. We find that even in the regime of moderately strong interactions, both the superconducting transition temperature and the pairing susceptibility are governed not by the properties of the entire Fermi surface, but instead by the properties of small portions of the Fermi surface called hot spots. Moreover, $T_{c}$ increases with increasing interaction strength, until it starts to saturate at the crossover from hot-spots dominated to Fermi-surface dominated pairing. Our work provides not only invaluable insights into the system parameters that most strongly affect $T_{c}$, but also important benchmarks to assess the origin of superconductivity in both microscopic models and actual materials.
• ### Quantum chaos in an electron-phonon bad metal(1705.07895)
May 22, 2017 cond-mat.str-el
We calculate the scrambling rate $\lambda_L$ and the butterfly velocity $v_B$ associated with the growth of quantum chaos for a solvable large-$N$ electron-phonon system. We study a temperature regime in which the electrical resistivity of this system exceeds the Mott-Ioffe-Regel limit and increases linearly with temperature - a sign that there are no long-lived charged quasiparticles - although the phonons remain well-defined quasiparticles. The long-lived phonons determine $\lambda_L$, rendering it parametrically smaller than the theoretical upper-bound $\lambda_L \ll \lambda_{max}=2\pi T/\hbar$. Significantly, the chaos properties seem to be intrinsic - $\lambda_L$ and $v_B$ are the same for electronic and phononic operators. We consider two models - one in which the phonons are dispersive, and one in which they are dispersionless. In either case, we find that $\lambda_L$ is proportional to the inverse phonon lifetime, and $v_B$ is proportional to the effective phonon velocity. The thermal and chaos diffusion constants, $D_E$ and $D_L\equiv v_B^2/\lambda_L$, are always comparable, $D_E \sim D_L$. In the dispersive phonon case, the charge diffusion constant $D_C$ satisfies $D_L\gg D_C$, while in the dispersionless case $D_L \ll D_C$.
• ### Transverse fields to tune an Ising-nematic quantum critical transition(1704.07841)
April 25, 2017 cond-mat.str-el
The paradigmatic example of a continuous quantum phase transition is the transverse field Ising ferromagnet. In contrast to classical critical systems, whose properties depend only on symmetry and the dimension of space, the nature of a quantum phase transition also depends on the dynamics. In the transverse field Ising model, the order parameter is not conserved and increasing the transverse field enhances quantum fluctuations until they become strong enough to restore the symmetry of the ground state. Ising pseudo-spins can represent the order parameter of any system with a two-fold degenerate broken-symmetry phase, including electronic nematic order associated with spontaneous point-group symmetry breaking. Here, we show for the representative example of orbital-nematic ordering of a non-Kramers doublet that an orthogonal strain or a perpendicular magnetic field plays the role of the transverse field, thereby providing a practical route for tuning appropriate materials to a quantum critical point. While the transverse fields are conjugate to seemingly unrelated order parameters, their non-trivial commutation relations with the nematic order parameter, which can be represented by a Berry-phase term in an effective field theory, intrinsically intertwines the different order parameters.
• ### Quantized magnetization density in periodically driven systems(1610.03590)
We study micromotion in two-dimensional periodically driven systems in which all bulk Floquet eigenstates are localized by disorder. We show that this micromotion gives rise to a quantized time-averaged magnetization density when the system is filled with fermions. Furthermore we find that a quantized current flows around the boundary of any filled region of finite extent. The quantization has a topological origin: we relate the time-averaged magnetization density to the winding number characterizing the new phase identified in Phys. Rev. X 6, 021013 (2016). We thus establish that the winding number invariant can be accessed directly in bulk measurements, and propose an experimental protocol to do so using interferometry in cold atom based realizations.
• ### Superconductivity and non-Fermi liquid behavior near a nematic quantum critical point(1612.01542)
March 6, 2017 cond-mat.str-el
Using determinantal quantum Monte Carlo, we compute the properties of a lattice model with spin $\frac 1 2$ itinerant electrons tuned through a quantum phase transition to an Ising nematic phase. The nematic fluctuations induce superconductivity with a broad dome in the superconducting $T_c$ enclosing the nematic quantum critical point. For temperatures above $T_c$, we see strikingly non-Fermi liquid behavior, including a "nodal - anti nodal dichotomy" reminiscent of that seen in several transition metal oxides. In addition, the critical fluctuations have a strong effect on the low frequency optical conductivity, resulting in behavior consistent with "bad metal" phenomenology.
• ### Fate of the one-dimensional Ising quantum critical point coupled to a gapless boson(1609.02599)
Feb. 20, 2017 cond-mat.str-el
The problem of a quantum Ising degree of freedom coupled to a gapless bosonic mode appears naturally in many one dimensional systems, yet surprisingly little is known how such a coupling affects the Ising quantum critical point. We investigate the fate of the critical point in a regime, where the weak coupling renormalization group (RG) indicates a flow toward strong coupling. Using a renormalization group analysis and numerical density matrix renormalization group (DMRG) calculations we show that, depending on the ratio of velocities of the gapless bosonic mode and the Ising critical fluctuations, the transition may remain continuous or become fluctuation-driven first order. The two regimes are separated by a tri-critical point of a novel type.
• ### Edge--Entanglement correspondence for gapped topological phases with symmetry(1612.02831)
Dec. 8, 2016 cond-mat.str-el
The correspondence between the edge theory and the entanglement spectrum is firmly established for the chiral topological phases. We study gapped, topologically ordered, non-chiral states with a conserved $U(1)$ charge and show that the entanglement Hamiltonian contains not only the information about topologically distinct edges such phases may admit, but also which of them will be realized in the presence of symmetry breaking/conserving perturbations. We introduce an exactly solvable, charge conserving lattice model of a $\mathbb{Z}_2$ spin liquid and derive its edge theory and the entanglement Hamiltonian, also in the presence of perturbations. We construct a field theory of the edge and study its RG flow. We show the precise extent of the correspondence between the information contained in the entanglement Hamiltonian and the edge theory.
• ### No-go theorem for a time-reversal invariant topological phase in noninteracting systems coupled to conventional superconductors(1605.07179)
We prove that a system of non-interacting electrons proximity coupled to a conventional s-wave superconductor cannot realize a time reversal invariant topological phase. This is done by showing that for such a system, in either one or two dimensions, the topological invariant of the corresponding symmetry class (DIII) is always trivial. Our results suggest that the pursuit of Majorana bound states in time-reversal invariant systems should be aimed at interacting systems or at proximity to unconventional superconductors.
• ### Topological Superconductivity in a Planar Josephson Junction(1609.09482)
We consider a two-dimensional electron gas with strong spin-orbit coupling contacted by two superconducting leads, forming a Josephson junction. We show that in the presence of an in-plane Zeeman field the quasi-one-dimensional region between the two superconductors can support a topological superconducting phase hosting Majorana bound states at its ends. We study the phase diagram of the system as a function of the Zeeman field and the phase difference between the two superconductors (treated as an externally controlled parameter). Remarkably, at a phase difference of $\pi$, the topological phase is obtained for almost any value of the Zeeman field and chemical potential. In a setup where the phase is not controlled externally, we find that the system undergoes a first-order topological phase transition when the Zeeman field is varied. At the transition, the phase difference in the ground state changes abruptly from a value close to zero, at which the system is trivial, to a value close to $\pi$, at which the system is topological. The critical current through the junction exhibits a sharp minimum at the critical Zeeman field, and is therefore a natural diagnostic of the transition. We point out that in presence of a symmetry under a modified mirror reflection followed by time reversal, the system belongs to a higher symmetry class and the phase diagram as a function of the phase difference and the Zeeman field becomes richer.
• ### Quantum critical properties of a metallic spin density wave transition(1609.08620)
We report on numerically exact determinantal quantum Monte Carlo simulations of the onset of spin-density wave (SDW) order in itinerant electron systems captured by a sign-problem-free two-dimensional lattice model. Extensive measurements of the SDW correlations in the vicinity of the phase transition reveal that the critical dynamics of the bosonic order parameter are well described by a dynamical critical exponent z = 2, consistent with Hertz-Millis theory, but are found to follow a finite-temperature dependence that does not fit the predicted behavior of the same theory. The presence of critical SDW fluctuations is found to have a strong impact on the fermionic quasiparticles, giving rise to a dome-shaped superconducting phase near the quantum critical point. In the superconducting state we find a gap function that has an opposite sign between the two bands of the model and is nearly constant along the Fermi surface of each band. Above the superconducting $T_c$ our numerical simulations reveal a nearly temperature and frequency independent self energy causing a strong suppression of the low-energy quasiparticle spectral weight in the vicinity of the hot spots on the Fermi surface. This indicates a clear breakdown of Fermi liquid theory around these points.
• ### Spin density wave order, topological order, and Fermi surface reconstruction(1606.07813)
Sept. 21, 2016 hep-th, cond-mat.str-el
In the conventional theory of density wave ordering in metals, the onset of spin density wave (SDW) order co-incides with the reconstruction of the Fermi surfaces into small 'pockets'. We present models which display this transition, while also displaying an alternative route between these phases via an intermediate phase with topological order, no broken symmetry, and pocket Fermi surfaces. The models involve coupling emergent gauge fields to a fractionalized SDW order, but retain the canonical electron operator in the underlying Hamiltonian. We establish an intimate connection between the suppression of certain defects in the SDW order, and the presence of Fermi surface sizes distinct from the Luttinger value in Fermi liquids. We discuss the relevance of such models to the physics of the hole-doped cuprates near optimal doping.
• ### Interaction-driven topological superconductivity in one dimension(1605.09385)
We study one-dimensional topological superconductivity in the presence of time-reversal symmetry. This phase is characterized by having a bulk gap, while supporting a Kramers' pair of zero-energy Majorana bound states at each of its ends. We present a general simple model which is driven into this topological phase in the presence of repulsive electron-electron interactions. We further propose two experimental setups and show that they realize this model at low energies. The first setup is a narrow two-dimensional topological insulator partially covered by a conventional s-wave superconductor, and the second is a semiconductor wire in proximity to an s-wave superconductor. These systems can therefore be used to realize and probe the time-reversal invariant topological superconducting phase. The effect of interactions is studied using both a mean-field approach and a renormalization group analysis.
• ### Signatures of topological Josephson junctions(1604.04287)
July 25, 2016 cond-mat.mes-hall
Quasiparticle poisoning and diabatic transitions may significantly narrow the window for the experimental observation of the $4\pi$-periodic $dc$ Josephson effect predicted for topological Josephson junctions. Here, we show that switching current measurements provide accessible and robust signatures for topological superconductivity which persist in the presence of quasiparticle poisoning processes. Such measurements provide access to the phase-dependent subgap spectrum and Josephson currents of the topological junction when incorporating it into an asymmetric SQUID together with a conventional Josephson junction with large critical current. We also argue that pump-probe experiments with multiple current pulses can be used to measure the quasiparticle poisoning rates of the topological junction. The proposed signatures are particularly robust, even in the presence of Zeeman fields and spin-orbit coupling, when focusing on short Josephson junctions. Finally, we also consider microwave excitations of short topological Josephson junctions which may complement switching current measurements.
• ### Non-quasiparticle transport and resistivity saturation: A view from the large-N limit(1607.05725)
July 19, 2016 cond-mat.str-el
The electron dynamics in metals are usually well described by the semiclassical approximation for long-lived quasiparticles. However, in some metals, the scattering rate of the electrons at elevated temperatures becomes comparable to the Fermi energy; then, this approximation breaks down, and the full quantum-mechanical nature of the electrons must be considered. In this work, we study a solvable, large-$N$ electron-phonon model, which at high temperatures enters the non-quasiparticle regime. In this regime, the model exhibits "resistivity saturation" to a temperature-independent value of the order of the quantum of resistivity - the first analytically tractable model to do so. The saturation is not due to a fundamental limit on the electron lifetime, but rather to the appearance of a second conductivity channel. This is suggestive of the phenomenological "parallel resistor formula", known to describe the resistivity of a variety of saturating metals.
• ### Ising nematic quantum critical point in a metal: a Monte Carlo study(1511.03282)
The Ising nematic quantum critical point (QCP) associated with the zero temperature transition from a symmetric to a nematic metal is an exemplar of metallic quantum criticality. We have carried out a minus sign-free quantum Monte Carlo study of this QCP for a two dimensional lattice model with sizes up to $24\times 24$ sites. The system remains non-superconducting down to the lowest accessible temperatures. The results exhibit critical scaling behavior over the accessible ranges of temperature, (imaginary) time, and distance. This scaling behavior has remarkable similarities with recently measured properties of the Fe-based superconductors proximate to their putative nematic QCP.