https://slideplayer.com/slide/2289443/ | # Add and Subtract Fractions
Parts of a Fraction: in the fraction 3/4, the numerator 3 is the number of parts, and the denominator 4 is the total number of parts that equal a whole.
A Common Multiple is a number that is a multiple of two or more numbers. Some common multiples of 3 and 6 are 6, 12, 18, 24, 30, 36, 42.
The LCM is the smallest multiple that two numbers have in common. Multiples of 5: 5, 10, 15, 20, 25, 30. Multiples of 6: 6, 12, 18, 24, 30, 36. The LCM of 5 and 6 is 30.
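To make the list-the-multiples method concrete, here is a small Python sketch; the function name and the `limit` cutoff are illustrative choices, not part of the original lesson:

```python
def lcm_by_listing(a, b, limit=20):
    # List the first `limit` multiples of each number and return the
    # smallest value the two lists share: the least common multiple.
    multiples_a = {a * k for k in range(1, limit + 1)}
    multiples_b = {b * k for k in range(1, limit + 1)}
    return min(multiples_a & multiples_b)

print(lcm_by_listing(5, 6))   # 30, matching the lists above
print(lcm_by_listing(5, 10))  # 10
```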
To Add or Subtract Fractions With Unlike Denominators (example: 1/5 + 1/10):
List the multiples of each denominator to find the LCM. Multiples of 5: 5, 10, 15, 20, 25, 30. Multiples of 10: 10, 20, 30, 40, 50. Compare your lists. Which multiples do 5 and 10 have in common? (10, 20, and 30.) Use the lowest common multiple as the denominator. This number is also called the least common denominator (LCD); here it is 10.
SHORTCUT: Check whether the smaller denominator divides evenly into the larger denominator. If it does, use the larger denominator as your LCD or LCM. Here 5 divides evenly into 10, so 10 is your denominator.
Rewrite the fractions using the least common denominator (least common multiple). Find the equivalent fractions for 1/5 and 1/10 with 10 as the denominator. You know that 1/10 is equal to 1/10, so it keeps the denominator 10. To find the new numerator for 1/5, ask yourself what you multiply 5 by to get 10: the answer is 2. Since you are looking for an equivalent fraction, the numerator must also be multiplied by 2: (1 × 2)/(5 × 2) = 2/10. Now just add the numerators; remember that when adding fractions you never add the denominators: 2/10 + 1/10 = 3/10.
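The whole add-with-unlike-denominators procedure fits in a few lines of code. Here is a minimal Python sketch (the helper name `add_fractions` is mine; `Fraction` is used only to reduce and display the result):

```python
from fractions import Fraction
from math import gcd

def add_fractions(n1, d1, n2, d2):
    # Find the least common denominator, rewrite each fraction over it,
    # then add the numerators only; never add the denominators.
    lcd = d1 * d2 // gcd(d1, d2)
    total = n1 * (lcd // d1) + n2 * (lcd // d2)
    return Fraction(total, lcd)  # Fraction reduces the answer for us

print(add_fractions(1, 5, 1, 10))  # 3/10
print(add_fractions(4, 5, 2, 3))   # 22/15, i.e. 1 7/15
```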
The Identity Property states that any number multiplied by one equals itself. This is useful to know so you can find a common denominator for adding and subtracting fractions: multiplying by a fraction equal to one, such as 2/2 (a "Giant One"), changes the form of a fraction but not its value.
Use the shortcut to find the least common denominator (LCD) for 1/2 + 1/8: 2 divides evenly into 8, so the LCD is 8. Now find the equivalent fractions for 1/2 and 1/8. Ask what you multiply 2 by to get 8, and what you multiply 8 by to get 8. Since you are writing equivalent fractions, multiply each numerator by the same number you used on its denominator: (1 × 4)/(2 × 4) = 4/8 and (1 × 1)/(8 × 1) = 1/8. Now add the numerators; the denominator stays the same: 4/8 + 1/8 = 5/8.
Example: 2/5 + 1/3. Find the least common multiple for 5 and 3 and write it as your denominator: 15. Ask yourself what you multiply each denominator by to get 15, then multiply the numerator by the same number you used in the denominator: (2 × 3)/(5 × 3) = 6/15 and (1 × 5)/(3 × 5) = 5/15. Add your numerators; the denominator stays the same: 6/15 + 5/15 = 11/15.
More examples:
1/6 + 1/4 = (1 × 2)/(6 × 2) + (1 × 3)/(4 × 3) = 2/12 + 3/12 = 5/12
ALWAYS SIMPLIFY YOUR ANSWER IF YOUR ANSWER IS AN IMPROPER FRACTION! 4/5 + 2/3 = (4 × 3)/(5 × 3) + (2 × 5)/(3 × 5) = 12/15 + 10/15 = 22/15 = 1 7/15 (divide: 22 ÷ 15 = 1 remainder 7)
5/6 − 1/8 = (5 × 4)/(6 × 4) − (1 × 3)/(8 × 3) = 20/24 − 3/24 = 17/24
2/3 − 1/9 = (2 × 3)/(3 × 3) − 1/9 = 6/9 − 1/9 = 5/9
2/4 + 2/8 = (2 × 2)/(4 × 2) + 2/8 = 4/8 + 2/8 = 6/8 = 3/4
4/5 − 2/3 = (4 × 3)/(5 × 3) − (2 × 5)/(3 × 5) = 12/15 − 10/15 = 2/15
REMEMBER: always show your work and the Giant One!
#11 Add Fractions: 4/5 + 2/3.
1) Find a common denominator: (4 × 3)/(5 × 3) = 12/15 and (2 × 5)/(3 × 5) = 10/15.
2) Add the numerators.
3) Keep the common denominator the same: 12/15 + 10/15 = 22/15.
4) Simplify or reduce; change improper fractions to mixed numbers: 22 ÷ 15 = 1 remainder 7, so 22/15 = 1 7/15.
#12 Subtract Fractions: 5/6 − 1/8 = (5 × 4)/(6 × 4) − (1 × 3)/(8 × 3) = 20/24 − 3/24 = 17/24. Always SHOW YOUR WORK! This includes the Giant One.
copyright © amberpasillas 2010
https://cstheory.stackexchange.com/questions/45545/is-unary-pi-2-subsetsum-conp-complete | # Is unary $\Pi_2$-SUBSETSUM coNP-complete?
Consider the following problem:
for integers $$a_1, \ldots, a_{2n}$$ and $$A$$ given in unary representation,
decide whether it is true that
for every $$S \subseteq \{1, ..., 2n \}$$ such that $$|S|=n$$ there exists $$H \subseteq S$$ such that
$$\sum_{i \in H} a_i = A.$$
This problem belongs to coNP. Indeed, for a fixed $$S$$ we can decide the existence of the required $$H$$ in polynomial time, since there exists a pseudo-polynomial algorithm for SUBSET-SUM (which runs in polynomial time here because the numbers are given in unary).
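To make the membership argument concrete, here is a minimal Python sketch (function names are mine). The inner dynamic program is the polynomial-time verifier; the outer brute force over all $$S$$ is exponential and is only meant for checking tiny instances:

```python
from itertools import combinations

def subset_sum_exists(values, target):
    # Classic pseudo-polynomial dynamic program: reachable[s] is True
    # iff some subset of the values processed so far sums to s.  With
    # unary inputs, target is bounded by the input length, so this is
    # genuinely polynomial time.
    reachable = [False] * (target + 1)
    reachable[0] = True
    for a in values:
        for s in range(target, a - 1, -1):
            reachable[s] = reachable[s] or reachable[s - a]
    return reachable[target]

def pi2_subsetsum(a, A):
    # Check the Pi_2 property by brute force: every size-n subset S of
    # the 2n indices must contain some H summing to A.
    n = len(a) // 2
    return all(subset_sum_exists([a[i] for i in S], A)
               for S in combinations(range(2 * n), n))

print(pi2_subsetsum([1, 1, 1, 1], 2))  # True: any two of the four 1s sum to 2
```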
Question: Is this problem coNP-complete?
UPDATE: I have realized that a hardness result for this problem would give a partial answer to a question that Stefan Szeider asked in his paper Monadic Second Order Logic on Graphs with Local Cardinality Constraints.
In this paper the author generalized Courcelle's theorem in the following way:
We consider problems of the following form: Given a graph $$G$$ and, for each vertex $$v$$ of $$G$$, a set $$\alpha(v)$$ of non-negative integers. The question is whether there exists a set $$S$$ of vertices or edges of $$G$$ that
(i) satisfies a fixed MSO-expressible property, and
(ii) for each vertex $$v$$ of $$G$$, the number of vertices in $$S$$ adjacent to $$v$$ plus the number of edges in $$S$$ incident with $$v$$ belongs to the set $$\alpha(v)$$. We call such a problem an MSO problem for graphs with local cardinality constraints, or an MSO-LCC problem for short.
The main result of the paper is the following theorem.
Theorem For every constant $$k \ge 1$$, every MSO-LCC problem can be solved in polynomial time for graphs of treewidth at most $$k$$.
It is natural to consider a more general question.
We consider the more general class of Q-MSO-LCC problems where cardinality constraints are applied to second-order variables that are arbitrarily quantified (not just existentially as for MSO-LCC problems). We show, however, that there exist Q-MSO-LCC problems that are already NP-hard for graphs of treewidth $$2$$.
However, the example of an NP-hard problem that can be expressed as a Q-MSO-LCC problem requires two alternations of quantifiers. We do not know an example of an NP-hard problem that can be expressed as a Q-MSO-LCC problem with one alternation of quantifiers.
I claim that unary $$\Pi_2$$-SUBSETSUM can be expressed as a Q-MSO-LCC problem with one alternation of quantifiers. For given $$a_1, a_2, a_3,..., a_{2n}$$ and $$A$$, consider the graph with $$a_1 + a_2 + a_3 + \cdots + a_{2n} + 2n + 2$$ vertices (the original figure is omitted here; from the description below, the graph consists of a root $$R$$ whose $$2n$$ children form the second level, the $$i$$-th second-level vertex having $$a_i$$ children on the third level, together with a vertex $$T$$ adjacent to every third-level vertex):
Now let $$X_1$$ be a set of vertices, some subset of the vertices of the second level (the children of $$R$$) and the third level, with the following property: if a vertex $$v$$ of the second level belongs to $$X_1$$, then all children of $$v$$ belong to $$X_1$$. We require that $$X_1$$ contains $$n$$ vertices of the second level (so $$R$$ has $$n$$ neighbors in $$X_1$$).
Let $$X_2$$ be some subset of $$X_1$$ with the same property: if a vertex $$v$$ of the second level belongs to $$X_2$$, then all children of $$v$$ belong to $$X_2$$. We require that $$X_2$$ contains $$A$$ vertices of the third level (so $$T$$ has $$A$$ neighbors in $$X_2$$).
I claim that $$(a_1,..., a_{2n}, A)$$ is a yes-instance of unary $$\Pi_2$$-SUBSETSUM iff for all $$X_1$$ there exists $$X_2$$ with all the properties written above.
http://www.talkstats.com/search/1169079/ | # Search results
1. ### Problem with Windows 7 login
I forgot the admin password of my Windows computer, so I downloaded a password recovery program called Ophcrack. When I tried to boot it from the USB flash drive, it showed me a blue screen. I have important data sources on the computer, so I need to fix it ASAP. If you know, please kindly suggest the...
http://proceedings.mlr.press/v70/pennington17a.html | # Geometry of Neural Network Loss Surfaces via Random Matrix Theory
Jeffrey Pennington, Yasaman Bahri;
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:2798-2806, 2017.
#### Abstract
Understanding the geometry of neural network loss surfaces is important for the development of improved optimization algorithms and for building a theoretical understanding of why deep learning works. In this paper, we study the geometry in terms of the distribution of eigenvalues of the Hessian matrix at critical points of varying energy. We introduce an analytical framework and a set of tools from random matrix theory that allow us to compute an approximation of this distribution under a set of simplifying assumptions. The shape of the spectrum depends strongly on the energy and another key parameter, $\phi$, which measures the ratio of parameters to data points. Our analysis predicts, and numerical simulations support, that for critical points of small index, the number of negative eigenvalues scales like the 3/2 power of the energy. We leave as an open problem an explanation for our observation that, in the context of a certain memorization task, the energy of minimizers is well-approximated by the function $1/2(1-\phi)^2$.
https://www.physicsforums.com/threads/divergence-theorem.545164/ | # Divergence theorem
1. Oct 29, 2011
### wasi-uz-zaman
Hi experts,
I want to know the physical significance of the divergence theorem, i.e., how a volume integral changes to a surface integral. How can I explain it in simple words?
2. Oct 29, 2011
### arildno
Hi!
Basically, the divergence can be thought of as the (relative) rate of change of volume.
But, we may do this in another way, namely by calculating the net flux across the original surface.
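A small numerical illustration may help. Take the field F = (x, y, z) (my choice, for simplicity), whose divergence is 3 everywhere. The volume integral of div F over the unit ball is 3 · (4π/3) = 4π, and on the unit sphere F · n = 1, so the flux is the surface area 4π; this Python sketch estimates the volume side by Monte Carlo and compares:

```python
import random, math

def div_F(x, y, z):
    # For the field F = (x, y, z), div F = 1 + 1 + 1 = 3 everywhere.
    return 3.0

# Volume side: Monte Carlo integral of div F over the unit ball,
# sampling the bounding cube [-1, 1]^3 (volume 8).
random.seed(0)
N, acc = 200_000, 0.0
for _ in range(N):
    x, y, z = (random.uniform(-1, 1) for _ in range(3))
    if x*x + y*y + z*z <= 1.0:
        acc += div_F(x, y, z)
volume_integral = acc * 8.0 / N

# Surface side: on the unit sphere the outward normal is (x, y, z),
# so F . n = x^2 + y^2 + z^2 = 1 and the flux equals the area 4*pi.
flux = 4.0 * math.pi

print(volume_integral, flux)  # both approximately 12.566
```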
3. Oct 29, 2011
Thanks a lot!
https://admin.onepetro.org/BHRICMPT/BHR15/conference/All-BHR15 | Skip Nav Destination
# Proceedings Papers
17th International Conference on Multiphase Production Technology
June 10–12, 2015
Cannes, France
Paper presented at the 17th International Conference on Multiphase Production Technology, Cannes, France, June 2015.
http://mathhelpforum.com/algebra/112874-basic-logirithms-basic-expanding-brackets-print.html | # Basic Logarithms & Basic Expanding the Brackets
• November 6th 2009, 09:02 PM
student0451
Basic Logarithms & Basic Expanding the Brackets
Hello everyone,
I need help with logarithms that are in fraction form.
such as
evaluate the following for x:
log ^ 2 x = 1/7
log ^ 2 x = 1/5
log ^ 2 x = 2/7
I also need to be sure whether my expansion of this equation, for example, is correct.
expand the following:
(a + b) (a + b + c)
a^2 + ab + ac + ba + b^2 + bc
& do I add the powers or multiply them in the brackets?
expand the following:
(a^2 + b^2) (a^3 + b^3 + c^2)
a^5 + a^2b^5 + a^2c^2 + b^2a^3 + b ^ 5 + b ^2c^ 2
• November 6th 2009, 09:19 PM
Bacterius
Quote:
evaluate the following for x
You must mean "solve the following for x"? If yes, then:
$log(2x) = \frac{1}{7}$
$2x = 10^{\frac{1}{7}}$
$x = \frac{10^{\frac{1}{7}}}{2}$
$(a + b)(a + b + c)$
$a(a + b + c) + b(a + b + c)$
$a^2 + ab + ac + ba + b^2 + bc$
$a^2 + b^2 + 2ab + ac + bc$
But maybe I don't understand what you wrote; try using the LaTeX editor?
• November 6th 2009, 09:19 PM
VonNemo19
Quote:
Originally Posted by student0451
Hello everyone,
I need help with logarithms that are in fraction form.
such as
evaluate the following for x:
log ^ 2 x = 1/7
log ^ 2 x = 1/5
log ^ 2 x = 2/7
I also need to be sure if expanding this equation for e.g is correct?
expand the following:
(a + b) (a + b + c)
a^2 + ab + ac + ba + b^2 + bc
& do I plus the powers or times them in the brackets?
expand the following:
(a^2 + b^2) (a^3 + b^3 + c^2)
a^5 + a^2b^5 + a^2c^2 + b^2a^3 + b ^ 5 + b ^2c^ 2
1. Do you mean
$\log_2x=\frac{1}{7}$ or $\log^2x=\frac{1}{7}$ ?
• November 6th 2009, 09:21 PM
student0451
i am so sorry about my typing,
it is the base of 2
log 2 x = 1/7
so 2 is the base of the log
and thank you for helping me.
• November 6th 2009, 09:24 PM
Bacterius
Then it becomes (I think):
$log_2(x) = \frac{1}{7}$
$x = 2^{\frac{1}{7}}$
When writing subscripts, use the underscore character; it is much more appropriate: log_2(x).
• November 6th 2009, 09:24 PM
VonNemo19
Quote:
Originally Posted by student0451
i am so sorry about my typing,
it is the base of 2
log 2 x = 1/7
so 2 is the base of the log
and thank you for helping me.
$\log_2x=\frac{1}{7}\Rightarrow{x}=2^{1/7}=\sqrt[7]{2}$
• November 6th 2009, 09:28 PM
student0451
yep
and thank you, VonNemo19 & Bacterius.
I knew there was something wrong with the way I expanded the brackets.
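For anyone who wants to check both parts of the thread mechanically, here is a short Python sketch (assuming the sympy library is available; variable names are mine):

```python
import math
from sympy import symbols, expand

# The logarithm exercises: log_2(x) = r  =>  x = 2**r.
for r in (1/7, 1/5, 2/7):
    x = 2 ** r
    print(x, math.log2(x))  # log2 recovers r, confirming x = 2**r

# The bracket expansions, checked symbolically.
a, b, c = symbols("a b c")
print(expand((a + b) * (a + b + c)))
# a**2 + 2*a*b + a*c + b**2 + b*c
print(expand((a**2 + b**2) * (a**3 + b**3 + c**2)))
# a**5 + a**3*b**2 + a**2*b**3 + a**2*c**2 + b**5 + b**2*c**2
# Exponents add only when the bases match: a**2 * a**3 = a**5,
# while a**2 * b**3 simply stays a**2*b**3.
```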
https://en.m.wikipedia.org/wiki/Projective_hierarchy | # Projective hierarchy
In the mathematical field of descriptive set theory, a subset ${\displaystyle A}$ of a Polish space ${\displaystyle X}$ is projective if it is ${\displaystyle {\boldsymbol {\Sigma }}_{n}^{1}}$ for some positive integer ${\displaystyle n}$. Here ${\displaystyle A}$ is
• ${\displaystyle {\boldsymbol {\Sigma }}_{1}^{1}}$ if ${\displaystyle A}$ is analytic
• ${\displaystyle {\boldsymbol {\Pi }}_{n}^{1}}$ if the complement of ${\displaystyle A}$, ${\displaystyle X\setminus A}$, is ${\displaystyle {\boldsymbol {\Sigma }}_{n}^{1}}$
• ${\displaystyle {\boldsymbol {\Sigma }}_{n+1}^{1}}$ if there is a Polish space ${\displaystyle Y}$ and a ${\displaystyle {\boldsymbol {\Pi }}_{n}^{1}}$ subset ${\displaystyle C\subseteq X\times Y}$ such that ${\displaystyle A}$ is the projection of ${\displaystyle C}$; that is, ${\displaystyle A=\{x\in X\mid \exists y\in Y(x,y)\in C\}}$
The choice of the Polish space ${\displaystyle Y}$ in the third clause above is not very important; it could be replaced in the definition by a fixed uncountable Polish space, say Baire space or Cantor space or the real line.
## Relationship to the analytical hierarchy
There is a close relationship between the relativized analytical hierarchy on subsets of Baire space (denoted by lightface letters ${\displaystyle \Sigma }$ and ${\displaystyle \Pi }$ ) and the projective hierarchy on subsets of Baire space (denoted by boldface letters ${\displaystyle {\boldsymbol {\Sigma }}}$ and ${\displaystyle {\boldsymbol {\Pi }}}$ ). Not every ${\displaystyle {\boldsymbol {\Sigma }}_{n}^{1}}$ subset of Baire space is ${\displaystyle \Sigma _{n}^{1}}$ . It is true, however, that if a subset X of Baire space is ${\displaystyle {\boldsymbol {\Sigma }}_{n}^{1}}$ then there is a set of natural numbers A such that X is ${\displaystyle \Sigma _{n}^{1,A}}$ . A similar statement holds for ${\displaystyle {\boldsymbol {\Pi }}_{n}^{1}}$ sets. Thus the sets classified by the projective hierarchy are exactly the sets classified by the relativized version of the analytical hierarchy. This relationship is important in effective descriptive set theory.
A similar relationship between the projective hierarchy and the relativized analytical hierarchy holds for subsets of Cantor space and, more generally, subsets of any effective Polish space.
## Table
| Lightface | Boldface |
|---|---|
| $\Sigma^0_0 = \Pi^0_0 = \Delta^0_0$ (sometimes the same as $\Delta^0_1$) | $\Sigma^0_0 = \Pi^0_0 = \Delta^0_0$ (if defined) |
| $\Delta^0_1$ = recursive | $\Delta^0_1$ = clopen |
| $\Sigma^0_1$ = recursively enumerable; $\Pi^0_1$ = co-recursively enumerable | $\Sigma^0_1$ = G = open; $\Pi^0_1$ = F = closed |
| $\Delta^0_2$ | $\Delta^0_2$ |
| $\Sigma^0_2$; $\Pi^0_2$ | $\Sigma^0_2 = F_\sigma$; $\Pi^0_2 = G_\delta$ |
| $\Delta^0_3$ | $\Delta^0_3$ |
| $\Sigma^0_3$; $\Pi^0_3$ | $\Sigma^0_3 = G_{\delta\sigma}$; $\Pi^0_3 = F_{\sigma\delta}$ |
| ⋮ | ⋮ |
| $\Sigma^0_{<\omega} = \Pi^0_{<\omega} = \Delta^0_{<\omega} = \Sigma^1_0 = \Pi^1_0 = \Delta^1_0$ = arithmetical | $\Sigma^0_{<\omega} = \Pi^0_{<\omega} = \Delta^0_{<\omega} = \Sigma^1_0 = \Pi^1_0 = \Delta^1_0$ = boldface arithmetical |
| ⋮ | ⋮ |
| $\Delta^0_\alpha$ ($\alpha$ recursive) | $\Delta^0_\alpha$ ($\alpha$ countable) |
| $\Sigma^0_\alpha$; $\Pi^0_\alpha$ | $\Sigma^0_\alpha$; $\Pi^0_\alpha$ |
| ⋮ | ⋮ |
| $\Sigma^0_{\omega_1^{CK}} = \Pi^0_{\omega_1^{CK}} = \Delta^0_{\omega_1^{CK}} = \Delta^1_1$ = hyperarithmetical | $\Sigma^0_{\omega_1} = \Pi^0_{\omega_1} = \Delta^0_{\omega_1} = \Delta^1_1$ = B = Borel |
| $\Sigma^1_1$ = lightface analytic; $\Pi^1_1$ = lightface coanalytic | $\Sigma^1_1$ = A = analytic; $\Pi^1_1$ = CA = coanalytic |
| $\Delta^1_2$ | $\Delta^1_2$ |
| $\Sigma^1_2$; $\Pi^1_2$ | $\Sigma^1_2$ = PCA; $\Pi^1_2$ = CPCA |
| $\Delta^1_3$ | $\Delta^1_3$ |
| $\Sigma^1_3$; $\Pi^1_3$ | $\Sigma^1_3$ = PCPCA; $\Pi^1_3$ = CPCPCA |
| ⋮ | ⋮ |
| $\Sigma^1_{<\omega} = \Pi^1_{<\omega} = \Delta^1_{<\omega} = \Sigma^2_0 = \Pi^2_0 = \Delta^2_0$ = analytical | $\Sigma^1_{<\omega} = \Pi^1_{<\omega} = \Delta^1_{<\omega} = \Sigma^2_0 = \Pi^2_0 = \Delta^2_0$ = P = projective |
| ⋮ | ⋮ |
## References
• Kechris, A. S. (1995), Classical Descriptive Set Theory, Berlin, New York: Springer-Verlag, ISBN 978-0-387-94374-9
• Rogers, Hartley (1987) [1967], The Theory of Recursive Functions and Effective Computability, first MIT Press paperback edition, ISBN 978-0-262-68052-3
http://www.gtmath.com/2018/04/ | ## Parameter Estimation - Part 1 (Reader Request)
Preliminaries: How do mathematicians model randomness?, Monte Carlo Simulation - Part 2, Proof of the Law of Large Numbers
The preliminary post How do mathematicians model randomness? introduced random variables, their probability distributions, and parameters thereof (namely, mean, variance, and standard deviation). This post, the response to a reader request from Anonymous, will cover estimation of parameters based on random sampling. I will explain the difference between parameters and statistics, introduce the concept of estimator bias, and address the reader request's specific question about unbiased estimation of standard deviation.
In Part 2 of this post, I will present a well known historical application of parameter estimation, the German Tank Problem, and compare methods of estimating an unknown population size. Finally, in Part 3, I will introduce complete and sufficient statistics, which allow us to prove that the best estimator among the candidates in Part 2 is the unique minimum variance unbiased estimator.
### Parameters and statistics
Recall from the first preliminary post that we model random phenomena with random variables and their probability distributions. Key characteristics of these distributions (mean, variance, standard deviation, etc.) are called parameters.
Parameters are defined based on all possible values of a random variable (the population) weighted by their relative frequencies of occurrence (i.e. their probabilities). For example, the mean (denoted $\mu_{X}$, ${\Bbb E}(X)$, or simply $\mu$ when there is no ambiguity as to the underlying random variable) of an outcome of interest (i.e. random variable) $X$ is defined as $$\mu = \sum_{i}{x_i {\Bbb P}(x_i)}$$ where each $x_i$ is one of the possible values of $X$ and the sum runs over all possible values (in the continuous case, the sum would be replaced by an integral).
In practice, we do not have access to all possible values of $X$ and their probabilities. Instead, we typically have a sample of observed values, $x_1, x_2, \dotsc, x_n$. Given such a sample, we could estimate $\mu$ using the sample mean $$\bar{x} = \frac{1}{n}\sum_{i=1}^{n}{x_i}$$ The sample mean is a statistic, a value calculated based on sample data, which can be used to estimate the (unknown) parameter value. Notice that $\bar{x}$ can take on different values depending on which random sample we used to calculate it. In other words, $\bar{x}$ is itself a random variable.
### Estimator bias, Bessel's correction
As random variables, statistics have their own probability distributions, known as sampling distributions, and thus their own means, variances, standard deviations, etc. We actually already touched upon this fact in the earlier posts Monte Carlo Simulation - Part 2 and Proof of the Law of Large Numbers, in which we proved that the sample mean $\bar{x}$ has expected value $\mu$ and variance $\frac{\sigma^2}{n}$ (a fact that we will use below).
Since ${\Bbb E}(\bar{x}) = \mu$, we say that the sample mean is an unbiased estimator of the population mean $\mu$. By the same logic, we may estimate the population variance $\sigma^2 = \sum_{i}{(x_i-\mu)^{2}{\Bbb P}(x_i)}$ using the statistic $$s^2_1= \frac{1}{n} \sum_{i=1}^{n}{(x_i-\bar{x})^2}$$ However, this statistic is not an unbiased estimator of $\sigma^2$. In order to see why this is the case, we can compute the expected value ${\Bbb E}(\sigma^2 - s_1^2)$, which would be zero if $s_1^2$ were unbiased: \begin{align} {\Bbb E} \left[ \sigma^2 - s_1^2 \right] &= {\Bbb E} \left[ \frac{1}{n}\sum_{i=1}^{n}{(x_i - \mu)^2} - \frac{1}{n}\sum_{i=1}^{n}{(x_i-\bar{x})^2}\right] \\[2mm] &= \frac{1}{n}{\Bbb E}\left[ \sum_{i=1}^{n}{\left( \left( x_i^2 - 2 x_i \mu + \mu^2) - (x_i^2 - 2 x_i \bar{x} + \bar{x}^2 \right) \right)} \right] \\[2mm] &= {\Bbb E}\left[ \mu^2 - \bar{x}^2 + \frac{1}{n}\sum_{i=1}^{n}{(2x_i (\bar{x}-\mu))} \right] \\[2mm] &= {\Bbb E}\left[ \mu^2 - \bar{x}^2 + 2\bar{x}(\bar{x}-\mu) \right] \\[2mm] &= {\Bbb E}\left[ \mu^2 - 2\bar{x}\mu + \bar{x}^2 \right] \\[2mm] &= {\Bbb E} \left[ (\bar{x} - \mu)^2 \right] \\[2mm] &= \rm{Var}(\bar{x}) \\[2mm] &= \frac{\sigma^2}{n} \end{align} Since ${\Bbb E}[\sigma^2 - s_1^2] = {\Bbb E}[\sigma^2] - {\Bbb E}[s_1^2] = \sigma^2 - {\Bbb E}[s_1^2]$, the above implies that $${\Bbb E}[s_1^2] = \sigma^2 - \frac{\sigma^2}{n} = \frac{n-1}{n}\sigma^2$$ Therefore, the statistic $s^2 = \frac{n}{n-1}s_1^2 = \frac{1}{n-1}\sum_{i=1}^{n}{(x_i-\bar{x})^2}$, known as the sample variance, has expected value $\sigma^2$ and is thus an unbiased estimator of the population variance.
The replacement of $\frac{1}{n}$ with $\frac{1}{n-1}$ in the sample variance formula is known as Bessel's correction. The derivation above shows that the bias in $s_1^2$ arises due to the fact that $(x_i-\bar{x})$ underestimates the actual quantity of interest, $(x_i-\mu)$, by $(\bar{x}-\mu)$ for each $x_i$. Therefore, the bias is the variance of $\bar{x}$, which we proved to be $\frac{\sigma^2}{n}$ in Proof of the Law of Large Numbers. Using $s^2$ instead of $s_1^2$ corrects for this bias.
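A quick simulation makes the size of the bias visible. This is a minimal numpy sketch (the population, sample size, and seed are arbitrary choices):

```python
import numpy as np

# Draw many samples of size n from a Normal(0, sigma) population and
# average the divide-by-n and divide-by-(n-1) variance estimators.
rng = np.random.default_rng(0)
sigma, n, trials = 2.0, 5, 200_000
samples = rng.normal(0.0, sigma, size=(trials, n))

xbar = samples.mean(axis=1, keepdims=True)
ss = ((samples - xbar) ** 2).sum(axis=1)

print((ss / n).mean())        # about (n-1)/n * sigma^2 = 3.2, biased low
print((ss / (n - 1)).mean())  # about sigma^2 = 4.0, unbiased
```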
### Estimation of the standard deviation
Given that $s^2$ is an unbiased estimator of $\sigma^2$, we may expect that the sample standard deviation $s=\sqrt{s^2}$ would also be an unbiased estimator of the population standard deviation $\sigma$. However, \begin{align} {\Bbb E}\left[ s^2 \right] &= \sigma^2 \\[2mm] \Rightarrow {\Bbb E}\left[ \sqrt{s^2} \right] &< \sqrt{{\Bbb E}\left[ s^2 \right]} = \sqrt{\sigma^2} = \sigma \end{align} where the inequality follows from Jensen's inequality and the fact that the square root is a concave function (the region below its graph is convex). In other words, $s$ underestimates $\sigma$ on average.
Unfortunately, for estimating the population standard deviation, there is no easy correction as there is for the variance. The size of the necessary correction depends on the distribution of the underlying random variable. For the Normal distribution, there is a complicated exact formula, but simply replacing the $n-1$ in the denominator with $n-1.5$ eliminates most of the bias (with the remaining bias decreasing with increasing sample size). A further adjustment is possible for other distributions and depends on the excess kurtosis, a measure of the "heavy-tailedness" of the distribution in excess of that of the Normal distribution.
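A similar sketch (again with arbitrary parameters, and assuming Normal data so the $n-1.5$ rule applies) shows the downward bias of $s$ and how the corrected denominator removes most of it:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, n, trials = 1.0, 10, 400_000
samples = rng.normal(0.0, sigma, size=(trials, n))
xbar = samples.mean(axis=1, keepdims=True)
ss = ((samples - xbar) ** 2).sum(axis=1)

print(np.sqrt(ss / (n - 1)).mean())    # below sigma: about 0.973 for n = 10
print(np.sqrt(ss / (n - 1.5)).mean())  # very close to sigma = 1.0
```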
While the specific corrections are beyond the scope of this post, for the brave, there is an entire Wikipedia article dedicated to exactly this topic.
### Other measures of estimator quality
Zero bias is certainly a desirable quality for a statistic to have, but an estimator's quality depends on more than just its expected value. A statistic's variance tells us how large of a spread (from its expected value) we may expect when calculating the statistic based on various samples. Just as a statistic with large bias is not particularly helpful, neither is one with no bias but a large variance.
The notion of consistency ties bias and variance together nicely: a consistent estimator is one which converges in probability to the population parameter. This means that, as $n \rightarrow \infty$, the probability of an error greater than some specified amount $\epsilon$ approaches zero. This further implies that both the bias and the variance tend to zero as the sample size grows.
For example, the (weak) law of large numbers implies that $\bar{x}$ is a consistent estimator of $\mu$, as $\lim_{n \rightarrow \infty}{{\Bbb P} \left[ \left| \bar{x} - \mu \right| \geq \epsilon \right]} = 0$ for any $\epsilon > 0$. Furthermore, $s_1^2$ and $s^2$ are both consistent estimators of $\sigma^2$, while $s$ is a consistent estimator of $\sigma$. These examples show that both biased and unbiased estimators can be consistent.
That will do it for Part 1 of this post. Thanks for reading, and look out for Parts 2 and 3, coming up soon. Thanks to Anonymous for the great reader request.
### Sources:
Wikipedia – Unbiased estimation of standard deviation
Wikipedia – Bessel's correction
Quora post – Estimator bias vs. variance
https://mechanismsrobotics.asmedigitalcollection.asme.org/PVP/proceedings-abstract/PVP2014/46018/V004T04A014/279572 | Detailed unsteady fluid force and phase measurements for a single tube oscillating purely in the streamwise direction in a rotated triangular tube array subjected to air-water two-phase cross-flow have been conducted in this study for homogeneous void fractions between 0% and 90%. Additionally the streamwise steady forces were measured in two-phase flow at a Reynolds number (based on the pitch velocity), Re = 7.2 × 104. The results are compared to those previously obtained for transverse direction oscillations. The measurement results show that the magnitude of the force coefficients for both directions (drag and lift) is comparable both in trend and quantitatively. However, the phase in the drag direction is negative while that for the lift is positive. The range of variation of the phase is also significantly smaller for the drag direction. Noting that negative phase corresponds to positive damping and vice versa, this observation confirms previous findings of lack of instability in the drag direction for a single flexible tube in a rotated triangular tube array. The drag steady fluid force coefficients were found to increase with dimensionless displacement in the flow direction for the entire range of void fractions considered. The derivative of the measured steady fluid force coefficient, which is an important factor in fluidelastic instability study using the quasi-steady model, was found to remain positive in the drag direction. The effect of void fraction on the unsteady fluid force coefficient and other dynamic parameters such as hydrodynamic mass and damping are also discussed.
https://www.computer.org/csdl/trans/tc/1974/01/01672366-abs.html | Issue No. 01 - January (1974 vol. 23)
ISSN: 0018-9340
pp: 21-33
R.M. Keller, Department of Electrical Engineering, Princeton University
ABSTRACT
Of concern here are asynchronous modules, i.e., those whose activity is regulated by initiation and completion signals with no clocks being present. First a number of operating conditions are described that are deemed essential or useful in a system of asynchronous modules, while retaining an air of independence of particular hardware implementations as much as possible. Second, some results are presented concerning sets of modules that are universal with respect to these conditions. That is, from these sets any arbitrarily complex module may be constructed as a network. It is stipulated that such constructions be speed independent, i.e., independent of the delay time involved in any constituent modules. Furthermore it is required that the constructions be delay insensitive in the sense that an arbitrary number of delay elements may be inserted into or removed from connecting lines without affecting the external behavior of the network.
INDEX TERMS
Asynchronous, module, networks, parallel, speed-independent, switching.
CITATION
R.M. Keller, "Towards a Theory of Universal Speed-Independent Modules", IEEE Transactions on Computers, vol. 23, no. , pp. 21-33, January 1974, doi:10.1109/T-C.1974.223773
https://www.lutsko.com/publication/00059 | # [059] Hydrodynamics of an endothermic gas with application to bubble cavitation
James F. Lutsko, "Hydrodynamics of an endothermic gas with application to bubble cavitation", J. of Chemical Physics, 125, 164319 (2006) http://jimlutsko.github.io/files/Lutsko_JCP_2006.pdf
## Abstract
The hydrodynamics for a gas of hard spheres which sometimes experience inelastic collisions resulting in the loss of a fixed, velocity-independent, amount of energy Delta is investigated with the goal of understanding the coupling between hydrodynamics and endothermic chemistry. The homogeneous cooling state of a uniform system and the modified Navier-Stokes equations are discussed and explicit expressions given for the pressure, cooling rates, and all transport coefficients for D dimensions. The Navier-Stokes equations are solved numerically for the case of a two-dimensional gas subject to a circular piston so as to illustrate the effects of the energy loss on the structure of shocks found in cavitating bubbles. It is found that the maximal temperature achieved is a sensitive function of Delta with a minimum occurring near the physically important value of Delta ≈ 12 000 K ≈ 1 eV.
http://mathhelpforum.com/algebra/118030-log.html | Math Help - Log
1. Log
Prove that if $5^{3t-2}=10^{t+1}$ then $t=\frac{\log 250}{\log 12.5}$
I know how to solve it; I just don't know how to rearrange it to get the required answer.
thanks
2. Originally Posted by RRH
$5^{3t-2}=10^{t-1}$
$\log 5^{3t-2}=\log 10^{t-1}$
$(3t-2)\log 5=(t-1)\log 10$
$3t\log 5-2\log 5=t\log 10-\log 10$
$3t\log 5-t\log 10=2\log 5-\log 10$
$t(3\log 5-\log 10)=2\log 5-\log 10$
$t(\log 5^3-\log 10)=\log 5^2-\log 10$
$t(\log 125-\log 10)=\log 25-\log 10$
$t\log\left(\frac{125}{10}\right)=\log\left(\frac{25}{10}\right)$
hmmm, I get a different answer than you do
$t=\frac{\log 2.5}{\log 12.5}$
sorry, it's plus 1, not minus 1
3. Originally Posted by BabyMilo
sorry, it's plus 1, not minus 1
no problem, I will rework it so you can see the steps
$5^{3t-2}=10^{t+1}$
$\log 5^{3t-2}=\log 10^{t+1}$
$(3t-2)\log 5=(t+1)\log 10$
$3t\log 5-2\log 5=t\log 10+\log 10$
$3t\log 5-t\log 10=\log 10+2\log 5$
$t(3\log 5-\log 10)=\log 10+2\log 5$
$t(\log 5^3-\log 10)=\log 10+\log 5^2$
$t(\log 125-\log 10)=\log 10+\log 25$
$t\log\left(\frac{125}{10}\right)=\log 250$
$t\log 12.5=\log 250$
$t=\frac{\log 250}{\log 12.5}$
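A quick numerical check of the result (a Python sketch added for verification):

```python
import math

t = math.log(250) / math.log(12.5)
print(5 ** (3 * t - 2))  # 1534.89...
print(10 ** (t + 1))     # 1534.89..., so both sides agree
```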
much better
https://events.berkeley.edu/index.php/calendar/sn/?event_ID=112553&date=2017-11-13&tab=all_events | ## Combinatorics Seminar: Frieze patterns
Seminar | November 13 | 12:10-1 p.m. | 939 Evans Hall | Note change in date
Khrystyna Serhiyenko, UC Berkeley
Department of Mathematics
A frieze is a lattice of shifted rows of positive integers satisfying a diamond rule: the determinant of every 2 by 2 matrix formed by the neighboring entries is 1. Friezes were first studied by Conway and Coxeter in the 1970s, but they gained fresh interest in the last decade in relation to cluster algebras. In particular, there exists a bijection between friezes and cluster algebras of type A. We introduce mutations of friezes, that are compatible with cluster mutations, and describe the resulting entries using combinatorics of quiver representations. We will also discuss an important generalization of the classical friezes, called $sl_k$ friezes, where the determinant of every k by k matrix is 1. In a similar manner, we investigate how $sl_k$ friezes can be obtained from cluster algebras associated to Grassmannians Gr(k,n). This is joint work with K. Baur, E. Faber, S. Gratz, and G. Todorov.
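To make the diamond rule concrete, here is a small Python sketch (the function name and the pentagon example are my choices). It grows a frieze row by row by solving the determinant condition a·d − b·c = 1 for the bottom entry of each diamond, where a, d are the horizontal neighbors and b, c the vertical ones:

```python
from fractions import Fraction

def frieze_rows(quiddity, max_rows=20):
    # next_row[i] plays the role of c in the diamond rule a*d - b*c = 1,
    # with a = row[i], d = row[i+1] (horizontal pair) and b = prev_row[i+1]
    # (top entry), so c = (a*d - 1) / b.  Indices are periodic.
    n = len(quiddity)
    rows = [[Fraction(1)] * n, [Fraction(q) for q in quiddity]]
    while len(rows) < max_rows and any(x != 1 for x in rows[-1]):
        prev, cur = rows[-2], rows[-1]
        rows.append([(cur[i] * cur[(i + 1) % n] - 1) / prev[(i + 1) % n]
                     for i in range(n)])
    return rows

# Quiddity row of a triangulated pentagon (number of triangles at each vertex):
for row in frieze_rows([3, 1, 2, 2, 1]):
    print(*row)
# Every entry stays a positive integer and the pattern closes with a row
# of 1s, illustrating the Conway-Coxeter correspondence with triangulations.
```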
http://math.stackexchange.com/questions/114293/evaluate-a-character-sum-sum-limits-r-1p-1-2r-left-fracrp-r | # Evaluate a character sum $\sum\limits_{r = 1}^{(p - 1)/2}r \left( \frac{r}{p} \right) = 0$ for a prime $p \equiv 7 \pmod 8$
I do not know how to prove the following statement:
Let $p \equiv 7 \pmod 8$ be a prime. Then $$\sum\limits_{r = 1}^{\frac{p - 1}{2}}r \left( \frac{r}{p} \right) = 0$$ where $\left( \dfrac{\cdot}{\cdot} \right)$ is the Legendre symbol.
Could anybody help me to answer this question?
Dear ksj03: Welcome to math.stackexchange. Is this a homework problem? If so, please add (homework) tag. Also it is helpful to show us what methods did you try? and how did you fail or get stuck? – user2468 Feb 28 '12 at 4:38
How did you fail? J.D., if OP knew how he failed he would know how to answer! He can only know what fails, but not how... – Patrick Da Silva Feb 28 '12 at 4:39
This is not a homework! I just encountered it as an exercise in a book, then I wanted to solve it. Of course if you can solve it in a minute and would like to tell me, I will appreciate very much. If you do not want to take more time after an unsuccessful try, I still thank you very much. – ksj03 Feb 28 '12 at 5:07
Didn't you - or someone - just ask this question yesterday? – Gerry Myerson Feb 28 '12 at 6:42
Found it - math.stackexchange.com/questions/113954/the-bound-of-valuation. That's not the way to do things here. – Gerry Myerson Feb 28 '12 at 6:56
I now have a completely elementary proof. I thought I'd say a little bit about how I found this. I suspected that the only relevant properties of primes which are $7 \mod 8$ are that $2$ is a QR and $-1$ is not. Just using these two facts gave me tons of relationships between sums of Legendre symbols, but I was getting lost in a pile of relationships without being able to pick out the ones I needed. I cut down the clutter in two ways: (1) I was frequently breaking my sums up into parts. I decided that I was only going to break the set $\{ 1,2, \ldots, p-1 \}$ into four pieces, no more, and pursue that line to the end. If it failed, I'd go back and try more pieces. (2) Since all the relations I was finding were linear, I didn't have to try to fit them into a logical chain. I just had to write down everything I knew, and what I wanted to conclude; it was then a matter of mechanical linear algebra whether or not my givens implied my conclusion.
So, we will partition $\{ 1,2, \ldots, p-1 \}$ into $4$ sets. An element $r$ is in
• $A$ if $r$ is odd and $r < p/2$
• $B$ if $r$ is even and $r < p/2$
• $C$ if $r$ is odd and $r>p/2$
• $D$ if $r$ is even and $r>p/2$.
For $X$ one of the sets $A$, $B$, $C$, $D$, write $S(X)$ for $\sum_{r \in X} \left( \frac{r}{p} \right)$ and write $T(X)$ for $\sum_{r \in X} r \left( \frac{r}{p} \right)$. We have the following relations (exercise!) $$\begin{array}{r@{}c@{}lr@{}c@{}l} S(D)&=&-S(A) & T(D) &=&- (p S(A) - T(A)) \\ S(C)&=&-S(B) & T(C)&=&- (p S(B) - T(B)) \\ S(A)+S(B)&=&S(B)+S(D) &2(T(A)+T(B)) &=& T(B)+T(D) \end{array}$$
The left three equations imply that $(S(A), S(B), S(C), S(D)) = (0,h,-h,0)$ for some $h$. Then the right three imply that $(T(A),T(B),T(C),T(D)) = (x,-x,-x-ph,x)$ for some $x$. None of this required creative thought; I just found the kernel of a $6 \times 8$ matrix.
Your desired conclusion is $T(A)+T(B)=0$, which we see is true. The equality between Dirichlet's expressions is $S(A)+S(B) = -(1/p) (T(A)+T(B)+T(C)+T(D))$, which we also see is true.
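Both patterns are easy to confirm numerically. A short Python sketch (names mine), computing the Legendre symbol via Euler's criterion:

```python
def legendre(r, p):
    # Euler's criterion: r^((p-1)/2) mod p is 1 for residues, p-1 otherwise.
    t = pow(r, (p - 1) // 2, p)
    return -1 if t == p - 1 else t

def pattern(p):
    # Partition {1, ..., p-1} into A, B, C, D as above and return the
    # character sums S(X) and the weighted sums T(X).
    S = {"A": 0, "B": 0, "C": 0, "D": 0}
    T = {"A": 0, "B": 0, "C": 0, "D": 0}
    for r in range(1, p):
        X = ("A" if r % 2 else "B") if 2 * r < p else ("C" if r % 2 else "D")
        S[X] += legendre(r, p)
        T[X] += r * legendre(r, p)
    return S, T

print(pattern(23))  # S has the form (0, h, -h, 0); T the form (x, -x, -x - p*h, x)
```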
Oh, okay. Now I see why they're relevant to the problem but I'll have to think some about all those relations. – anon Feb 29 '12 at 1:04
I think the big display is cut off at the left margin and needs editing. – Gerry Myerson Feb 29 '12 at 3:26
Since you note that you are doing this as an exercise in a book, I imagine you might like a hint more than a proof. Look at the following two formulas, both due to Dirichlet: $$L(1) = - \frac{\pi}{\sqrt{q}} \sum_{n=1}^{q-1} \frac{n}{q} \left( \frac{n}{q} \right).$$ $$L(1) = \frac{\pi}{\sqrt{q} \left(2-\left( \frac{2}{q} \right) \right) } \sum_{n=1}^{(q-1)/2} \left( \frac{n}{q} \right)$$
Setting these two formulas equal to each other and doing some elementary manipulation will yield your result.
Proofs of both of these formulas can be found in Chapter 1 of Davenport's Multiplicative Number Theory, available online through Google Books. I would bet it is possible to eliminate the infinite series and give a direct proof of the equality of the two sides, but I don't see it right now.
Thanks for all the help. I have a proof now. Since $p\equiv7\mod 8$, $(\frac{-1}{p}) = -1$ and $(\frac{2}{p}) = 1$. Thus we have $$\sum\limits_{{1\leq r\leq p - 1}}r(\frac{r}{p}) = \sum\limits_{1\leq r\leq \frac{p - 1}{2}}2r(\frac{r}{p}) - \sum\limits_{1\leq r\leq \frac{p - 1}{2}}(p - 2r)(\frac{r}{p}) = \sum\limits_{1\leq r\leq \frac{p - 1}{2}}(4r - p)(\frac{r}{p})$$ and $$\sum\limits_{1\leq r\leq p - 1}r(\frac{r}{p}) = \sum\limits_{1\leq r\leq\frac{p - 1}{2}}r(\frac{r}{p}) - \sum\limits_{1\leq r\leq\frac{p - 1}{2}}(p - r)(\frac{r}{p}) = \sum\limits_{1\leq r\leq \frac{p - 1}{2}}(2r - p)(\frac{r}{p})$$ hence $\sum\limits_{1\leq r\leq \frac{p - 1}{2}}r(\frac{r}{p}) = 0$.
I added double dollar signs in order to make the formulas more readable. You should consider doing this in the future anytime you have a bulky mathematical expression. – Alex Becker Feb 29 '12 at 6:52
http://math.stackexchange.com/questions/254352/how-can-i-approach-to-prove-the-function-is-one-to-one-given-specific-condition | # how can i approach to prove the function is one-to-one given specific condition?
In the book (Complex Variables; Herb Silverman), there is a proof about univalent functions. My question is how to prove the proposition in the special case that $f(z) = f(z_0)$. I have tried several approaches, such as defining $g(z) := f(z) + az$ and applying the argument to it similarly, but it doesn't work. I would really appreciate a method to solve it.
Let $f(z)$ be analytic in a simply connected domain $D$ and on its boundary, the simple closed contour $C$. If $f(z)$ is one-to-one on $C$, then $f(z)$ is one-to-one in $D$.
Proof. Choose a point $z_0$ in $D$ such that $w_0 = f(z_0) \neq f(z)$ for $z$ on $C$. According to the argument principle, the number of zeros of $f(z)-f(z_0)$ in $D$ is given by $\frac{1}{2\pi}\Delta_C \arg\{f(z) - f(z_0)\}$. By hypothesis, the image of $C$ must be a simple closed contour, which we shall denote by $C'$. Thus the net change in the argument of $w - w_0 = f(z) - f(z_0)$ as $w = f(z)$ traverses the contour $C'$ is either $+2\pi$ or $-2\pi$, according to whether the contour is traversed counterclockwise or clockwise. Since $f(z)$ assumes the value $w_0$ at least once in $D$, we must have $\frac{1}{2\pi}\Delta_C \arg\{f(z) - f(z_0)\} = 1$. That is, $f(z)$ assumes the value $f(z_0)$ exactly once in $D$. This proves the theorem for all points $z_0$ in $D$ at which $f(z) \neq f(z_0)$ when $z$ is on $C$.
If $f(z) = f(z_0)$ at some point on $C$, then the expression $\Delta_C \arg\{f(z) - f(z_0)\}$ is not defined. We leave for the reader the completion of the proof in this special case.
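As an aside, the zero-counting formula above is easy to experiment with numerically. The following sketch (my illustration, not from the book) computes the winding number of $f(z) - w_0$ about the origin along the unit circle for $f(z) = z^2$ and $w_0 = 0$, recovering the two zeros (counted with multiplicity) inside:

```
# the argument principle, numerically: the net change of arg(f(z) - w0)
# along C, divided by 2*pi, counts the zeros of f - w0 inside C.
import numpy as np

theta = np.linspace(0, 2 * np.pi, 4001)
C = np.exp(1j * theta)                # unit circle, counterclockwise
w0 = 0
w = C**2 - w0                         # f(z) = z^2 on the contour
winding = np.sum(np.diff(np.unwrap(np.angle(w)))) / (2 * np.pi)
print(round(winding))                 # 2: the double zero at the origin
```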
Thanks, nameless, for the improved format. – JY. Dec 9 '12 at 11:38
Sorry, but the proof you give doesn't make sense ... are you sure of all the equal signs? Shouldn't there be some $\neq$ ? – wisefool Dec 9 '12 at 13:40
@wisefool You're right. I'm sorry, there were typos. – JY. Dec 9 '12 at 15:41
Whatever the intended proof, if the problem is the case when, for a given $z_0\in C$, there is $z\in D$ such that $f(z)=f(z_0)$, then the solution might be the following: the image $f(D\cup C)$ is a simply connected domain with boundary $f(C)$, and $f$ is holomorphic on $D$, therefore for any open ball $B$ around $z$, $f(B)$ is an open set around $f(z)$ (open mapping theorem). But $f(z)=f(z_0)\in f(C)$ is on the boundary of $f(D\cup C)$, and this is a contradiction: $f(z)\in f(C)=\partial f(D)$, but $f(B)$ is an open set and $f(z)\in f(B)\subseteq f(D)$, therefore $f(z)$ is an inner point of $f(D)$. So the described situation is impossible.
PS: the fact that $f(C)$ is the boundary of the image is a simple application of the argument principle: any point outside the bounded part of the complement of $f(C)$ has winding number $0$, so it has $0$ preimages.
well, $f(C)$ is a one-to-one image of a closed simple curve, so it is a closed simple curve; $f(D)$ is connected and is contained in $\mathbb{C}\setminus f(C)$. By the Jordan curve theorem (which is essentially a winding number argument), the latter is a disconnected set with two connected components, say $A^+$ and $A^-$. Therefore $f(D\cup C)$ is either $A^+\cup f(C)=\overline{A}^+$ or $A^-\cup f(C)=\overline{A}^-$. One of these two is unbounded, but $D\cup C$ is compact, so its continuous image has to be compact, hence $f(C\cup D)$ is the closure of the bounded component of $\mathbb{C}\setminus f(C)$. – wisefool Dec 11 '12 at 13:45
https://electowiki.org/wiki/Maximum_Constrained_Approval_Bucklin | # Maximum Constrained Approval Bucklin
Maximum Constrained Approval Bucklin (MCAB) is a multiwinner method devised by Kristofer Munsterhjelm, based on Bucklin voting. It uses linear optimization to calculate candidate support by assuming earlier surplus transfers were maximally favorable to the candidate in question, and thus reduces the strategic impact of lowering or raising a winning candidate on a ballot.
MCAB works in multiple rounds, each of which sets an implicit approval cutoff for every ballot. The first round considers first preferences as approved, the second round considers first and second preferences, and so on. For each round, the method evaluates every remaining unelected candidate. The unelected candidate with the greatest support is elected if his support is greater than a Droop quota, and a round may elect more than one candidate.
To determine the support of a particular candidate X, MCAB uses a constraint mechanism when counting implicit approvals. To be Droop proportional, MCAB deweights voters who approve of candidates once those candidates are elected. Unlike BTV and STV, however, MCAB does not directly decide which voters are to be deweighted. Instead, it adds a constraint to the following rounds that a Droop quota's worth of voters supporting each elected candidate must be discarded. When counting the support for X in later rounds, it maximizes the possible support for X given those constraints. This is called making the case for X.
Determining which voters to eliminate to maximize the support of a candidate subject to earlier constraints is relatively simple to do by linear programming, but hard to do by hand; MCAB can't be counted entirely by hand.
What ended up as MCAB was initially proposed in 2017[1] and simplified later that year[2]. The method detailed here has been further modified from the EM posts to resist Woodall free riding.
## Determining the support for X
Let $r$ be the rank and round number, $n$ the number of candidates elected so far, and $c_k$ the $k$th elected candidate. Consider unranked candidates to be ranked equal, below every explicitly ranked candidate, i.e. never approved in any round. The linear program for determining the support of candidate X as the $(n+1)$th candidate is:
```
maximize: sum over all voters v: support[v][n+1]

subject to:
    for i = 1 ... n+1:                                             (1)
        (sum over all voters v: support[v][i]) > Droop quota

    for all voters v,
        for i = 1 ... n+1:                                         (2)
            support[v][i] >= 0   if voter v ranks c_i at or higher
                                 than rank r
            support[v][i] = 0    otherwise

    for all voters v:                                              (3)
        (sum over i = 1 ... n+1: support[v][i]) <= v's initial weight
```
where $c_{n+1}$ is provisionally defined as X for the purpose of determining X's support.
The three clauses do the following:
1. imposes the Droop constraint: any elected candidate $c_i$ must have more than a Droop quota's worth of approvals according to the implicit approval cutoff for round $r$.
2. defines support: $c_i$'s support can only come from voters who rank $c_i$ at or above rank $r$.
3. defines each voter's budget: no voter can spread more support across the candidates than his ballot's initial weight. The initial weight is 1 per voter for ordinary elections, or some other value in case of a weighted vote.
If the linear program is infeasible, then there's no way for X to obtain support exceeding a Droop quota, and so X is disqualified from being elected in that round. Among the remaining candidates, the candidate with the greatest support is elected.
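To make the optimization concrete, here is a minimal sketch of this linear program using SciPy's linprog. The function name max_support, the representation of ballots as (ranking, count) pairs, and the handling of the strict "> Droop quota" by checking the optimum afterwards are all my own choices, not part of MCAB's definition.

```
# a sketch, not a reference implementation: linprog minimizes, so we
# maximize X's support by minimizing its negation.
from scipy.optimize import linprog

def max_support(ballots, elected, x, r, quota):
    """Maximum possible support for candidate x at rank/round r.

    Returns the optimum, or None if the LP is infeasible (x is
    disqualified this round). Callers should still check that the
    returned value strictly exceeds the quota.
    """
    cands = list(elected) + [x]              # c_1 ... c_n, c_{n+1} = X
    nb, nc = len(ballots), len(cands)

    def approves(ranking, c):                # ranked at or above rank r?
        return c in ranking[:r]

    def idx(b, i):                           # flatten (ballot, candidate)
        return b * nc + i

    cost = [0.0] * (nb * nc)
    for b in range(nb):
        cost[idx(b, nc - 1)] = -1.0          # objective: X's total support

    A_ub, b_ub = [], []
    for i in range(nc):                      # (1) Droop constraint
        row = [0.0] * (nb * nc)
        for b in range(nb):
            row[idx(b, i)] = -1.0
        A_ub.append(row)
        b_ub.append(-quota)
    for b, (ranking, count) in enumerate(ballots):
        row = [0.0] * (nb * nc)              # (3) per-ballot budget
        for i in range(nc):
            row[idx(b, i)] = 1.0
        A_ub.append(row)
        b_ub.append(float(count))

    bounds = [(0, None) if approves(ranking, cands[i]) else (0, 0)
              for ranking, _ in ballots for i in range(nc)]   # (2)

    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return -res.fun if res.success else None
```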
## Procedure
With the linear program for determining the support of X defined, the MCAB procedure is this:
1. Set the round number to 1.
2. Mark every unelected candidate as qualified for the round.
3. For every unelected, undisqualified candidate X:
1. Solve the linear program to find X's support, with r equal to the round number.
4. If every unelected candidate is disqualified by the linear program:
1. If all ranks have been considered, the method is done.
2. Otherwise, increment the round number and go to 2.
5. Otherwise, elect the candidate with the greatest support and go to 3.
As a shortcut, the procedure can be stopped once every seat has been filled, since every remaining candidate will be disqualified in every round from that point on.
## Example
The vote management example from Wikipedia's article on Schulze STV, https://en.wikipedia.org/wiki/Schulze_STV#Scenario
### Without vote management
The unmanaged ballot set is
• 12: A>B>C
• 38: A>C>B
• 13: C>A>B
• 27: B
The Droop quota for two seats is 30.
#### First round, r = 1
A has 50 votes and is elected. No other candidate passes the Droop quota, and as there are no equal ranks, no A-first surplus can contribute to getting anyone else elected at rank one, so the linear program will mark all the others as disqualified afterwards. The combined surplus of the A-first voters comes out to 50 - 30 = 20, but we don't know which particular ballots will be eliminated.
#### Second round, r = 2
Making the case for B: The optimum assignment is to allocate 12 of the 20 remaining votes to A>B>C, and then eliminate 8 of the A>C>B ballots. This gives B a support score of 27 (from the B-first ballots) + 12 (from the A>B>C ballots) for a total of 39.
Making the case for C: The optimum assignment is to allocate all 20 remaining votes to A>C>B. This gives C a score of 20 + 13 (from the C>A ballots) = 33.
Both candidates have exceeded the Droop quota, but B has greater support, so B is elected. After this, technically speaking, C gets another shot.
Making the case for C, again: of the 20 remaining votes from electing A, 3 must go towards electing B, so that 27 + 3 >= 30. That leaves 30 for C, which is not enough to clear the Droop quota, since support must strictly exceed it. So C is disqualified.
Now every remaining candidate (namely C) is disqualified and the procedure is over. The winners are A and B.
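For what it's worth, the round-two numbers above can be reproduced with the linprog sketch from the previous section (using the hypothetical max_support name and ballot format assumed there):

```
ballots = [(("A", "B", "C"), 12), (("A", "C", "B"), 38),
           (("C", "A", "B"), 13), (("B",), 27)]
print(max_support(ballots, ["A"], "B", 2, quota=30))        # 39.0
print(max_support(ballots, ["A"], "C", 2, quota=30))        # 33.0
print(max_support(ballots, ["A", "B"], "C", 2, quota=30))   # 30.0, which is
# not strictly above the quota, so C is disqualified
```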
### With vote management
The vote-managed ballot set is
• 12: A>B>C
• 26: A>C>B
• 25: C>A>B (now includes some dishonest A>C>B voters)
• 27: B
#### First round, r = 1
As above, A is elected and then everybody else is disqualified. The surplus is 8 (38 - 30).
#### Second round, r = 2
Making the case for B: The optimum assignment is to allocate all 8 surplus votes to A>B>C, and then eliminate 4 A>B>C ballots and every A>C>B ballot. Thus B's max support is 27 + 8 = 35.
Making the case for C: The optimum assignment is to allocate all 8 surplus votes to A>C>B and eliminate the remaining A>C>B ballots, as well as every A>B>C ballot. Doing so gives a support of 25 (from the C>A ballots) + 8 = 33.
Both candidates have exceeded the Droop quota, but B has greater support and so is elected.
The case for C again goes as above. Because two Droop quotas have been elected, there are not enough votes left to get anyone else above the Droop quota.
Since A and B were elected in both cases, the vote management failed.
## Criterion compliances
MCAB passes some criteria and fails others; two notable failures are examined below.
### Weak invulnerability to Hylland free riding
Suppose B is elected in round p and that the method later arrives at round q. A voter who ranks B ahead of C will contribute to C's support when the method makes the case for C whether or not he had ranked B at all, unless B would not have been elected in any earlier round without his support. In that respect, Hylland free riding has no impact on MCAB. However, suppose there is another voter who ranks B ahead of D. When making the case for D, MCAB needs to allocate a Droop quota of votes towards B, since B was elected earlier. The B>C voter makes himself available to cover B's deficit when MCAB makes the case for someone other than C. Had he not voted for B, he would not be available in this way, and perhaps MCAB would have needed to exclude the B>D voter instead, electing C instead of D. So while Hylland free riding is riskier in MCAB than in BTV, it can still pay off, and thus MCAB fails weak invulnerability to Hylland free riding.
#### Example
Without vote management:
• 12: A>B>C>D
• 38: A>C>D>B
• 13: C>A>D>B
• 27: B
The first two ranks are the same as in the Schulze STV example, so A and B are elected.
With vote management:
• 12: A>B>C>D
• 22: A>C>D>B
• 29: C>D>A>B (dishonest A>C>D>B and C>A>D>B voters)
• 27: B
In the first round, A wins. In the second round, the optimal allocation when making the case for B is to remove the 22 A>C ballots and 8 A>B ballots; B's score is thus 4 + 27 = 31. C's score is unchanged at 33, so the free riding pays off: C wins.
### Monotonicity
Like BTV, MCAB fails the monotonicity criterion due to a lookahead problem[3]. However, MCAB passes the two criteria above as long as the candidate to elect in a round is chosen (by some method) from the set of candidates with above-Droop-quota support for that round. Thus it is possible that a variant which uses a yet-unknown lookahead criterion, instead of electing the candidate with the greatest support, could pass monotonicity.
## References
1. Munsterhjelm, K. (2017-01-06). "Bucklin multiwinner method". Election-methods mailing list archives.
2. Munsterhjelm, K. (2017-09-15). "A simpler vote management-resistant Bucklin LP". Election-methods mailing list archives.
3. Munsterhjelm, K. (2018-02-18). "Path dependence monotonicity failure in BTV". Election-methods mailing list archives.
http://physics.stackexchange.com/questions/28275/proving-that-the-weak-hypercharge-gauge-field-is-not-a | # Proving that the weak hypercharge gauge field is not A
Under the electroweak gauge group $SU(2)_L\times U(1)_Y$ one identifies the 4 gauge fields $W^+, W^-, W^0, B$. After symmetry breaking, $W^0$ and $B$ mix to give the observed fields $Z^0$ and $A$. Is there an intuitive argument showing immediately that $A$ cannot be identified with $B$?
Yes--- the electric charge is unbroken, from the zero mass of the photon, so B would have to be unbroken. But B commutes with all the generators of the SU(2), so all the electroweak doublets would have the same electric charge. But this is impossible, as the only things in a family with a given electric charge are unique--- they can't make an SU(2) doublet with anything else.
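To make the doublet-charge point concrete, here is a tiny check (my illustration; it assumes the standard convention $Q = T_3 + Y/2$ and the textbook hypercharge assignments):

```
# within any SU(2) doublet the two members have T3 = +1/2 and -1/2,
# so their electric charges Q = T3 + Y/2 always differ by 1; a photon
# equal to B would instead force equal charges across each doublet.
doublets = {               # name -> hypercharge Y
    "lepton doublet (nu_e, e)_L": -1,
    "quark doublet (u, d)_L": 1 / 3,
    "Higgs doublet (phi+, phi0)": 1,
}
for name, Y in doublets.items():
    charges = [t3 + Y / 2 for t3 in (+0.5, -0.5)]
    print(name, "->", charges)    # two *different* charges every time
```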
https://www.physicsforums.com/threads/me-a-vector-problem.286624/ | # Me(a vector problem)
1. Jan 22, 2009
plz help me(a vector problem)
1. The problem statement, all variables and given/known data
If $\vec{A}$ has constant magnitude, show that $\vec{A}$ and $d\vec{A}/dt$ are perpendicular, provided that $d\vec{A}/dt \neq 0$.
Also give one physical example of this problem.
Please help me with this problem.
Thanks.
2. Relevant equations
3. The attempt at a solution
2. Jan 22, 2009
### chrisk
This can be thought of as a circular motion problem with constant radius. Since the radius vector has a constant magnitude (constant radius), then only the position of the vector changes with time. Draw two vectors with magnitude A, one displaced an amount delta theta from the other. Find the vector difference which will be delta A, then take the limit of delta A divided by delta theta as delta theta goes to zero.
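Here is a quick numerical illustration of this hint (mine, not a solution to the exercise): for uniform circular motion the position vector has constant magnitude, and its time derivative is perpendicular to it at every instant.

```
# A(t) = (cos t, sin t) has |A| = 1 for all t; check A . dA/dt = 0
import numpy as np

t = np.linspace(0, 2 * np.pi, 1000)
A = np.stack([np.cos(t), np.sin(t)])
dA = np.gradient(A, t, axis=1)            # numerical time derivative
dots = np.einsum('ij,ij->j', A, dA)       # A . dA/dt at each sample
print(np.max(np.abs(dots)))               # ~0, up to discretization error
```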
3. Jan 22, 2009
I didn't get you!!!
Hey Chris, buddy, can you please solve it for me, please...
4. Jan 22, 2009
### Staff: Mentor
No, we will not solve this for you. You have been given a great hint, and now the rest is up to you.
Please re-read the Rules link at the top of the page, especially the part about how you must do the bulk of the work on homework/coursework problems.
Show your work, and if you have a specific question, we can offer hints and tutorial help. We do not do your homework for you.
5. Jan 22, 2009
ooops sorry !!
I didn't know the rules..
It won't happen next time.
6. Jan 22, 2009
### Dick
The magnitude of A squared is A.A (dot product). Is that a help?
https://mathoverflow.net/questions/206749/for-what-g-is-repds-3-ad-grothendieck-equivalent-to-repg/206756#206756 | # For what $G$ is $Rep(D(S_3))_{ad}$ Grothendieck equivalent to $Rep(G)$?
Given a fusion category $\mathcal C$, the Grothendieck Ring $K_0(\mathcal C)$ is the $\mathbb Z$-based ring whose basis elements correspond to isomorphism classes of simple objects and whose multiplication is given by
$$X\times Y = \sum_Z N_{XY}^Z\, Z, \qquad N_{XY}^Z=\vert \mathcal C(X\otimes Y,Z)\vert$$
Two fusion categories $\mathcal C$ and $\mathcal D$ are said to be Grothendieck equivalent if $K_0(\mathcal C)\cong K_0(\mathcal D)$.
Given $\mathcal C$, the adjoint subcategory $\mathcal C_{ad}$ is the full fusion subcategory of $\mathcal C$ generated by $X\otimes X^*$, where $X$ is simple.
$Rep(D(S_3))$ has eight simple objects $\{1,\epsilon, \phi_{i=1,\ldots,4},\psi_\pm\}$ and $Rep(D(S_3))_{ad}$ is the subcategory generated by $\{1,\epsilon,\phi_{i=1,\ldots,4}\}$. Its Grothendieck ring is commutative and determined by
\begin{align*} \epsilon \otimes \epsilon &\cong 1 \\ \epsilon \otimes \phi_i &\cong \phi_i \\ \phi_i \otimes \phi_i &\cong 1 \oplus \epsilon \oplus \phi_i \\ \phi_i \otimes \phi_j &\cong \phi_k \oplus \phi_l & i\neq j \neq k \neq l \\ \end{align*}
$Rep(D(S_3))$ is modular, so $Rep(D(S_3))_{ad}$ is braided (properly premodular). $Rep(D(S_3))_{ad}$ also admits a braiding with S-matrix
$$\left( \begin{array}{cccccc} 1 & 1 & 2 & 2 & 2 & 2 \\ 1 & 1 & 2 & 2 & 2 & 2 \\ 2 & 2 & 4 & 4 & 4 & 4 \\ 2 & 2 & 4 & 4 & 4 & 4 \\ 2 & 2 & 4 & 4 & 4 & 4 \\ 2 & 2 & 4 & 4 & 4 & 4 \\ \end{array}\right).$$
This braiding is symmetric $(s_{ab}=d_a d_b)$ and so $Rep(D(S_3))_{ad}$ is equivalent as a fusion category to $Rep(G)$ for some finite group $G$. What is this $G$?
• Can you tell us the dimensions of the simples? That would narrow the search considerably. I can't extract that information at a glance from what you've written. May 15, 2015 at 22:43
• They are {1,1,2,2,2,2}. May 15, 2015 at 22:44
• I know that this is not $D_9$. It has objects of the same dimension, but different tensor structure on the two dimensional objects. There, only one object gives a $Rep(S_3)$ subcategory. May 15, 2015 at 22:46
• In that case it looks like the only possible candidate among the groups of order 18 (groupprops.subwiki.org/wiki/Groups_of_order_18) is $(C_3 \times C_3) \rtimes C_2$, with $C_2$ acting by inverse. May 15, 2015 at 23:13
Despite my love of the finite group game, let me give an argument that doesn't use the classification of groups of order 18. The 1-dimensional objects correspond to representations of the abelianization. So your group must have abelianization $C_2$, and so its commutator subgroup must be a group of size 9. Note that this splits as a semidirect product because there's a 2-sylow subgroup.
A full tensor subcategory of $\mathrm{Rep}(G)$ which is closed under summands must be of the form $\mathrm{Rep}(G/N)$. (The proof of this is roughly the same as the proof that faithful representations tensor generate.) Thus your group must have $S_3$ as a quotient in four different ways. Hence the commutator subgroup must be elementary abelian and the $C_2$ must act on each of the factors by inversion (and thus on the whole thing by inversion).
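For a quick computational corroboration (my own addition, not from the thread), one can realize $G = (C_3 \times C_3) \rtimes C_2$, with $C_2$ acting by inversion, and confirm that it has $6$ conjugacy classes and abelianization of order $2$; since $18 = 1 + 1 + 4\cdot 2^2$, the irreducible dimensions are forced to be $\{1,1,2,2,2,2\}$, matching the simple objects above.

```
# G = (C3 x C3) : C2 with C2 acting by inversion; elements (a, b, s)
from itertools import product

def mul(g, h):
    (a, b, s), (c, d, t) = g, h
    sign = -1 if s else 1                       # s = 1 acts by inversion
    return ((a + sign * c) % 3, (b + sign * d) % 3, (s + t) % 2)

def inv(g):
    a, b, s = g
    return g if s else ((-a) % 3, (-b) % 3, 0)  # s = 1 gives involutions

G = list(product(range(3), range(3), range(2)))

# number of conjugacy classes = number of irreducible representations
classes = {frozenset(mul(mul(g, x), inv(g)) for g in G) for x in G}

# the set of commutators here is exactly C3 x C3, so |G^ab| = 2
commutators = {mul(mul(g, h), inv(mul(h, g))) for g in G for h in G}

print(len(G), len(classes), len(G) // len(commutators))   # 18 6 2
```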
http://www.contrib.andrew.cmu.edu/~ryanod/?cat=88 | ## Chapter 6 notes
The ${\mathbb F}_2$-polynomial representation of a boolean function $f$ is often called its algebraic normal form. It seems to have first been explicitly introduced by Zhegalkin in 1927 [Zhe27].
[...]
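As an illustration of the algebraic normal form (my sketch, not from the notes), the ${\mathbb F}_2$-polynomial of a function can be computed from its truth table with the standard Möbius/XOR transform:

```
# compute the ANF coefficients: coeffs[m] is the coefficient of the
# monomial prod_{i in m} x_i, where m is read as a bitmask.
def anf(truth_table):
    coeffs = list(truth_table)        # 0/1 values, length 2**n
    step = 1
    while step < len(coeffs):
        for i in range(len(coeffs)):
            if i & step:
                coeffs[i] ^= coeffs[i ^ step]
        step <<= 1
    return coeffs

# AND(x0, x1) has ANF x0*x1: the only nonzero coefficient is at mask 0b11
print(anf([0, 0, 0, 1]))   # [0, 0, 0, 1]
```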
## §6.5: Highlight: Fooling ${\mathbb F}_2$-polynomials
Recall that a density $\varphi$ is said to be $\epsilon$-biased if its correlation with every ${\mathbb F}_2$-linear function $f$ is at most $\epsilon$ in magnitude. In the lingo of pseudorandomness, one says that $\varphi$ fools the class of ${\mathbb F}_2$-linear functions:
[...]
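For a toy illustration of this definition (my sketch, not from the notes), the correlation of a small-support density with every nonempty parity $\chi_S$ can be computed directly:

```
# phi is eps-biased if |E_{x ~ phi} chi_S(x)| <= eps for every nonempty S
from itertools import product

def biases(phi, n):
    """phi: dict mapping n-bit tuples to probabilities."""
    out = {}
    for S in range(1, 2 ** n):                    # nonempty parity sets
        corr = 0.0
        for x, p in phi.items():
            dot = sum(b for i, b in enumerate(x) if (S >> i) & 1) % 2
            corr += p * (-1) ** dot               # E[chi_S] under phi
        out[S] = corr
    return out

# the uniform density on {0,1}^2 is 0-biased:
uniform = {x: 0.25 for x in product((0, 1), repeat=2)}
print(max(abs(v) for v in biases(uniform, 2).values()))   # 0.0
```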
## §6.4: Applications in learning and testing
In this section we describe some applications of our study of pseudorandomness.
[...]
## §6.3: Constructions of various pseudorandom functions
In this section we give some constructions of boolean functions with strong pseudorandomness properties.
[...]
## §6.2: ${\mathbb F}_2$-polynomials

We began our study of boolean functions in Chapter 1.2 by considering their polynomial representations over the real field. In this section we take a brief look at their polynomial representations over the field ${\mathbb F}_2$, with $\mathsf{False}$, $\mathsf{True}$ being represented by $0, 1 \in {\mathbb F}_2$ as usual. Note that in the field ${\mathbb F}_2$ [...]

## §6.1: Notions of pseudorandomness

The most obvious spectral property of a truly random function $\boldsymbol{f} : \{-1,1\}^n \to \{-1,1\}$ is that all of its Fourier coefficients are very small (as we saw in Exercise 5.8). [...]

## Chapter 6: Pseudorandomness and ${\mathbb F}_2$-polynomials
In this chapter we discuss various notions of pseudorandomness for boolean functions; by this we mean properties of a fixed boolean function which are in some way characteristic of randomly chosen functions. We will see some deterministic constructions of pseudorandom probability density functions with small support; these have algorithmic application in the field of derandomization. [...]
http://jdh.hamkins.org/tag/generic-multiverse/ | # Pseudo-countable models
• J. D. Hamkins, “Pseudo-countable models,” mathematics arXiv, 2022.
[Bibtex]
@ARTICLE{Hamkins:Pseudo-countable-models,
author = {Joel David Hamkins},
title = {Pseudo-countable models},
journal = {mathematics arXiv},
year = {2022},
note = {manuscript under review},
keywords = {under-review},
eprint = {2210.04838},
archivePrefix = {arXiv},
primaryClass = {math.LO},
url = {http://jdh.hamkins.org/pseudo-countable-models},
}
Abstract. Every mathematical structure has an elementary extension to a pseudo-countable structure, one that is seen as countable inside a suitable class model of set theory, even though it may actually be uncountable. This observation, proved easily with the Boolean ultrapower theorem, enables a sweeping generalization of results concerning countable models to a rich realm of uncountable models. The Barwise extension theorem, for example, holds amongst the pseudo-countable models—every pseudo-countable model of ZF admits an end extension to a model of ZFC+V=L. Indeed, the class of pseudo-countable models is a rich multiverse of set-theoretic worlds, containing elementary extensions of any given model of set theory and closed under forcing extensions and interpreted models, while simultaneously fulfilling the Barwise extension theorem, the Keisler-Morley theorem, the resurrection theorem, and the universal finite sequence theorem, among others.
# Workshop on the Set-theoretic Multiverse, Konstanz, September 2022
Masterclass of “The set-theoretic multiverse” ten years after
Focused on mathematical and philosophical aspects of the set-theoretic multiverse and the pluralist debate in the philosophy of set theory, this workshop will have a master class on potentialism, a series of several speakers, and a panel discussion. To be held 21-22 September 2022 at the University of Konstanz, Germany. (Contact organizers for Zoom access.)
I shall make several contributions to the meeting.
### Master class tutorial on potentialism
I shall give a master class tutorial on potentialism, an introduction to the general theory of potentialism that has been emerging in recent work, often developed as a part of research on set-theoretic pluralism, but just as often branching out to broader applications. Although the debate between potentialism and actualism in the philosophy of mathematics goes back to Aristotle, recent work divorces the potentialist idea from its connection with infinity and undertakes a more general analysis of possible mathematical universes of any kind. Any collection of mathematical structures forms a potentialist system when equipped with an accessibility relation (refining the submodel relation), and one can define the modal operators of possibility $\Diamond\varphi$, true at a world when $\varphi$ is true in some larger world, and necessity $\Box\varphi$, true in a world when $\varphi$ is true in all larger worlds. The project is to understand the structures more deeply by understanding their modal nature in the context of a potentialist system. The rise of modal model theory investigates very general instances of potentialist systems, for sets, graphs, fields, and so on. Potentialism for the models of arithmetic often connects with deeply philosophical ideas on ultrafinitism. And the spectrum of potentialist systems for the models of set theory reveals fundamentally different conceptions of set-theoretic pluralism and possibility.
### The multiverse view on the axiom of constructibility
I shall give a talk on the multiverse perspective on the axiom of constructibility. Set theorists often look down upon the axiom of constructibility V=L as limiting, in light of the fact that all the stronger large cardinals are inconsistent with this axiom, and furthermore the axiom expresses a minimizing property, since $L$ is the smallest model of ZFC with its ordinals. Such views, I argue, stem from a conception of the ordinals as absolutely completed. A potentialist conception of the set-theoretic universe reveals a sense in which every set-theoretic universe might be extended (in part upward) to a model of V=L. In light of such a perspective, the limiting nature of the axiom of constructibility tends to fall away.
### Panel discussion: The multiverse view—challenges for the next ten years
This will be a panel discussion on the set-theoretic multiverse, with panelists including myself, Carolin Antos-Kuby, Giorgio Venturi, and perhaps others.
# The ontology of mathematics, Japan Association for the Philosophy of Science, June 2022
I shall give the Invited Lecture for the Annual Meeting (online) of the Japanese Association for the Philosophy of Science, 18-19 June 2022.
Abstract. What is the nature of mathematical ontology—what does it mean to make existence assertions in mathematics? Is there an ideal mathematical realm, a mathematical universe, that those assertions are about? Perhaps there is more than one. Does every mathematical assertion ultimately have a definitive truth value? I shall lay out some of the back-and-forth in what is currently a vigorous debate taking place in the philosophy of set theory concerning pluralism in the set-theoretic foundations, concerning whether there is just one set-theoretic universe underlying our mathematical claims or whether there is a diversity of possible set-theoretic conceptions.
# Pluralism in the ontology of mathematics, MaMuPhi, Paris, February 2022
This will be a talk for the conference L’indépendance mathématique et ses limites logiques, an instance of the MAMUPHI seminar (mathématiques – musique – philosophie), organized by Mirna Džamonja, 12 February 2022. Most talks will be in-person in Paris, but my talk will be on Zoom via https://u-pec-fr.zoom.us/j/86448599486 at 4:30 pm CET (10:30 am EST).
Abstract: What is the nature of mathematical ontology—what does it mean to make existence assertions in mathematics? Is there an ideal mathematical realm, a mathematical universe, that those assertions are about? Perhaps there is more than one. Does every mathematical assertion ultimately have a definitive truth value? I shall lay out some of the back-and-forth in what is currently a vigorous debate taking place in the philosophy of set theory concerning pluralism in the set-theoretic foundations, concerning whether there is just one set-theoretic universe underlying our mathematical claims or whether there is a diversity of possible set-theoretic conceptions.
# Set-theoretic blockchains
• M. E. Habič, J. D. Hamkins, L. D. Klausner, J. Verner, and K. J. Williams, “Set-theoretic blockchains,” Archive for Mathematical Logic, 2019.
[Bibtex]
@ARTICLE{HabicHamkinsKlausnerVernerWilliams2018:Set-theoretic-blockchains,
author = {Miha E. Habič and Joel David Hamkins and Lukas Daniel Klausner and Jonathan Verner and Kameryn J. Williams},
title = {Set-theoretic blockchains},
journal = {Archive for Mathematical Logic},
year = {2019},
month = {Mar},
day = {26},
abstract = {Given a countable model of set theory, we study the structure of its generic multiverse, the collection of its forcing extensions and ground models, ordered by inclusion. Mostowski showed that any finite poset embeds into the generic multiverse while preserving the nonexistence of upper bounds. We obtain several improvements of his result, using what we call the blockchain construction to build generic objects with varying degrees of mutual genericity. The method accommodates certain infinite posets, and we can realize these embeddings via a wide variety of forcing notions, while providing control over lower bounds as well. We also give a generalization to class forcing in the context of second-order set theory, and exhibit some further structure in the generic multiverse, such as the existence of exact pairs.},
issn = {1432-0665},
doi = {10.1007/s00153-019-00672-z},
eprint = {1808.01509},
archivePrefix = {arXiv},
primaryClass = {math.LO},
url = {http://wp.me/p5M0LV-1M8},
}
Abstract. Given a countable model of set theory, we study the structure of its generic multiverse, the collection of its forcing extensions and ground models, ordered by inclusion. Mostowski showed that any finite poset embeds into the generic multiverse while preserving the nonexistence of upper bounds. We obtain several improvements of his result, using what we call the blockchain construction to build generic objects with varying degrees of mutual genericity. The method accommodates certain infinite posets, and we can realize these embeddings via a wide variety of forcing notions, while providing control over lower bounds as well. We also give a generalization to class forcing in the context of second-order set theory, and exhibit some further structure in the generic multiverse, such as the existence of exact pairs.
# A question in set-theoretic geology: if $M[G][K]=M[H][K],$ then can we conclude $M[G]=M[H]$?
I was recently asked this interesting question on set-theoretic geology by Iian Smythe, a set-theory post-doc at Rutgers University; the problem arose in the context of one of his current research projects.
Question. Assume that two product forcing extensions are the same $$M[G][K]=M[H][K],$$ where $M[G]$ and $M[H]$ are forcing extensions of $M$ by the same forcing notion $\mathbb{P}$, and $K\subset\mathbb{Q}\in M$ is both $M[G]$ and $M[H]$-generic with respect to this further forcing $\mathbb{Q}$. Can we conclude that $$M[G]=M[H]\ ?$$ Can we make this conclusion at least in the special case that $\mathbb{P}$ is adding a Cohen real and $\mathbb{Q}$ is collapsing the continuum?
It seems natural to hope for a positive answer, because we are aware of many such situations that arise with forcing, where indeed $M[G]=M[H]$. Nevertheless, the answer is negative. Indeed, we cannot legitimately make this conclusion even when both steps of forcing are adding merely a Cohen real. And such a counterexample implies that there is a counterexample of the type mentioned in the question, simply by performing further collapse forcing.
Theorem. For any countable model $M$ of set theory, there are $M$-generic Cohen reals $c$, $d$ and $e$, such that
1. The Cohen reals $c$ and $e$ are mutually generic over $M$.
2. The Cohen reals $d$ and $e$ are mutually generic over $M$.
3. These two pairs produce the same forcing extension $M[c][e]=M[d][e]$.
4. But the intermediate models are different $M[c]\neq M[d]$.
Proof. Fix $M$, and let $c$ and $e$ be any two mutually generic Cohen reals over $M$. Let us view them as infinite binary sequences, that is, as elements of Cantor space. In the extension $M[c][e]$, let $d=c+e \mod 2$, in each coordinate. That is, we get $d$ from $c$ by flipping bits, but only on coordinates that are $1$ in $e$. This is the same as applying a bit-flipping automorphism of the forcing, which is available in $M[e]$, but not in $M$. Since $c$ is $M[e]$-generic by reversing the order of forcing, it follows that $d$ also is $M[e]$-generic, since the automorphism is in $M[e]$. Thus, $d$ and $e$ are mutually generic over $M$. Further, $M[c][e]=M[d][e]$, because $M[e][c]=M[e][d]$, as $c$ and $d$ were isomorphic generic filters by an isomorphism in $M[e]$. But finally, $M[c]$ and $M[d]$ are not the same, because from $c$ and $d$ together we can construct $e$, because we can tell exactly which bits were flipped. $\Box$
If one now follows the $e$ forcing with collapse forcing, one achieves a counterexample model of the type mentioned in the question, namely, with $M[c][e*K]=M[d][e*K]$, but $M[c]\neq M[d]$.
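The heart of the construction is just bitwise XOR; here is a finite-string cartoon of it (my illustration, of course no substitute for the genericity argument):

```
# d = c XOR e flips the bits of c exactly where e has a 1,
# and c XOR d recovers e: "from c and d together we can construct e"
import secrets

c = secrets.randbits(16)
e = secrets.randbits(16)
d = c ^ e
assert c ^ d == e
print(format(c, '016b'), format(d, '016b'), format(e, '016b'))
```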
I have a feeling that my co-authors on a current paper in progress, Set-theoretic blockchains, on the topic of non-amalgamation in the generic multiverse, will tell me that the argument above is an instance of some of the theorems we prove in the latter part of that paper. (Miha, please tell me in the comments if you see this, or tell me where I have seen this argument before; I think I have made this argument, or perhaps seen it, before.) The paper is
• M. E. Habič, J. D. Hamkins, L. D. Klausner, J. Verner, and K. J. Williams, “Set-theoretic blockchains,” Archive for Mathematical Logic, 2019.
# Nonamalgamation in the Cohen generic multiverse, CUNY Logic Workshop, March 2018
This will be a talk for the CUNY Logic Workshop on March 23, 2018, GC 6417 2-3:30pm.
Abstract. Consider a countable model of set theory $M$ in the context of all its successive forcing extensions and grounds. This generic multiverse has long been known to exhibit instances of nonamalgamation: one can have two extensions $M[c]$ and $M[d]$, both adding merely a generic Cohen real, which have no further extension in common. In this talk, I shall describe new joint work that illuminates the extent of non-amalgamation: every finite partial order (and more) embeds into the generic multiverse over any given model in a way that preserves amalgamability and non-amalgamability. The proof uses the set-theoretic blockchain argument (pictured above), which has affinities with constructions in computability theory in the Turing degrees. Other arguments, which also resemble counterparts in computability theory, show that the generic multiverse exhibits the exact pair phenomenon for increasing chains. This is joint work with Miha Habič, myself, Lukas Daniel Klausner and Jonathan Verner. The paper will be available this Spring.
# Upward closure and amalgamation in the generic multiverse of a countable model of set theory
• J. D. Hamkins, “Upward closure and amalgamation in the generic multiverse of a countable model of set theory,” RIMS Kyôkyûroku, p. 17–31, 2016.
[Bibtex]
@ARTICLE{Hamkins2016:UpwardClosureAndAmalgamationInTheGenericMultiverse,
author = {Joel David Hamkins},
title = {Upward closure and amalgamation in the generic multiverse of a countable model of set theory},
journal = {RIMS {Ky\^oky\^uroku}},
year = {2016},
pages = {17--31},
newton = {ni15066},
url = {http://wp.me/p5M0LV-1cv},
eprint = {1511.01074},
archivePrefix = {arXiv},
primaryClass = {math.LO},
issn = {1880-2818},
}
Abstract. I prove several theorems concerning upward closure and amalgamation in the generic multiverse of a countable transitive model of set theory. Every such model $W$ has forcing extensions $W[c]$ and $W[d]$ by adding a Cohen real, which cannot be amalgamated in any further extension, but some nontrivial forcing notions have all their extensions amalgamable. An increasing chain $W[G_0]\subseteq W[G_1]\subseteq\cdots$ has an upper bound $W[H]$ if and only if the forcing had uniformly bounded essential size in $W$. Every chain $W\subseteq W[c_0]\subseteq W[c_1]\subseteq \cdots$ of extensions adding Cohen reals is bounded above by $W[d]$ for some $W$-generic Cohen real $d$.
This article is based upon a talk I gave at the conference on Recent Developments in Axiomatic Set Theory at the Research Institute for Mathematical Sciences (RIMS) at Kyoto University, Japan in September, 2015, and I am extremely grateful to my Japanese hosts, especially Toshimichi Usuba, for supporting my research visit there and also at the CTFM conference at Tokyo Institute of Technology just preceding it. This article includes material adapted from section 2 of Set-theoretic geology, joint work with G. Fuchs, myself and J. Reitz, and also includes a theorem that was proved in a series of conversations I had with Giorgio Venturi at the Young Set Theory Workshop 2011 in Bonn, continuing at the London 2011 summer school on set theory at Birkbeck University London.
# Being HOD-of-a-set is invariant throughout the generic multiverse
$\newcommand\HOD{\text{HOD}}$The axiom $V=\HOD$, introduced by Gödel, asserts that every set is ordinal definable. This axiom has a subtler foundational aspect than might at first be expected. The reason is that the general concept of “object $x$ is definable using parameter $p$” is not in general first-order expressible in set theory; it is of course a second-order property, which makes sense only relative to a truth predicate, and by Tarski’s theorem, we can have no first-order definable truth predicate. Thus, the phrase “definable using ordinal parameters” is not directly meaningful in the first-order language of set theory without further qualification or explanation. Fortunately, however, it is a remarkable fact that when we allow definitions to use arbitrary ordinal parameters, as we do with $\HOD$, then we can in fact make such qualifications in such a way that the axiom becomes first-order expressible in set theory. Specifically, we say officially that $V=\HOD$ holds, if for every set $x$, there is an ordinal $\theta$ with $x\in V_\theta$, for which $x$ is definable by some formula $\psi(x)$ in the structure $\langle V_\theta,{\in}\rangle$ using ordinal parameters. Since $V_\theta$ is a set, we may freely make reference to first-order truth in $V_\theta$ without requiring any truth predicate in $V$. Certainly any such $x$ as this is also ordinal-definable in $V$, since we may use $\theta$ and the Gödel-code of $\psi$ also as parameters, and note that $x$ is the unique object such that it is in $V_\theta$ and satisfies $\psi$ in $V_\theta$. (Note that inside an $\omega$-nonstandard model of set theory, we may really need to use $\psi$ as a parameter, since it may be nonstandard, and $x$ may not be definable in $V_\theta$ using a meta-theoretically standard natural number; but fortunately, the Gödel code of a formula is an integer, which is still an ordinal, and this is the key to the issue.) Conversely, if $x$ is definable in $V$ using formula $\varphi(x,\vec\alpha)$ with ordinal parameters $\vec\alpha$, then it follows by the reflection theorem that $x$ is defined by $\varphi(x,\vec\alpha)$ inside some $V_\theta$. So this formulation of $V=\HOD$ is expressible and exactly captures the desired second-order property that every set is ordinal-definable.
Consider next the axiom $V=\HOD(b)$, asserting that every set is definable from ordinal parameters and parameter $b$. Officially, as before, $V=\HOD(b)$ asserts that for every $x$, there is an ordinal $\theta$, formula $\psi$ and ordinals $\vec \alpha<\theta$, such that $x$ is the unique object in $V_\theta$ for which $\langle V_\theta,{\in}\rangle\models\psi(x,\vec\alpha,b)$, and the reflection argument shows again that this way of defining the axiom exactly captures the intended idea.
The axiom I actually want to focus on is $\exists b\,\left( V=\HOD(b)\right)$, asserting that the universe is $\HOD$ of a set. (I assume ZFC in the background theory.) It turns out that this axiom is constant throughout the generic multiverse.
Theorem. The assertion $\exists b\, (V=\HOD(b))$ is forcing invariant.
• If it holds in $V$, then it continues to hold in every set forcing extension of $V$.
• If it holds in $V$, then it holds in every ground of $V$.
Thus, the truth of this axiom is invariant throughout the generic multiverse.
Proof. Suppose that $\text{ZFC}+V=\HOD(b)$, and $V[G]$ is a forcing extension of $V$ by generic filter $G\subset\mathbb{P}\in V$. By the ground-model definability theorem, it follows that $V$ is definable in $V[G]$ from parameter $P(\mathbb{P})^V$. Thus, using this parameter, as well as $b$ and additional ordinal parameters, we can define in $V[G]$ any particular object in $V$. Since this includes all the $\mathbb{P}$-names used to form $V[G]$, it follows that $V[G]=\HOD(b,P(\mathbb{P})^V,G)$, and so $V[G]$ is $\HOD$ of a set, as desired.
Conversely, suppose that $W$ is a ground of $V$, so that $V=W[G]$ for some $W$-generic filter $G\subset\mathbb{P}\in W$, and $V=\HOD(b)$ for some set $b$. Let $\dot b$ be a name for which $\dot b_G=b$. Every object $x\in W$ is definable in $W[G]$ from $b$ and ordinal parameters $\vec\alpha$, so there is some formula $\psi$ for which $x$ is unique such that $\psi(x,b,\vec\alpha)$. Thus, there is some condition $p\in\mathbb{P}$ such that $x$ is unique such that $p\Vdash\psi(\check x,\dot b,\check{\vec\alpha})$. If $\langle p_\beta\mid\beta<|\mathbb{P}|\rangle$ is a fixed enumeration of $\mathbb{P}$ in $W$, then $p=p_\beta$ for some ordinal $\beta$, and we may therefore define $x$ in $W$ using ordinal parameters, along with $\dot b$ and the fixed enumeration of $\mathbb{P}$. So $W$ thinks the universe is $\HOD$ of a set, as desired.
Since the generic multiverse is obtained by iteratively moving to forcing extensions to grounds, and each such movement preserves the axiom, it follows that $\exists b\, (V=\HOD(b))$ is constant throughout the generic multiverse. QED
Theorem. If $V=\HOD(b)$, then there is a forcing extension $V[G]$ in which $V=\HOD$ holds.
Proof. We are working in ZFC. Suppose that $V=\HOD(b)$. We may assume $b$ is a set of ordinals, since such sets can code any given set. Consider the following forcing iteration: first add a Cohen real $c$, and then perform forcing $G$ that codes $c$, $P(\omega)^V$ and $b$ into the GCH pattern at uncountable cardinals, and then perform self-encoding forcing $H$ above that coding, coding also $G$ (see my paper on Set-theoretic geology for further details on self-encoding forcing). In the final model $V[c][G][H]$, therefore, the objects $c$, $b$, $P(\omega)^V$, $G$ and $H$ are all definable without parameters. Since $V\subset V[c][G][H]$ has a closure point at $\omega$, it satisfies the $\omega_1$-approximation and cover properties, and therefore the class $V$ is definable in $V[c][G][H]$ using $P(\omega)^V$ as a parameter. Since this parameter is itself definable without parameters, it follows that $V$ is parameter-free definable in $V[c][G][H]$. Since $b$ is also definable there, it follows that every element of $\HOD(b)^V=V$ is ordinal-definable in $V[c][G][H]$. And since $c$, $G$ and $H$ are also definable without parameters, we have $V[c][G][H]\models V=\HOD$, as desired. QED
Corollary. The following are equivalent.
1. The universe is $\HOD$ of a set: $\exists b\, (V=\HOD(b))$.
2. Somewhere in the generic multiverse, the universe is $\HOD$ of a set.
3. Somewhere in the generic multiverse, the axiom $V=\HOD$ holds.
4. The axiom $V=\HOD$ is forceable.
Proof. This is an immediate consequence of the previous theorems. $1\to 4\to 3\to 2\to 1$. QED
Corollary. The axiom $V=\HOD$, if true, even if true anywhere in the generic multiverse, is a switch.
Proof. A switch is a statement such that both it and its negation are necessarily possible by forcing; that is, in every set forcing extension, one can force the statement to be true and also force it to be false. We can always force $V=\HOD$ to fail, simply by adding a Cohen real. If $V=\HOD$ is true, then by the first theorem, every forcing extension has $V=\HOD(b)$ for some $b$, in which case $V=\HOD$ remains forceable, by the second theorem. QED
# Upward closure in the generic multiverse of a countable model of set theory, RIMS 2015, Kyoto, Japan
This will be a talk for the conference Recent Developments in Axiomatic Set Theory at the Research Institute for Mathematical Sciences (RIMS) in Kyoto, Japan, September 16-18, 2015.
Abstract. Consider a countable model of set theory amongst its forcing extensions, the ground models of those extensions, the extensions of those models and so on, closing under the operations of forcing extension and ground model. This collection is known as the generic multiverse of the original model. I shall present a number of upward-oriented closure results in this context. For example, for a long-known negative result, it is a fun exercise to construct forcing extensions $M[c]$ and $M[d]$ of a given countable model of set theory $M$, each by adding an $M$-generic Cohen real, which cannot be amalgamated, in the sense that there is no common extension model $N$ that contains both $M[c]$ and $M[d]$ and has the same ordinals as $M$. On the positive side, however, any increasing sequence of extensions $M[G_0]\subset M[G_1]\subset M[G_2]\subset\cdots$, by forcing of uniformly bounded size in $M$, has an upper bound in a single forcing extension $M[G]$. (Note that one cannot generally have the sequence $\langle G_n\mid n<\omega\rangle$ in $M[G]$, so a naive approach to this will fail.) I shall discuss these and related results, many of which appear in the “brief upward glance” section of my recent paper: G. Fuchs, J. D. Hamkins and J. Reitz, Set-theoretic geology.
https://brilliant.org/discussions/thread/quadratic-equation-2/ | ## Definition
The quadratic formula gives us the solutions, or roots, of a quadratic equation of the form $ax^2 + bx + c = 0$:
$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$
## Technique
### What is the sum of the possible solutions of $x^2 - 2x - 3 = 0$?
\begin{aligned} x &= \frac{-(-2) \pm \sqrt{(2)^2 - 4(1)(-3)}}{2(1)} \\ x &= 1 \pm 2 \\ x &=3 \text{ or } x=-1 \end{aligned}
The answer is $3-1=2$. $_\square$
The quadratic formula is helpful even when the solutions are complex numbers:
### The roots of $x^2 - 4x + 5$ are two complex numbers, $z_1$ and $z_2$. What is the sum of $z_1$ and $z_2$?
\begin{aligned} x &= \frac{-(-4) \pm \sqrt{(-4)^2-4(1)(5)}}{2(1)} \\ x &= \frac{4 \pm \sqrt{-4}}{2} \\ x &= 2 \pm i \end{aligned}
Thus, the two roots are $z_1 = 2-i$ and $z_2 = 2 + i$ and their sum is $(2-i) + (2 + i ) = 4$. $_\square$
Whether the roots are real or complex depends on the quadratic formula's discriminant, $b^2 - 4ac$, the expression inside the square root. The roots are real when the discriminant is positive and complex when the discriminant is negative.
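To see the formula and the discriminant check in code, here is a minimal Python sketch (my addition, not part of the original note); `cmath.sqrt` handles a negative discriminant by returning a complex root:

```python
import cmath

def quadratic_roots(a, b, c):
    """Roots of a*x^2 + b*x + c = 0 via the quadratic formula."""
    disc = b * b - 4 * a * c        # discriminant: sign decides real vs. complex
    root = cmath.sqrt(disc)         # complex square root, works for disc < 0
    return (-b + root) / (2 * a), (-b - root) / (2 * a)

print(quadratic_roots(1, -2, -3))   # (3+0j, -1+0j): the first example
print(quadratic_roots(1, -4, 5))    # ((2+1j), (2-1j)): the complex example
```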
## Application and Extensions
### For what value of $c$ will $2x^2 + 7x + c = 0$ have only a single real root?
The quadratic formula's $\pm$ tells us that there will always be two roots unless the discriminant is equal to 0. So,
\begin{aligned} b^2 - 4ac &= 0\\ (7)^2 - 4(2)c &= 0\\ c &= \tfrac{49}{8} \end{aligned} $_\square$
### $x$ is a negative number such that $x^2+9x -22 = 0$. What is the sum of all possible values of $y$ which satisfy the equation $x = y^2 - 13y + 24$?
Since $x^2+9x -22 = 0$, we know that
\begin{aligned} x &= \frac{-9 \pm \sqrt{81-4(-22)}}{2} \\ &= \frac{-9 \pm \sqrt{169}}{2} \\ &= \frac{-9 \pm 13}{2} \\ &= 2 \text{ or } -11 \end{aligned}
Since $x$ is negative, $x=-11$. So now we need to solve $-11 = y^2 - 13y + 24$.
This produces the following quadratic equation:
\begin{aligned} y^2 - 13y + 35 &=0 \\ (y-5)(y-7)&=0 \end{aligned}
Thus $y=5 \text{ or } 7$, and the answer is $5+7=12$. $_\square$
Note by Arron Kau
5 years, 5 months ago
https://brilliant.org/discussions/thread/shortest-distance/
# Shortest Distance
I wonder what the shortest distance from any point to any graph of an equation is.
An example problem is: Find the shortest distance between $$(0,0)$$ and $$y=\frac{x^2-3}{\sqrt{2}}$$.
Solution to Example Problem: Any random point on the graph of the equation would be $$(x,\frac{x^2-3}{\sqrt{2}})$$. Using the distance formula to find the distance between the origin and that point, we get $$d^2=x^2+(\frac{x^2-3}{\sqrt{2}})^2$$. Simplifying this further, we substitute $$x^2$$ with $$a$$ and get $$a+(\frac{a-3}{\sqrt{2}})^2= a+\frac{a^2}{2} -3a+\frac{9}{2}=\frac{a^2}{2}-2a+\frac{9}{2}$$.

Now, we must complete the square, writing $$d^2$$ in the form $$k(a-b)^2+c$$, where $$c$$ is the minimum value. So $$d^2=\frac{1}{2}(a-2)^2+\frac{5}{2}$$. The minimum of $$d^2$$ is $$\frac{5}{2}$$, attained at $$a=2$$. Therefore, the minimum of $$d=\boxed{\frac{\sqrt{10}}{2}}$$.
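As a quick numerical sanity check on this algebra (my addition, not part of the original note), one can minimize $$d^2$$ over a fine grid in Python:

```python
from math import sqrt

def d2(x):
    """Squared distance from (0,0) to (x, (x^2 - 3)/sqrt(2))."""
    return x * x + ((x * x - 3) / sqrt(2)) ** 2

best = min(d2(i / 10000.0) for i in range(-50000, 50001))
print(sqrt(best), sqrt(10) / 2)   # both print ~1.5811, confirming sqrt(10)/2
```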
Note by Lucas Chen
3 years, 5 months ago
Sort by:
Isn't it interesting how the shortest distance from a point $$(x_{1},y_{1})$$ to the line $$y=mx+b$$ is $$\frac{|y_{1}-mx_{1}-b|}{\sqrt{m^2+1}}$$?

Or, if you have the point $$(x_{1},y_{1})$$ and the line $$Ax+By+C=0$$, the shortest distance between them is $$\frac{|Ax_{1}+By_{1}+C|}{\sqrt{A^2+B^2}}$$? · 3 years, 5 months ago
https://www.physicsforums.com/threads/massive-objects-with-super-solar-metallicities-at-z-6.97350/ | # Massive objects with super-solar metallicities at z~6
## Massive, highly metallized galaxies/quasars falsify the BB at what redshift?
1. Oct 29, 2005
### turbo
Fan et al published a paper several years ago that has since garnered over 200 citations, many of them recent. The authors studied the spectra of three objects at z~6 and determined that the objects are quasars, each with masses of several billions of Suns, residing in massive hosts, each with a lower mass limit of 10 trillion Suns. To top it off, the spectra of these objects indicate that they have super-solar metallicities.
http://www.journals.uchicago.edu/AJ/journal/issues/v122n6/201316/201316.text.html [Broken]
I have a question for the denizens of General Astronomy and Cosmology: Does the observation of extremely massive objects with super-solar metallicity at z~6 pose enough problems with the hierarchical model of structure formation to falsify the Big Bang theory? If not, at what epoch would such observations falsify the BB?
Last edited by a moderator: May 2, 2017
2. Oct 29, 2005
### matt.o
I think this is a good question. I am not sure of the exact answer to this, but I think the z~6 observations can be explained away as extreme objects, analogous to ULIRGs etc. at lower redshifts. I think the new generation of telescopes will unveil less extreme objects at the same redshift, but I don't think there will be much to see at higher redshifts. Obviously if we see things at z~20 like these extreme objects, there will be some trouble!
3. Oct 29, 2005
### turbo
Isn't z~20 about the time of first light - the period when the low-metallicity Pop III stars begin to ignite? These are the stars that supposedly supplied the metals from which these structures must have formed. We've got to allow some time for these stars to form and to go supernova to provide the metallicity of the IGM. What's your best guess for the time scale?
4. Oct 29, 2005
### matt.o
I don't know, turbo-1. I think this would probably take some pretty detailed calculations to work out. Simulations need to be done including both baryons and dark matter, plus we need to know the IMF etc, which is a challenge to determine even locally. z~20 is just an overestimate on my part because I don't know at exactly which redshift massive, metal rich objects will rule out standard big bang.
5. Oct 30, 2005
### Chronos
Early metallicity is really not a player these days. Reionization studies already push the envelope way past z~6. That is a bigger issue in my mind. There is also the problem of observing objects past z~10. There is a huge amount of neutral hydrogen that absorbs the wavelengths of interest.
6. Oct 30, 2005
### Garth
This has already been discussed in Is there an Age Problem in the Mainstream Model? .
There is already a problem with quasar APM 08279+5255 at a redshift of 3.91, which in the concordance model yields a universe age of 1.6 Gyr, a little short of the 2 - 3 Gyr required to produce its metallicity of 2 - 3 times solar. One paper to check is astro-ph/0504031, An old quasar in a young dark energy-dominated universe? [Broken]
Already we have to reconcile the present observations by either modifying our stellar nucleosynthesis model or the cosmological model.
As I have no idea how the stellar nucleosynthesis model could be modified I could not vote in this poll.
Garth
Last edited by a moderator: May 2, 2017
7. Oct 30, 2005
### Chronos
Oh come on, Garth. There are many possible explanations. Let's not cherry pick them. You are a bright guy, what alternative explanations might fit?
8. Oct 30, 2005
### Garth
The data is misinterpreted, or contaminated?
9. Oct 30, 2005
### turbo
This has been discussed, yes, but I want to know how people on this forum address the issue, and quantify it. I hope that as people vote, they will also explain the reasoning behind their opinions. There are some very bright people hanging out here and I want to know if they think that the BB theory can be falsified by observations and not merely further constrained. It's an important question, with implications that extend to the very nature and value of scientific inquiry.
10. Oct 30, 2005
### SpaceTiger
Staff Emeritus
I've said it many times before and I will say it again. You can't, at this moment, falsify or confirm the big bang based on metallicity measurements. Metal enrichment is one of the most difficult processes to model in astrophysics today and metallicity measurements are unreliable, particularly at high-redshift. This is not like big bang nucleosynthesis, where the theory has a very specific set of predictions that can be derived from the basic parameters of the universe. You need to understand Pop III stellar nucleosynthesis, Pop III supernovae, the growth of structure at high-redshift, the growth and distribution of quasars, the low-metallicity stellar IMF, and probably more. We don't have what I would call a satisfactory understanding of any of those things.
I voted "Other" because I don't know, or even have an inkling of, the answer. I don't think anybody does. It could be that, when all the calculations and observations are done, the big bang would be falsified with supersolar metallicities at z~3. However, I very seriously doubt it and, if forced, I would probably tend towards high redshifts (~20). But that would be little more than a guess at this point.
11. Oct 30, 2005
### turbo
Thanks, ST. The hosts for these quasars are massive, though, and that is a factor. With lower limits on their masses of 10 trillion MSol, these are very large structures to observe so early in the life of the universe.
12. Oct 31, 2005
### Chronos
I do agree with ST's position. Nucleosynthesis is a wild guess in the observable universe. I voted for z~20 because the BB model seems to exclude most other possibilities and there are studies suggesting this is a reasonable threshold to test the model. Unfortunately, there are other studies that suggest this realm is difficult, if not impossible, to probe. I hope there are ways around this.
13. Oct 31, 2005
### hellfire
I voted z ~ 20 thinking mainly about the current cosmological model, maybe with some possible variations, but now I am not really sure whether z ~ 20 would actually disprove every cosmological model based on big-bang. Maybe I should have voted "No such observations can falsify the BB" or "Others"...
Last edited: Oct 31, 2005
14. Oct 31, 2005
### Chronos
I think z~20 is a very reasonable limit without breaking the bank. Any finite model in time must break down before the physics do.
15. Nov 3, 2005
### wstevenbrown
I voted for z=20, tho I would not be seriously uncomfortable until z=40. In those intervals, the speed of aging/evolution in the young universe can be fine-tuned by the addition of Primordial Black Holes (of mass greater than Luna), and stable decay relics of those with mass less than Luna. Those latter would be essentially massive WIMPs of neutral charge (possibly SUSY particles, possibly 'other'). In the mass range between a mountain and Luna, we would 'detect' them as low-mass BH's which for some reason don't decay via Hawking radiation, tho they might radiate due to infalling matter causing transient instability.
The number of epicycles is limited only by imagination. Falsifying the BB becomes a ... doctrinal issue. S
https://socratic.org/questions/how-do-you-solve-x-26-x-5-0 | How do you solve $\frac{x+26}{x+5} \geq 0$?
Aug 6, 2016
$x \in \left(- \infty , - 26\right] \cup \left(- 5 , \infty\right)$
Explanation:
If $x = - 5$ then the denominator of the rational expression is zero and the quotient is undefined. So $- 5$ is not part of the solution set.
Case $\boldsymbol{x < - 5}$
If $x < - 5$ then $\left(x + 5\right) < 0$
Multiply both sides of the inequality by $\left(x + 5\right)$ and reverse the inequality (since $\left(x + 5\right) < 0$) to get:
$x + 26 \le 0$
Subtract $26$ from both sides to get:
$x \le - 26$
So $\left(- \infty , - 26\right]$ is part of the solution set.
Case $\boldsymbol{x > - 5}$
If $x > - 5$ then $\left(x + 5\right) > 0$
Multiply both sides of the inequality by $\left(x + 5\right)$ to get:
$x + 26 \ge 0$
Subtract $26$ from both sides to get:
$x \ge - 26$
Since $x > - 5$ this is already true.
So $\left(- 5 , \infty\right)$ is part of the solution set.
Conclusion
The solution set is:
$\left(- \infty , - 26\right] \cup \left(- 5 , \infty\right)$
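For anyone who wants to verify a solution set like this by machine, SymPy can solve the inequality directly; a small sketch (my addition, not part of the original answer):

```python
from sympy import symbols, solve_univariate_inequality

x = symbols('x', real=True)
sol = solve_univariate_inequality((x + 26) / (x + 5) >= 0, x, relational=False)
print(sol)   # expected: the union of (-oo, -26] and (-5, oo)
```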
http://mathhelpforum.com/discrete-math/75758-negating-normal-form.html | # Math Help - Negating normal form
1. ## Negating normal form
Hi!
I have a formula given like this (the first V is supposed to be upside down, /\ as in "and", not "or"):

$\sim (\sim P \vee \sim(\sim\sim Q \vee \sim R))$

How do I approach simplifying this (into negation normal form)?
Any help would be greatly appreciated!
Eg.
$\sim (P \Rightarrow Q)$

which equals, in simpler form using De Morgan's law,

$P.\sim Q$
. is "and" as in upside down V
2. Hello, jokke22!
Simplify: . $\sim \bigg[\sim\!P\: \wedge \sim(\sim\sim\! Q \:\vee \sim\!R)\bigg]$
We have: . $\sim\bigg[\sim\!P\:\wedge \sim(\sim\sim\!Q\:\vee \sim\!R)\bigg]$
. . . . . . $= \;\;\sim\bigg[P\:\wedge \sim(Q \:\vee \sim\!R)\bigg]$. . double negative
. . . . . . $= \;\;\sim\bigg[P\:\wedge \sim\!Q\:\wedge R\bigg]$. . . . DeMorgan's Law
. . . . . . $=\;\;\sim\!P \:\vee Q \:\vee \sim\!R$. . . . . DeMorgan's Law
3. This is what you posted.
$\begin{gathered}
\neg \left[ {P \vee \neg \left( {\neg \neg Q \vee \neg R} \right)} \right] \hfill \\
\neg \left[ {P \vee \neg \left( {Q \vee \neg R} \right)} \right] \hfill \\
\neg \left[ {P \vee \left( {\neg Q \wedge R} \right)} \right] \hfill \\
\left[ {\neg P \wedge \neg \left( {\neg Q \wedge R} \right)} \right] \hfill \\
\left[ {\neg P \wedge \left( {Q \vee \neg R} \right)} \right] \hfill \\
\end{gathered}$
Did you post it correctly?
4. Appreciate it, mate! Thank you!
Since I have your attention, I'm struggling a bit with this one as well:
$\sim\bigg[(P->Q)->\sim(R->S)\bigg]$
5. Originally Posted by Plato
This is what you posted.
$\begin{gathered}
\neg \left[ {P \vee \neg \left( {\neg \neg Q \vee \neg R} \right)} \right] \hfill \\
\neg \left[ {P \vee \neg \left( {Q \vee \neg R} \right)} \right] \hfill \\
\neg \left[ {P \vee \left( {\neg Q \wedge R} \right)} \right] \hfill \\
\left[ {\neg P \wedge \neg \left( {\neg Q \wedge R} \right)} \right] \hfill \\
\left[ {\neg P \wedge \left( {Q \vee \neg R} \right)} \right] \hfill \\
\end{gathered}$
Did you post it correctly?
Yes, that is correct, EXCEPT for the first $\vee$, which should be $\wedge$. Sorry for my bad typing here; I'm not familiar with all the commands yet!
6. Originally Posted by jokke22
Appreciate it, mate! Thank you!
Since I have your attention, I'm struggling a bit with this one as well:
$\sim\bigg[(P->Q)->\sim(R->S)\bigg]$
~[(p--->q)----->~(r---->s)]
= ~[~(~p v q) v ~(~r v s)] ........... by material implication: P---->Q = ~P v Q
= (~p v q) ^ (~r v s) ................ by De Morgan
= (p---->q) ^ (r----->s) ............. by material implication again
7. Hello again, jokke22!
$\sim\bigg[(P \to Q)\;\to\; \sim\!(R \to S)\bigg]$
$\sim\bigg[(\sim\! P \vee Q) \;\to\;\sim(\sim\! R \vee S)\bigg]$ . . def. of Implication
$\sim\bigg[(\sim\!P \vee Q) \;\to\; (R\: \wedge \sim\!S)\bigg]$. . . .DeMorgan
$\sim\bigg[\sim(\sim\!P \vee Q) \;\vee\; (R \:\wedge \sim\!S)\bigg]$ . . def. of Implication
. . . $(\sim\!P \vee Q) \;\wedge\; \sim(R \:\wedge \sim\!S)$ . . .DeMorgan
. . . $(\sim\!P \vee Q) \;\wedge\; (\sim\!R \vee S)$ . . . . DeMorgan
8. Originally Posted by Soroban
Hello again, jokke22!
$\sim\bigg[(\sim\! P \vee Q) \;\to\;\sim(\sim\! R \vee S)\bigg]$ . . def. of Implication
$\sim\bigg[(\sim\!P \vee Q) \;\to\; (R\: \wedge \sim\!S)\bigg]$. . . .DeMorgan
$\sim\bigg[\sim(\sim\!P \vee Q) \;\vee\; (R \:\wedge \sim\!S)\bigg]$ . . def. of Implication
. . . $(\sim\!P \vee Q) \;\wedge\; \sim(R \:\wedge \sim\!S)$ . . .DeMorgan
. . . $(\sim\!P \vee Q) \;\wedge\; (\sim\!R \vee S)$ . . . . DeMorgan
Thank you yet again! And thanks to benes as well!
Now I see how these can be solved by applying the laws one by one to simplify them!
As for the laws and such, do you know of any good resources I could look at as well?
9. As for this:

$\sim\bigg[\sim (P \to Q) \vee \sim R \bigg]$

$\sim\bigg[\sim (\sim P \vee Q) \vee \sim R \bigg]$

$\sim\bigg[P \vee (Q \vee \sim R) \bigg]$

$\bigg[\sim P \wedge (Q \vee \sim R) \bigg]$

$\sim P \wedge (\sim Q \wedge R)$

Gave it a try, but I'm pretty sure this one is wrong though...
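One mechanical way to settle doubts like this (a sketch I am adding; it was not part of the thread) is to compare truth tables of the original formula and the candidate simplification:

```python
from itertools import product

def equivalent(f, g, nvars):
    """True iff the two Boolean functions agree on every assignment."""
    return all(f(*v) == g(*v) for v in product([False, True], repeat=nvars))

imp = lambda p, q: (not p) or q          # material implication

# Post 7: ~[(P->Q) -> ~(R->S)]  vs  (~P v Q) ^ (~R v S)
lhs = lambda p, q, r, s: not imp(imp(p, q), not imp(r, s))
rhs = lambda p, q, r, s: ((not p) or q) and ((not r) or s)
print(equivalent(lhs, rhs, 4))           # True

# Post 9: ~[~(P->Q) v ~R]  vs  its final line ~P ^ (~Q ^ R)
start = lambda p, q, r: not ((not imp(p, q)) or (not r))
final = lambda p, q, r: (not p) and ((not q) and r)
print(equivalent(start, final, 3))       # False: the suspicion is right
```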
https://cs.stackexchange.com/questions/51462/deterministic-linear-time-algorithm-to-check-if-one-array-is-a-sorted-version-of?noredirect=1 | # Deterministic linear time algorithm to check if one array is a sorted version of the other
Consider the following problem:
Input: two arrays $A$ and $B$ of length $n$, where $B$ is in sorted order.
Query: do $A$ and $B$ contain the same items (with their multiplicity)?
What is the fastest deterministic algorithm for this problem?
Can it be solved faster than sorting them? Can this problem be solved in deterministic linear time?
• FWIW the probabilistic approach is hashing with an order-independent hash function. Carter and Wegman wrote one of the original papers on this (sciencedirect.com/science/article/pii/0022000081900337), but I haven't seen anything in the citations of that paper that suggests a deterministic algorithm (so far). – KWillets Jan 5 '16 at 18:31
• The statement you quote is about the Turing machine model, which is only of theoretical interest. Algorithms are usually analyzed with respect to the RAM model. – Yuval Filmus May 17 '16 at 22:27
• ah, then that's the model I'm looking for. I adjusted the question. – Albert Hendriks May 18 '16 at 6:55
• Why don't you just sum the items in the array and then compare the summation ? Regarding your title, it is linear and answers the question 'is one array the sorted version of other? '. I'm aware that it is not the Turing machine model, but a practical solution. – atayenel May 18 '16 at 7:17
• @AlbertHendriks You (most probably) can't sort an array in $O(n\log n)$ on a Turing machine. Some lower bounds on SAT (e.g. cs.cmu.edu/~ryanw/automated-lbs.pdf) are actually for the RAM machine, sorry for my misleading earlier comment. – Yuval Filmus May 18 '16 at 8:44
You haven't specified your computation model, so I will assume the comparison model.
Consider the special case in which the array $B$ is drawn from the product $$\{1,2\} \times \{3,4\} \times \cdots \times \{2n-1,2n\}.$$ In words, the $i$th element is either $2i-1$ or $2i$.

I claim that if the algorithm concludes that $A$ and $B$ contain the same elements, then the algorithm has compared each element in $B$ to its counterpart in $A$. Indeed, suppose that the algorithm concludes that $A$ and $B$ contain the same elements, but never compares the first element of $B$ to its counterpart in $A$. If we switch the first element then the algorithm would proceed in exactly the same way, even though the answer is different. This shows that the algorithm must compare the first element (and any other element) to its counterpart in $A$.
This means that if $A$ and $B$ contain the same elements, then after verifying this the algorithm knows the sorted order of $A$. Hence it must have at least $n!$ different leaves, and so it takes time $\Omega(n\log n)$.
• I would have thought this would imply that $P = \Omega(n\log n)$ in general, but apparently the comparison model is different with that. – Albert Hendriks Jan 5 '16 at 19:04
• @AlbertHendriks, it is the same model used to show n lg n lower bound for sorting. It means that it the only operation you can perform is comparison then you cannot do better. I think this answers your question. – Kaveh Jan 6 '16 at 9:24
• [Cntd] we don't have stronger bounds even for sorting! and if you can sort faster than n lg n then you can use that for solving the problem faster than n lg n. – Kaveh Jan 6 '16 at 9:26
• @AlbertHendriks, do you know about linear time algorithms for sorting integers? Look it up in CLRS. Your case might be one of the cases where we can sort in linear time. – Kaveh Jan 6 '16 at 9:33
• Integers can be sorted in $O(n\log\log n)$ (see nada.kth.se/~snilsson/fast-sorting), or in expected time $O(n\sqrt{\log\log n})$ (see ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=1181890), or even in linear time if the word size is large enough (see LNCS 8503, p. 26ff). – Yuval Filmus Jan 6 '16 at 9:44
This answer considers a different model of computation: the unit-cost RAM model. In this model, machine words have size $O(\log n)$, and operations on them take $O(1)$ time. We also assume for simplicity that each array element fits in one machine word (and so is at most $n^{O(1)}$ in magnitude).
We will construct a linear time randomized algorithm with one-sided error (the algorithm might declare the two arrays to contain the same elements even if this is not the case) for the more difficult problem of determining whether two arrays $a_1,\ldots,a_n$ and $b_1,\ldots,b_n$ contain the same elements. (We don't require any of them to be sorted.) Our algorithm will make an error with probability at most $1/n$.
The idea is that the following identity holds iff the arrays contain the same elements: $$\prod_{i=1}^n (x-a_i) = \prod_{i=1}^n (x-b_i).$$ Computing these polynomials exactly will take too much time. Instead, we choose a random prime $p$ and a random $x_0$ and test whether $$\prod_{i=1}^n (x_0-a_i) \equiv \prod_{i=1}^n (x_0-b_i) \pmod{p}.$$ If the arrays are equal, the test will always pass, so let's concentrate on the cases in which the arrays are different. In particular, some coefficient of $\prod_{i=1}^n (x-a_i) - \prod_{i=1}^n (x-b_i)$ is non-zero. Since $a_i,b_i$ have magnitude $n^{O(1)}$, this coefficient has magnitude $2^n n^{O(n)} = n^{O(n)}$, and so it has at most $O(n)$ prime factors of size $\Omega(n)$. This means that if we choose a set of at least $n^2$ primes $p$ of size at least $n^2$ (say), then for a random prime $p$ of this set it will hold with probability at least $1-1/n$ that $$\prod_{i=1}^n (x-a_i) - \prod_{i=1}^n (x-b_i) \not\equiv 0 \pmod{p}.$$ A random $x_0$ modulo $p$ will witness this with probability $1-n/p \geq 1-1/n$ (since a polynomial of degree at most $n$ has at most $n$ roots).
In conclusion, if we choose a random $p$ of size roughly $n^2$ among a set of at least $n^2$ different primes, and a random $x_0$ modulo $p$, then when the arrays don't contain the same elements, our test will fail with probability $1-O(1/n)$. Running the test takes time $O(n)$ since $p$ fits into a constant number of machine words.
Using polynomial time primality testing and since the density of primes of size roughly $n^2$ is $\Omega(1/\log n)$, we can choose a random prime $p$ in time $(\log n)^{O(1)}$. Choosing a random $x_0$ modulo $p$ can be implemented in various ways, and is made easier since in our case we don't need a completely uniform random $x_0$.
In conclusion, our algorithm runs in time $O(n)$, always outputs YES if the arrays contain the same elements, and outputs NO with probability $1-O(1/n)$ if the arrays don't contain the same elements. We can improve the error probability to $1-O(1/n^C)$ for any constant $C$.
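Here is a short Python sketch of that test (my rendering of the algorithm above; I use SymPy's `randprime` for the random prime, and the prime range is indicative rather than tuned to the analysis):

```python
import random
from sympy import randprime

def probably_same_multiset(a, b):
    """One-sided Monte Carlo test: a wrong YES has probability O(1/n)."""
    n = len(a)
    if n != len(b):
        return False
    p = randprime(n * n, 4 * n * n + 10)   # random prime of size ~ n^2
    x0 = random.randrange(p)               # random evaluation point mod p
    prod_a = prod_b = 1
    for ai, bi in zip(a, b):               # compare prod (x0 - a_i) mod p
        prod_a = prod_a * (x0 - ai) % p
        prod_b = prod_b * (x0 - bi) % p
    return prod_a == prod_b

print(probably_same_multiset([3, 1, 2, 2], [1, 2, 2, 3]))   # True
print(probably_same_multiset([3, 1, 2, 2], [1, 2, 3, 3]))   # False (w.h.p.)
```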
• While this algorithm is randomized, it explains how to implement the ideas in some of the other answers so that they actually work. It also has an advantage over the hashtable approach: it is in-place. – Yuval Filmus Jan 6 '16 at 10:22
• I think the OP doesn't like probabilistic algorithms as he didn't like the expected linear time algorithm using a hash table. – Kaveh Jan 6 '16 at 10:25
• Kaveh you're right. But of course this solution is also interesting and should be kept, it solves the case for probabilistic algorithms. Also, I think it uses the model that I'm looking for. – Albert Hendriks Jan 6 '16 at 10:27
• I'm just wondering if the notation O(1/n) is correct. Of course I know what you mean, but I think by the definition of big-O this is equivalent to O(1). – Albert Hendriks Jan 6 '16 at 10:33
• Not at all. It's a quantity bounded by $C/n$ for large enough $n$. That's a better guarantee than $O(1)$. – Yuval Filmus Jan 6 '16 at 10:34
I will propose another algorithm (or at least a scheme of such an algorithm).

The scheme assumes the values (assumed "integers") are within a (narrow?) range $[min, max]$.

1. In $O(n)$ time, scanning the two arrays, we can find the min and max values for both and their multiplicities; if these differ, the arrays are not permutations of each other.
2. Subtract the min from all values from both arrays (here the fact that one array is already in sorted order is not taken into account, presumably this can be improved)
3. Assume the values in the arrays represent masses and we apply an acceleration/velocity to each of magnitude $1$ (this can be improved to a magnitude of $c > 1$ under certain cases)
4. Move the masses until they have travelled the maximum distance max-min; this has a complexity of $O((max-min)n)$. This allows us to find both equal values and their multiplicities; if these differ, the arrays are not permutations of each other. Else decide the arrays are permutations of each other.

Note that the above algorithm scheme can be (deterministically) quite fast in many practical situations.
The above algorithm scheme is a variation on a linear-time sorting algorithm employing "moving masses". The physical intuition behind the "moving masses" sorting algorithm is this:
Assume each item's value actually represents its mass magnitude and imagine arranging all items in a line and applying the same acceleration force.
Then each item will move up to a distance related to its mass: more massive means less distance, and vice versa. Then to retrieve the sorted items, simply collect the items in reverse order by distance traveled.
This algorithm is linear-time and deterministic, but there is a caveat in that the amount of initial acceleration force and distance to travel (or time to wait) is related to the distribution of values (i.e the "masses", the $max-min$ factor above). One can also try to discretize the space for the items to travel into a grid and gain a constant factor in algorithm speed (and use a fast sorting routine to sort different items in the same cell).
In this respect, the above algorithm is similar to numerical-based sorting algorithms (e.g radix-sort, counting-sort)
One may think that this algorithm might not mean much, but it shows at least one thing: that, "fundamentally", at a physical level, sorting arbitrary numbers is a linear-time operation in the number of items.
• In terms of collecting the items in reverse order of distance travelled, wouldn't that translate to comparisons at the implementation level, and at that point do you not have to sort the "distances"? – JustAnotherSoul May 18 '16 at 15:24
https://quant.stackexchange.com/questions/28412/variance-of-brownian-motion | # Variance of Brownian Motion
Can someone point me in the right direction to calculate this one: $E(B^4_t)=3t^2$
I had tried using the following property with no luck:
$E(B^4_t)=E(B^2_tB^2_t)=E(\int B^2 dt )E(\int B^2 dt )=[E(\int B^2 dt )]^2=[\int E(B^2) dt]^2=[\int t dt]^2$
Any other suggestion will be appreciated. Thanks!
Apply Itô's Lemma to $W_t^4$: $$\text{d}(W_t^4)=4W_t^3\text{d}W_t+6W_t^2\text{d}t$$
Integrate: $$W_t^4=4\int_0^tW_s^3\text{d}W_s+6\int_0^tW_s^2\text{d}s$$
The first term is an Itô integral, which is by construction a martingale, with expectation $0$ hence: $$E[W_t^4]=6\int_0^tE[W_s^2]\text{d}s=6\int_0^ts\text{d}s=3t^2$$
At time $t$, Brownian Motion $B_t$ is simply a normal random variable $N(0,t)$.
The Moment Generating Function for a normal $N(\mu,\sigma^2)$ random variable is as follows: $$M(x) = exp(\mu x + \frac{1}{2}\sigma^2 x^2)$$ Furthermore, the fourth moment is given as the fourth derivative of this equation: $$M''''(x) = exp(\mu x + \frac{1}{2}\sigma^2 x^2)\Big( (\mu + \sigma^2x)^4 + 6\sigma^2(\mu + \sigma^2 x)^2 + 3\sigma^4 \Big)$$ So the expectation of $B_t^4$ is just the fourth moment, evaluated at $x=0$ (with parameters $\mu = 0$, $\sigma^2 = t$): $$E(B_t^4) = M''''(0) = 3\sigma^4 = 3t^2$$
• It is also possible to use Ito lemma with function $f(B_t)=B_t^{4}$, but this is an elegant approach as well. – Jan Sila Aug 1 '16 at 9:07
Other Way
$$\mathbb{E}\left[ \,{{e}^{iuB_t}} \right]=\exp \left( iu\,\mathbb{E}\left[ B_t \right]+\frac{1}{2}{{(\,iu\,)}^{2}}\operatorname{Var}(B_t\,) \right)={\exp \left( -\frac{1}{2}{{u}^{2}}t \right)}$$ We know $$\mathbb{E}\left[{{e}^{iuB_t}} \right]=E\left[1+iuB_t-\frac{1}{2\,!}{{u}^{2}}{{B_t}^{2}}-\frac{1}{3\,!}i{{u}^{3}}{{B_t}^{3}}+\frac{1}{4\,!}{{u}^{4}}{{B_t}^{4}}+\cdots \right]$$ therefore
$${\exp \left( -\frac{1}{2}{{u}^{2}}t \right)}=1+iu\mathbb{E}\left[ B_t \right]-\color{green}{\frac{1}{2!}{{u}^{2}}\mathbb{E}\left[ {{B_t}^{2}} \right]}-\frac{1}{3!}i{{u}^{3}}E\left[ {{B_t}^{3}}\right]+\color{red}{\frac{1}{4!}{{u}^{4}}\mathbb{E}\left[ {{B_t}^{4}} \right]}+\cdots \tag1$$ On the other hand $$\exp \left( -\frac{1}{2}{{u}^{2}}t \right)=1-\color{green}{\frac{1}{2}\,{{u}^{2}}t}+\color{red}{\frac{1}{2!}\left( \frac{1}{4}{{u}^{4}}{{t}^{2}} \right)}-\frac{1}{3 !}\left( \frac{1}{8}{{u}^{6}}{{t}^{3}} \right)+\frac{1}{4 !}\left( \frac{1}{16}{{u}^{8}}{{t}^{4}} \right)-\cdots\tag2$$ $(1)$ and $(2)$ $$\frac{1}{4!}{{u}^{4}}\mathbb{E}\left[ {{B_t}^{4}} \right]=\frac{1}{2 !}\left( \frac{1}{4}{{u}^{4}}{{t}^{2}} \right)$$ thus $$\mathbb{E}\left[ {{B_t}^{4}} \right]=3t^2$$ Generally, we have
$$\left\{ \begin{aligned} & \mathbb{E}\left[ B^{2n+1}(t) \right]=0 \\ & \mathbb{E}\left[ B^{2n}(t) \right]=\frac{(2n)!}{2^{n}\, n!}\, t^{n} \end{aligned} \right.$$
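A quick Monte Carlo sanity check of the result (my addition, not from the original thread): since $B_t \sim N(0,t)$, sampling directly from the normal distribution suffices.

```python
import numpy as np

rng = np.random.default_rng(0)
t = 2.0
b_t = rng.normal(0.0, np.sqrt(t), size=1_000_000)   # B_t ~ N(0, t)
print((b_t ** 4).mean())    # ~12.0 up to sampling error
print(3 * t ** 2)           # 12.0 exactly
```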
https://arxiv.org/abs/0908.2133 | astro-ph.CO
# Title: Effects of Baryon Dissipation on the Dark Matter Virial Scaling Relation
Abstract: We investigate effects of baryon dissipation on the dark matter virial scaling relation between total mass and velocity dispersion and the velocity bias of galaxies in groups and clusters using self-consistent cosmological simulations. We show that the baryon dissipation increases the velocity dispersion of dark matter within the virial radius by 5% - 10%. The effect is mainly driven by the change in density and gravitational potential in inner regions of cluster, and it is larger in lower mass systems where gas cooling and star formation are more efficient. We also show that the galaxy velocity bias depends on how galaxies are selected. Galaxies selected based on their stellar mass exhibit no velocity bias, while galaxies selected based on their total mass show positive bias of ~10%, consistent with previous results based on collisionless dark matter-only simulations. We further find that observational estimates of galaxy velocity dispersion are unbiased with respect to the velocity dispersion of dark matter, provided galaxies are selected using their stellar masses and their velocity dispersions are computed with more than twenty most massive galaxies. Velocity dispersions estimated with fewer galaxies, on the other hand, can lead to significant underestimate of dynamical masses. Results presented in this paper should be useful in interpreting high-redshift groups and clusters as well as cosmological constraints derived from upcoming optical cluster surveys.
Comments: 7 pages, 5 figures, accepted by ApJ for publication
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO)
DOI: 10.1088/0004-637X/708/2/1419
Cite as: arXiv:0908.2133 [astro-ph.CO] (or arXiv:0908.2133v2 [astro-ph.CO] for this version)
## Submission history
From: Erwin Tin-Hay Lau [view email]
[v1] Fri, 14 Aug 2009 20:24:41 GMT (67kb)
[v2] Mon, 16 Nov 2009 20:13:09 GMT (69kb)
https://meridian.allenpress.com/radiation-research/article-abstract/124/1s/S56/38965/Micronuclei-and-Clonogenicity-Following-Low-and?searchresult=1 | Plateau-phase human fibroblasts were irradiated at either low dose rate (∼0.6 Gy/h) or high dose rate (78 Gy/h) with γ rays and then released from contact inhibition. The frequency of cells containing micronuclei monitored at daily intervals showed that induction was dependent on both dose and dose rate with a peak incidence at 3 days postirradiation. Cumulative frequency distributions indicated a reduction by a factor of 4 when the dose was delivered chronically as opposed to acutely. Distributions also suggested that micronuclei-containing cells persist over days, while the dose responses (different by a factor of 2.8) for both high and low dose rate indicated a plateau, particularly following higher doses at low dose rate. Data were not consistent with this response being due to cell cycle delay. Delayed plating resulted in both a reduced incidence of cells with micronuclei and enhanced survival following high- but not low-dose-rate irradiation, with the response being complete by 6 h. Cell surviving fraction and the fraction of cells with micronuclei were negatively correlated, but the relationships were different between the high- and low-dose-rate irradiations. This divergence mitigates against using low-dose-rate responsiveness of the short-term micronucleus assay as an indicator of the initial slope of the acute dose-rate survival curve.
http://owenduffy.net/blog/?p=1501 | Mining K5SO’s measurements of Sun noise
K5SO published an article in which he reported measurements of Sun noise rise over cold sky for K5SO, W2UHI and VK7MO.
Above is his graphical summary of the measurements.
This article concentrates on just the measurements of K5SO's station and the underlying model.
It is worth noting that the observations of all three stations fall within less than 25% of the range shown in the graph… so most of the graph is an extrapolation, and over a relatively huge range.
K5SO gives a model for the behaviour: SN(dB)=10*log(((0.72*(SF-64))+47)/A).
Note that he defines SF to be the solar flux at 2800MHz whereas the noise measurements are made at 1296MHz.
This expression can be simplified to SN(dB)=10*log((0.72*SF+0.92)/A), or as a simple ratio SN=(0.72*SF+0.92)/A =0.72/A*SF+0.92/A …(eqn 1).
For the purpose of this article, I will refer to K5SO’s quantity SN as Y, the ratio of noise pointing at the Sun to that pointing to cold sky.
The power received when pointing to the Sun includes both noise due to the Sun itself (let's call it Th) plus other noise, external and internal to the system (let's call that Tc).
So, Y=(Th+Tc)/Tc=Th/Tc+1 …(eqn 2).
Note the similarity of form of eqn 2 and eqn 1. That suggests that K5SO’s factor of 0.92/A is in fact 1.
Above is a comparison of the two formulas. A value of A=0.55 has been used to calibrate to K5SO’s curve.
So, we now know that Th/Tc=0.72/0.55*SF=1.31*SF. The factor 1.31 accounts for station antenna gain, and Sun noise at 2800MHz relative to system noise (Tc) at 1296MHz. That implies that there is a constant relationship between received Sun noise at 1296MHz and that at 2800MHz… but is there?
Above is a comparison of SF at 1415MHz compared to 2695MHz as measured at Learmonth observatory over 45 days in 2014. Clearly there is not a fixed relationship between them (they are produced in different parts of the chromosphere), and there is likely to be a somewhat similar variation in solar flux at 1296MHz compared to 2800MHz if one was to measure it.
Using 2800MHz SF just adds statistical noise. Whilst solar flux is not measured in these observatories at 1296MHz, they measure a number of frequencies and it should be better to use a sensible interpolation of those measurements.

Above is a plot of the distribution of the ratio of solar flux at 1415MHz to 2695MHz for the same data. This variation would give rise to 3σ uncertainty of 0.5dB, probably a little worse for the actual case of projecting 1296MHz from 2800MHz, a significant amount which can easily be reduced substantially by a better projection of 1296MHz from the observatory data.
K5SO gets a Y factor of about 21dB when 2800MHz solar flux is 100SFU. If we were to assume that at that time, solar flux at 1296MHz was say 85SFU, we can calculate G/T=20dB/K (making a beamwidth correction for a 39dB dish). Assuming antenna gain to be 39dB, that indicates a system noise temperature (Tc) of 78K which is credible.
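A rough version of that calculation in Python (my sketch; it ignores the beamwidth correction mentioned above, and uses the same assumed 85SFU flux at 1296MHz):

```python
from math import pi, log10

def g_over_t_db(y_db, flux_sfu, freq_mhz):
    """Estimate G/T (dB/K) from a Sun noise Y factor and solar flux."""
    k = 1.380649e-23               # Boltzmann constant, J/K
    s = flux_sfu * 1e-22           # 1 SFU = 1e-22 W/m^2/Hz
    lam = 299.792458 / freq_mhz    # wavelength in metres
    y = 10 ** (y_db / 10)
    return 10 * log10((y - 1) * 8 * pi * k / (s * lam ** 2))

print(g_over_t_db(21.0, 85.0, 1296.0))   # ~19.8 dB/K, close to the 20dB/K above
```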
W2UHI’s gain is probably around 36dB, G/T is probably around 16dB/K and assuming antenna gain of 36dB implies a system noise temperature (Tc) of 102K which is again believable.
Similarly, VK7MO’s gain is probably around 28dB, G/T is probably around 9dB/K and assuming antenna gain of 28dB implies a system noise temperature (Tc) of 113K which is again believable.
So, above is a graph similar to K5SO's for a family of G/T values. The graph is constructed using the technique and formulas given in Measuring G/T. (One must apply a beamwidth correction factor for narrow beamwidth antennas, and in the above chart that is calculated based on G/T and an assumed system noise temperature Ts.)
The similarity of the above chart and K5SO’s shows that conventional theory underlies K5SO’s experimental observations.
References
• Duffy, O. 2006. Effective use of a Low Noise Amplifier on VHF/UHF. VK1OD.net (offline).
• ———. 2007. Measuring system G/T ratio using Sun noise. VK1OD.net (offline).
• ———. 2009. Quiet sun radio flux interpolations. http://owenduffy.net/calc/qsrf/index.htm.
• ———. 2014. Measuring G/T. http://owenduffy.net/blog/?p=1490.
• ITU-R. 2000. Recommendation ITU-R S.733-2 (2000) Determination of the G/T ratio for earth stations operating in the fixed-satellite service .
• K5SO 2007. Sun noise. http://www.k5so.com/Using_sun_noise.html (accessed 24/04/14).
https://www.physicsforums.com/threads/simple-harmonic-motion-problem.656804/ | # Homework Help: Simple harmonic motion problem
1. Dec 4, 2012
### DLH112
1. The problem statement, all variables and given/known data
This is a 3 part problem and I've successfully solved the first 2 parts, but I don't know what I did wrong in the third part.
1) mass of 346 g on a spring with constant 26.8 N/m on a horizontal + frictionless surface.
Amplitude is 6.7 cm. In part 1 I found the total energy to be 0.0601526 J, and in part 2 I found the max speed to be 0.58966337 m/s.
part 3 is "What is the magnitude of the velocity of the mass when the displacement is equal to 3.6 cm? answer in m/s"
2. Relevant equations
E = KE + PE , 1/2KA^2 = 1/2kx^2 + 1/2mv^2
3. The attempt at a solution
using the energy from part 1 as E...
0.0601526 = (0.5)(26.8)(0.036)^2 + (0.5)(0.346)v^2
0.061526 = 0.0185328 + (0.5) (0.346)v^2
0.0416198 = (0.5)(0.346)v^2
0.2405768786 = v^2
0.4904863694 m/s = V .... is apparently wrong
2. Dec 4, 2012
### Staff: Mentor
Double check the highlighted term (the spring potential energy, 0.0185328 J).
3. Dec 4, 2012
### DLH112
Ah, thank you. I was accidentally using 28.6 instead of 26.8.
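For the record, redoing part 3 with the right spring constant (a verification sketch of mine, not from the thread):

```python
from math import sqrt

m, k, A, x = 0.346, 26.8, 0.067, 0.036   # SI units

E = 0.5 * k * A ** 2                     # total energy, ~0.0601526 J
v = sqrt((E - 0.5 * k * x ** 2) / (0.5 * m))
print(v)                                 # ~0.497 m/s
```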
http://mathhelpforum.com/trigonometry/142905-trigonometry-question.html | Math Help - Trigonometry question
1. Trigonometry question
Find all sides and angles to the triangle ABC with C a right angle, a=6 and b=4.
I've found c using Pythagoras' theorem and the answer is 7.21
So I'm trying to find B, and I've done Sin(B)=4/c=4/7.21=0.5547
And so I've pressed the inverse button on my calculator and it keeps giving me the answer 37.439 when the answer should be 33.69... I've already tried an online calculator and it gives me the same answer as I've got on my calculator, and I know the answer is wrong, but how do I get the right answer?
2. Originally Posted by brumby_3
Find all sides and angles to the triangle ABC with C a right angle, a=6 and b=4.
I've found c using Pythagoras' theorem and the answer is 7.21
So I'm trying to find B, and I've done Sin(B)=4/c=4/7.21=0.5547
And so I've pressed the inverse button on my calculator and it keeps giving me the answer 37.439 when the answer should be 33.69... I've already tried an online calculator and it gives me the same answer as I've got on my calculator, and I know the answer is wrong, but how do I get the right answer?
There are 3 angle settings on your calculator. You need to make sure your calculator is set to degrees; at the moment it is set to another measure of angles (almost certainly gradians, since 37.439 grads equals 33.69 degrees), hence the answer you are getting is not in degrees.
BTW In these sorts of questions it is best to use primary (given) information so that you don't have compounding rounding errors. It would have been better to use tan(B)=4/6 which avoids using your rounded answer of 7.21 in further calculations.
3. Haha thanks, you're right that was the problem, cheers!
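For completeness, the whole triangle can be solved from the given legs, using tan as recommended above (a sketch I have added, not part of the thread):

```python
from math import atan2, degrees, hypot

a, b = 6.0, 4.0               # legs, with the right angle at C
c = hypot(a, b)               # hypotenuse, ~7.2111
B = degrees(atan2(b, a))      # angle opposite side b, ~33.69 degrees
A = 90.0 - B                  # remaining angle, ~56.31 degrees
print(c, A, B)
```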
https://examproblems4lossmodels.wordpress.com/tag/burr-distribution/ | ## Exam C Practice Problem 1 – Working with Mixture Distributions
Problem 1-A
You are given:
• The claim size $X$ for a policyholder randomly chosen from a large group of insureds is a mixture of a Burr distribution with $\alpha=1$, $\theta=\sqrt{8000}$ and $\gamma=2$ and a Pareto distribution with $\alpha=1$ and $\theta=8000$.
• The mixture distribution of $X$ has equal mixing weights.
Calculate the median of $X$.
$\displaystyle (A) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 400$
$\displaystyle (B) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 405$
$\displaystyle (C) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 450$
$\displaystyle (D) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 475$
$\displaystyle (E) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 4045$
Problem 1-B
You are given:
• The claim size $X$ in the current year for a policyholder randomly chosen from a large group of insureds is a mixture of a Burr distribution with $\alpha=2$, $\theta=\sqrt{1000}$ and $\gamma=2$ and a Pareto distribution with $\alpha=2$ and $\theta=1000$.
• The mixture distribution of $X$ has mixing weights 90% (for the Burr distribution) and 10% (for the Pareto distribution).
• Suppose that the claim size for the chosen policyholder in the next year will increase 20% due to inflation.
What is the probability that the claim size in the next year will exceed 50?
$\displaystyle (A) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0.16$
$\displaystyle (B) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0.18$
$\displaystyle (C) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0.21$
$\displaystyle (D) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0.23$
$\displaystyle (E) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0.29$
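For readers who want to check answers to problems like 1-A numerically, one can bisect on the mixture's survival function; a sketch of mine (not part of the problem set), using the Loss Models parameterizations of the Burr and Pareto survival functions:

```python
def burr_sf(x, alpha, theta, gamma):
    """Burr survival function S(x) = [1/(1+(x/theta)^gamma)]^alpha."""
    return (1.0 / (1.0 + (x / theta) ** gamma)) ** alpha

def pareto_sf(x, alpha, theta):
    """Pareto (Lomax) survival function S(x) = [theta/(x+theta)]^alpha."""
    return (theta / (x + theta)) ** alpha

def mix_sf(x):
    """Equal-weight mixture from Problem 1-A."""
    return 0.5 * burr_sf(x, 1, 8000 ** 0.5, 2) + 0.5 * pareto_sf(x, 1, 8000)

lo, hi = 0.0, 1e7
while hi - lo > 1e-9:                    # bisect S(m) = 0.5 for the median
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if mix_sf(mid) > 0.5 else (lo, mid)
print(round(lo, 4))                      # the median of X
```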
___________________________________________________________________________________
$\copyright \ 2013 \ \ \text{Dan Ma}$
https://groupprops.subwiki.org/wiki/Nearly_normal_subgroup | # Nearly normal subgroup
This page describes a subgroup property obtained as a composition of two fundamental subgroup properties: subgroup of finite index and normal subgroup
## Definition
### Symbol-free definition
A subgroup of a group is said to be nearly normal if it satisfies the following equivalent conditions:
1. It has finite index in its normal closure.
2. It is a subgroup of finite index of a normal subgroup of the whole group.
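The two conditions are equivalent by a short argument (a sketch; the notation $H^G$ for the normal closure of $H$ in $G$ is standard but not introduced above):

- (1) $\Rightarrow$ (2): take $N = H^G$; it is normal in $G$ and contains $H$ with finite index.
- (2) $\Rightarrow$ (1): if $H \le N \trianglelefteq G$ with $[N : H] < \infty$, then every conjugate satisfies $H^g \le N^g = N$, so $H^G \le N$ and $[H^G : H] \le [N : H] < \infty$.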
This article defines a term that has been used or referenced in a journal article or standard publication, but may not be generally accepted by the mathematical community as a standard term.
This article defines a subgroup property: a property that can be evaluated to true/false given a group and a subgroup thereof, invariant under subgroup equivalence. View a complete list of subgroup properties.
This subgroup property is a finitarily tautological subgroup property: when the ambient group is a finite group, the property is satisfied.
This is a variation of normal subgroup.
## Relation with other properties
### Stronger properties
- Normal subgroup
- Subgroup of finite index
### Weaker properties
- Conjugate-commensurable subgroup
- Subgroup for which any join of conjugates is a join of finitely many conjugates
- Almost subnormal subgroup
## Metaproperties
| Metaproperty name | Satisfied? | Proof | Statement with symbols |
| --- | --- | --- | --- |
| Transitive subgroup property | No | | It is possible to have $H \le K \le G$ with $H$ nearly normal in $K$ and $K$ nearly normal in $G$, but $H$ not nearly normal in $G$. |
| Trim subgroup property | Yes | | Every group is nearly normal in itself; the trivial subgroup is nearly normal in every group. |
| Intermediate subgroup condition | Yes | Nearly normal satisfies intermediate subgroup condition | If $H \le K \le G$ and $H$ is nearly normal in $G$, then $H$ is nearly normal in $K$. |
| Transfer condition | Yes | Nearly normal satisfies transfer condition | If $H, K \le G$ with $H$ nearly normal in $G$, then $H \cap K$ is nearly normal in $K$. |
| Inverse image condition | Yes | Nearly normal satisfies inverse image condition | If $H \le G$ is nearly normal and $\varphi: M \to G$ is a homomorphism, then $\varphi^{-1}(H)$ is nearly normal in $M$. |
| Image condition | Yes | Nearly normal satisfies image condition | If $H \le G$ is nearly normal and $\varphi: G \to M$ is a surjective homomorphism, then $\varphi(H)$ is nearly normal in $M$. |
| Finite-intersection-closed subgroup property | Yes | Nearly normal is finite-intersection-closed | If $H, K$ are nearly normal subgroups of $G$, then $H \cap K$ is also a nearly normal subgroup. |
| Finite-join-closed subgroup property | Yes | Nearly normal is finite-join-closed | If $H, K$ are nearly normal subgroups of $G$, then $\langle H, K \rangle$ is also a nearly normal subgroup. |
| Conjugate-join-closed subgroup property | Yes | Nearly normal is conjugate-join-closed | A join of any number of conjugates of a nearly normal subgroup of a group is nearly normal. |
## References
• B. H. Neumann, Groups with finite classes of conjugate subgroups, Math. Z. 63 (1955), 76–96.
https://www.vovns.com/2022/09/basic-elements-of-hmm-model-hidden.html | # Basic Elements of the HMM Model (Hidden Markov Model)
## I. The basic elements of the HMM model (Hidden Markov Model)
### 1. Five basic elements of HMM
In an HMM there are 5 basic elements: {I, O, A, B, π}. Let us introduce these 5 basic elements in the context of a sequence labeling task:
(1) I: state sequence. Here, it refers to the label behind each word.
(2) O: Observation sequence. Here, it refers to each word itself.
(3) A: State transition probability matrix. Here, it refers to the probability that a certain annotation will transfer to the next annotation.
(4) B: Observation probability matrix, that is, emission probability matrix. Here, it refers to the probability of generating a certain word under a certain label.
(5) π: Initial probability vector. Here, it refers to the initialization probability of each annotation.
where:

$I = (i_1, i_2, i_3, \ldots, i_N)$ — the state sequence

$O = (o_1, o_2, o_3, \ldots, o_N)$ — the observation sequence

$A = [a_{ij}]_{N \times N}$ — the state transition probability matrix

$B = [b_j(k)]_{N \times M}$ — the observation (emission) probability matrix, where $M$ is the size of the observation vocabulary

$\pi$ — the initial state probability vector
### 2. HMM model

Model: λ = (A, B, π)

A, B, π are the three elements of the Hidden Markov Model.

These three elements are obtained through statistics: they are the parameters of the model, and the process of obtaining them is the process of model training, so the HMM is a relatively simple algorithm (a counting sketch is given below).

Once the model is known, the fully connected paths between states can be considered known. Given an observation sequence, the optimal path among all paths is found with the Viterbi algorithm.
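As an illustration of training-by-counting, here is a minimal sketch; the toy corpus, variable names, and the absence of smoothing are our own simplifications, not part of the original post:

```python
from collections import Counter, defaultdict

# Toy labeled corpus: one sentence of (word, tag) pairs — illustrative only.
corpus = [
    [("I", "O"), ("work", "O"), ("at", "O"), ("Phoenix", "B"), ("Finance", "I")],
]

pi_counts = Counter()                 # initial-tag counts
A_counts = defaultdict(Counter)       # tag -> next-tag counts
B_counts = defaultdict(Counter)       # tag -> emitted-word counts

for sent in corpus:
    pi_counts[sent[0][1]] += 1
    for word, tag in sent:
        B_counts[tag][word] += 1
    for (_, t1), (_, t2) in zip(sent, sent[1:]):
        A_counts[t1][t2] += 1

def normalize(counter):
    total = sum(counter.values())
    return {k: v / total for k, v in counter.items()}

pi = normalize(pi_counts)
A = {t: normalize(c) for t, c in A_counts.items()}
B = {t: normalize(c) for t, c in B_counts.items()}
```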
### 3. Two assumptions of the HMM model

#### (1) Homogeneous Markov assumption (also called the first-order Markov assumption)
The state of the hidden Markov chain at any time t only depends on the state at the previous time, and has nothing to do with the state at other times and the observed state.
$$P(i_t \mid i_{t-1}, o_{t-1}, \ldots, i_1) = P(i_t \mid i_{t-1})$$
#### (2) Observation independence assumption
The observed state at any time only depends on the state of the Markov chain at that time, and has nothing to do with the observed states at other times.
The above elements can be counted from the training corpus. Finally, based on these statistics, we apply the Viterbi algorithm to calculate the label sequence behind the word sequence.
## II. There are three application scenarios for the Hidden Markov Model
We only use one of them for named entity recognition - to find the most likely label sequence behind the observation sequence.
Finally, let's talk about the three problems that HMM solves:
1. Evaluation (probability calculation problem)
Knowing the model parameters λ= (A, B, π), calculate the probability of a certain observation sequence, that is, find P(O|λ)
2. Learning (the learning problem)

Given a sequence of observations $O = (o_1, o_2, \ldots, o_n)$, how do we adjust the model parameters λ = (π, A, B) to maximize P(O|λ)? This is the problem of finding the model.
3. Decoding (prediction problem or decoding problem) is the most commonly used
Given an observation sequence O and a model λ, find the most probable state sequence $S = (s_1, s_2, \ldots, s_T)$.

For example: through entity labeling and training we obtain the model λ. Now, given the observation sequence "I work in Phoenix Finance", we want to find the most likely named entities, i.e. the corresponding state (label) sequence.
## III. Finding the model λ: solving the second problem

HMM is a generative model, so it models the joint probability $P(X, Y)$.

Note: when we usually say "finding the model", we mean finding the objective function. For example, in linear regression our objective function is $h(\lambda) = \lambda_1 X + \lambda_2$, and finding the objective function only requires the parameters λ; so we usually say that seeking the model is seeking the parameters.
## IV. The Viterbi algorithm: solving the third problem

The Viterbi algorithm mainly uses dynamic programming to solve the HMM prediction problem: given the model and an observation sequence, find the most probable state sequence.
Suppose the state sequence is $x_1, x_2, x_3, \ldots, x_N$ and the corresponding observation sequence is $y_1, y_2, y_3, \ldots, y_N$. Then our problem becomes: given the input sequence $y_1, y_2, \ldots, y_N$, what is the most probable sequence of labels (e.g. Chinese characters) $x_1, x_2, \ldots, x_N$?
formula:

$$x_1, x_2, x_3, \ldots, x_N = \operatorname{ArgMax}\, P(x_1, x_2, x_3, \ldots, x_N \mid y_1, y_2, y_3, \ldots, y_N) = \operatorname{ArgMax} \prod_{i=1}^{N} P(y_i \mid x_i)\, P(x_i \mid x_{i-1})$$
The expression $\operatorname{ArgMax} \prod_{i=1}^{N} P(y_i \mid x_i)\,P(x_i \mid x_{i-1})$ is obtained mainly by transforming with Bayes' formula. We know that Bayes' formula is

$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}$$

Then

$$P(x \mid y) = \frac{P(y \mid x)\,P(x)}{P(y)}$$

where $P(y)$ is a known constant, and $P(x)$ is actually $P(x_t \mid x_{t-1})$, since according to the Markov hypothesis the current moment depends only on the previous moment.
For example, for the input observation sequence "I love China", candidate label sequences (one label per token) include:

O O O B
O B O I
O O B I
B O I B

That is, the third row is the optimal path sought:
Note: during the Viterbi computation, the shortest path between two points is calculated, not the shortest path between two layers.
### 1. Property

If the path $p$ with the highest probability (or the shortest path) passes through a certain point, say $a$ in the figure, then the sub-path $Q$ from the starting point $S$ to $a$ must itself be the shortest path between $S$ and $a$.

Otherwise, replacing $Q$ with the shortest path $R$ from $S$ to $a$ would give a path shorter than $p$, which is a contradiction. This proves that the optimality principle is satisfied.
### 2. Algorithm

To find the shortest path between S and E, is there a better way than traversing all the paths?
In fact there must be a shortest path among all paths:
Let's start solving step by step from scratch:
(1) First, the starting point is S, and there are three possible paths from S to A column: S-A1, S-A2, S-A3, as shown in the following figure:
We cannot arbitrarily say which segment of S-A1, S-A2, and S-A3 must be part of the global shortest path. So far, any segment may be an alternative to the global shortest path.
(2) Then start the second layer.
As shown above, there are only 3 paths through B1:
S-A1-B1
S-A2-B1
S-A3-B1
If the global shortest path ultimately passes through B1, it must use the shortest of these three paths, so the other two can be deleted.
<2> Then we start the second node of the second layer:
Similarly, as shown in the figure above, there are 3 paths through B2:
S-A1-B2
S-A2-B2
S-A3-B2
If the global shortest path ultimately passes through B2, it must use the shortest of these three paths, so the other two can be deleted.
<3> Then we start the third node of the second layer:
Similarly, as shown in the figure above, there are also 3 paths through B3:
S-A1-B3
S-A2-B3
S-A3-B3
If the global shortest path ultimately passes through B3, it must use the shortest of these three paths, so the other two can be deleted.
<4> After all the stages of the second layer are traversed, there are three paths left.
We don't yet have enough information to tell which one must be a subpath of the global shortest path.
(3) Then we continue the algorithm at the third layer:
We don't yet have enough information to tell which one must be a subpath of the global shortest path.
(4) Then we continue the algorithm at the last layer:
Point E is already the end point. We can know which one is the shortest path by comparing the total length of the three paths.
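Putting the walkthrough into code: a compact Viterbi implementation in Python (our own names; it expects the dictionary-based parameter format from the counting sketch above, normalized to probabilities):

```python
import math

def viterbi(obs, states, pi, A, B, floor=1e-12):
    """Most probable state sequence for obs; pi/A/B are dicts of probabilities."""
    def logp(p):
        return math.log(max(p, floor))   # floor avoids log(0) for unseen events

    # Initialization: start distribution times the first emission
    V = [{s: logp(pi.get(s, 0.0)) + logp(B.get(s, {}).get(obs[0], 0.0))
          for s in states}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            # Best predecessor of state s at this step (layer-by-layer pruning)
            r = max(states, key=lambda r: V[-1][r] + logp(A.get(r, {}).get(s, 0.0)))
            ptr[s] = r
            col[s] = (V[-1][r] + logp(A.get(r, {}).get(s, 0.0))
                      + logp(B.get(s, {}).get(o, 0.0)))
        V.append(col)
        back.append(ptr)

    # Backtrack from the best final state
    path = [max(states, key=lambda s: V[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

# e.g. viterbi(["I", "work", "at", "Phoenix", "Finance"], ["O", "B", "I"], pi, A, B)
```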
https://brilliant.org/problems/complex-analysis-applicationsii/ | # Complex analysis applications-II
Calculus Level 5
$\large \sum_{n = -\infty}^\infty \frac{1}{(1 + i + n)^2}$
Evaluate to 6 decimal places. Note that the summation is indexed by $n$, and that $i$ denotes the imaginary unit.
Bonus: Find a beautiful closed form for the sum.
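A quick numerical cross-check using the classical identity $\sum_{n \in \mathbb{Z}} \frac{1}{(z+n)^2} = \frac{\pi^2}{\sin^2 \pi z}$ (the identity is standard, not given in the problem; with $z = 1 + i$ it also settles the bonus):

```python
import cmath

z = 1 + 1j

# Closed form: pi^2 / sin(pi z)^2; here sin(pi(1+i)) = -i sinh(pi),
# so the sum equals -pi^2 / sinh(pi)^2, a negative real number.
closed = (cmath.pi / cmath.sin(cmath.pi * z)) ** 2
print(closed)  # approx (-0.0739998 + 0j)

# Brute-force symmetric partial sum; the tail decays like 1/N, so N must be large
N = 200_000
partial = sum(1 / (z + n) ** 2 for n in range(-N, N + 1))
print(abs(partial - closed))  # small (~1e-5 for this N)
```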
http://complexity-explorables.org/explorables/neighbors/ | EXPLORABLES
This explorable illustrates and compares three vaccination strategies in complex networks. In this type of model nodes are people and network links potential transmission paths. "Vaccination" then means that nodes are disconnected from the network because they can no longer acquire or transmit a disease. Vaccination thus effectively dilutes the network. Two strategies, A and B, are straightforward to understand. A third one, C, is a bit odd and counterintuitive at first glance:
• strategy A: A set of nodes is chosen at random and vaccinated
• strategy B: The most connected nodes are chosen and vaccinated
• strategy C: A set of nodes is chosen at random and then one of each node's neighbors is vaccinated
Intuitively we expect strategy B to be more efficient than A and C, and this is indeed so. Strategies A and C, however, sound equivalent. But they are not: in fact, strategy C is more effective than strategy A. The reason for this will be explained below.
To get most out of this explorable, try following the sequence of experiments outlined below.
## This is how it works
In the display you see a network of 200 individuals. The network has one component, so if a contagion process were to advance through this network, eventually every node could be affected. Highly connected individuals are displayed a bit larger than those with a small number of links. The connectivity of a node is measured by its degree, the number of links (neighbors) it has.
When you vaccinate a fraction of the population (set by the slider) by pressing one of the large buttons, a set of nodes is disconnected from the network. For low vaccination fractions the network doesn't change much; if the fraction is sufficiently large, the network becomes disconnected.
The large buttons correspond to the three strategies above. When you press them once, vaccination will be performed. Pressing them again will reset the network.
### Experiment 1
Turn the vaccination slider to about 38%. Now press button A. You should see that all the vaccinated individuals are now isolated and move to the periphery. Yet a considerable fraction of the network is still in one large component (check out the explorable Blob to learn about giant components in networks); the network hasn't really disintegrated, because 38% is too low for this strategy.
Press the button again to reset the system.
### Experiment 2
Now try strategy B. In this case high degree nodes are removed. As expected, in this scenario the network becomes very sparse and even the largest component is very small. Removing high degree nodes, we effectively remove many more links. The network falls apart.
Note also that among the isolated nodes, many aren't vaccinated. This effect is called herd-immunity, the indirect isolation of nodes.
The problem in reality, though, is that we often do not know who the high-degree superspreaders are. Well, let's try strategy C...
### Experiment 3
Here's now a clever strategy. Press button C. Again, a random set of nodes is picked but instead of vaccinating these nodes, we vaccinate a random neighbor of each of them. So we remove the same number of nodes as in the other two strategies.
However, by comparing the size of the largest component in strategy A and C, we see that typically this largest component is significantly smaller for C than for A. Therefore strategy C is more effective!
What is happening?
## Your friends have more friends than you
A peculiar property of complex networks, especially those with heterogeneous node connectivity, is that on average a node's neighbors' degree is larger than the node's own degree. This is known as the friendship paradox. When reading this, it sounds almost schizophrenic. Why should my "friend" exhibit different properties? After all, I am my friend's friend, too.
It turns out that the secret is hidden in the term "on average" and that we are comparing different averages. In one case we are averaging over nodes, in the other case we are averaging over links. Another way of thinking about it is this: When we pick a random set of nodes, (strategy A), the probability of say picking node $$n$$ is the same for all nodes, $$1/N$$ for each node. When we pick a random neighbor of a random node, the probability of picking a node is proportional to the target node's degree $$q$$ so we are not sampling the nodes uniformly. We are more likely to pick a node with higher degree.
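This can be made quantitative; a short sketch in standard network-science notation, which the page itself does not introduce ($$p(k)$$ is the degree distribution and angle brackets denote averages over nodes). Following a random link reaches a degree-$$k$$ node with probability proportional to $$k\,p(k)$$, so the mean degree of a randomly chosen neighbor is

$\langle k \rangle_{\mathrm{nn}} = \sum_k k \, \frac{k\,p(k)}{\langle k \rangle} = \frac{\langle k^2 \rangle}{\langle k \rangle} = \langle k \rangle + \frac{\sigma^2}{\langle k \rangle}$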
If one does the math for this, one can show that the mean degree $$q_0$$ of a neighbor is given by

$q_0 = \left(1 + \frac{\sigma^2}{k_0^2}\right) k_0 = k_0 + \frac{\sigma^2}{k_0}$

where $$k_0$$ is the mean node degree and $$\sigma^2$$ the variance in node degree. This equation states that the average neighbor degree is always at least the average node degree; the effect is stronger for networks with broad degree distributions.
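Both the paradox and the strategy comparison are easy to reproduce in simulation; a sketch with networkx (our function names; the 200 nodes and the 38% fraction mirror the explorable):

```python
import random
import networkx as nx

G = nx.barabasi_albert_graph(200, 2)   # heterogeneous degrees, like the BA option

# Friendship paradox: mean degree vs. mean degree of a randomly chosen neighbor
deg = dict(G.degree())
mean_deg = sum(deg.values()) / len(deg)
mean_nbr_deg = sum(deg[random.choice(list(G.neighbors(v)))] for v in G) / len(G)
print(mean_deg, mean_nbr_deg)          # the second value is typically larger

def largest_component(G, removed):
    """Size of the largest connected component after deleting `removed`."""
    H = G.copy()
    H.remove_nodes_from(removed)
    return max((len(c) for c in nx.connected_components(H)), default=0)

sample = random.sample(list(G.nodes()), int(0.38 * G.number_of_nodes()))
# Strategy A: vaccinate the randomly sampled nodes themselves
size_A = largest_component(G, sample)
# Strategy C: vaccinate one random neighbor of each sampled node
# (duplicate picks are merged, so C may remove slightly fewer unique nodes)
size_C = largest_component(G, {random.choice(list(G.neighbors(v))) for v in sample})
print(size_A, size_C)                  # typically size_C < size_A
```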
### Erdős–Rényi & Barabási–Albert
In the control panel you can choose two different types of networks, the Barabási–Albert (BA) and the Erdős–Rényi (ER) network. The BA network has a much stronger variation in node degree than the ER network, so the effects explained above should be stronger in the BA network.
https://www.arxiv-vanity.com/papers/hep-ex/0507079/ | # A Search for Periodicities in the 8B Solar Neutrino Flux Measured by the Sudbury Neutrino Observatory
July 18, 2005; Revised August 4, 2005; to appear as Phys Rev D 72, 052010
###### Abstract
A search has been made for sinusoidal periodic variations in the ⁸B solar neutrino flux using data collected by the Sudbury Neutrino Observatory over a 4-year time interval. The variation at a period of one year is consistent with modulation of the ⁸B neutrino flux by the Earth's orbital eccentricity. No significant sinusoidal periodicities are found with periods between 1 day and 10 years with either an unbinned maximum likelihood analysis or a Lomb-Scargle periodogram analysis. The data are inconsistent with the hypothesis that the results of the recent analysis by Sturrock et al., based on elastic scattering events in Super-Kamiokande, can be attributed to a 7% sinusoidal modulation of the total ⁸B neutrino flux.
###### pacs:
26.65+t, 95.75.Wx, 14.60.St, 96.60.Vg
SNO Collaboration
## I Introduction
There have been recent reports of periodic variations in the measured solar neutrino fluxes sturrock_sk1 ; sturrock_sk2 ; sturrock_sk3 ; sturrock_gallex1 ; sturrock_gallex2 ; sturrock_gallex3 ; milsztajn ; ranucci . Other analyses of these same data, including analyses by the experimental collaborations themselves, have failed to find such evidence sk ; pandola . The reported periods have been claimed to be related to the solar rotational period. Particularly relevant for this paper is a claimed 7% amplitude modulation in Super-Kamiokande's ⁸B neutrino flux at a frequency of 9.43 y⁻¹ sturrock_sk2 ; sturrock_sk3 . Because solar rotation should not produce variations in the solar nuclear fusion rate, non-standard neutrino properties have been proposed as an explanation. For example, the coupling of a neutrino magnetic moment to rotating magnetic fields inside the Sun might cause solar neutrinos to transform into other flavors through a resonant spin flavor precession mechanism rsfp1 ; rsfp2 ; rsfp3 . Periodicities in the solar neutrino flux, if confirmed, could provide evidence for new neutrino physics beyond the commonly accepted picture of matter-enhanced oscillation of massive neutrinos.
This paper presents a search for periodicities in the data from the Sudbury Neutrino Observatory (SNO). SNO is a real-time, water Cherenkov detector located in the Inco, Ltd. Creighton nickel mine near Sudbury, Ontario, Canada sno_nim . SNO observes charged-current (CC) and neutral-current (NC) interactions of ⁸B neutrinos on deuterons in 1 ktonne of D₂O, as well as neutrino-electron elastic scattering (ES) interactions. By comparing the observed rates of CC, NC, and ES interactions, SNO has demonstrated that a substantial fraction of ⁸B electron neutrinos produced inside the Sun transform into other active neutrino flavors cc_prl ; d2o_prl ; dn_prl ; salt_prl ; nsp .
SNO’s combination of real-time detection, low backgrounds, and sensitivity to different neutrino flavors give it unique capabilities in a search for neutrino flux periodicities. Chief among these is the ability to do an unbinned analysis, in which the event times of individual neutrino events are used as inputs to a maximum likelihood fit.
This paper presents results from an unbinned maximum likelihood analysis and a more traditional Lomb-Scargle periodogram analysis for SNO's pure D₂O and salt phase data sets. Previous analyses of data from other experiments have used the Lomb-Scargle periodogram scargle and binned maximum likelihood techniques to search for periodicities in the solar neutrino data. These data generally consist of flux values measured in a number of time bins of unequal size. Because analyses of binned data can be sensitive to the choice of binning, which can also produce aliasing effects, it is desirable to avoid binning the data if possible. Section II describes the data sets. Section III contains the results of a general search for any periodicities with periods between 1 day and 10 years. Section IV presents limits on the amplitudes at two specific frequencies: the 9.43 y⁻¹ modulation of the ⁸B neutrino flux claimed by Sturrock et al. sturrock_sk2 ; sturrock_sk3 , and a yearly modulation due to the Earth's orbital eccentricity.
## II Description of the SNO data sets

The data included in these analyses consist of the selected neutrino events for the initial phase of SNO, in which the detector contained pure D₂O d2o_prl , and for SNO's salt phase, in which 2 tonnes of NaCl were added to the D₂O to increase the neutron detection efficiency for the NC reaction nsp . Each data set is divided into runs of varying length during which the detector was live for solar neutrino events. The D₂O data set consists of 559 runs starting on November 2, 1999, and spans a calendar period of 572.2 days during which the total neutrino livetime was 312.9 days. The salt phase of SNO started on July 26, 2001, 59.7 calendar days after the end of the pure D₂O phase of the experiment. The salt data set contains 1212 runs and spans a calendar period of 762.7 days during which the total neutrino livetime was 398.6 days. The intervals between runs during which SNO was not recording solar neutrino events correspond to run transitions, detector maintenance, calibration activities, periods when the detector was off, etc. Deadtime incurred within a run, mostly due to spallation cuts that remove events occurring within 20 seconds after a muon, can be neglected, since such deadtime is incurred randomly at average intervals much shorter than the periods of interest for this analysis. This deadtime is 2.1% for the D₂O data set and 1.8% for the salt data set.
The event selection for the data sets is similar to that in d2o_prl and nsp . Events were selected inside a reconstructed fiducial volume of cm and above an effective kinetic energy of MeV (D₂O) or MeV (salt). The salt data set contains 4722 events, as in nsp . During the salt analysis described in nsp a background of "event bursts", consisting of two or three neutron-like events occurring in a short time interval, was identified and removed with a cut that eliminated any event occurring within 50 ms of an otherwise acceptable candidate neutrino event. The source of these 11 bursts is not certain, but they may have been produced by atmospheric neutrino interactions. For this analysis a similar cut removing any event occurring within 150 ms of another event was applied to the D₂O data, reducing the number of selected events from 2928, as in d2o_prl , to 2924. The timing window for the cut in the D₂O data is longer than for the salt data to account for the longer neutron capture time in pure D₂O.
An important element of a periodicity analysis is exact knowledge of when each data-taking run began and ended. These run boundaries define the time exposure of the data set, which itself may induce frequency components that could impact a periodicity analysis. The unbinned maximum likelihood analysis described below makes explicit use of these run boundary times, and all Monte Carlo simulations are generated using the exact run boundaries, even if the simulated data are binned in a following analysis. These precautions avoid ad hoc assumptions about the distribution of the time exposure within any time bin. The time of each event was measured with a global positioning system (GPS) clock to a precision of ns, but rounded to 10 ms accuracy for the analysis. The run boundary times were determined from the times of the first and last events in each run with a precision of ms.
Figure 1 displays the solar neutrino event rate in livetime corrected 1-day bins over the total exposure time of both phases of SNO sno_data . The D₂O and salt data sets may be individually examined for periodicities, or the combined data from both phases can be jointly searched. It should be noted that the relative amounts of CC, NC, and ES events are different for the D₂O and salt data, with the salt data set containing a much higher fraction of NC events.
Although SNO's data sets are dominated by solar neutrino events, they also contain a small number of non-neutrino backgrounds, primarily neutrons produced through photodisintegration of deuterons by internal or external radioactivity. The total estimated number of background events is 123 for the D₂O data set (4.2% of the total rate), and 260 events for the salt data set (5.5% of the total rate). Although the background rate is not entirely constant, the backgrounds are small and stable enough that they can be neglected in this analysis.
## III General Periodicity Search

Both an unbinned maximum likelihood analysis and a Lomb-Scargle periodogram with 1-day binning were used to search SNO's data for periodicities. Results are presented below for the D₂O, salt, and combined data from each method, along with evaluations of the sensitivity of each method to sinusoidal variations of various periods and amplitudes. The periodicity searches were carried out over the sum of CC, NC, and ES events.
Extensive use was made of Monte Carlo data sets to evaluate the statistical significance of the results and the sensitivity of each method. To determine the statistical significance of any peak in the frequency spectrum, 10,000 Monte Carlo data sets with events generated randomly within the run boundaries for each phase were used, with mean event rates in each phase matching those observed in SNO's data sets. The number of events in each Monte Carlo data set was drawn from a Poisson distribution with the same average rate as the data, and the events were distributed uniformly within the run boundaries footnote1 . These "null-hypothesis" Monte Carlo data sets were used to determine the probability that a data set drawn from a constant rate distribution would produce a false positive detection of a periodicity. To determine the sensitivity of an analysis to a real periodicity, 1,000 Monte Carlo data sets were generated for each of several combinations of frequencies and amplitudes, with the events drawn from a time distribution of the sinusoidal form of Eq. (1). The sensitivity for any frequency and amplitude is then defined as the probability that the analysis will reject the null hypothesis of a constant rate at the 99% confidence level.
### III.1 Unbinned Maximum Likelihood Method
The unbinned maximum likelihood method tests the hypothesis that the observed events are drawn from a rate distribution given by
$$\phi(t) = N\,\bigl(1 + A\cos(2\pi f t + \delta)\bigr) \qquad (1)$$
relative to the hypothesis that they are drawn from a constant rate distribution ($A = 0$). $A$ is the fractional amplitude of the periodic variation about the mean, $\delta$ is a phase offset, and $N$ is a normalization constant for the rate. Equation 1 serves as the probability density function (PDF) for the observed event times, which are additionally constrained to occur only within run boundaries (i.e., $\phi(t) = 0$ if $t$ is not between the start and end times of any run).
With $f$ fixed, the maximum of the extended likelihood as a function of the individual event times is calculated for a data set as

$$\ln L(t_k \mid N, A, \delta, f) = -\sum_{j=1}^{\mathrm{runs}} \int_{t_j^i}^{t_j^f} \phi(t)\,dt \;+\; \sum_{k=1}^{\mathrm{events}} \ln\bigl(\phi(t_k)\bigr) \qquad (2)$$

where the first term is a sum over all runs of an integral evaluated between each run's start and stop times $t_j^i$ and $t_j^f$, and accounts for Poisson fluctuations in the signal amplitude. The second term is a sum over the events in the data set, and $t_k$ is the time of the $k$th event. The log likelihood is maximized as a function of $N$, $A$, and $\delta$ to yield $\ln L_{\max}$, while $f$ is kept fixed. Then the constraint $A = 0$ is imposed, removing the dependence of $\ln L$ on both $A$ and $\delta$, and the log likelihood is maximized over the remaining free parameter $N$ to yield $\ln L_0$. By the likelihood ratio theorem likelihood the difference $2(\ln L_{\max} - \ln L_0)$ will approximately have a $\chi^2$ distribution with two degrees of freedom (since the choice $A = 0$ also removes the dependence on the phase $\delta$). Thus $Z \equiv \ln L_{\max} - \ln L_0$ will follow a simple exponential if the true value of $A$ is zero. Therefore, at any single frequency $f$, the probability of observing $Z \ge z$ under the null hypothesis that $A = 0$ is approximately $e^{-z}$. This null hypothesis test is carried out for a large set of frequencies scanning the region of interest.

Equation 2 includes both a floating offset $N$ and an amplitude $A$ as free parameters. Allowing both of these parameters to vary is necessary to deal with very low frequencies, for which $N$ and $A$ become degenerate parameters. Simply fixing $N$ to the mean rate, as was done in sturrock_sk3 , will be prone to bias at the very lowest frequencies, but gives virtually identical results to the floating offset procedure when the length of the data set is longer than the period $1/f$, since in this case enough cycles are sampled to break the degeneracy between $N$ and $A$.
Equations 1 and 2 are adequate to test for periodicity in a single data set, but for a combined analysis of SNO's D₂O and salt data sets, account must be taken of the differing mean rates owing to different detection efficiencies and energy thresholds in the two phases. This can be done by generalizing to:
$$\phi(t) = N_{\mathrm{D_2O}}\,\bigl(1 + A\cos(2\pi f t + \delta)\bigr), \quad \text{if } t \in \mathrm{D_2O\ run}$$
$$\phi(t) = N_{\mathrm{salt}}\,\bigl(1 + A\cos(2\pi f t + \delta)\bigr), \quad \text{if } t \in \text{salt run}$$
This PDF allows different normalization constants for the two data sets, while retaining the assumption that the flux variation has the same fractional amplitude in both the D₂O and salt data.
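To make Eq. (2) concrete, here is a minimal numerical sketch for a single test frequency. It is illustrative only: the function names, optimizer choice, and starting values are ours, and the real analysis scans thousands of frequencies with the two-phase normalization above.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, f, event_times, run_starts, run_ends):
    """-ln L of Eq. (2) for a fixed test frequency f."""
    N, A, delta = params
    if N <= 0 or not (-1.0 < A < 1.0):
        return np.inf
    w = 2.0 * np.pi * f
    # First term: integral of phi(t) = N(1 + A cos(w t + delta)) over each run,
    # which the sinusoid admits in closed form
    integral = np.sum(N * ((run_ends - run_starts)
                           + (A / w) * (np.sin(w * run_ends + delta)
                                        - np.sin(w * run_starts + delta))))
    # Second term: sum of ln phi(t_k) over the observed event times
    log_phi = np.log(N * (1.0 + A * np.cos(w * event_times + delta)))
    return integral - np.sum(log_phi)

def delta_lnL(f, event_times, run_starts, run_ends, rate_guess):
    """Z = ln L_max - ln L_0 at frequency f (larger Z is more signal-like)."""
    free = minimize(neg_log_likelihood, x0=[rate_guess, 0.05, 0.0],
                    args=(f, event_times, run_starts, run_ends),
                    method="Nelder-Mead")
    null = minimize(lambda p: neg_log_likelihood([p[0], 0.0, 0.0], f, event_times,
                                                 run_starts, run_ends),
                    x0=[rate_guess], method="Nelder-Mead")
    return null.fun - free.fun
```

The closed-form integral over each run is what makes the unbinned likelihood cheap to evaluate at many frequencies.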
#### III.1.1 Results for the SNO data sets

Figure 2 shows $Z$ as a function of frequency for the D₂O, salt, and combined data sets at 3650 frequencies with periods ranging from 10 years down to 1 day, with a sampling interval of . This corresponds to an oversampling of the number of independent Fourier frequencies for continuous data by a factor of approximately 5-6 for the separate D₂O and salt data sets, and a factor of 2.6 for the combined data set. The maximum value of $Z$ for the D₂O data set is at a period of 3.50 days ( days). The largest peak found in the salt data has a height of at a period of 1.03 days ( days), while the combined data set has its largest peak of at a period of 2.40 days ( days).
Figure 3 shows the distribution of maximum peak heights for 10,000 Monte Carlo data sets generated with no periodicity and analyzed identically to SNO's combined data set. Distributions for the D₂O and salt Monte Carlo data sets analyzed individually look similar. Of the 10,000 simulated data sets, 35% yielded at least one peak with , exceeding the largest peak seen in SNO's combined data set. For the D₂O Monte Carlo data sets, 72% had a peak larger than the observed largest peak of , while 14% of the salt Monte Carlo data sets yielded a peak larger than the peak seen in the data. Therefore, none of the observed peaks are statistically significant.
Under the null hypothesis of no time variability, the probability of any individual frequency having $Z$ smaller than some threshold is approximately . If all 3650 scanned frequencies were statistically independent, the probability that all peaks would be smaller than would be . However, the 3650 scanned frequencies are not strictly independent, since a finite data set has limited frequency resolution, and neighboring frequencies are correlated. If $F$ is the effective number of independent frequencies, then the probability distribution for the height of the largest peak approximately follows
$$P(Z)\,dZ \propto e^{-Z}\,\bigl(1 - e^{-Z}\bigr)^{F-1}\,dZ \qquad (3)$$
The effective number of independent frequencies increases with the length of the data set and number of detected events. Fitting the Monte Carlo distributions for to this equation yields for the D₂O, for the salt, and for the combined data set. Figure 3 shows this fit for the combined analysis. These values are consistent with expectations based on the oversampling factors described in Section III.1.1 orford . Although Equation 3 appears to model well, quoted significance levels are always determined directly from the Monte Carlo distributions and not from the analytic formula. To ensure that no significant peaks were missed, the combined analysis of the actual data (but not the Monte Carlo data sets) was repeated with the sampling increased by a factor of five. No new peaks were found.
#### III.1.2 Sensitivity to sinusoidal periodicities

Distributions of the maximum peak height for Monte Carlo data sets, such as in Figure 3, readily yield the threshold for which 99% of Monte Carlo data sets generated without periodicity would yield a maximum peak height of or less. This threshold defines the peak height at which the null hypothesis of no time variation is rejected at the 99% confidence level, and equals 12.10, 12.20, and 12.65 for the D₂O, salt, and combined data sets respectively.
Monte Carlo data sets drawn from rate distributions with sinusoidal periodicities of various periods and amplitudes were analyzed to determine the probability of rejecting the null hypothesis at the 99% C.L.
Figure 4 shows the amplitudes as a function of period at which the method has a 50% (90%) probability of rejecting the null hypothesis, for simulations of SNO’s combined data set. While the sensitivity varies as a function of period, a signal must have an amplitude of approximately 8% to be discovered 50% of the time.
### III.2 The Lomb-Scargle periodogram
The Lomb-Scargle periodogram is a method for searching unevenly sampled data for sinusoidal periodicities scargle and provides an alternative to the unbinned maximum likelihood technique described above.
The Lomb-Scargle power at frequency $f$ is calculated from the measured flux values $y(t_i)$ in $N$ independent time bins as:

$$P(f) = \frac{1}{2\sigma^2}\left( \frac{\Bigl[\sum_{i=1}^{N} w_i\,(y(t_i)-\bar y)\cos\bigl(2\pi f(t_i-\tau)\bigr)\Bigr]^2}{\sum_{i=1}^{N} w_i\cos^2\bigl(2\pi f(t_i-\tau)\bigr)} + \frac{\Bigl[\sum_{i=1}^{N} w_i\,(y(t_i)-\bar y)\sin\bigl(2\pi f(t_i-\tau)\bigr)\Bigr]^2}{\sum_{i=1}^{N} w_i\sin^2\bigl(2\pi f(t_i-\tau)\bigr)} \right) \qquad (4)$$

where the phase factor $\tau$ satisfies:

$$\tan(4\pi f\tau) = \frac{\sum_{i=1}^{N} w_i\sin(4\pi f t_i)}{\sum_{i=1}^{N} w_i\cos(4\pi f t_i)}$$
Each bin is weighted in proportion to the inverse of its squared uncertainty divided by the average value of the inverse of the squared uncertainty (so that the mean weight is unity), as in sturrock_sk3 . In Equation 4, $t_i$ is the livetime-weighted mean time for the $i$th bin, and $\bar y$ and $\sigma^2$ are the weighted mean and weighted variance of the data for all the bins, calculated with the weighting factors $w_i$.
Like the maximum likelihood method, the power in the Lomb-Scargle periodogram at any single frequency is expected to approximately follow an exponential distribution if the data set is drawn from a constant rate distribution. The same methods of evaluating the significance of the largest peak and the sensitivity of the method to periodic signals can be employed, making use of large numbers of Monte Carlo data sets.
In sk the Super-Kamiokande collaboration used an unweighted Lomb-Scargle periodogram ($w_i = 1$) to search its data set for periodicities, a choice that was criticized in sturrock_sk3 . The analysis presented here used the weighted Lomb-Scargle periodogram.
For the Lomb-Scargle method SNO's recorded events were binned in 1-day intervals (see Figure 1), and the livetime, the livetime-weighted mean time $t_i$, and the event rate $y(t_i)$ were calculated for each bin. To prevent biases stemming from the assumption of Gaussian statistics, any bin in which fewer than five events would be expected based upon that bin's livetime and the mean event rate was combined with the following bin(s) so that the expected number of events in all bins was greater than five. The uncertainty on the rate in each bin was taken to be the square root of the expected number of events in that bin for a constant rate. This calculation of the uncertainty is appropriate if one views the Lomb-Scargle method as a null hypothesis test of the no-periodicity hypothesis; however, using the observed number of events instead to calculate the uncertainty does not change the conclusions of this study.
When doing a combined D₂O + salt analysis one must account for the different mean event rates in the two phases. In the Lomb-Scargle analysis this was accomplished by scaling the rates and uncertainties on the salt data bins by the ratio of the weighted mean D₂O rate to the weighted mean salt rate.
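Eq. (4) translates almost line-for-line into code; the following sketch (our variable names, not SNO's production code) evaluates the weighted power at a single frequency:

```python
import numpy as np

def weighted_lomb_scargle(t, y, w, f):
    """Weighted Lomb-Scargle power of Eq. (4) at a single frequency f."""
    w = w / np.mean(w)                          # normalize so the mean weight is 1
    ybar = np.average(y, weights=w)             # weighted mean rate
    var = np.average((y - ybar) ** 2, weights=w)  # weighted variance
    # Phase factor tau from tan(4 pi f tau) = sum(w sin) / sum(w cos)
    tau = np.arctan2(np.sum(w * np.sin(4 * np.pi * f * t)),
                     np.sum(w * np.cos(4 * np.pi * f * t))) / (4 * np.pi * f)
    c = np.cos(2 * np.pi * f * (t - tau))
    s = np.sin(2 * np.pi * f * (t - tau))
    r = y - ybar
    return (np.sum(w * r * c) ** 2 / np.sum(w * c * c)
            + np.sum(w * r * s) ** 2 / np.sum(w * s * s)) / (2 * var)
```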
#### III.2.1 Results for the SNO data sets

Figure 5 shows the Lomb-Scargle periodograms for the D₂O, salt, and combined data sets. A total of 7300 frequencies were tested with periods ranging from 10 years to 2 days, with a sampling step of footnote2 . Because the data were binned in one-day intervals, the analysis was restricted to frequencies less than 0.5 day⁻¹ to avoid potential binning effects. The maximum peak height for the D₂O data set is at a period of 2.45 days ( days). The largest salt peak has a height of at a period of 2.33 days ( days), while the combined data set has its largest peak of at a period of 2.42 days ( days).

The probability of observing a larger peak than that actually seen in the Lomb-Scargle periodogram, if the rate were constant, was estimated using the previously described 10,000 Monte Carlo data sets having no periodicity. Under the null hypothesis of no time variability, the probability of getting a peak larger than the biggest peak seen in the Lomb-Scargle periodogram is 46% for the D₂O data set, 65% for the salt data set, and 27% for the combined data sets. As with the unbinned maximum likelihood method, no evidence for time variability is seen.
#### III.2.2 Sensitivity to sinusoidal periodicities

Monte Carlo data sets generated with sinusoidal periodicities were used to estimate the sensitivity of the Lomb-Scargle method to signals of various periods and amplitudes. Figure 4 shows the amplitudes as a function of frequency at which the analysis would detect the signal 50% or 90% of the time. In each case the signal is considered to be detected if the Lomb-Scargle method rejects the null hypothesis of a constant rate at the 99% confidence level. The threshold for rejecting the null hypothesis at the 99% C.L. is for the D₂O data, for the salt data, and for the combined analysis. Figure 6 shows example maximum power distributions for a 20-day period with amplitudes of 0, 10, 15, 20, and 25% for the combined analysis.
#### III.2.3 Systematic checks
Many checks of the Lomb-Scargle periodogram were made to verify that the results are robust. In particular, all data and Monte Carlo results were recomputed for (a) a range of bin sizes, from 1-day to 5-days in fractional day steps, (b) a range of starting times of the first bin in fractional day steps, and (c) different values of the frequency sampling step. There was no evidence for time variability under any of these scenarios.
## IV Limits at Specific Frequencies of Interest

The sensitivity calculations in Sections III.1.2 and III.2.2 are appropriate when the frequency of the signal is not known a priori, and could occur anywhere in the frequency search band. The threshold for claiming a detection at the 99% C.L. must accordingly be set relatively high to reduce the false alarm probability, which was found from Monte Carlo simulations but is approximately given by integrating Equation 3 above the detection threshold, to . However, if the frequency of interest is specified a priori, then a more restrictive and sensitive test can be done using the fitted amplitude at that frequency. Two particular frequencies of interest are the 7% variation in the Super-Kamiokande data at a frequency of 9.43 y⁻¹ claimed by Sturrock et al. sturrock_sk3 , and the annual modulation of the neutrino flux by the Earth's orbital eccentricity.
### IV.1 Test at f = 9.43 y⁻¹

Sturrock et al. have claimed evidence for a periodicity in Super-Kamiokande's neutrino data at a frequency of 9.43 y⁻¹ (0.0258 day⁻¹) with an amplitude of 7% sturrock_sk3 . Examination of SNO's unbinned maximum likelihood results in the interval from 9.33-9.53 y⁻¹ yielded no value larger than in either the D₂O, the salt, or the combined data sets sk_comparison . The best-fit amplitude for the combined data set inside this frequency interval is . This disagrees with a 7% amplitude periodicity in the ⁸B neutrino flux by 3.6 sigma. It must be remarked that SNO's limit applies to a modulation of the summed rates of CC, ES, and NC events above their respective energy thresholds, whereas the reported 7% periodicity in the Super-Kamiokande data is a modulation of the elastic scattering rate from ⁸B neutrinos above a total electron energy threshold of 5 MeV. The best-fit amplitudes for the D₂O and salt data sets are and respectively.
### IV.2 Eccentricity Result

The Earth's orbital eccentricity is expected to produce a rate variation proportional, in excellent approximation, to $1 + 2e\cos\bigl(2\pi f_y(t - t_p)\bigr)$, where $e$ is the eccentricity of the orbit, $f_y$ is the Earth's orbital frequency, and $t_p$ is the time of perihelion. Maximum sensitivity to this effect is obtained if $f_y$ and $t_p$ are fixed to their known values and the combined data sets are fit for $e$ only. This has been implemented using the unbinned maximum likelihood technique. The best-fit eccentricity is , in good agreement with the expected value. The difference in the log likelihoods for the best fit compared to $e = 0$ is 1.394. The probability of obtaining a larger value of the log likelihood difference if $e = 0$ is 9.5%. Figure 7 displays the relative event rate for the combined data as a function of the time since perihelion.
## V Conclusions
Data from SNO's D₂O and salt phases have been examined for time periodicities using an unbinned maximum likelihood method and the Lomb-Scargle periodogram. No evidence for any sinusoidal variation is seen in either data set or in a combined analysis of the two data sets. This general search for sinusoidal variations with periods between 1 day and 10 years has significant sensitivity to periodicities with amplitudes larger than . The best-fit amplitude for a sinusoidal variation in the total ⁸B neutrino flux at a frequency of 9.43 y⁻¹ is %, which is inconsistent with the hypothesis that the results of the recent analysis by Sturrock et al. sturrock_sk3 , based on elastic scattering events in Super-Kamiokande, can be attributed to a 7% modulation of the ⁸B neutrino flux. A fit for the eccentricity of the Earth's orbit from the modulation at a period of one year yields , in good agreement with the known value of 0.0167.
## Acknowledgments
This research was supported by: Canada: Natural Sciences and Engineering Research Council, Industry Canada, National Research Council, Northern Ontario Heritage Fund, Atomic Energy of Canada, Ltd., Ontario Power Generation, High Performance Computing Virtual Laboratory, Canada Foundation for Innovation; US: Dept. of Energy, National Energy Research Scientific Computing Center; UK: Particle Physics and Astronomy Research Council. This research has been enabled by the use of WestGrid computing resources, which are funded in part by the Canada Foundation for Innovation, Alberta Innovation and Science, BC Advanced Education, and the participating research institutions. WestGrid equipment is provided by IBM, Hewlett Packard and SGI. We thank the SNO technical staff for their strong contributions. We thank Inco, Ltd. for hosting this project.
https://derive-it.com/tag/integration/ | Integration
## How to Integrate in a Spherical Coordinate System
Review of Integration — Integration with Cartesian coordinates is simple. The general form is $\iiint f(x,y,z)\,dx\,dy\,dz$, in which […]
https://www.raucci.net/tag/collatz/ | ## Posts Tagged ‘collatz’
### A (apparently) simple problem …
Thursday, August 26th, 2021
When I say this is a simple mathematical problem, most would think that I am kidding, but I am not. There are many unsolved mathematical problems in the world, but this one is remarkably simple and yet unsolved.
The problem is called the "3n + 1 problem" or "Collatz conjecture". To understand it, first pick a natural number: if the number is odd, multiply it by 3 and add 1; if the number is even, divide it by 2. We then apply the same conditions to the resulting value. In other words:

In modular arithmetic notation, define the function f as follows:

$$f(n) = \begin{cases} n/2 & \text{if } n \equiv 0 \pmod{2} \\ 3n+1 & \text{if } n \equiv 1 \pmod{2} \end{cases}$$

Now form a sequence by performing this operation repeatedly, beginning with any positive integer, and taking the result at each step as the input at the next.
The Collatz conjecture is: This process will eventually reach the number 1, regardless of which positive integer is chosen initially.
To demonstrate the problem, let's consider the number 5; since it is odd, we apply 3n + 1:
$$3 \cdot 5 + 1 = 16\left( {{\text{even}}} \right);16/2 = 8;8/2 = 4;4/2 = 2;2/2 = 1$$
So we get a value of one, but if we apply the conditions further we will be stuck in the loop 4, 2, 1.
If the conjecture is false, it can only be because there is some starting number which gives rise to a sequence that does not contain 1. Such a sequence would either enter a repeating cycle that excludes 1, or increase without bound. No such sequence has been found.
Since I am from engineering background here is the Python code for Collatz conjecture:
```python
def collatz(n):
    while n > 1:
        print(n, end=' ')
        if n % 2:
            # n is odd
            n = 3*n + 1
        else:
            # n is even
            n = n // 2
    print(1, end='')

n = int(input('Enter n: '))
print('Sequence: ', end='')
collatz(n)
```
The above code is a demonstration of the Collatz conjecture…
https://fr.maplesoft.com/support/help/Maple/view.aspx?path=Groebner%2FLeadingTerm | Groebner
Compute the leading term of a polynomial
Compute the leading monomial of a polynomial
Compute the leading coefficient of a polynomial
TrailingTerm
Compute the trailing term of a polynomial
Parameters
f - polynomial or list or set of polynomials T - MonomialOrder or ShortMonomialOrder J - PolynomialIdeal tord - ShortMonomialOrder
Description
• The LeadingTerm command computes the largest (or leading) term of a polynomial f with respect to the monomial order T and returns the sequence (leading coefficient, leading monomial). If T is a ShortMonomialOrder then f must be a polynomial in the ring implied by T. If T is a MonomialOrder created with the Groebner[MonomialOrder] command, then f must be a member of the algebra used to define T. The LeadingTerm command automatically maps onto lists and sets.
• The LeadingMonomial and LeadingCoefficient commands behave identically to LeadingTerm, but return only leading monomials or coefficients, respectively. The LeadingMonomial command has an additional syntax, LeadingMonomial(J, tord) computes the ideal of leading monomials for a PolynomialIdeal J. This typically requires the computation of a Groebner basis.
• The TrailingTerm command is identical to LeadingTerm, except the smallest (or trailing) term of a polynomial f is computed. It returns the sequence (trailing coefficient, trailing monomial).
• To compare or sort monomials with respect to a monomial order, use the TestOrder command. For a description of the monomial orders that are available in Maple, see the Monomial Orders help page.
• Note that the leadcoeff, leadterm, and leadmon commands have been superseded by LeadingCoefficient, LeadingMonomial, and LeadingTerm, respectively. (Warning: the notions of monomials and terms were interchanged; see Groebner[terminology] for details). The lowercase commands may not be supported in a future Maple release.
Examples
> with(Groebner):
> p := -18*x*y^5*z - 96*x*y^4*z^2 + 9*x*y^4 - 592*x*y^3*z + 45*y^5 + 240*y^4*z + 320*x*y^2 + 1600*y^3;
    p := -18*x*y^5*z - 96*x*y^4*z^2 + 9*x*y^4 - 592*x*y^3*z + 45*y^5 + 240*y^4*z + 320*x*y^2 + 1600*y^3    (1)
> LeadingTerm(p, plex(x, y));
    -18*z, x*y^5    (2)
> LeadingTerm(p, plex(x, y, z));
    -18, x*y^5*z    (3)
> TrailingTerm(p, plex(x, y, z));
    1600, y^3    (4)
> LeadingTerm(p, plex(z, y, x));
    -96, x*y^4*z^2    (5)
> LeadingCoefficient(p, plex(z, y, x));
    -96    (6)
> LeadingMonomial(p, plex(z, y, x));
    x*y^4*z^2    (7)
> P := 5*x^2 + y + z^2:
> Q := 3*x*y - 1:
> LeadingMonomial([P, Q], tdeg(x, y, z));
    [x^2, x*y]    (8)
> with(PolynomialIdeals):
> LeadingMonomial(<P, Q>, tdeg(x, y, z));
    <x^2, x*y, y*z^2>    (9)
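For readers without Maple, sympy provides analogous helpers (LT, LM, LC). A rough sketch of example (3) above, assuming sympy's order='lex' with generators x, y, z matches Maple's plex(x, y, z):

```python
from sympy import symbols, LT, LM, LC

x, y, z = symbols('x y z')
p = (-18*x*y**5*z - 96*x*y**4*z**2 + 9*x*y**4 - 592*x*y**3*z
     + 45*y**5 + 240*y**4*z + 320*x*y**2 + 1600*y**3)

print(LT(p, x, y, z, order='lex'))  # expected: -18*x*y**5*z
print(LM(p, x, y, z, order='lex'))  # expected: x*y**5*z
print(LC(p, x, y, z, order='lex'))  # expected: -18
```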
https://tug.org/pipermail/texworks/2009q3/001514.html | # [texworks] Window Show Tags and Window Show Contents
Wed Sep 9 15:20:53 CEST 2009
On 9 Sep 2009, at 14:08, Bruno Voisin wrote:
> Is there a way to merge the two lists that appear in the Tags pane,
> namely
>
> - Bookmarks for the tags added manually using the above syntax, and
>
> - Outline for the tags identified automatically by TeXworks based on
> the LaTeX markup \section etc.
As my previous message might suggest, it's not clear to me how this
would work unless you either (a) specify where in the document
hierarchy the "%:" bookmarks belong, or (b) flatten the "outline" to a
simple list of bookmarks.
You could easily do the latter by changing the <level> of all the
LaTeX sectioning commands to zero (in configuration/tag-patterns.txt),
but it seems to me that it would be much less useful in that form. To
do the former, you'd need to either decide that all your bookmarks
belong at a certain level (which, it seems to me, would tend to
disrupt the "real" outline unless you're very careful where you use
them -- which kind of defeats the point of being able to add them
wherever you wish), or else define distinct tags for each level and
use them appropriately (tricky to get right).
Now that I think about it, there's another possibility (not currently
supported): perhaps we could say that a bookmark comment with level -1
means that the tag gets inserted into the outline at whatever the
current level is, so it doesn't break up the structure. (Or maybe zero
should mean that, and -1 goes to the separate list.) Hmmm, something
http://tex.stackexchange.com/questions/9681/how-to-draw-venn-diagrams-especially-complements-in-latex/9809 | # How to draw Venn diagrams (especially: complements) in LaTeX
What I am up to is to write some exercises dealing with logical formulas for my students, like:
And the students should draw these formulas on Venn diagrams. At the end of the lesson, I really would like to print the correct answer for them. I found a great resource on a forum thread at latex-community.org, which helped me a lot to make up some Venn diagrams with tikz, but have some problems with visualizing complements, like ~A.
A simple, modified version of the TeX file found on the forum linked above can be seen below; it produces the following expression:
\documentclass{letter}
\usepackage{tikz}
\def\firstcircle{(90:1.75cm) circle (2.5cm)}
\def\secondcircle{(210:1.75cm) circle (2.5cm)}
\def\thirdcircle{(330:1.75cm) circle (2.5cm)}
\begin{document}
\begin{tikzpicture}
\begin{scope}
\clip \secondcircle;
\fill[cyan] \thirdcircle;
\end{scope}
\begin{scope}
\clip \firstcircle;
\fill[cyan] \thirdcircle;
\end{scope}
\draw \firstcircle node[text=black,above] {$A$};
\draw \secondcircle node [text=black,below left] {$B$};
\draw \thirdcircle node [text=black,below right] {$C$};
\end{tikzpicture}
\end{document}
Which looks like:
Could anyone please help me out plotting/defining some expressions dealing with complements? A nice example could be:
That should look like: (image from Wikipedia)
I do not insist on the red color :)
I would like to use the simplest possible solution, as I would like to mass generate the exercises with the help of R. So any suggestion dealing with gnuplot, R or any other opensource packages is welcome. Thank you!
Thank you @Leo Liu, you helped me a lot! I modified a bit the code you suggested to be able to color the area outside of the two circles also (in the H universe), but have no idea how to set a background to that polygon also. The code:
\begin{tikzpicture}[fill=gray]
% left hand
\scope
\clip (-2,-2) rectangle (2,2)
(1,0) circle (1);
\fill (0,0) circle (1);
\endscope
% right hand
\scope
\clip (-2,-2) rectangle (2,2)
(0,0) circle (1);
\fill (1,0) circle (1);
\endscope
% outline
\draw (0,0) circle (1) (0,1) node [text=black,above] {$A$}
(1,0) circle (1) (1,1) node [text=black,above] {$B$}
(-2,-2) rectangle (3,2) node [text=black,above] {$H$};
\end{tikzpicture}
And the image generated:
I will also look for even odd rule in the near future which does not make sense for me at the moment but looks really simple and promising!
-
Not super relevant for drawing complements, but for venn diagrams in tikz you could check out: texample.net/tikz/examples/venn-diagram – Seamus Jan 25 '11 at 19:58
Thank you @Seamus! – daroczig Jan 25 '11 at 21:19
Although a lot of time has passed, someone who wander here might also find this link helpful. – quapka Apr 19 '14 at 15:42
@quapka, I did. Thanks. – CroCo Oct 11 '15 at 3:43
There are several ways to draw Venn diagrams. The simplest for $\overline{A \cap B}$ may be:
\tikz \fill[even odd rule] (0,0) circle (1) (1,0) circle (1);
The key to this question is even odd rule in TikZ (based on PostScript and PDF).
Moreover, you can also use \clip to fill the complement of a set, without using even odd rule:
\begin{tikzpicture}[fill=gray]
% left hand
\scope
\clip (-2,-2) rectangle (2,2)
(1,0) circle (1);
\fill (0,0) circle (1);
\endscope
% right hand
\scope
\clip (-2,-2) rectangle (2,2)
(0,0) circle (1);
\fill (1,0) circle (1);
\endscope
% outline
\draw (0,0) circle (1)
(1,0) circle (1);
\end{tikzpicture}
Here we find that TikZ lacks an \unfill command like the one provided by MetaPost, so we must use an extra rectangle to clip the path.
For updated question:
Well, I must say that this will be easier, if you fill $A \cap B$ with white color:
\begin{tikzpicture}
\filldraw[fill=gray] (-2,-2) rectangle (3,2);
\scope % A \cap B
\clip (0,0) circle (1);
\fill[white] (1,0) circle (1);
\endscope
% outline
\draw (0,0) circle (1)
(1,0) circle (1);
\end{tikzpicture}
However, it is not so easy to fill such an area using clipping (warning: it's somewhat tricky, only for fun):
\begin{tikzpicture}[fill=gray]
% left hand
\scope
\clip (-2,-2) rectangle (0.5,2)
(1,0) circle (1);
\clip (-2,-2) rectangle (0.5,2);
\fill (-2,-2) rectangle (3,2);
\endscope
% right hand
\scope
\clip (0.5,-2) rectangle (3,2)
(0,0) circle (1);
\clip (0.5,-2) rectangle (3,2);
\fill (-2,-2) rectangle (3,2);
\endscope
% outline
\draw (-2,-2) rectangle (3,2);
\draw (0,0) circle (1)
(1,0) circle (1);
\end{tikzpicture}
Hints:
• The result using multiple path in one \clip command depends on the direction of the path.
• Use another \clip again to get rid of the half circle being filled.
-
Thank you @Leo Liu, that really looks simple and perfect! I need some time to read about this solution and understand the syntax, I will be back to accept the answer after some experiment! – daroczig Jan 25 '11 at 17:30
I have just realized, that the example is not correct, as the area outside of both circles also should be colored. I added more details to my question (with example image). – daroczig Jan 25 '11 at 18:13
@daroczig: I updated the answers, one is easy to use and the other using clipping is more powerful. – Leo Liu Jan 25 '11 at 19:59
thank you for your detailed example, I have learned a lot! – daroczig Jan 25 '11 at 21:19
An example for Venn diagrams with transparency by Till Tantau and Kjell Magne Fauske, from the TikZ Example gallery:
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{shapes,backgrounds}
\begin{document}
\pagestyle{empty}
\def\firstcircle{(0,0) circle (1.5cm)}
\def\secondcircle{(60:2cm) circle (1.5cm)}
\def\thirdcircle{(0:2cm) circle (1.5cm)}
\begin{tikzpicture}
\begin{scope}[shift={(3cm,-5cm)}, fill opacity=0.5]
\fill[red] \firstcircle;
\fill[green] \secondcircle;
\fill[blue] \thirdcircle;
\draw \firstcircle node[below] {$A$};
\draw \secondcircle node [above] {$B$};
\draw \thirdcircle node [below] {$C$};
\end{scope}
\end{tikzpicture}
\end{document}
-
Thank you @Stefan for posting (+1), this is really great indeed. – daroczig Aug 28 '11 at 15:27
How are the color values combined in the overlap? It doesn't seem like red+green+blue should make deep blue. – Trevor Alexander Jan 15 '14 at 2:32
@Trevor Alexander: I guess that is because of the order -first red is mixed with the white backgrund, and then green is put on top of that. Finally blue is mixed on top of that color to give the deep blue. – Hans-Peter E. Kristiansen May 5 '15 at 2:37
run it with xelatex if you need a pdf
\documentclass{minimal}
\usepackage{pstricks}
\begin{document}
\begin{pspicture}(6,4)
\psset{linewidth=1.5pt}
\psframe[fillcolor=red!30,fillstyle=solid](6,4)
\psclip{\pscircle(2,2){1.5}}
\pscircle[fillcolor=white,fillstyle=solid](4,2){1.5}
\endpsclip
\pscircle(4,2){1.5}\pscircle(2,2){1.5}
\end{pspicture}
\end{document}
-
Thank you very much @Herbert! I am not familiar with pstricks and XeTeX yet, but these look promising. I will look after based on your suggestion. – daroczig Jan 25 '11 at 21:21
A solution with the new package tkz-euclide ( based on TikZ )
\documentclass{scrartcl}
\usepackage[usenames,dvipsnames]{xcolor}
\usepackage{tkz-euclide}
\usetkzobj{all}
\definecolor{fondpaille}{cmyk}{0,0,0.1,0}
\color{Maroon}
\tkzSetUpColors[background=fondpaille,text=Maroon]
\begin{document}
\begin{tikzpicture}
\tkzDefPoint(0,0){A}
\tkzDefPoint(4,0){B}
\begin{scope}
\tkzClipCircle(A,B) \tkzClipCircle(B,A)
\tkzFillCircle[color=blue!20](A,B)
\end{scope}
\tkzDrawCircle(A,B)
\tkzDrawCircle(B,A)
\end{tikzpicture}
\end{document}
-
there is a problem with the linewidth in the clipped area, you have to choose another order, first clipping, then drawing – Herbert Jan 27 '11 at 16:58
Yes Herbert, you are right ! I need to (re) draw the arcs or I need to use a scope. – Alain Matthes Jan 27 '11 at 17:04
and by the way, the optional argument usenames for xcolor is obsolete for years ... – Herbert Jan 27 '11 at 19:28
@ Herbert Why ? I have this option in the xcolor documentation and I use some color's names from this option. I never see a warning with this option but I suppose that you are right. Where can I find this information ? – Alain Matthes Jan 28 '11 at 6:26
thank you for this alternate solution which looks promising and simple also! – daroczig Jan 30 '11 at 13:10
If you use MetaPost or Asymptote, there will be a different method: buildcycle.
For example, Asymptote:
size(200);
defaultpen(black+1);
pair A = (0,0), B = (1,0);
path inter = buildcycle(arc(A,1,-90,90), arc(B, 1,90,270));
path outer = box((-2,-2), (3,2));
fill(outer, mediumgray); unfill(inter);
// or use:
// fill(outer ^^ inter, evenodd+mediumgray);
draw(outer ^^ circle(A,1) ^^ circle(B,1));
And you don't have to use a language to draw Venn diagrams. Inkscape can also deal with them.
-
@Leo Liu: and thanks also for this alternate solution! – daroczig Jan 25 '11 at 21:20
@Leo How do you manage fonts with Inkscape? I think it's not a good solution with LaTeX. The best and most well-integrated solutions are pstricks and tikz. Tools like geogebra or texgraph are fine, but you cannot adjust the size of your figures automatically in conjunction with the text. – Alain Matthes Jan 27 '11 at 16:59
@Altermundus: There are some extensions to enable Inkscape to use LaTeX labels. See wiki.inkscape.org/wiki/index.php/ExtensionsRepository. – Leo Liu Jan 27 '11 at 19:07
@Leo (+1) The venn.mp macro might be useful too. – chl Jan 27 '11 at 20:24
@Leo And for Asymptote, see e.g. asymptote.sourceforge.net/doc/LaTeX-usage.html. – chl Jan 27 '11 at 20:27
User defined constants:
\const{HalfCanvas}{1.5}% half of canvas width or height
\const{InitAngleD}{30}% initial angle
\documentclass{beamer}
\usepackage[nomessages]{fp}
\def\const#1#2{%
\expandafter\FPeval\csname#1\endcsname{round(#2:3)}%
\pstVerb{/#1 \csname#1\endcsname\space def}%
}
\usepackage[active,tightpage]{preview}
\PreviewBorder=12pt
\PreviewEnvironment{pspicture}
\def\init{%
%
% user defined constants
\const{HalfCanvas}{1.5}% half of canvas width or height
\const{InitAngleD}{30}% initial angle
%
% internal used constants
\const{AngleBD}{InitAngleD+120}%
\const{AngleCD}{InitAngleD+240}%
}
\def\hold{\psgrid\pause}
\begin{document}
\begin{frame}
\begin{pspicture}(-3,-3)(3,3)
\init
\pnode(\MainR;\AngleBD){B}
\pnode(\MainR;\AngleCD){C}
\psset{linestyle=none,fillstyle=solid,style=gridstyle,opacity=0.999}\hold
\pscircle[fillcolor=blue](C){!ChildR}\hold
\begin{psclip}{\pscircle[fillcolor=green](B){!ChildR}}
\pscircle[fillcolor=red](C){!ChildR}
\end{psclip}\hold
\begin{psclip}{\pscircle[fillcolor=red](A){!ChildR}}
\pscircle[fillcolor=blue](B){!ChildR}
\end{psclip}\hold
\begin{psclip}{\pscircle[fillstyle=none](A){!ChildR}}
\pscircle[fillcolor=green](C){!ChildR}
\end{psclip}\hold
\psset{fillstyle=none}
\begin{psclip}{\pscircle(A){!ChildR}\pscircle(C){!ChildR}}
\pscircle[fillstyle=solid,fillcolor=white](B){!ChildR}
\end{psclip}\hold
\psset{linestyle=solid}
\pscircle(A){!ChildR}\hold
\pscircle(B){!ChildR}\hold
\pscircle(C){!ChildR}
\end{pspicture}
\end{frame}
\end{document}
-
This looks pretty cool :) Thanks a lot! – daroczig Jul 24 '12 at 12:40
There's a venn package on CTAN:
"Creating Venn diagrams with MetaPost."
draw_venn_two(true,false,true,false) shifted (2in,1in);
draws a diagram with the outer box shaded, with the first circle but not the second unshaded, with the intersection of the two shaded, and with the second circle but not the first unshaded. That is, this is a picture of the complement of the symmetric difference of A and B.
-
Thanks for this simple solution (+1), I'll have to check it out. – daroczig Aug 28 '11 at 15:29
The "package" shouldn't be called "package". There is no documentation for it. – buhtz Jan 28 at 10:24
Here is a nice introduction I found to drawing Venn diagrams:
http://users.ju.edu/hduong/math220/venn_diagrams.pdf
I put it here, since this question is one of the first results google gives when looking for "how to draw Venn diagrams" (naturally), and also other places refer here, but I feel none of the answers give an introduction to drawing Venn diagrams for someone who doesn't know anything about using TikZ (as I was when I reached here). This introduction does explain the basics rather well.
-
I would suggest actually including what you can from the linked post in a full-fledged answer, rather than just linking to an outside source (thereby avoiding potential future link rot). – Werner Dec 9 '14 at 18:21
This is indeed useful, thank you @ur-ben-ari-tishler – daroczig Dec 9 '14 at 19:04
There is a simple package to Venn diagrams, maybe somebody likes this: package venndiagram. I'm using it, it's enough for me.
-
https://lessonplanet.com/lesson-plans/trapezoidal-rule | # Trapezoidal Rule Teacher Resources
Find Trapezoidal Rule lesson plans and worksheets
Showing 1 - 24 of 28 resources
Lesson Planet
#### Exploration 4: Definite Integrals by Trapezoidal Rule
For Students 10th - 12th
In this integral worksheet, learners geometrically estimate distance in given problems. They use the trapezoidal rule to determine definite integrals. Students explore distance and velocity. This three-page worksheet contains...
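For reference, the trapezoidal-rule estimate that these worksheets drill can be written in a few lines; here is a short Python sketch (my own illustration, not taken from any of the listed resources):

```python
def trapezoid(f, a, b, n):
    """Approximate the integral of f over [a, b] using n trapezoids."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

# The integral of x^2 over [0, 1] is exactly 1/3.
print(trapezoid(lambda x: x**2, 0.0, 1.0, 100))  # ~0.333350
```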
Lesson Planet
#### Numerical Methods of Integration
For Teachers 11th - 12th
Students review Riemann Sums with rectangles, used to approximate the area under a curve. They listen as the teacher introduces the Trapezoidal Rule to approximate the area under a curve. Students use Simpson's Rule to find the area...
Lesson Planet
#### Investigating Area Under a Curve
For Teachers 9th - 11th Standards
Taking a lesson straight from the calculus classroom and making it algebra appropriate is the task of the day. Learners go step by step through calculating different area approximations under lines and curves, reviewing many key geometry...
Lesson Planet
#### The Great Calculus Caper
For Teachers 11th - 12th Standards
Motivate your classes with a unique approach to presenting problems. With this "caper," pupils must solve AP Calculus problems to eliminate suspects and determine the perpetrator. Topics range from velocity to trapezoid rule to volume of...
Lesson Planet
#### Graphs and Data
For Teachers 7th - 10th
Students investigate poverty using graphs. In this algebra lesson, students collect data on the effects of poverty and use different rules to analyze the data. They graph and use the trapezoid rule to interpret the data.
Lesson Planet
#### Trapezoid Rule, Simpson's Rule, Midpoint Rule
For Students 11th - Higher Ed
In this math worksheet, students practice using the rules of Trapezoid, Simpson, and Midpoint. Then they practice using the rules and justify the solutions.
Lesson Planet
#### Basic Calculus II: Numerical Integration
For Students Higher Ed
In this numerical integration worksheet, students approximate the value of an integral using the methods taught in the class. They use left-hand Riemann sums, right-hand Riemann sums, the midpoint method and the trapezoidal rule.
Lesson Planet
#### Calculus Practice Test: Velocity, Functions
For Students 10th - 11th
In this calculus worksheet, students observe graphs and identify the limits of the functions listed in the graph. They determine the definite integrals and derivatives. Students use the trapezoid rule to estimate distance. This...
Lesson Planet
#### Collecting Driving Data
For Students 11th - Higher Ed
Give AP Calculus classes the opportunity to check the accuracy of their calculations! A calculus activity involves the collection of data, the application of mathematics, and the analysis of the accuracy of results. Young mathematicians...
Lesson Planet
#### Area Under A Curve
For Teachers 9th - 11th
Calculus students use the derivative and integral to solve problems involving areas. They calculate the area under a curve as they follow a robot off road making different curves along the drive, using Riemann Sums and Trapezoidal rules...
Lesson Planet
#### Review for Test on sections 6.2-6.3
For Teachers 12th
Twelfth graders review integration techniques to get ready for their test. In this calculus lesson, 12th graders review integrals, trig substitution, integration by parts, trapezoidal rules and simpson's rule in preparation for a test....
Lesson Planet
#### Test on Chapter 8
For Teachers 12th
Twelfth graders review integration techniques to get ready for their test. In this calculus lesson, 12th graders review integrals, trig substitution, integration by parts, trapezoidal rules and simpson's rule in preparation for a test....
Lesson Planet
#### Integrals
For Students 11th - 12th
For this Calculus worksheet, students evaluate problems with integrals that require the use of the trapezoidal rule, Simpson’s rule and decomposition into partial rations. The two page worksheet contains sixteen problems. Answers are...
Lesson Planet
#### Estimating the Value of a Definite Integral
For Students Higher Ed
For this definite integral worksheet, students solve three integrals for a defined value. They use the midpoint, trapezoidal, and Simpson's rules each one time to estimate the value of a definite integral.
Lesson Planet
#### Average Roller Coaster
For Teachers 11th - 12th
Students explore the concept of average value of a function. In this average value of a function lesson, students use their Ti-Nspire to determine the average value of a quadratic function. Students take the integrals of velocity...
Lesson Planet
#### AP Calculus Practice Exam
For Students 11th - 12th
In this practice exam learning exercise, students solve 17 multiple choice problems. Students find derivatives, points of continuity, maximums, minimums, integrals, and area of enclosed regions.
Lesson Planet
#### AP Calculus Practice Exam BC Version: Part B
For Students 11th - 12th
In this calculus learning exercise, students solve 17 multiple choice problems. Students find limits, summations, and derivatives of functions. Students find the area of an enclosed region between two curves.
Lesson Planet
#### Seventeen Various Integral Problems
For Students 11th - Higher Ed
In this integrals activity, students solve seventeen integral problems. One problem covers completing the square, one curve length, one partial fraction decomposition, one Simpson's rule, and several are trigonometric integrals.
Lesson Planet
#### AP Calculus Practice Exam
For Students 11th - 12th
In this Calculus worksheet, learners are provided with practice problems for their exam. Topics covered include derivatives, area bounded by a curve, local maximum, instantaneous rate of change, and the volume of a solid of revolution. ...
Lesson Planet
#### AP Calculus Practice Exam
For Students 12th
In this Calculus worksheet, 12th graders are provided with practice problems for their exam. Topics covered include limits, derivatives, area bounded by a curve, minimization of cost, and the volume of a solid of revolution. The four...
Lesson Planet
#### Integration Unit: Riemann Sum
For Teachers 11th - 12th Standards
Students investigate Riemann Sums. In this calculus lesson, students solve problems involving Riemann sums and Right Riemann Sums.
Lesson Planet
#### AP Calculus Practice Exam
For Students 12th - Higher Ed
For this calculus worksheet, students calculate the derivative, find the invertible given the derivative and review basic concepts for their AP calculus exam. There are 17 questions.
Lesson Planet
#### Numerical Integration
For Students 11th - Higher Ed
In this math worksheet, students practice using the number approximations to apply them to the integrations. They compare the numerical methods and graph the results in the data table.
Lesson Planet
#### A.P. Calculus Practice Exam AB Version- Section I- Part B
For Students 11th - Higher Ed
In this A. P. Calculus AB worksheet, students solve seventeen multiple-choice problems with a graphing calculator. This worksheet is designed as a practice test for the A.P. Exam and should be timed.
https://testbook.com/question-answer/the-discrete-time-system-described-by-yn-xn2--5faa2f0d7553fb544a77746c | # The discrete time system described by y(n) = x(n)² is
This question was previously asked in
ESE Electronics 2012 Paper 1: Official Paper
1. causal and linear
2. causal and non-linear
3. non-causal and linear
4. non-causal and non-linear
Option 2 : causal and non-linear
## Detailed Solution
Concept:
Linearity: The necessary and sufficient condition for linearity is that the system obeys the law of superposition, i.e. the response to a weighted sum of inputs equals the same weighted sum of the responses to each input considered separately:
y{ax1[t] + bx2[t]} = a y{x1[t]} + b y{x2[t]}
Conditions to check whether the system is linear or not.
• The output should be zero for zero input
• There should not be any non-linear operator present in the system.
Causal system:
If the output of the system is independent of future values of the input, then the system is said to be causal. Causal systems are the practical, physically realizable systems.
Analysis:
y(n) = x(n)²
Linearity check:
x1(n) → x1(n)²
x2(n) → x2(n)²
x3(n) = [x1(n) + x2(n)] → [x1(n) + x2(n)]² = x1(n)² + x2(n)² + 2·x1(n)·x2(n)
≠ x1(n)² + x2(n)²
∴ Non-linear
Causality check:
y(0) = x(0)²
y(1) = x(1)²
y(-1) = x(-1)²
∴ The output at each instant depends only on the present input x(n), never on future inputs, so the system is causal.
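Both conclusions are easy to confirm numerically. A quick Python sketch (my own illustration; the random test signals are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=8), rng.normal(size=8)
system = lambda x: x**2  # y(n) = x(n)^2

# Linearity: response to a sum vs. sum of responses.
print(np.allclose(system(x1 + x2), system(x1) + system(x2)))  # False -> non-linear

# Causality: y(n) uses only x(n), so dropping future samples
# leaves the earlier outputs unchanged.
print(np.allclose(system(x1)[:4], system(x1[:4])))  # True -> causal
```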
https://link.springer.com/article/10.1007%2Fs100520050013 | # Structure of the three-dimensional quantum euclidean space
• B.L. Cerchiai
• S. Schraml
• J. Wess
Theoretical physics
## Abstract.
As an example of a noncommutative space we discuss the quantum 3-dimensional Euclidean space $$\mathbb{R}^3_q$$ together with its symmetry structure in great detail. The algebraic structure and the representation theory are clarified and discrete spectra for the coordinates are found. The q-deformed Legendre functions play a special role. A completeness relation is derived for these functions.
## Keywords
Euclidean Space · Special Role · Representation Theory · Algebraic Structure · Discrete Spectrum
© Springer-Verlag Berlin Heidelberg 2000
https://nbviewer.jupyter.org/gist/hellman/e6352b9f12c9759ef23dcd87b64ec8c0/0writeup.ipynb | # RCTF 2020 - infantECC
Here is the challenge code:
from Crypto.Util.number import getStrongPrime, bytes_to_long, long_to_bytes
from hashlib import sha256
p=getStrongPrime(512)
q=getStrongPrime(512)
R=Zmod(p*q)
Mx=R.random_element()
My=R.random_element()
b=My^2-Mx^3
E=EllipticCurve(R, [0,b])
Ep=EllipticCurve(GF(p), [0,b])
Eq=EllipticCurve(GF(q), [0,b])
Ecard=Ep.cardinality()*Eq.cardinality()
r=random_prime((p^^q)>>100)
s=inverse_mod(r, Ecard)
print((s,b))
print(s*E(Mx,My))
print(randint(0,Ecard)*E(Mx,My))
print(r^^(bytes_to_long(sha256(long_to_bytes(Mx)).digest())^^bytes_to_long(flag))<<256)
We are given a curve over $\mathbb{Z}_n$ with $n=pq$ an RSA-like modulus. First, note the special form of the curve: $y^2 \equiv x^3 + b \pmod{n}$. Whenever $p \equiv 2 \pmod{3}$, the curve $y^2 \equiv x^3 + b \pmod{p}$ has $p+1$ points for any $b$; such curves are called supersingular. Otherwise, the group order varies in $p\pm \mathcal{O}(\sqrt{p})$, which is hard to work with. Thus, we aim to get $p \equiv q \equiv 2 \pmod{3}$, which happens quite often.
Remark: we are not given $n$, but from $b$ and two points on the curve it is easy to recover.
We can immediately dismiss $n\equiv 2 \pmod{3}$ since this implies $p\equiv 1\pmod{3}$ or $q\equiv 1\pmod{3}$. However, we can not easily distinguish the bad case $p\equiv q \equiv 1\pmod{3}$ and the good case $p\equiv q\equiv 2\pmod{3}$, so the attack will only work with probability 1/2.
In the fortunate case, the cardinality of the curve over $\mathbb{Z}_n$ is equal to $(p+1)(q+1)=n+p+q+1$. So, we get an equation on the private/public exponent: $$rs \equiv 1 \pmod{n+p+q+1},$$ $$\Rightarrow r\cdot s - k(n+p+q+1)=1, ~~~ k < r < 2^{412}.$$
This equation is very similar to the one in the Boneh-Durfee attack on small secret RSA exponent. However, that attack requires $k < r < n^{0.292}$, while our $r$ is much larger: $r \approx n^{0.402}$. Do we have any other information?
Yes, it is somewhat hidden, but the least significant 256 bits of $r$ are given out in the last printed value! Can we use it to improve the Boneh-Durfee attack? Unfortunately, not directly, since the attack considers the equation mod $s$, and thus, the value of $r$ does not matter (it does matter indirectly by giving a bound on $k$, but learning bits of $r$ does not change the bound).
A possibility is to consider the equation modulo $2^{256}s$. Then we get an equation of the form $c+k(a+z) \equiv 0\pmod{2^{256}s}$, where $k<2^{412}, z=p+q<2^{513}$, and the modulus is close to $2^{1280}$. This could be comparable to the Boneh-Durfee bounds: $k<2^{0.282\cdot1280}=2^{360},z<2^{641}$. However, I did not manage to get it work because of $c\ne 1$.
## Alternative Approach
Instead, let's try to tackle the equation by hand (and with LLL).
Let's generate a sample instance.
In [1]:
from sage.all import log
from tqdm import tqdm
log2 = lambda val: f"{float(RR(log(abs(val),2))):.3f}"
proof.arithmetic(False)
from random import randint, seed
seed(101)
p = q = 1
while p % 3 != 2:
    p = next_prime(randint(1, 2**512))
while q % 3 != 2:
    q = next_prime(randint(1, 2**512))
n = p*q
R = Zmod(p*q)
Mx = R.random_element()
My = R.random_element()
b = My**2 - Mx**3
Ep = EllipticCurve(GF(p), [0,b])
Eq = EllipticCurve(GF(q), [0,b])
E = EllipticCurve(R, [0,b])
Ecard = Ep.cardinality()*Eq.cardinality()
#r = random_prime((p^^q)>>100)
r = next_prime(randint(0, (p^^q)>>100))
s = inverse_mod(r, Ecard)
print(" s", log2(s))
print(" r", log2(r))
print("rmax", log2((p^^q)>>100))
spt = s*E(Mx, My)
randpt = randint(0, Ecard)*E(Mx, My)
from hashlib import sha256
from Crypto.Util.number import bytes_to_long, long_to_bytes
flag = b"RCTF{Copper5mith_M3thod_f0r_ECC}"
v = r^^(bytes_to_long(sha256(long_to_bytes(Mx)).digest())^^bytes_to_long(flag))<<256
s 1020.453
r 404.274
rmax 410.041
In [2]:
k = (r*s-1)//(n+p+q+1)
print("k", log2(k))
assert r*s - k*(n+p+q+1) == 1
assert (1 + k*(n+1+p+q)) % s == 0
r0 = v % 2**256 # given
r1 = r >> 256
assert (1-r0*s + k*(n+1+p+q)) % 2**256 == 0
k 401.430
Let $r = 2^{256}r_1 + r_0$, where $r_0 < 2^{256}$. We can rewrite the main equation modulo $2^{256}s$: $$k(n+1)+k(p+q) \equiv sr_0-1 \pmod{2^{256}s}.$$ Let's try to find a multiple of this equation that will make the left part less than the modulus, or overflow it by a small amount.
In [3]:
mod = s*2**256
assert k*(n+1+p+q) % mod == (s*r0-1) % mod
In [4]:
weight = 2**512
m = matrix(ZZ, [
    [n+1, weight*1],
    [mod, 0]
])
for tn, t in m.LLL():
    if t * tn < 0: continue
    t //= weight
    if t < 0:
        tn = -tn
        t = -t
    break
else:
    raise Exception("negative case, let's skip")
print("t", log2(t))
print("tn", log2(tn))
w = k * ((n + 1 + p + q) * t % mod) // mod
print("overflow", log2(w), w)
t 381.838
tn 895.010
overflow 20.756 1771398
The overflow has rather large variance over problem instances; it is about 20-32 bits.
... a part of writeup with intermediate results is missing ...
Define \begin{align} h &= \left\lfloor \frac{r_0t}{2^{256}} \right\rfloor,\\ u &= \left\lfloor \frac{t(n+1)}{2^{256}s} \right\rfloor,\\ w &= \left\lfloor \frac{(~t(n + 1 + p + q) \mod{2^{256}s})k}{2^{256}s} \right\rfloor. \end{align} Note that $u$ and $h$ are known, and $w$ is unknown but rather small (experimentally, around 20-32 bits). Then with overwhelming probability $$ku + w - r_1t = h.$$
In [5]:
u = t * (n+1) // mod
h = r0 * t // 2**256
w = k * ((n + 1 + p + q) * t % mod) // mod
assert h == k*u + w - r1*t
Once $w$ is guessed, we can recover $r_1$ modulo $u$: $$r_1 \equiv \frac{w-h}{t} \pmod{u}$$ Let $r_1 = r_{11}u + r_{10}$, where $0 \le r_{10} < u$. Note that $r_{11}$ is of size 20-32 bits.
In [6]:
r10 = r1 % u
r11 = r1 // u
assert (-h + w) * inverse_mod(t, u) % u == r1 % u == r10
print("to guess (bit size):")
print("r11", log2(r11), log2(k//t))
print(" w", log2(w))
assert r == r0 + 2**256*u*r11 + 2**256*r10
to guess (bit size):
r11 19.593 19.593
w 20.756
Guessing both $r_{11}$ and $w$ is too much. However, we can exploit the curve group and mount a BSGS-like attack. We have $$r_1 = r_{11}u + r_{10} = r_{11}u + \left(\frac{w-h}{t}\mod{u}\right).$$ Since $r_0$ is known, we can express full secret $r$: $$r = r_0 + 2^{256}\left(\frac{w-h}{t}\mod{u}\right) + 2^{256}ur_{11}.$$ Let $G$ be an arbitrary point on the curve $E$ (e.g. one of those given). We know that $[rs-1]G=O$, therefore: $$[(r_0+2^{256}ur_{11})s-1]G = -[2^{256}\left(\frac{w-h}{t}\mod{u}\right)s]G.$$ We can precompute the left part for all possible values of $r_{11}$ and store in the table. Then we guess $w$ and compute the right part and check against the table. It's possible to optimize both steps so that 1-2 curve point additions per guess are needed.
In [7]:
G = spt
assert (r0 + 2**256*u*r11) * s * G - G == -(2**256*r10) * s * G
# reasonable bounds:
# high enough probability to solve,
# low enough memory req.
if 1: # full search
    start_r = 0
    start_w = 0
    total_r = 2**22
    total_w = 2**23
else: # fast check on known secrets
    start_r = r11 - 5
    start_w = w - 5
    total_r = 10
    total_w = 10
mask = 2**100-1 # hashing points
tab = {}
acc = (r0*s-1)*G
step = (2**256*u*s)*G
acc += start_r * step
perc = 0
for i in tqdm(range(total_r)):
    test = int(acc.xy()[0]) & mask  # hash of the current point's x-coordinate (reconstructed: the extracted code left `test` undefined)
    tab[test] = start_r + i
    acc += step
100%|██████████| 4194304/4194304 [03:14<00:00, 21537.50it/s]
In [8]:
H = -2**256*s*G
base = (-h) * inverse_mod(t, u) % u
step_e = inverse_mod(t, u) % u
cur_e = base
cur = cur_e * H
step = step_e * H
red = u * H
cur_e += start_w * step_e
cur_e %= u
cur = cur_e * H
for w in tqdm(range(total_w)):
    test = int(cur.xy()[0]) & mask  # same point hash as in the table-building phase (reconstructed line)
    if test in tab:
        r11 = tab[test]
        w += start_w
        print("Solution:", w, r11)
        r10 = (-h + w) * step_e % u
        r = r0 + 2**256 * r10 + 2**256*u*r11
        Mx = (r * spt).xy()[0]
        print("recovered r", r)
        m = (v ^^ r) >> 256
        m = m ^^ bytes_to_long(sha256(long_to_bytes(int(Mx))).digest())
        print(long_to_bytes(m))
        break
    # optimized reduction mod u:
    # track the exponent and reduce if needed
    cur_e += step_e
    cur += step
    if cur_e >= u:
        cur_e -= u
        cur -= red
21%|██ | 1771398/8388608 [02:29<09:17, 11875.62it/s]
Solution: 1771398 790644
recovered r 49943357289587115406335857308667372798949001275969321697728163810970501325410259988956553512194719612541025798846577063031
b'RCTF{Copper5mith_M3thod_f0r_ECC}'
On my laptop the whole attack with the chosen bounds runs in 6 minutes. Note that the bounds hold only with some probability, so multiple (typically around 10) instances have to be attempted (recall that $p,q$ can lead to non-supersingular curves and thus the probability is halved further).
## Conclusions
The method I used is rather unusual and I still have many gaps in its understanding. It is not exactly clear how to derive good bounds for $r_{11}$ and $w$ and why they have so large variance in size. A possible explanation is to look at the relations e.g. $k < r < ((p\oplus q)\gg 100)$. While $k$ can be close to the maximum, it has higher chances to be much lower: if $p$ and $q$ agree in several most significant bits, the bound is decreased; then if $r$ is small by chance, the bound is decreased further.
Also, accidentally I noticed that $\lfloor k/t\rfloor = r_{11} = \lfloor r_1/u \rfloor = \lfloor r/{2^{256}u} \rfloor$:
In [10]:
print(k//t, r1//u)
assert k // t == r11 == r1 // u == r // (2**256*u)
790644 790644
This is rather surprising, as $r$ and $k$ are unknowns which are related by a quantity dependent on $p+q$, and we can find such $t,u$ to make this relation hold without factoring $n$. Also, $t$ was chosen rather arbitrarily!
Finally, it feels like the problem should be very easy once $w$ is guessed, but I didn't find a good method avoiding use of the curve.
PS: The intended solution seems to be using Coppersmith methods, but I haven't seen it yet.
http://mathhelpforum.com/algebra/95046-inequality-question.html | # Math Help - Inequality Question
1. ## Inequality Question
What's the best method for solving these without a graph.
Find $\left\{ x:x^{\frac{3}{2}}>x^2 \right\}$
Thanks for your help
Edit- Oops, wrong thread.
2. Originally Posted by Stroodle
What's the best method for solving these without a graph.
Find $\left\{ x:x^{\frac{3}{2}}>x^2 \right\}$
It should be clear to you that $x\ge 0$ else $x^{\frac{3}{2}}$ is not defined.
Therefore $x^3>x^4$.
Now go for it.
3. Originally Posted by Stroodle
What's the best method for solving these without a graph.
Find $\left\{ x:x^{\frac{3}{2}}>x^2 \right\}$
Thanks for your help
Edit- Oops, wrong thread.
$x^{\frac{3}{2}}-x^2>0$
$x^{\frac{3}{2}}(1-x^{\frac{1}{2}})>0$
Draw the number line.
Thus the solution set is $\{x: 0<x<1\}$.
Definitely not the best solution.
4. It's clear that both sides are positive, so you can square them, thus $x^3>x^4\implies x^2\cdot x(x-1)<0.$
I think you can continue.
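To spell out the remaining step (my own completion, not part of the original reply): for $x \ge 0$ we have $x^2\cdot x = x^3 \ge 0$, with equality only at $x=0$, where the strict inequality fails. So $$x^3(x-1)<0 \iff x>0 \ \text{and}\ x<1,$$ recovering the same solution set $\{x : 0<x<1\}$.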
http://math.stackexchange.com/questions/10701/if-every-prime-ideal-is-a-contracted-ideal-does-it-imply-that-the-induced-map-b | # If every prime ideal is a contracted ideal, does it imply that the induced map between spectrums is surjective?
Let $f: A \rightarrow B$ be a ring homomorphism between commutative rings with identity. Then there exists an induced map $f' : Spec(B) \rightarrow Spec(A)$. If $f'$ is surjective, then clearly every prime ideal of $A$ is a contracted ideal. Now my question is, is the converse true?
-
Let $P$ be a prime ideal of $A$, and suppose that $P=f^{-1}(I)$ where $I$ is some ideal of $B$. Let $S=A-P$, a multiplicatively closed subset of $A$, and $T=f(S)$. Then $T$ is a multiplicatively closed subset of $B$ and is disjoint from $I$. By Zorn's lemma, there is an ideal $Q$ of $B$ maximal with respect to the conditions that $Q$ contains $I$ and $Q$ is disjoint from $T$. By a standard argument, $Q$ is prime. Moreover $f^{-1}(Q)=P$.
http://tex.stackexchange.com/questions/37827/own-document-variables-with-and-without-arguments | # Own document variables with and without arguments
I use the following to introduce new document variables to my document. They should behave like \author or \title. So when they're used without parameter, they should print the corresponding variable. If they're called with parameter, they should set the corresponding variable to this value.
\newcommand*{\tutor}[1]{\gdef\@tutor{#1}}
Works ok, but I have to use
\tutor{new defined name}
to define the value and
\@tutor
to get the name printed instead of just using
\tutor
What's the simplest way to introduce \tutor as desired?
-
This is not how \title and \author work. You never use them without the brackets. Commands like \maketitle use \makeatletter\@title\makeatother etc. internally. – qubyte Dec 10 '11 at 17:50
To push this point home, there is an error in your question. The passage "when they're used without parameter, they should print the corresponding variable" is incorrect. – qubyte Dec 10 '11 at 17:59
This usage is very uncommon with LaTeX (it's not the way \author etc. works), but it is possible to make the macro look ahead with \@ifnextchar to see if the next character is a brace (which needs to be written as \bgroup because you need a matching pair of braces in the macro).
\newcommand*\tutor{%
\@ifnextchar\bgroup
{\gdef\@tutor}%
{\@tutor}%
}
This will expand to \gdef\@tutor if there is an argument (which will then be taken by it) or to \@tutor if not.
There is also the xparse package which allows to define macros with optional brace arguments and a way to test if the argument was there or not. See the package manual for the details. For this case it is IMHO a little overkill.
You could also define \tutor in the form you have in your post and redefine it at the begin of the document to be equal to \@tutor.
\newcommand*{\tutor}[1]{\gdef\@tutor{#1}}
% or
% \newcommand*{\tutor}{\gdef\@tutor}
\AtBeginDocument{\let\tutor\@tutor}
Then you can use \tutor{<argument>} only in the preamble and \tutor only in the document body.
Macros with an @ in the name need to be wrapped in \makeatletter ... \makeatother, except in package or class files, which do this automatically.
-
With xparse:
\usepackage{xparse}
\makeatletter
\def\@tutor{\ClassError{myclass}{Undefined \string\tutor}
{You need to say \string\tutor{<NAME>} before using \protect\tutor by itself}}
\NewDocumentCommand{\tutor}{g}{\IfNoValueTF{#1}{\@tutor}{\gdef\@tutor{#1}}}
\makeatother
I've also added an error message in case \tutor is used by itself before having defined its value (such definitions are used in class files, usually).
However, the usual strategy is to avoid having the user print explicitly the tutor's name; so only \tutor{<NAME>} should be at the user level and the class can use \@tutor to print the title page.
-
You can do it like this:
\makeatletter
\newcommand{\tutor}[1]{\newcommand{\@tutor}{#1}}
\makeatother
and then use it like this
\makeatletter\@tutor\makeatother
Which is not what you want. However, if you will be using the content a lot, then it is perhaps more usual to encapsulate the above in a newcommand
\makeatletter\newcommand{\thetutor}[0]{\@tutor}\makeatother
You can now use the content of \@tutor with \thetutor.
In short, it's simpler to just make two commands with a predictable difference between the names (in this case the).
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9358710050582886, "perplexity": 1666.3700052873376}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657125113.78/warc/CC-MAIN-20140914011205-00309-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"} |
http://mathhelpforum.com/math-topics/39068-complex-maths-measurement-print.html | # Complex Maths - Measurement
• May 20th 2008, 05:30 PM
Risrocks
Complex Maths - Measurement
As part of my Residential Drafting course I'm required to complete Complex Mathematical concepts, which I'm really struggling with. Your answer is really appreciated:
Q: A mass rests on an area of 80 mm² and exerts a force of 320 kg·m/s². Express the pressure exerted on this area in kN/cm².
• May 20th 2008, 05:41 PM
topsquark
Quote:
Originally Posted by Risrocks
As part of my Residential Drafting course I'm required to complete Complex Mathematical concepts, which I'm really struggling with. Your answer is really appreciated:
Q: A mass rests on an area of 80 mm² and exerts a force of 320 kg·m/s². Express the pressure exerted on this area in kN/cm².
$P = \frac{F}{A} = \frac{320~N}{80~mm^2} = 4~\frac{N}{mm^2}$
Now to change the unit:
$\left ( \frac{4~N}{1~mm^2} \right ) \cdot \left ( \frac{1~kN}{1000~N} \right ) \cdot \left ( \frac{10~mm}{1~cm} \right )^2$
(Note that the last factor is squared!)
$=0.4~kN/cm^2$
-Dan
• May 20th 2008, 07:10 PM
Risrocks
http://math.stackexchange.com/questions/181748/can-we-extend-this-measure-uniqueness-theorem | # Can we extend this measure uniqueness theorem?
Let $\mu_1$ and $\mu_2$ be finite measures on a $\sigma$-algebra $\mathfrak B$ such that $\mu_1(X)=\mu_2(X)$, and $\mathcal A$ an intersection stable generator of $\mathfrak B$ such that $\mu_1(A)=\mu_2(A)$ for all $A\in\mathcal A$. It is well known that the above hypothesis implies $\mu_1=\mu_2$. Can we have the same conclusion if $\mu_1$ and $\mu_2$ are totally finite signed measures? Thanks.
-
Do you still assume that $\mu_1(X)=\mu_2(X)$? In this case why wouldn't the same argument work? – Davide Giraudo Aug 12 '12 at 17:09
I drop that assumption. – Deco Aug 12 '12 at 19:28
Take $X$ an uncountable set, $\mathcal B$ the $\sigma$-algebra of countable sets and their complement, $\mathcal A$ the collection of countable subsets of $X$. Take $$\mu_1(A):=\begin{cases}0&\mbox{ if }A\mbox{ is countable},\\ 1&\mbox{ if }X\setminus A\mbox{ is countable,} \end{cases}$$ and $\mu_2:=2\mu_1$. $\mu_1$ and $\mu_2$ are finite measures, and $\mu_2-\mu_1$ coincides with the $0$ measure on $\mathcal A$ but not on $\mathcal B$.
The main problem is that the measures don't have the same total mass. If we take $\mu_1(X)=\mu_2(X)\in\Bbb R$, then we have $\mu_1=\mu_2$ by an argument similar to the one in the case of a finite non-negative measure.
I found this theorem in more than one paper: Let $\mu_1$ and $\mu_2$ be totally finite signed measures on the Borel $\sigma$-algebra $\mathfrak B$ of $\mathbb R^n$, and let $\mathcal A$ be an intersection stable generator of $\mathfrak B$ such that $\mu_1(A)=\mu_2(A)$ for each set $A\in\mathcal A$. Then $\mu_1=\mu_2$. – Deco Aug 13 '12 at 15:13
If the generator contains $X$, no problem. But when it's not assumed I think the counter-example works. Do they give a reference in the paper? – Davide Giraudo Aug 13 '12 at 15:18
All of the books I read demand $\mu_1(X)=\mu_2(X)$ for this theorem; it is needed to construct a d-system for the proof. This makes me confused. – Deco Aug 13 '12 at 15:21
If H1 is the hypothesis: $\mu_1(A)=\mu_2(A)$ for $A$ in an intersection stable generating collection and H2 the hypothesis $\mu_1(X)=\mu_2(X)$, then (H1+H2) gives uniqueness, but not H1 alone. – Davide Giraudo Aug 13 '12 at 15:22
There is no information about $X$ being an element of $\mathcal A$. They refer the proof to Maß- und Integrationstheorie, 3., erweiterte Auflage by Jürgen Elstrodt and Measure and Integral by Konrad Jacobs. Unfortunately, I can access neither of them. – Deco Aug 13 '12 at 15:27
http://mathhelpforum.com/differential-geometry/91669-quantum-calculus.html | ## Quantum Calculus
Compute the $q$-derivative and the $h$-derivative of $f(x) = x^n$.
So $D_{q}x^n = \frac{(qx)^n-x^n}{(q-1)x} = \frac{q^n-1}{q-1}x^{n-1}$. Also $D_{h}x^n = \frac{(x+h)^n-x^n}{h}$.
Is this correct?
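Yes, both are correct: the first simplifies via $\frac{(qx)^n - x^n}{(q-1)x} = \frac{q^n-1}{q-1}\,x^{n-1}$. A small sympy sanity check for a concrete exponent (my own sketch; $n$ is fixed to 3 since sympy won't simplify the symbolic-$n$ quotient directly):

```python
from sympy import symbols, simplify

x, q, h = symbols('x q h')
n = 3

Dq = simplify(((q*x)**n - x**n) / ((q - 1)*x))
print(Dq)  # x**2*(q**2 + q + 1), i.e. (q**3 - 1)/(q - 1) * x**2

Dh = simplify(((x + h)**n - x**n) / h)
print(Dh)  # 3*x**2 + 3*h*x + h**2
```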
http://mathoverflow.net/questions/90464/uniqueness-of-equilibrium-from-infinite-strategies | # Uniqueness of equilibrium from infinite strategies
I took the following game from the Peter Winkler collection (chapter "Games"):
Two numbers are chosen independently at random from the uniform distribution on [0,1]. Player A then looks at the numbers. She must decide which one of them to show to player B, who upon seeing it, guesses whether it's the larger or smaller of the two. If he guesses right, B wins, otherwise A wins. Payoff to a player is his/her winning probability.
One easily identifies the following mixed strategy Nash equilibrium:
"Player A shows the larger number with prob 1/2 and player B guesses 'larger' with prob 1/2"
The book also suggests a smart pure strategy for A, which is in effect identical to her mixed strategy above (in the sense that it locks her winning probability at 1/2 regardless of B's strategy):
"Player A shows the number which is closer to 1/2"
A little thought shows that B also has a pure strategy in like manner:
"Player B guesses larger iff the number he sees exceeds 1/2"
Together these two strategies form a pure strategy Nash equilibrium.
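As an empirical sanity check (my own addition, not from the thread), a short Monte Carlo run suggests that against this pure strategy pair B indeed wins with probability about 1/2; the trial count and seed are arbitrary choices:

```python
import random

def b_win_rate(trials=1_000_000, seed=0):
    """A shows the number closer to 1/2; B guesses 'larger' iff the shown
    number exceeds 1/2. Returns B's empirical winning frequency."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        x, y = rng.random(), rng.random()
        # A's pure strategy: show whichever number is closer to 1/2
        shown, hidden = (x, y) if abs(x - 0.5) <= abs(y - 0.5) else (y, x)
        guessed_larger = shown > 0.5          # B's pure strategy
        if guessed_larger == (shown > hidden):
            wins += 1
    return wins / trials

print(b_win_rate())  # prints ~0.5
```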
To be clear, let me define a pure strategy for B as a function $f_B:[0,1]\longrightarrow \{\text{larger}, \text{smaller}\}$, i.e., he assigns "larger" or "smaller" to every real in [0,1].
Similarly, A's pure strategy is a function $f_A(\{x,y\})\in\{x,y\}$, i.e., she assigns x or y to every set $\{x,y\}$, where $x,y\in [0,1]$.
My question is: Is the above pure strategy Nash equilibrium unique? Given the above definition, there are infinitely many pure strategies for each player. Could other less obvious or highly artificial equilibria be constructed? How can we prove or disprove the uniqueness of the equilibrium over these infinitely many strategies?
Edit: To avoid possible ambiguity in Steven's answer, I add the last sentence in the 2nd paragraph.
-
I think you're going to have to be careful about what you mean by equilibrium.
Definition 1: $(f_A,f_B)$ is an equilibrium if, taking $f_B$ as given, $f_A$ maximizes the expected value of $A's$ payoff to $(-,f_B)$ (and symmetrically).
Definition 2: $(f_A,f_B)$ is an equilibrium if, for every $(x,y)$, taking $f_B(x,y)$ as given, $f_A(x,y)$ maximizes $A's$ payoff to $(-,f_B(x,y))$ (and symmetrically).
In the first case, you are looking for the equilibrium in a single game. In the second case, you are looking for a family of equilibria in a family of games (parameterized by $(x,y)$).
If I take literally your request for equilibria in "this game" (singular), it would seem that Definition 1 applies. In that case, you can start with your Nash equilibrium, vary either player's strategy arbitrarily on any set of measure zero, and have another Nash equilibrium.
-
Thank you for the reminder. Yes, it's Definition 1 --- otherwise there's no need for the requirement "uniform distribution on [0,1]". But while varying ranges of equilibrium $f_A$ and $f_B$ on measure zero sets trivially leads you to a different equilibrium by the definition, is a more essential change possible? – user16033 Mar 7 '12 at 16:36
Given that the calculation of probabilities is involved, it is appropriate to restrict strategies to be measurable functions.
"Player B guesses larger iff the number he sees exceeds 1/2". Denote this strategy of B ${S}_{B}^{*}$.
The first part of the argument proves (I hope) that ${S}_{B}^{*}$ is the only kind of pure strategy (up to a difference of measure zero) that B can adopt in any pure strategy equilibrium. The second part proves A's corresponding pure strategies for ${S}_{B}^{*}$. Their combinations then form all possible pure strategy equilibria.
By definition, B's pure strategy is to choose a measurable set $B_L\subseteq[0,1]$ such that he reports larger for $x\in B_L$ and smaller for $x\in [0,1]\setminus B_L$. Now if $m(B_L)=a\neq1/2$, A can adopt the following pure strategy:
"Show the smaller number if both $x,y\in B_L$, otherwise show the larger number".
which guarantees her a winning probability $\geq a^2+(1-a)^2>1/2$. To counter this strategy of A, B is better off reverting to ${S}_{B}^{*}$. Hence $m(B_L)\neq1/2$ can't hold for an equilibrium pure strategy of B.
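(To see why $a^2+(1-a)^2>1/2$ whenever $a\neq 1/2$, note that $a^2+(1-a)^2 = \frac{1}{2} + 2\left(a-\frac{1}{2}\right)^2$, with equality to $1/2$ exactly at $a=1/2$.)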
Now let $B_L\subseteq[0,1]$ and $m(B_L)=1/2$. Define $B_S=[0,1]\setminus B_L$. Consider the following incomplete specification of a strategy for A:
"Show the smaller number if both $x,y\in B_L$; show the larger if both $x,y\in B_S$"
which already guarantees winning probability $\geq1/2$ for A. What about the remaining situations, i.e., $x\in B_L$ and $y\in B_S$?
For any measurable $B\subseteq B_L$ with $m(B)>0$, we can define $C=\{x\in B_S|x>y, \forall y\in B\}$. Suppose there exists such a $B$ with $m(C)>0$; then A can adopt the following pure strategy:
"Show the smaller number if both $x,y\in B_L$, otherwise show the larger number"
which will guarantee her winning probability $\geq 1/2+2m(C)m(B)>1/2$; this can't be an equilibrium pure strategy for the same reason as in the $m(B_L)\neq 1/2$ case. Because $B\subseteq B_L$ was arbitrary, we then know that in a pure strategy equilibrium it is necessary that the set $\{x\in B_S|x>y, \forall y\in B_L\}$ has measure zero. Hence, to conclude, the necessary conditions for strategies of B in a pure equilibrium are:
1. $m(B_L)=m(B_S)=1/2$
2. $\{x\in B_S|x>y, \forall y\in B_L\}$ has measure zero.
However, we already know ${S}_{B}^{*}$ and its possible variations on measure zero sets are strategies that B can adopt in a pure strategy equilibrium. Since these are exactly the strategies that satisfy 1 and 2, we conclude that the only pure equilibrium strategy for B is ${S}_{B}^{*}$ (up to a difference of a measure zero set).
Given ${S}_{B}^{*}$, as long as $x\in [0,1/2]$ and $y\in (1/2,1]$, the choice of $f_A$ is irrelevant, and player B will guess correctly. Since this happens with probability 1/2, the best A can do is to salvage all remaining situations, which amounts to showing the smaller number if both numbers fall into (1/2,1] and the larger one if both fall into [0,1/2], achieving a winning probability of 1/2.
And this should include all possible pure strategy equilibria.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9682713747024536, "perplexity": 480.7681835857728}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398447860.26/warc/CC-MAIN-20151124205407-00158-ip-10-71-132-137.ec2.internal.warc.gz"} |
https://pokerswebs.com/tag/wpc16-dashboard/ | # wpc16 dashboard
Hi If you are looking for [kw]? Then, this is the place where you can find some sources that provide detailed information. [kw] [scraped_data] [faqs] I hope the above sources help you with the information related to [kw]. If not, reach through the comment section. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9724550247192383, "perplexity": 1163.6870289862616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949387.98/warc/CC-MAIN-20230330194843-20230330224843-00673.warc.gz"} |
http://eprint.iacr.org/2015/376 | ## Cryptology ePrint Archive: Report 2015/376
Cryptography from Post-Quantum Assumptions
Raza Ali Kazmi
Abstract: In this thesis we present our contribution in the field of post-quantum cryptography. We introduce a new notion of {\em weakly Random-Self-Reducible} public-key cryptosystem and show how it can be used to implement secure Oblivious Transfer. We also show that two recent (Post-quantum) cryptosystems can be considered as {\em weakly Random-Self-Reducible}. We introduce a new problem called Isometric Lattice Problem and reduce graph isomorphism and linear code equivalence to this problem. We also show that this problem has a perfect zero-knowledge interactive proof with respect to a malicious verifier; this is the only hard problem in lattices that is known to have this property.
Category / Keywords: foundations / | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8761630654335022, "perplexity": 990.8866916168081}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218188213.41/warc/CC-MAIN-20170322212948-00653-ip-10-233-31-227.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/447678/let-f-be-a-function-from-n-n-longrightarrow-n-show-that-if-f-is-computabl | Let F be a function from $N^{n} \longrightarrow N$. Show that if F is computable/recursive then its graph is computable
Let F be a function from $N ^{n} \longrightarrow N$. Show that if F is computable then its graph is computable.
According to the definition of computable/recursive I am looking at, a relation is computable if its characteristic function is computable. It is also known that the relation of equality is computable. I want to show that given $R(a,b) \leftrightarrow F(a)=b$, the relation $R$ is computable.
Here is the definition of computability I am using:
The computable functions are the functions $N ^{n} \longrightarrow N$ obtained inductively by the following rules: (1) $+: N^{2} \longrightarrow N, \cdot: N^{2} \longrightarrow N, \chi _{\leq} : N^{2} \longrightarrow N$ ,and the coordinate functions are computable.
(2) If $G: N^{m} \longrightarrow N$ is computable and $H_{1},\dots,H_{m} : N^{n} \longrightarrow N$ are computable, then so is the function $F = G(H_{1},\dots,H_{m}) : N^{n} \longrightarrow N$ defined by $F(a)=G(H_{1}(a),\dots,H_{m}(a))$
(3) If $G : N^{n+1} \longrightarrow N$ is computable, and for all $a\in N^{n}$ there exists $x \in N$ such that $G(a,x)=0$, then the function $F: N^{n} \longrightarrow N$ defined by $F(a)= \mu x(G(a,x)=0)$ is computable.
The graph is computable if its characteristic function is computable.
Thanks for any help
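A sketch of the standard route, using only the rules above (my own summary, not an answer from the thread): equality is computable from $\chi_{\leq}$ and multiplication via $\chi_{=}(m,n)=\chi_{\leq}(m,n)\cdot\chi_{\leq}(n,m)$, and then the characteristic function of the graph of $F$ is the composition

$$\chi_{Graph(F)}(a_1,\dots,a_n,b) = \chi_{\leq}\big(F(a_1,\dots,a_n),b\big)\cdot \chi_{\leq}\big(b,F(a_1,\dots,a_n)\big),$$

where $F$ is applied to the first $n$ coordinates using the coordinate functions and rule (2); since $\cdot$ is computable, the graph is computable.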
-
Need more definition on computability. What does it mean for a graph to be computable? – user86828 Jul 19 '13 at 21:42
Okay, I put up the definition. Thanks – Jmaff Jul 19 '13 at 21:54
I recommend an appeal to the Church-Turing thesis. – Quinn Culver Jul 22 '13 at 21:24
This sounds good but I am unsure about how your $f(F,y)$ function works. Let's say $F$ has domain and range $N^{n} \longrightarrow N$. Now the composed function $f(F,y)$ has domain $N^{n+1}$ and range $\{ 0,1\}$. Thus I think it is more specifically $f(F(x), I_{n+1}^{n+1} (x,y))$. According to our definition of computability (2), though, I thought that the first coordinate should be a function with domain $N^{n+1}$. Thus I am wondering if there is a way to get $F(x)$ from the n+1 tuple (x,y). – Jmaff Jul 19 '13 at 22:24
I do not understand your confusion. I do not understand what $I_{n+1}^{n+1}$ means. You can get $F(x)$ from the n+1-tuple $(x,y)$. Apply the projection function to it. The projection function $(x_1, x_2, \dots) \mapsto x_i$ is recursive. Now, the graph of F is defined to be the set of points (x, y) in $N^n \times N = N^{n+1}$ such that $F(x) =y$. The characteristic function of a set of points in $N^{n+1}$ is the function that returns 1 for a point if it is in the set, and 0 if it is not. – user86828 Jul 21 '13 at 18:11
The function $I_{n+1} ^{n+1}(x_{1},...,x_{n+1})$ is the projection function to $x_{n+1}$. I was thinking that one could use the projection function but if I want to apply it to the n+1 tuple (x,y) where $x \in N ^{n}$ then I don't see how to apply projection function to get out an entire n-tuple, namely x. I thought that the projection functions only had single natural numbers as outputs. – Jmaff Jul 21 '13 at 19:50
you can apply the projection function to each part of x. So if $x = (x_1, x_2)...$, and you have the tuple $(x_1, x_2, ... y)$, you can apply one projection function to $x_1$, another to $x_2$, etc. F(x) is really $F(x_1, x_2, x_3...)$. So you compose F with all your projection functions. – user86828 Jul 24 '13 at 18:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9481056332588196, "perplexity": 184.30587883222898}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394937.4/warc/CC-MAIN-20160624154954-00052-ip-10-164-35-72.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/a-definitions-for-the-terms-the-limit-does-not-exists.53850/ | # A definitions for the terms the limit does not exists
1. Nov 23, 2004
### quasar987
a definitions for the terms "the limit does not exists"
Since my textbook doesn't have definitions for the terms "the limit does not exist" and "the limit goes to infinity", I tried to make them up. I'd like to know if they're correct.
1) Consider $f: \mathcal{D}\longrightarrow \mathbb{R}$ a function and $x_0$ an accumulation point of $\mathcal{D}$. We say that the limit as x approaches $x_0$ goes to positive infinity (resp. negative infinity) if $\forall M \in \mathbb{R}, \ \exists \delta>0$ such that $x \in \mathcal{D} \cap V'(x_0,\delta) \Rightarrow f(x)>M$ (resp.$f(x)<M$), and we write
$$\lim_{x \rightarrow x_0} f(x) = +\infty \ (\mbox{resp. -\infty})$$
2) Consider $f: \mathcal{D}\longrightarrow \mathbb{R}$ a function. If $\mathcal{D}$ is unbounded superiorly (?) (i.e. has no upper bound), we say that the limit as x approaches positive infinity goes to positive infinity (resp. negative infinity) if $\exists N \in \mathbb{R}$ such that $\forall x \in \mathcal{D}, \ x>N \Rightarrow f(x)$ is strictly increasing (resp. strictly decreasing). In other words, we say that the limit as x approaches positive infinity goes to positive infinity (resp. negative infinity) if $\exists N \in \mathbb{R}$ such that $\forall y, z \in \mathcal{D}$ and $y, z>N, \ z>y \Rightarrow f(z)>f(y)$ (resp.$f(z)<f(y)$), and we write
$$\lim_{x \rightarrow +\infty} f(x) = +\infty \ (\mbox{resp. $-\infty$})$$
3) We have an analogous definition for the limit as x goes to negative infinity if the domain has no lower bound. And finally,...
4) Consider $f: \mathcal{D}\longrightarrow \mathbb{R}$ a function and $x_0$ an accumulation point of $\mathcal{D}$. We say that the limit as x approaches $x_0$, $+\infty$ or $-\infty$ (whichever applies) does not exist if either
i) the limit goes to $+\infty$.
ii) the limit goes to $-\infty$.
iii) the limit is not unique.
Also, if you can think of another definition, or a characterisation that would make the proofs easier, I'd be very interested to hear it.
Mmh, I can think of one for definitions 2 and 3: For 2) "blah, blah" iff for all sequences $\{x_n\}$ such that $x_n \in \mathcal{D}$, $\{x_n\}$ is strictly increasing for at least all n greater than a certain $N \in \mathbb{R}$, and $\{x_n\}$ has $+\infty$ for a limit, the corresponding sequence $\{f(x_n)\}$ has $+\infty$ (resp. $-\infty$) for a limit. The characterisation for 3 is analogous.
Phew, this took 45 minutes to write!
Last edited: Nov 23, 2004
2. Nov 23, 2004
### mathwonk
1) is correct except that you omitted to say the "limit of what?"
2) looks wrong, i.e. it differs greatly from the case in 1), which it should resemble closely. I.e. the limit of f(x) as x goes to plus infinity equals plus infinity iff, for every N, there is an M, such that for all x larger than M, f(x) is larger than N.
3) The definition of "does not exist" also looks highly suspicious. In general, just negate the previous statements.
e.g. the limit of f(x) as x goes to x0 does not exist iff, for all t, there is some e>0, such that for all d > 0, there exists an x closer to x0 than d, and yet with f(x) further from t than e.
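Transcribing mathwonk's two statements into quantifier form (my formatting, not part of the original post):

$$\lim_{x\to+\infty} f(x) = +\infty \iff \forall N\ \exists M\ \forall x\ (x>M \Rightarrow f(x)>N),$$

$$\lim_{x\to x_0} f(x) \text{ does not exist} \iff \forall t\ \exists \varepsilon>0\ \forall \delta>0\ \exists x\ \big(0<|x-x_0|<\delta \,\wedge\, |f(x)-t|\geq\varepsilon\big).$$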
3. Nov 23, 2004
### Hurkyl
Staff Emeritus
IOW, start with the epsilon-delta way to say "the limit exists" and negate it. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9907601475715637, "perplexity": 352.2025582452424}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743216.58/warc/CC-MAIN-20181116214920-20181117000920-00171.warc.gz"} |
https://www.varsitytutors.com/common_core_6th_grade_math-help/expressions-equations | # Common Core: 6th Grade Math : Expressions & Equations
## Example Questions
Explanation:
Multiply:
Explanation:
Multiply:
Explanation:
Multiply:
### Example Question #4 : Write And Evaluate Numerical Expressions With Exponents: Ccss.Math.Content.6.Ee.A.1
Simplify:
Explanation:
When adding exponents, we first need to factor common terms.
Let's start by factoring out the following:
Factor.
### Example Question #5 : Write And Evaluate Numerical Expressions With Exponents: Ccss.Math.Content.6.Ee.A.1
Expand:
Explanation:
When a number is raised by an exponent, the base value is multiplied by itself the number of times that the exponential value indicates.
### Example Question #6 : Write And Evaluate Numerical Expressions With Exponents: Ccss.Math.Content.6.Ee.A.1
Expand:
Explanation:
When a number is raised by an exponent, the base value is multiplied by itself the number of times that the exponential value indicates.
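(The concrete expressions on this page were lost in extraction; as a generic illustration of the stated rule, with numbers of my own choosing: $3^4 = 3 \cdot 3 \cdot 3 \cdot 3 = 81$.)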
### Example Question #1 : Write And Evaluate Numerical Expressions With Exponents: Ccss.Math.Content.6.Ee.A.1
Evaluate:
Explanation:
When a number is raised by an exponent, the base value is multiplied by itself the number of times that the exponential value indicates.
is expanded to .
The product is .
### Example Question #21 : Simple Exponents
Evaluate:
Explanation:
When a number is raised by an exponent, the base value is multiplied by itself the number of times that the exponential value indicates.
is expanded to .
The product is .
### Example Question #9 : Write And Evaluate Numerical Expressions With Exponents: Ccss.Math.Content.6.Ee.A.1
Expand:
Explanation:
To expand the exponent, we just multiply the base out by the exponent present.
Expand: | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8471330404281616, "perplexity": 2164.4735374623347}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370494331.42/warc/CC-MAIN-20200329105248-20200329135248-00527.warc.gz"} |
https://web2.0calc.com/questions/helpppppp-urgent-plzzzzz | # helpppppp urgent plzzzzz
Guest Mar 5, 2015
#1
I am not going to do this very elegantly but perhaps I can do it.
First I am going to cut this square pyramid in half so I need a few more points.
X can be the midpoint of AB
Y is the midpoint of CD and
Z is the midpoint of EF
Consider triangle VXB: <VXB = 90°, XB = 5, VB = 8; find VX.
VX = sqrt(64 - 25) = sqrt(39)
Now consider triangle VXY: VX = VY = sqrt(39), XY = 8. Find <VXY (I'm going to call it $\theta$).
$$39=39+64-2\cdot\sqrt{39}\cdot 8\cos\theta$$
$$64=16\sqrt{39}\cos\theta$$
$$\cos\theta=\frac{4}{\sqrt{39}}$$
$$\cos^{-1}\left(\frac{4}{\sqrt{39}}\right) = 50.169945446964^\circ$$
so <VXY= 50.17°
NOW let's consider triangle XZY: <ZYX = 30°, <ZXY = 50.17°, XY = 8. Find XZ.
$$\frac{XZ}{\sin 30^\circ}=\frac{8}{\sin(180^\circ-30^\circ-50.17^\circ)}$$
$$\frac{XZ}{\sin 30^\circ}=\frac{8}{\sin(98.3^\circ)}$$
$$XZ=\frac{8}{\sin(98.3^\circ)}\times \sin 30^\circ$$
$$\frac{8}{\sin(98.3^\circ)}\times\sin(30^\circ) = 4.0423403252212546$$
so $$XZ\approx 4.04 cm$$
NOW I am going to consider triangle AVB
VX=sqrt(39), XZ=4.04 so VZ=sqrt(39)-4.04 = 2.205 cm approx
NOW triangle EFV is similar to triangle ABV so
$$\frac{EF}{AB}=\frac{VZ}{VX}$$
$$\frac{EF}{10}=\frac{2.205}{\sqrt{39}}$$
$$EF=\frac{2.205}{\sqrt{39}}\times 10$$
$$\frac{2.205}{\sqrt{39}}\times 10 = 3.5308257914021713$$
So if I have not done anything incorrectly the answer is EF= 3.53cm (approximately)
BLAST, I FOUND THE WRONG ONE. BUMMER!!! (ノ °益°)ノ 彡
nevermind:
$$\frac{VE}{VA}=\frac{VZ}{VX}$$
$$\frac{VE}{8}=\frac{2.205}{\sqrt{39}}$$
$$VE=\frac{2.205}{\sqrt{39}}\times 8$$
$$\frac{2.205}{\sqrt{39}}\times 8 = 2.824660633121737$$
AE=AV-EV
AE=8-2.825
$$8 - 2.825 = \frac{207}{40} = 5.175$$
AE is approx 5.18 cm
Melody Mar 5, 2015
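As a numeric sketch (my addition, not part of the thread): the following reproduces Melody's steps with the segment length XY as a parameter, so the XY = 10 correction raised in post #2 below can be plugged in. The lengths AB = 10, VB = 8 and the 30° angle are read off the worked solution, since the original diagram is not visible here; note also that 180 - 30 - 50.17 = 99.83, not the 98.3 used above, which is why the first result differs slightly from 5.18.

```python
from math import sqrt, acos, sin, radians, degrees

def compute_AE(AB=10.0, VB=8.0, XY=8.0, angle_ZYX=30.0):
    VX = sqrt(VB**2 - (AB / 2)**2)            # right triangle VXB: sqrt(64 - 25) = sqrt(39)
    theta = degrees(acos(XY / (2 * VX)))      # <VXY via the cosine rule in isosceles VXY
    phi = 180.0 - angle_ZYX - theta           # third angle of triangle XZY
    XZ = XY * sin(radians(angle_ZYX)) / sin(radians(phi))  # sine rule
    VZ = VX - XZ
    VE = VB * VZ / VX                         # similar triangles EFV ~ ABV
    return VB - VE                            # AE = AV - EV

print(compute_AE(XY=8.0))   # ~5.20 with Melody's XY = 8
print(compute_AE(XY=10.0))  # ~6.97 with the XY = 10 correction
```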
#2
The method looks good Melody, but, almost at the top, shouldn't XY = 10 rather than 8 ?
Guest Mar 5, 2015
#3
Well, XY is 10; you are right about that.
So long as the method is correct I am not too concerned.
I am sure you can fix the careless errors; I thought there were likely to be some.
I take it you were happy with the method?
There is probably a much easier way, I have a reputation for taking the scenic route. :))
Melody Mar 5, 2015 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9429882168769836, "perplexity": 4593.503068958577}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583516135.92/warc/CC-MAIN-20181023111223-20181023132723-00377.warc.gz"} |
https://pt.ifixit.com/Answers/History/157334
## 19/03/2018
### Current version by: Sam Goldheart (vote details), 19/03/2018
#### Text:
`Well I had the same problem, But I tried all the Methods, But in the End It was Just Hardware Problem , And My Phone was out of Warranty So I fixed it By my self, It is Quite easy, I used [http://itechify.com/2015/09/16/how-to-recover-data-from-nexus-5-broken-screen/|this Guide]..`
#### Status:
deleted → open
## 21/06/2016
### Edited by: iRobot, 21/06/2016
#### Text:
`Well I had the same problem, But I tried all the Methods, But in the End It was Just Hardware Problem , And My Phone was out of Warranty So I fixed it By my self, It is Quite easy, I used [http://itechify.com/2015/09/16/how-to-recover-data-from-nexus-5-broken-screen/|this Guide]..`
#### Status:
open → deleted
## 21/06/2016
### Edited by: natalia, 21/06/2016
#### Text:
Well I had the same problem, But I tried all the Methods, But in the End It was Just Hardware Problem , And My Phone was out of Warranty So I fixed it By my self, It is Quite easy, I used [http://www.getmetravelled.com/ → http://itechify.com/2015/09/16/how-to-recover-data-from-nexus-5-broken-screen/|this Guide]..
`open`
## 25/08/2015
### Edited by: natalia, 25/08/2015
#### Text:
Well I had the same problem, But I tried all the Methods, But in the End It was Just [HArdware → Hardware] Problem , And My Phone was out of Warranty So I fixed it By my self, It is Quite easy, I used [http://techglen.com/2014/01/15/how-to-fix-nexus-4-power-button-or-volume-button/ → http://www.getmetravelled.com/|this Guide]..
`open`
## 15/01/2014
### Original post by: natalia, 15/01/2014
#### Text:
`Well I had the same problem, But I tried all the Methods, But in the End It was Just HArdware Problem , And My Phone was out of Warranty So I fixed it By my self, It is Quite easy, I used [http://techglen.com/2014/01/15/how-to-fix-nexus-4-power-button-or-volume-button/|this Guide]..`
`open` | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9245485663414001, "perplexity": 3964.9399972440156}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153980.55/warc/CC-MAIN-20210730185206-20210730215206-00289.warc.gz"} |
https://cstheory.stackexchange.com/questions/29132/how-hard-is-it-to-find-a-well-distributed-subset-of-models-of-a-propositional/29214 | # How hard is it to find a “well-distributed” subset of models of a propositional formula?
We consider the propositional language $\mathcal{L}_{\mathit{PS}}$ defined over a finite alphabet $\mathit{PS}$ and the usual logical connectives. An interpretation is an assignment $\mathit{PS} \mapsto \{true, false\}$, and a model of a formula $\phi \in \mathcal{L}_{\mathit{PS}}$ is an interpretation $\omega$ which makes $\phi$ true in the usual way. The set of models of a formula $\phi$ is denoted $Mod(\phi)$. The Hamming distance $d_H(\omega, \omega')$ between two interpretations is the number of differences between them, i.e., $d_H(\omega, \omega') = |\{x \in \mathit{PS} \mid \omega(x) \neq \omega'(x)\}|$.
I am interested in the computational complexity of the following decision problem. Given a formula $\phi$, an integer $k$ given in unary and a number $\alpha$, does there exist a set $\mathcal{I} \subseteq Mod(\phi)$ such that $|\mathcal{I}| \leq k$ and such that $\forall \omega' \in Mod(\phi)$, $\exists \omega \in \mathcal{I}$, $d_H(\omega, \omega') \leq \alpha$? Intuitively, this problem asks whether there is a subset (of size at most $k$) of models of a given formula such that every model of the formula is "close" enough (wrt the distance threshold $\alpha$) to some model of the subset.
Actually, I am also wondering about the hardness of the above problem when the set $Mod(\phi)$ is given explicitly in input (at least, this one is in $\mathbf{NP}$, but is it $\mathbf{NP}$-hard?)
Update:
The following is the answer for the case where the set $Mod(\phi)$ is explicitly given in input. The problem is $\mathsf{NP}$-hard by reduction from the dominating set problem on cubic graphs, which is known to be $\mathsf{NP}$-hard. Let $G = (V, A)$ be a cubic graph with $V = \{v_1, \dots, v_n\}$ and $A = \{a_1, \dots, a_m\}$ and $k$ be a positive integer. We associate with every edge $a_j \in A$ a propositional variable $f(a_j) = x_j$ and denote $\{x_1, \dots, x_m\} = \mathit{PS}$. We associate with every vertex $v_i \in V$ an interpretation $g(v_i) = \omega_i$ defined for every $j \in \{1, \dots, m\}$ as $\omega_i(x_j) = 1$ if the edge $a_j \in A$ is incident to the vertex $v_i$, $0$ otherwise. Denote $\{\omega_1, \dots, \omega_n\} = \mathcal{S}$ ($\mathcal{S}$ stands for the set of models $Mod(\phi)$ of some formula $\phi$, but which is here defined explicitly). One can see that for every $\omega_i, \omega_j \in \mathcal{S}$ with $i \neq j$, $d_H(\omega_i, \omega_j) = 4$ if $v_i$ and $v_j$ are adjacent in $G$, otherwise $d_H(\omega_i, \omega_j) = 6$. Therefore, $G$ admits a dominating set $V'$ such that $|V'| \leq k$ if and only if the set $\mathcal{I} = \{g(v_i) \mid v_i \in V'\}$ satisfies $\forall \omega_j \in \mathcal{S}$, $\exists \omega_i \in \mathcal{I}$, $d_H(\omega_i, \omega_j) \leq 4$.
For the succinct case (i.e., where considering a formula $\phi$ instead of a list of interpretations), then the problem is still open. From the answers of Marzio De Biasi and D.W., it is $\mathsf{coNP}$-hard. Moreover, it is in $\mathsf{NP}^\mathsf{NP} = \Sigma_2^{\rm P}$. Indeed, one can use the following non-deterministic algorithm with $\mathsf{NP}$ oracle: (i) guess a set $\mathcal{I}$ of interpretations such that $|\mathcal{I}| \leq k$; (ii) check in polynomial time that $\forall \omega \in \mathcal{I}$, $\omega \in Mod(\phi)$, and check using one call to the $\mathsf{NP}$-oracle that $\forall \omega' \in Mod(\phi)$, $\exists \omega \in \mathcal{I}$, $d_H(\omega, \omega') \leq \alpha$.
I still need to characterize the exact complexity of the problem. Intuitively, it seems that the problem is $\Sigma_2^{\rm P}$-hard (thus, it would be $\Sigma_2^{\rm P}$-complete in this case). Does anybody know an extension of the vertex cover problem to some $\Sigma_2^{\rm P}$-hard problem? Otherwise, a reduction from the validity problem of a QBF of the form $\exists X \forall Y \psi$ to our problem seems cumbersome.
Well, I don't know what the exact complexity class is, but the problem is hard: a polynomial-time algorithm for this problem would imply P=NP. Just take $k=0$; then the answer to your problem is yes if and only if $\phi$ is unsatisfiable. Therefore, any polynomial-time algorithm for your problem would imply P=NP.
If you don't like relying on the special case $k=0$, here's a related construction. Let $\psi$ be a boolean formula on $n$ variables $x_1,\dots,x_n$. Introduce $n+1$ new variables $y_0,\dots,y_n$. Define the formula
$$\phi := (\psi \land y_0 \land \dots \land y_n) \lor (\neg y_0 \land \dots \neg y_n).$$
Set $k=1$ and $\alpha=n$. If $\psi$ is satisfiable, the answer to your problem is no. If $\psi$ is not satisfiable, the answer to your problem is yes.
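For experimenting with small instances, here is a tiny brute-force checker for the explicit version of the decision problem (my own sketch, exponential and only for toy sizes); it reproduces D.W.'s construction for $n=2$, $\psi = x_1 \wedge x_2$:

```python
from itertools import combinations, product

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def coverable(models, k, alpha):
    """True iff some I subset of models, |I| <= k, has every model
    within Hamming distance alpha of a member of I."""
    models = list(models)
    if not models:
        return True
    for size in range(1, k + 1):
        for I in combinations(models, size):
            if all(any(hamming(w, m) <= alpha for w in I) for m in models):
                return True
    return False

n = 2  # variables: x1, x2, y0, y1, y2
def phi(a):
    x, y = a[:n], a[n:]
    return (all(x) and all(y)) or not any(y)

models = [a for a in product((0, 1), repeat=2 * n + 1) if phi(a)]
print(coverable(models, k=1, alpha=n))  # False, since psi is satisfiable
```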
The first part of the question leads to a coNP-hard problem; this is a reduction from UNSAT.
Suppose that $\phi$ is a SAT formula with $n$ variables. Check if it is satisfied by $(1,1,...,1),(0,1,...,1),(1,0,...,1),...,(1,1,...,0)$; if yes, build a dummy false instance of your problem. Otherwise build:
$$\phi' =( \phi ) \lor (x_1 \land ... \land x_n)$$
which is satisfiable, and $w_1 = (1,...,1) \in Mod(\phi')$. Note that for all $w \neq w_1$ we have $w \in Mod(\phi') \Leftrightarrow w \in Mod(\phi)$; if you pick $k = 1$ and $\alpha= 1$, then $\mathcal{I}$ must contain only one element, and it must be at Hamming distance at most one from all the other elements of the model set.
But by construction the $n$ interpretations at Hamming distance 1 from $w_1$, namely $(0,1,...,1),(1,0,...,1),\dots$, are not in $Mod(\phi)$ and are not in $Mod(\phi')$; so your problem has a solution if and only if $\mathcal{I} = \{ w_1 \} = Mod(\phi' )$, if and only if the original $\phi$ is unsatisfiable.
• :-( D.W. posted a similar answer while I was writing mine ... – Marzio De Biasi Jan 13 '15 at 18:09
• I think the HRC problem is different, because the $p$ strings to find are not in the given set $S$. In particular, 1-HRC is NP-hard, while it is obviously in P in my case. – user109711 Jan 18 '15 at 4:49
• If the whole set $Mod(\phi)$ is given in input, then the problem is NP-hard in the case where any distance $d$ between interpretations can be considered. We reduce the set cover problem to ours. Given a graph $G = (V, A)$, associate with each vertex from V some unique interpretation. Define for every $(a, b) \in A$ the distance $d(a, b)$ as the length of the shortest path between $a$ and $b$ in $G$. Then there exists a set cover of size $\leq \alpha$ in $G$ if and only if the translated instance is a yes one, with $\alpha = 1$. But I cannot prove NP-hardness when using the Hamming distance. – user109711 Jan 18 '15 at 4:59
• In my previous comment, I mean "vertex cover problem", not "set cover problem". And the first occurrence of $\alpha$ should be replaced by a $k$. – user109711 Jan 18 '15 at 5:16
• @user109711: I saw that you posted a solution for the explicit case, but perhaps you can construct $Mod(\phi)$ implicitly using a DNF (of polynomial size w.r.t. $|Mod(\phi)|$) in which every clause is exactly $(l_1 \land ... \land l_n), l_i \in \{x_i, \neg x_i\}$ (negated or unnegated); see the part of my answer in which I build the elements of S (that part is correct). Don't forget that the implicit case is also coNP-hard (see the first part of my answer and DW's answer) – Marzio De Biasi Jan 18 '15 at 16:27
$\Sigma_2^{\rm P}$-hardness holds even when $k = 1$. We prove it by considering a reduction from the validity problem for quantified boolean formulas (QBFs) of the form $\exists X \forall Y \alpha$ where $X = \{x_1, \dots, x_n\}$ and $Y = \{y_1, \dots, y_m\}$ are two disjoint sets of propositional atoms and $Var(\alpha) = X \cup Y$. Consider such a QBF. Let us define new sets of fresh variables $Z = \{z_1, \dots, z_{m+1}\}$ and for every $i \in \{1, \dots, m\}$, $X^i = \{x_1^i, \dots, x_n^i\}$. Let us associate with the QBF the propositional formula $\phi$ defined over $\bigcup_{i = 1}^m{X^i} \cup X \cup Y \cup Z$ as:
$\phi = \bigwedge_{i=1}^n\bigwedge_{j=1}^m{(x_i \leftrightarrow x_i^j)} \wedge (\alpha \rightarrow \bigwedge_{i = 1}^{m+1}{z_i})$.
Note that $|Var(\phi)| = n(m+1) + 2m + 1$.
We prove that the QBF $\exists X \forall Y \alpha$ is valid if and only if there exists an interpretation $\omega \in Mod(\phi)$ such that $\forall \eta \in Mod(\phi)$, $d_H(\omega, \eta) \leq n(m + 1) + m$.
($\Rightarrow$ part) Assume that $\exists X \forall Y \alpha$ is valid. Then there exists an assignment over $X$ such that any completion of it over $X \cup Y$ satisfies $\alpha$. Let $\omega'$ be such an assignment over $X$; note that any model of $\phi$ whose $X$-part equals $\omega'$ must then satisfy $z_1 = \dots = z_{m+1} = 1$. Then consider an assignment $\omega$ over $Var(\phi)$ satisfying the following conditions:
$\forall i \in \{1, \dots, n\}$, $\omega(x_i) = \omega(x^1_i) = \dots = \omega(x^m_i) = |1 - \omega'(x_i)|$, and for all other variables $s \in Y \cup Z$, $\omega(s) = 1$.
Let $\eta \in Mod(\phi)$. Then we fall in exactly two cases: (i) $\eta$ differs from $\omega$ on all variables from $X$. In this case, $\eta$ and $\omega'$ have the same value on all variables from $\bigcup_{i=1}^m{X^i} \cup X$, so $\eta$, being a completion of $\omega'$, satisfies $\alpha$, and the implication in $\phi$ forces the $z$-variables to be true. Thus $\eta$ and $\omega$ have the same value on all variables from $Z$, i.e., $\forall i \in \{1, \dots, m+1\}$, $\eta(z_i) = \omega(z_i) = 1$, so $\eta$ and $\omega$ share the same value on at least $m + 1$ variables. Hence, $d_H(\eta, \omega) \leq n(m + 1) + m$. (ii) $\eta$ and $\omega$ have the same value on some variable $x_a$ from $X$. Then they also have the same value on all variables $x^i_a$ from $X^i$, i.e., $\forall i \in \{1, \dots, m\}$, $\eta(x^i_a) = \omega(x^i_a)$, so $\eta$ and $\omega$ share the same value on at least $m + 1$ variables. Hence, $d_H(\eta, \omega) \leq |Var(\phi)| - (m + 1) = n(m + 1) + m$.
We just proved that $\forall \eta \in Mod(\phi)$, $d_H(\omega, \eta) \leq n(m + 1) + m$. That is, we proved that if QBF $\exists X \forall Y \alpha$ is valid, then there exists an interpretation $\omega \in Mod(\phi)$ such that $\forall \eta \in Mod(\phi)$, $d_H(\omega, \eta) \leq n(m + 1) + m$.
($\Leftarrow$ part) Assume that $\exists X \forall Y \alpha$ is not valid. So it holds that for any assignment over $X$ there is a completion of it over $X \cup Y$ satisfying $\neg \alpha$. Let $\omega \in Mod(\phi)$. Consider the assignment $\eta$ over $\bigcup_{i = 1}^m{X^i} \cup X \cup Z$ satisfying the following conditions:
$\forall i \in \{1, \dots, n\}$, $\eta(x_i) = \eta(x^1_i) = \dots = \eta(x^m_i) = |1 - \omega(x_i)|$, and $\forall i \in \{1, \dots, m+1\}$, $\eta(z_i) = |1 - \omega(z_i)|$. By hypothesis, $\eta$ can be completed to an assignment over $Var(\phi)$ such that $\eta \models \neg \alpha$; this makes the implication $\alpha \rightarrow \bigwedge_i z_i$ vacuous, so $\eta$ satisfies all constraints of $\phi$, thus $\eta \in Mod(\phi)$. Moreover, $\eta$ differs from $\omega$ on $n(m+1) + m + 1$ variables. So for every $\omega \in Mod(\phi)$, we can find an assignment $\eta \in Mod(\phi)$ such that $d_H(\eta, \omega) > n(m+1) + m$. Hence, there exists no interpretation $\omega \in Mod(\phi)$ such that $\forall \eta \in Mod(\phi)$, $d_H(\omega, \eta) \leq n(m + 1) + m$.
This shows that the QBF $\exists X \forall Y \alpha$ is valid if and only if there exists an interpretation $\omega \in Mod(\phi)$ such that $\forall \eta \in Mod(\phi)$, $d_H(\omega, \eta) \leq n(m + 1) + m$, which concludes the proof that the initial problem is $\Sigma_2^{\rm P}$-hard. Since we already knew that it is in $\Sigma_2^{\rm P}$, it is $\Sigma_2^{\rm P}$-complete. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9834138751029968, "perplexity": 118.67665657164622}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039550330.88/warc/CC-MAIN-20210421191857-20210421221857-00253.warc.gz"} |
https://math.stackexchange.com/questions/376589/extreme-value-theorem-proof-help | # Extreme Value Theorem proof help
Extreme Value Theorem: If $f$ is a continuous function on an interval [a,b],
then $f$ attains its maximum and minimum values on [a,b].
Proof from my book: Since $f$ is continuous on $[a,b]$, it is bounded there (by the boundedness theorem), so $f$ has a least upper bound, call it $M$. Assume there is no value $c \in [a,b]$ for which $f(c)=M$.
Therefore, $f(x)<M$ for all $x \in [a,b]$. Define a new function $g$ by
$g(x)=\frac{1}{M-f(x)}$
Observe $g(x)>0$ for every $x\in[a,b]$ and that $g$ is continuous and bounded on [a,b]. Therefore there exists $K>0$ such that $g(x)\le K$ for every $x\in [a,b]$. Since for each $x \in [a,b]$,
$g(x)= \frac{1}{M-f(x)} \le K$ is equivalent to $f(x)\le M-\frac{1}{K}$,
we have contradicted the fact that $M$ was assumed to be the least upper bound of $f$ on [a,b].
Hence, there must be a value $c\in[a,b]$ such that $f(c)=M$.
Q: Where does the function $g$ come from? Is there a popular alternative proof?
• Using Bolzano-Weierstrass, there is an alternate, nicer proof; see en.wikipedia.org/wiki/Extreme_value_theorem – Halil Duru Apr 29 '13 at 20:46
• @StefanSmith, $f$ is bounded by the boundedness theorem – Elimination Jul 17 '14 at 15:52
• @Elimination : Thanks, you're right. I had to Google "the boundedness theorem". The proof should mention why $f$ must be bounded, unless it is clear from context, from something immediately before it in the OP's book. I still prefer the proof that Halil Duru cites. – Stefan Smith Jul 19 '14 at 0:24
The "simplest" proof I know goes something like this : If $M$ is the supremum of $f$, then there is a sequence $(x_n)$ such that $f(x_n) \to M$. Now, $(x_n)$ itself may not be convergent, but since $[a,b]$ is compact, it will have a convergent subsequence $(x_{n_k})$. Suppose $x_{n_k} \to c \in [a,b]$, then $f(x_{n_k}) \to f(c)$. But $f(x_{n_k})$ is a subsequence of $f(x_n)$, and hence must converge to $M$. Hence, $f(c) = M$.
This is quite a simple proof, isn't it? Why do you want a 'popular alternative proof'?
The proof can't be too simple, because the result is not true if $f$ is defined over $\mathbb Q$ instead of $\mathbb R$. For instance, define $f:\mathbb Q \to \mathbb Q$ by $f(x) = x^3 - x$. Then $f$ doesn't attain its maximum in $[-1,0]$, because $-\sqrt\frac{1}{3} \notin \mathbb Q$. Hence any proof of your theorem must use the properties of the real numbers in an essential way.
As an illuminating exercise, try to see where the proof breaks down if $f$ is only defined over the rational numbers.
Where does the function $g$ come from?
We need to show that $f(x)=M$ for some $x$. A natural move is to consider the difference between $f$ and $M$. Let $d(x)=M-f(x)$. $f(x)=M \leftrightarrow d(x)=0$. The reason that the definition $g(x)=(d(x))^{-1}$ uses the inverse of the difference is that if $g$ is bounded from above by $K>0$, than $d$ is bounded from below by $K^{-1}>0$. $g$ is bounded by the boundedness theorem, thus we know a positive lower bound of $d$. Applying the boundedness theorem directly to $d$ is useless because the lower bound of $d$ can be $0$. This is the intuition behind $g$.
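Spelled out (using that $M-f(x)>0$ on $[a,b]$, a small addition of mine):

$$g(x)\leq K \iff \frac{1}{M-f(x)}\leq K \iff M-f(x)\geq \frac{1}{K} \iff f(x)\leq M-\frac{1}{K}.$$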
You asked for a "popular alternative proof". This is an alternative proof. I don't know how popular it is, but I like it. It uses the Bolzano-Weierstrass theorem (convergent subsequences) but hardly anything else, no least upper bounds; it skips the step of proving boundedness, going straight for the maximum. It could be shortened by using the fact that the set of rational numbers is countable, but that seems unnecessarily sophisticated.
Given a continuous real-valued function $f$ on $[a,b]$, we will show that the set $Y = f([a,b])$ has a greatest element.
For each positive integer $n$, define a finite set $Q_n = \{\frac{p}{q}: p,q \text{ integers, } 0 < q \le n, |p| \le n\}$.
Choose $y_n\in Y$ so as to maximize the number of elements in the set $\{r\in Q_n: y_n > r\}$, and choose $x_n\in[a,b]$ with $f(x_n) = y_n$.
The sequence $\{x_n\}$ has a subsequence converging to a point $c\in[a,b]$. Since $f$ is continuous, the corresponding subsequence of $\{y_n\}$ converges to $f(c)$. We will show that $f(c)$ is the greatest element of $Y$.
Assume for a contradiction that $f(c)<y\in Y$. Choose a rational number $r$ so that $f(c)<r<y$. Because of the way $y_n$ was chosen, we have $y_n > r$ whenever $r\in Q_n$. Since $r\in Q_n$ for all sufficiently large $n$, we have $y_n > r > f(c)$ for all sufficiently large $n$. But this is absurd, since $\{y_n\}$ has a subsequence converging to $f(c)$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9886566996574402, "perplexity": 73.91694133457075}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998288.34/warc/CC-MAIN-20190616162745-20190616184745-00218.warc.gz"} |
https://cosmocoffee.info/viewtopic.php?f=2&t=864 | [0704.0908] Extragalactic Radio Sources and the WMAP Cold Spot
Authors: Lawrence Rudnick, Shea Brown, Liliya R. Williams Abstract: We detect a dip of 20-30% in the surface brightness and number counts of NVSS sources smoothed to a few degrees at the location of the WMAP cold spot. The dip has structure on scales of 1-10 degrees. We suggest that the dip in extragalactic brightness and number counts and the WMAP cold spot are physically related, i.e., that the coincidence is neither a statistical anomaly nor a WMAP foreground correction problem. Since the cold spot originates from structures at modest redshifts, there is no remaining need for non-Gaussian processes at the last scattering surface of the CMB. The late integrated Sachs-Wolfe effect, already seen statistically for NVSS source counts, may thus be seen to operate on a single region for the first time. To create the magnitude and angular size of the WMAP cold spot requires a completely empty void of radius 140 Mpc at z<1 along this line of sight. This is far outside the current expectations of the concordance cosmology, and adds to the anomalies seen in the CMB.
Discussion related to specific recent arXiv papers
Richard Lieu
Posts: 11
Joined: November 27 2005
Affiliation: University of Alabama, Huntsville
[0704.0908] Extragalactic Radio Sources and the WMAP Cold S
Another small and ignorable `glitch' for the standard cosmological model?
Niayesh Afshordi
Posts: 49
Joined: December 17 2004
Affiliation: Perimeter Institute/ University of Waterloo
Contact:
[0704.0908] Extragalactic Radio Sources and the WMAP Cold Sp
I had not appreciated this paper until Anze brought it up in another post. Another paper that has brought up the possibility of a huge void (~150 Mpc) is astro-ph/0612347. Does anybody know the probability of a void with $\delta \sim -1$ and this size within our horizon? Since this is probably very unlikely, what would be the required $f_{NL}$ non-Gaussianity parameter that would lead to one of these in our horizon?
Anze Slosar
Posts: 183
Joined: September 24 2004
Affiliation: Brookhaven National Laboratory
Contact:
[0704.0908]
I think that Inoue and Silk have only a 30% or so underdensity, compared to these guys who require -1. I am pretty sure that any such void is completely impossible as a random fluctuation in a standard theory, and that, if true, such (primordial?) voids must also be extremely rare if you want to avoid seeing them in the CMB and other power spectra. It would also be good to see an independent confirmation of this result.
Syksy Rasanen
Posts: 119
Joined: March 02 2005
Affiliation: University of Helsinki
[0704.0908] Extragalactic Radio Sources and the WMAP Cold
Voids do tend to be very underdense, $\delta\lesssim-0.9$. The radius of the largest observed void is around 40 Mpc (see astro-ph/0312533), which is a factor 3-4 smaller than the void suggested in the paper. I think it's true that it is unlikely to have such large, empty voids as random fluctuations in the standard picture. However, in the standard picture it is also difficult to get voids which are as empty and as large as those which are indeed observed, so one should not be too confident about such statistics. Peebles has even called this a crisis of the CDM model, astro-ph/0101127.
(An easy way to get bigger and emptier voids might simply be to change the power spectrum on the relevant scales. Of course, this would also change the abundance of large overdense structures, which might not be unwanted, see astro-ph/0605393, astro-ph/0609686.)
Ben Gold
Posts: 81
Joined: September 25 2004
Affiliation: University of Minnesota
Contact:
Re: [0704.0908] Extragalactic Radio Sources and the WMAP Col
Syksy Rasanen wrote:Voids do tend to be very underdense, $\delta\lesssim-0.9$. The radius of the largest observed void is around 40 Mpc (see astro-ph/0312533), which is a factor 3-4 smaller than the void suggested in the paper.
(...)
I think there's a danger here of mixing up $\delta$ for galaxies and $\delta$ for total mass. The observations tend to report $\delta$ in galaxy counts or something similar; to get the same $\delta$ in total mass you have to assume that the bias is the same in voids as it is elsewhere. As far as I know (though I freely admit I've not kept up on this topic), simulations suggest that this isn't true; see for example Ostriker et al. astro-ph/0305203. So you might not even need to change the power spectrum. Maybe better simulations will make the whole "void problem" go away entirely?
Kaiki Taro Inoue
Posts: 2
Joined: June 11 2007
Affiliation: Kinki University
[0704.0908]
I have predicted the 'large' void at z~1 in the direction of the 3-sigma cold spot in astro-ph/0602478, before this paper. The chance of having such a large void should be very small, but the 'volume effect' and the 'non-linear gravitational effect' could somehow enhance the chance of having such voids (here a 'void' means a low density region surrounded by a spherically symmetric non-linear wall). For linear fluctuations, a void with a 30% underdense region of $200h^{-1}$ Mpc radius is an 11-13 sigma object at z=0 (astro-ph/0612347). A spiky power spectrum could make such a structure without having any 'troubles' with other observations.
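For scale, a back-of-envelope estimate I'll add here (not from the post): the one-sided Gaussian tail probability of a $\nu$-sigma fluctuation is

$$P(\nu)=\tfrac{1}{2}\,\mathrm{erfc}\!\left(\frac{\nu}{\sqrt 2}\right)\approx \frac{e^{-\nu^2/2}}{\nu\sqrt{2\pi}},$$

which for $\nu = 11$-$13$ is roughly $10^{-28}$ to $10^{-39}$, so such a void is effectively impossible as a single Gaussian fluctuation.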
Richard Holman
Posts: 5
Joined: June 21 2005
Affiliation: Carnegie Mellon University
[0704.0908]
I'd like to point out that Laura Mersini-Houghton, Tomo Takahashi and I
predicted the existence of a void at $z\leq 1$ of size roughly 200 Mpc, in a couple of papers (arXiv: hep-th/0611223, hep-th/0612142) which studied the astrophysical signatures of our proposal for a dynamical selection of inflationary initial conditions. This involved considerations of both the backreaction of massive modes of the inflaton, as well as non-local entanglement of our inflating patch and other patches.
The net effect of all this, from the point of view of large-scale structure, is to add a negative, scale-dependent contribution to the Newtonian potential $\Phi$, which in turn gives rise to a negative density contrast superimposed on the positive density perturbations inside the Hubble radius, resulting in voids at redshifts $z\leq 1$ (roughly the present Hubble scale) with sizes of ~140-200 Mpc today.
This appears to be what the authors of 0704.0908 observed. (I think there were also observations by WMAP and SDSS previously.) These
observations seem to be in agreement with our theoretical prediction of the effects of nonlocal entanglement between inflationary patches. If true, then the cold spot discovery would be a very exciting test of such quantum gravitational effects; it may provide the first indirect test of mechanisms for the selection of initial conditions and open a new window onto physics beyond the horizon.
Among other things, our model can be tested independently by considering correlations between cosmic shear and temperature anisotropies. It would be interesting to see whether our results are consistent with the work quoted by Inoue.
Kaiki Taro Inoue
Posts: 2
Joined: June 11 2007
Affiliation: Kinki University
[0704.0908]
It seems to me that the addition of a negative, scale-dependent contribution to the Newtonian potential $\Phi$ leads to a suppression of overall fluctuations on large angular scales (larger than the acoustic horizon scale at the last scattering surface). Therefore, it suppresses the generation of 140-200 Mpc voids at z<1. Any comments?
Alessio Notari
Posts: 7
Joined: November 26 2005
Affiliation: McGill U.
[0704.0908] Extragalactic Radio Sources and the WMAP Cold Sp
Maybe not many people are aware of this, but if such voids exist (radius $200/h$ Mpc and $\delta\approx -0.4$) and if we happen to live near the center (with a precision of about 10 Mpc), this would give quite a good fit to cosmological observations (the supernova Hubble diagram and other things) without the need for Dark Energy.
We have tried to fit the supernovae in astro-ph/0606703 (although in that paper there is a slightly different geometry, the result is almost the same for spherical voids). The $\chi^2$ is worse than $\Lambda$CDM, but still reasonable, and it can be made slightly better than that by playing with the density profile.
http://www.zora.uzh.ch/id/eprint/46007/

# Effect of the stimulation level on the refractory behavior of the electrically stimulated auditory nerve
Lai, W K; Dillier, N (2010). Effect of the stimulation level on the refractory behavior of the electrically stimulated auditory nerve. In: 13. Jahrestagung der Deutschen Gesellschaft für Audiologie, Frankfurt, 17 March 2010 - 20 March 2010, 1-4.
## Abstract
The refractory behavior of the electrically stimulated auditory nerve can be described by the recovery function, which plots the ECAP amplitude in response to a masker/probe stimulus pair as a function of the time interval (Masker Probe Interval, MPI) between the two stimuli. The recovery function is characterized by two time
intervals or periods: in the first interval (the Absolute Refractory Period, ARP), typically lasting 300 to 400 µs, the neurons stimulated by the masker are absolutely refractory and unable to respond to the probe stimulus. As the MPI is gradually increased beyond the ARP, the stimulated neural population becomes increasingly able to respond to the probe stimulus (i.e. it is relatively refractory) as the inhibitory effect of the masker diminishes.
This second interval (the Relative Refractory Period, RRP) can be characterized by the time constant of an asymptotically increasing exponential function (Morsnowski et al. 2006). This recovery time constant provides an indication of the neurons’ temporal characteristics.
Previous reports (e.g. Battmer et al. 2004) suggest that this time constant is affected by the stimulation level used to determine the recovery function. Such a dependency would make it difficult to characterize the refractory behavior of the stimulated neurons using the recovery function.
In this study, the refractory behavior of the electrically stimulated auditory nerve with respect to stimulation level was examined retrospectively. It was expected that increasing the stimulation level would result in more deterministic behavior.
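A minimal numerical sketch of the recovery-function model described above (the exponential form is as characterized here; the parameter values, noise level, and fitting call are illustrative assumptions, not taken from the abstract):

```
% Recovery function: ECAP amplitude vs. masker-probe interval (MPI),
% rising exponentially toward an asymptote beyond the ARP.
mpi = (0.1:0.1:5)';                     % MPI in ms (assumed range)
A0 = 500; arp = 0.35; tau = 0.8;        % assumed "true" parameters
amp = A0 * max(0, 1 - exp(-(mpi - arp)/tau)) + 10*randn(size(mpi));

% Fit the same model to recover the time constant tau.
model = @(p,t) p(1) * max(0, 1 - exp(-(t - p(2))/p(3)));
p = lsqcurvefit(model, [400 0.3 1], mpi, amp);
fprintf('fitted ARP = %.2f ms, tau = %.2f ms\n', p(2), p(3));
```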
http://www.tutorsville.net/physics-formulas/kinetic_energy_formula.php

# Kinetic Energy Formula
When an object moves, it possesses energy of motion, called kinetic energy.
The kinetic energy formula is given by
$K.E=\frac{1}{2}mv^2$
Where m = mass of the body,
v = velocity with which the body is moving.
Kinetic energy is expressed in kg·m²/s², that is, in joules (J).
The kinetic energy formula can be rearranged to calculate the mass, velocity, or kinetic energy of the body if the other two quantities are given.
Kinetic Energy Problems
Below are problems on kinetic energy, which help you to understand where you can use this formula.
Solved Examples
Question 1: A bus is moving with a velocity of 15 m/s and has a mass of 300 kg. Calculate its kinetic energy.
Solution:
Given: Mass of the body m = 300 kg,
Velocity v = 15 m/s,
Kinetic energy is given by K.E = $\frac{1}{2}mv^2$
= $\frac{1}{2}$ $\times$ 300 kg $\times$ (15 m/s)$^2$
= 33,750 kg·m²/s² = 33.75 kJ.
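A quick check of the same computation in MATLAB (a sketch, not part of the original page):

```
% Kinetic energy of a 300 kg body moving at 15 m/s
m = 300;            % mass in kg
v = 15;             % velocity in m/s
KE = 0.5 * m * v^2; % kinetic energy in joules
fprintf('K.E = %.0f J (%.2f kJ)\n', KE, KE/1000)
% prints: K.E = 33750 J (33.75 kJ)
```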
https://whatmaster.com/work-power-and-energy/
# Work Power and Energy Formulas
### Mechanical work
Energy characteristics of motion are introduced on the basis of the concept of mechanical work, or the work of a force. The work done by a constant force F is a physical quantity equal to the product of the magnitudes of the force and the displacement, multiplied by the cosine of the angle between the force vector F and the displacement vector S: A = F·S·cos α.
Work is a scalar quantity. It can be either positive (0° ≤ α < 90°) or negative (90° < α ≤ 180°). At α = 90°, the work done by the force is zero. In the SI system, work is measured in joules (J). The joule is equal to the work done by a force of 1 newton in moving 1 meter in the direction of the force.
If the force varies along the path, then to find the work, plot the force against the displacement and find the area of the figure under the graph; this area is the work.
An example of a force whose magnitude depends on the coordinate (displacement) is the elastic force of a spring, which obeys Hooke's law (F = kx).
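For instance, stretching such a spring from 0 to x is a case where this area rule applies directly; the area under the F = kx line is a triangle, so

$W = \int_0^x kx'\,dx' = \frac{kx^2}{2}$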
### Power
The work done by a force per unit of time is called power. Power P (sometimes denoted by the letter N) is a physical quantity equal to the ratio of the work A to the time interval t during which this work is done: P = A/t.
This formula gives the average power; i.e., it characterizes the process as a whole. So work can be expressed through power: A = Pt (if, of course, the power and the duration of the work are known). The unit of power is the watt (W), equal to 1 joule per second. If the motion is uniform, then: P = F·v.
Substituting the instantaneous velocity into this formula gives the instantaneous power (the power at a given moment). How do you know which power to compute? If the problem asks for the power at an instant of time or at some point in space, it means the instantaneous power. If it asks about the power over a period of time or a section of the path, look for the average power.
Efficiency is the ratio of useful work to expended work, or of useful power to expended power: η = A_useful/A_expended = P_useful/P_expended.
Which work is useful and which is expended is determined from the conditions of the specific problem by logical reasoning. For example, if a crane does work to lift a load to a certain height, then the work of lifting the load is useful (since the crane was created just for that), while the expended work is the work done by the crane's electric motor.
So useful and expended work (or power) do not have strict definitions; they are identified by logical reasoning. In each problem, we ourselves must determine what the goal of the work was (the useful work or power), and by what mechanism or method all the work was done (the expended work or power).
In general, efficiency shows how effectively a mechanism transforms one kind of energy into another. If the power changes with time, then the work is found as the area of the figure below the graph of power versus time:
### Kinetic energy
A physical quantity equal to half the product of a body's mass and the square of its velocity is called the kinetic energy of the body (the energy of motion): Ek = mv²/2.
That is, if a car weighing 2000 kg moves at a speed of 10 m/s, then it has a kinetic energy equal to Ek = 100 kJ and is capable of doing 100 kJ of work. This energy can be converted into heat (when the car brakes, the rubber of the wheels, the road, and the brake discs heat up) or it can be spent on deforming the car and the body the car collided with (in an accident). When calculating the kinetic energy, it does not matter where the car is moving, since energy, like work, is a scalar quantity.
A body has energy if it is able to do work. For example, a moving body has kinetic energy, i.e. energy of motion, and is able to do work by deforming bodies or accelerating the bodies with which it collides.
The physical meaning of kinetic energy: in order for a body of mass m at rest to begin moving at a speed v, it is necessary to do work equal to the resulting kinetic energy. If a body of mass m moves with velocity v, then to stop it, it is necessary to do work equal to its initial kinetic energy. During braking, the kinetic energy is mainly "taken away" by the friction force (except in a collision, when the energy goes into deformation).
The kinetic energy theorem: the work of the resultant force is equal to the change in the kinetic energy of the body: A = Ek2 − Ek1.
The kinetic energy theorem is also valid in the general case when the body moves under the action of a changing force, the direction of which does not coincide with the direction of movement. It is convenient to apply this theorem in problems of acceleration and deceleration of the body.
### Potential energy
Along with the kinetic energy or the energy of motion in physics, the concept of potential energy or energy of interaction of bodies plays an important role .
The potential energy is determined by the mutual position of the bodies (for example, the position of the body relative to the surface of the Earth). The concept of potential energy can be introduced only for forces whose work does not depend on the trajectory of the body and is determined only by the initial and final positions (so-called conservative forces). The work of such forces on a closed trajectory is zero. Gravity and the elastic force have this property, so for these forces we can introduce the concept of potential energy.
The potential energy of a body in the Earth's gravitational field is calculated by the formula: Ep = mgh.
The physical meaning of the body's potential energy: the potential energy is equal to the work that gravity does when the body is lowered to the zero level (h is the distance from the center of gravity of the body to the zero level). If a body has potential energy, then it is able to do work when it falls from the height h to the zero level. The work of gravity is equal to the change in the potential energy of the body, taken with the opposite sign: A = −(Ep2 − Ep1).
Often in energy problems one has to find the work done in raising (turning over, pulling out of a hole) a body. In all these cases, one should consider the motion not of the body as a whole, but only of its center of gravity.
The potential energy Ep depends on the choice of the zero level, that is, on the choice of the origin of the OY axis. In each problem, the zero level is chosen for reasons of convenience. It is not the potential energy itself that has physical meaning, but its change when the body moves from one position to another. This change does not depend on the choice of the zero level.
The potential energy of a stretched spring is calculated by the formula: Ep = kx²/2,
where k is the spring stiffness. A stretched (or compressed) spring can set in motion a body attached to it, that is, impart kinetic energy to it. Therefore, such a spring has a store of energy. The stretch or compression x must be measured from the undeformed state of the body.
The potential energy of an elastically deformed body is equal to the work of the elastic force during the transition from the given state to the state with zero deformation. If in the initial state the spring was already deformed, with elongation x1, then upon transition to a new state with elongation x2 the elastic force will do work equal to the change in potential energy taken with the opposite sign (since the elastic force is always directed against the deformation of the body): A = kx1²/2 − kx2²/2.
Potential energy during elastic deformation is the energy of interaction of separate parts of the body between themselves by the forces of elasticity.
The work of the friction force depends on the distance traveled (forces of this kind, whose work depends on the trajectory and the distance traveled, are called dissipative forces). The concept of potential energy cannot be introduced for the friction force.
### Efficiency
Efficiency is a characteristic of the effectiveness of a system (device, machine) with respect to the conversion or transfer of energy. It is defined as the ratio of the useful energy used to the total amount of energy received by the system (the formula was already given above).
Efficiency can be calculated both through work and through power. Useful and expended work (power) is always determined by simple logical reasoning.
In electric motors, efficiency is the ratio of the (useful) mechanical work performed to the electrical energy received from the source. In heat engines – the ratio of useful mechanical work to the amount of heat expended. In electrical transformers, the ratio of electromagnetic energy produced in the secondary winding to the energy consumed by the primary winding.
By virtue of their commonality, the concept of efficiency makes it possible to compare and evaluate from a single point of view various systems such as nuclear reactors, electric generators and engines, thermal power plants, semiconductor devices, biological objects, etc.
Because of inevitable energy losses to friction, heating of surrounding bodies, etc., the efficiency is always less than one. Accordingly, the efficiency is expressed as a fraction of the expended energy, that is, as a proper fraction or as a percentage, and is a dimensionless quantity. Efficiency describes how effectively a machine or mechanism works. The efficiency of thermal power plants reaches 35-40%, of internal combustion engines with supercharging and pre-cooling 40-50%, of dynamos and high-power generators 95%, and of transformers 98%.
A problem in which you need to find the efficiency, or in which it is given, should be started with logical reasoning about which work is useful and which work is expended.
### The law of conservation of mechanical energy
The total mechanical energy is the sum of the kinetic energy (i.e., the energy of motion) and the potential energy (i.e., the energy of interaction of bodies through the forces of gravity and elasticity): E = Ek + Ep.
If mechanical energy does not transform into other forms, for example into internal (thermal) energy, then the sum of the kinetic and potential energy remains unchanged. If mechanical energy turns into heat, then the change in mechanical energy is equal to the work of the friction force, or the energy loss, or the amount of heat released, and so on; in other words, the change in total mechanical energy is equal to the work of external forces.
The sum of the kinetic and potential energy of the bodies that make up a closed system (that is, one in which no external forces act, so their work is zero) and that interact with each other through the forces of gravity and elasticity remains unchanged: Ek1 + Ep1 = Ek2 + Ep2.
This statement expresses the law of energy conservation in mechanical processes. It is a consequence of Newton's laws. The law of conservation of mechanical energy holds only when the bodies in a closed system interact with each other through the forces of gravity and elasticity. In all problems on the law of conservation of energy there will always be at least two states of the system of bodies. The law states that the total energy of the first state equals the total energy of the second state.
Algorithm for solving problems on the law of energy conservation:
1. Find the initial and final positions of the body.
2. Write down which kinds of energy the body possesses at these points.
3. Equate the initial and final energy of the body.
4. Add other necessary equations from previous topics in physics.
5. Solve the resulting equation or system of equations by mathematical methods (a worked sketch follows this list).
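As an illustration of this algorithm, here is a minimal numerical sketch in MATLAB (the setup, a ball dropped from rest at a height of 20 m, is an invented example):

```
% Energy conservation: a ball dropped from rest at height h.
% State 1: top, E = m*g*h.  State 2: ground, E = m*v^2/2.
m = 0.5;             % mass, kg (assumed)
g = 9.8;             % gravitational acceleration, m/s^2
h = 20;              % initial height, m (assumed)

E1 = m*g*h;          % total energy in state 1 (all potential)
v = sqrt(2*g*h);     % from m*g*h = m*v^2/2
E2 = m*v^2/2;        % total energy in state 2 (all kinetic)
fprintf('v = %.2f m/s, E1 = %.1f J, E2 = %.1f J\n', v, E1, E2)
% v = 19.80 m/s, E1 = E2 = 98.0 J: mechanical energy is conserved
```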
It is important to note that the law of conservation of mechanical energy made it possible to obtain a connection between the coordinates and velocities of the body at two different points of the trajectory without analyzing the law of body motion at all intermediate points. The application of the law of conservation of mechanical energy can greatly simplify the solution of many problems.
In real conditions, almost always moving bodies, along with forces of elasticity and other forces, are acted upon by friction forces or environmental resistance forces. The work force of friction depends on the length of the path.
If friction forces act between the bodies that make up the closed system, then the mechanical energy is not conserved. Part of the mechanical energy is converted into the internal energy of the body (heating). Thus, the energy as a whole (ie, not only mechanical) is saved in any case.
With any physical interactions, energy does not arise and does not disappear. It only turns from one form to another. This experimentally established fact expresses the fundamental law of nature – the law of conservation and transformation of energy .
One of the consequences of the law of conservation and transformation of energy is the assertion that it is impossible to create a perpetual motion machine (perpetuum mobile), a machine that could do work indefinitely without consuming energy.
If a problem requires finding mechanical work, first select the method for finding it:
1. Work can be found by the formula: A = FS ∙ cos α. Find the force doing the work and the displacement of the body under the action of this force in the chosen frame of reference. Note that the angle must be taken between the force and displacement vectors.
2. The work of external force can be found as the difference of mechanical energy in the final and initial situations. Mechanical energy is equal to the sum of the kinetic and potential energies of the body.
3. Work on lifting the body at a constant speed can be found by the formula: A = mgh , where h is the height to which the center of gravity of the body rises .
4. Work can be found as the product of power and time, i.e. by the formula: A = Pt.
5. The work can be found as the area of the figure under the graph of force versus displacement or power versus time (see the sketch after this list).
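A quick numerical illustration of method 5, with a made-up force profile (the specific F(s) here is an assumption chosen for the example):

```
% Work as the area under a force-displacement graph.
s = linspace(0, 2, 200);     % displacement, m
F = 50*exp(-s);              % assumed force profile, N
W = trapz(s, F);             % area under the graph = work, J
fprintf('W = %.2f J\n', W)   % analytic value: 50*(1 - exp(-2)) = 43.23 J
```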
### The law of conservation of energy and the dynamics of rotational motion
The problems in this topic are mathematically rather involved, but once you know the approach they are solved by a completely standard algorithm. In all such problems you have to consider the rotation of a body in a vertical plane. The solution reduces to the following sequence of actions:
1. It is necessary to determine the point of interest to you (the point at which it is necessary to determine the speed of the body, the force of the thread tension, the weight, and so on).
2. Write at this point the second law of Newton, given that the body rotates, that is, it has a centripetal acceleration.
3. Write the law of conservation of mechanical energy so that it contains the speed of the body at that point of interest, as well as the characteristics of the body in some state about which something is known.
4. Depending on the condition, express the velocity squared from one equation and substitute it into another.
5. Perform the rest of the necessary mathematical operations to obtain the final result.
When solving problems one must remember that:
• The condition for passing the top point at the minimum speed when rotating on a thread is that the tension (the reaction force) N at the top point equals 0. The same condition is satisfied when passing the top point of a loop-the-loop.
• When rotating on a rod, the condition for passing the whole circle: the minimum speed at the top point is 0.
• The condition for the body to detach from the surface of a sphere is that the support reaction force at the separation point is zero.
### Inelastic collisions
The law of conservation of mechanical energy and the law of conservation of momentum allow us to find solutions to mechanical problems in cases where the acting forces are unknown. An example of such tasks is the shock interaction of bodies.
An impact (or collision) is a short-term interaction of bodies, as a result of which their velocities undergo significant changes. During a collision, short-term impact forces act between the bodies, whose magnitudes are, as a rule, unknown. Therefore, one cannot treat an impact interaction directly using Newton's laws. The application of the laws of conservation of energy and momentum in many cases makes it possible to exclude the collision process itself from consideration and to obtain a relationship between the velocities of the bodies before and after the collision, bypassing all intermediate values of these quantities.
Impact interactions of bodies often have to be dealt with in everyday life, in engineering, and in physics (especially in the physics of the atom and elementary particles). In mechanics, two models of impact interaction are often used: absolutely elastic and absolutely inelastic impacts.
An absolutely inelastic impact is an impact interaction in which the bodies join (stick together) and move on as one body.
In an absolutely inelastic impact, mechanical energy is not conserved. It partially or completely goes into the internal energy of the bodies (heating). To describe any impact, you need to write down both the law of conservation of momentum and the law of conservation of mechanical energy, taking into account the heat released (it is highly advisable to draw a picture first).
### Absolutely elastic impact
An absolutely elastic impact is a collision in which the mechanical energy of the system of bodies is conserved. In many cases, the collisions of atoms, molecules, and elementary particles obey the laws of absolutely elastic impact. In an absolutely elastic impact, the law of conservation of mechanical energy is satisfied along with the law of conservation of momentum. A simple example of an absolutely elastic collision is the central impact of two billiard balls, one of which was at rest before the collision.
The central impact of the balls is a collision, in which the speeds of the balls before and after the impact are directed along the line of the centers. Thus, using the laws of conservation of mechanical energy and momentum, one can determine the speeds of balls after a collision, if their velocities are known before the collision. The central strike is very rarely implemented in practice, especially when it comes to collisions of atoms or molecules. In the case of an off-center elastic collision, the velocities of the particles (balls) before and after the collision are not directed in one straight line.
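For reference, in the central elastic collision of two balls with masses m1 and m2 and initial velocities v1 and v2, the two conservation laws lead to the standard result (a well-known formula, stated here without derivation):

$v_1' = \frac{(m_1-m_2)v_1 + 2m_2v_2}{m_1+m_2}, \qquad v_2' = \frac{(m_2-m_1)v_2 + 2m_1v_1}{m_1+m_2}$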
A special case of a non-central elastic impact is the collision of two billiard balls of the same mass, one of which was at rest before the impact and the other of which had a velocity not directed along the line of centers of the balls. In this case, the velocity vectors of the balls after the elastic collision are always directed perpendicular to each other.
### The laws of conservation. Challenging tasks
#### Several bodies
In some problems on the law of conservation of energy, the cables used to move objects can have mass (that is, they are not weightless, as you may be used to). In this case, the work done in moving such cables (namely, their centers of gravity) must also be taken into account.
If two bodies connected by a weightless rod rotate in a vertical plane, then:
1. they choose a zero level for calculating potential energy, for example at the level of the axis of rotation or at the level of the lowest point reached by one of the loads, and make a drawing;
2. they write down the law of conservation of mechanical energy, in which the left side records the sum of the kinetic and potential energy of both bodies in the initial situation, and the right side records the sum of the kinetic and potential energy of both bodies in the final situation;
3. they take into account that the angular velocities of the bodies are the same, so the linear velocities of the bodies are proportional to the radii of rotation;
4. if necessary, write down Newton’s second law for each of the bodies separately.
#### Projectile rupture
When a projectile bursts, the energy of the explosives is released. To find this energy, subtract the mechanical energy of the projectile before the explosion from the sum of the mechanical energies of the fragments after the explosion. We also use the law of conservation of momentum, written either in the form of the cosine theorem (vector method) or in the form of projections onto chosen axes.
#### Heavy slab collisions
Let a light ball of mass m move with speed u towards a heavy plate that moves with speed v. Since the momentum of the ball is much less than the momentum of the plate, after the impact the speed of the plate will not change, and it will continue to move at the same speed and in the same direction. As a result of the elastic impact, the ball will fly away from the plate. Here it is important to understand that the speed of the ball relative to the plate will not change. In this case, for the final ball speed we get: u′ = u + 2v.
Thus, the speed of the ball after the impact increases by twice the speed of the wall. Similar reasoning for the case when the ball and the plate moved in the same direction before the impact leads to the result that the speed of the ball decreases by twice the speed of the wall: u′ = u − 2v.
#### Problems about the maximum and minimum values of the energy of colliding balls
In problems of this type, it is important to understand that the potential energy of the elastic deformation of the balls is maximal when the kinetic energy of their motion is minimal; this follows from the law of conservation of mechanical energy. The sum of the kinetic energies of the balls is minimal at the moment when the velocities of the balls are equal in magnitude and point in the same direction. At this moment, the relative velocity of the balls is zero, and the deformation and the potential energy associated with it are maximal.
http://mathhelpforum.com/pre-calculus/36979-coordinate-systems.html

# Math Help - Coordinate Systems
1. ## Coordinate Systems
The variable chord PQ on the parabola with equation y^2 = 4x subtends a right angle at the origin O. By taking P as (t₁^2, 2t₁) and Q as (t₂^2, 2t₂), find a relation between t₁ and t₂ and hence show that PQ passes through a fixed point on the x-axis.
2. Originally Posted by geton
The variable chord PQ on the parabola with equation y^2 = 4x subtends a right angle at the origin O. By taking P as (t₁^2, 2t₁) and Q as (t₂^2, 2t₂), find a relation between t₁ and t₂ and hence show that PQ passes through a fixed point on the x-axis.
Let O be the origin then we have,
$(\text{slope})_{OP}\cdot (\text{slope})_{OQ} = -1$
$\frac{2t_1 - 0}{t_1 ^2 - 0}\cdot \frac{2t_2 - 0}{t_2 ^2 - 0} = -1$
$\color{blue}t_1 t_2 = -4$
Now the equation of the line passing through $(t_1 ^2, 2t_1)$ and $(t_2 ^2, 2t_2)$ is given by $(t_1 + t_2)y = 2x + 2t_1t_2$.
Substitute $\color{blue}t_1 t_2 = -4$ to get $(t_1 + t_2)y = 2x - 8$.
Clearly this line always passes through (4,0) which is on the x-axis.
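For completeness, the chord equation can be derived as follows: the slope of PQ is

$\frac{2t_1 - 2t_2}{t_1^2 - t_2^2} = \frac{2}{t_1 + t_2},$

so the point-slope form through $(t_1^2, 2t_1)$ gives $(t_1+t_2)(y - 2t_1) = 2(x - t_1^2)$, which rearranges to $(t_1 + t_2)y = 2x + 2t_1t_2$.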
3. Originally Posted by Isomorphism
Let O be the origin then we have,
$(\text{slope})_{OP}\cdot (\text{slope})_{OQ} = -1$
Thank you for your help. But I'm confused. I know that the product of the slopes of a tangent and a normal is equal to -1. So why do we assume OP is a tangent and OQ is a normal, or vice versa?
4. Originally Posted by geton
Thank you for your help. But I’ve confusion. I know that product of tangent & normal is equal to -1. So why we suppose to assume OP is tangent & OQ is normal or vice-versa?
No.I did not say they are tangent and normal.
If two lines OP and OQ are perpendicular, then
Since your question claimed "The variable chord PQ on the parabola with equation y^2 = 4x subtends a right angle at the origin O", OP and OQ are perpendicular.
5. Thank you so much Isomorphism
https://mathoverflow.net/questions/262074/proving-that-a-certain-non-symmetric-matrix-has-an-eigenvalue-with-positive-real

Proving that a certain non-symmetric matrix has an eigenvalue with positive real part
Suppose that
• $X$ is the $n \times n$ matrix of all ones
• $Y$ is an arbitrary $n \times n$ matrix with zeroes on the diagonal and all other entries equal to $0$ or $1$
• $0 < \delta < 1$
Let $Z = -X - \delta Y$. If $Y$ has any ones, then does $Z$ have an eigenvalue with positive real part?
This question is based on the observations that:
1. if $Y$ is the matrix of all zeroes, then $Z$ has eigenvalues $0$ (with multiplicity $n-1$) and $-n$
2. if $Y$ is the matrix of all ones (besides the diagonal entries, which are all zero), then $Z$ has eigenvalues $\delta$ (with multiplicity $n-1$) and $(-1-\delta)n+\delta$.
3. If $Y$ is symmetric, then $Z$ has a positive eigenvalue if and only if $Z$ is not negative semidefinite. So $Z$ has a positive eigenvalue if $Y$ has any ones and is symmetric.
Is there a way to answer the question above when $Y$ is not symmetric?
Update 2017/02/24: I solved this problem using the Collatz-Wielandt formula. My proof is posted as an answer below.
Update 2017/02/14: The approach below is based on the suggestions about Lyapunov inequalities in Rodrigo de Azevedo's answer. This problem is still open, unless the answer to the question below is yes.
Suppose that the matrix $Y$ has some entry equal to $1$. Then $Z+Z^{T}$ has an eigenvalue with positive real part. In order to prove that $Z$ has an eigenvalue with positive real part, suppose for contradiction that all eigenvalues of $Z$ have nonpositive real parts.
By symmetry, all eigenvalues of $Z^{T}$ have nonpositive real parts, so both $Q = Z-\epsilon I$ and $Q^{T} = Z^{T}-\epsilon I$ have eigenvalues with strictly negative real parts. Thus there exist sets $A$ (resp. $B$) of symmetric positive definite matrices $X$ (resp. $Y$) such that $Q^{T} X + X Q < 0$ and $Q Y + Y Q^{T} < 0$.
Is it true that $A \cap B \neq \emptyset$?
If this is true, then it would imply that there exists $X \in A \cap B$ such that $(Q+Q^{T}) X + X (Q+Q^{T}) < 0$, i.e., $(Z+Z^{T}-2\epsilon I) X + X (Z+Z^{T}-2\epsilon I) < 0$. So all eigenvalues of $Z+Z^{T}-2\epsilon I$ would have negative real parts, which contradicts the original assumption that $Z+Z^{T}$ has an eigenvalue with positive real part if we choose $\epsilon$ sufficiently small.
• $\rm Y$ is an adjacency matrix. Do you have information on the underlying graph? – Rodrigo de Azevedo Feb 13 '17 at 21:28
• If $G$ is the directed graph corresponding to $Y$, then the question is to determine whether $Z$ has an eigenvalue with a positive real part if $G$ has any edges. This is the only information about the corresponding graph. – jtg Feb 13 '17 at 21:33
• Why not use Lyapunov equations? – Rodrigo de Azevedo Feb 13 '17 at 21:36
• If you see a way to answer this with Lyapunov equations, I'd be very grateful to know some more details. – jtg Feb 13 '17 at 21:39
In the case $n=3$, try $Y = \pmatrix{0 & 0 & 0\cr 0 & 0 & 1\cr 1 & 0 & 0\cr}$. The characteristic polynomial of $Z$ is $\lambda^3+3 \lambda^2-2 \delta \lambda+\delta^2= \lambda^3 + 2 \lambda^2 + (\lambda - \delta)^2$, which has no positive real roots for any real $\delta$.
• Sorry that my question was not clear enough before. By "positive eigenvalue" I meant an eigenvalue with a positive real part, not a positive real eigenvalue. This is corrected now. – jtg Feb 13 '17 at 19:46
Suppose we are given $\mathrm M \in \mathbb R^{n \times n}$. Stating that all the eigenvalues of $\mathrm M$ have strictly negative real parts is equivalent to stating that there is a symmetric positive definite $\mathrm X$ such that the Lyapunov linear matrix inequality (LMI)
$$\mathrm M^{\top} \mathrm X + \mathrm X \, \mathrm M \prec \mathrm O_n$$
holds. In other words, the open spectrahedron defined by the Lyapunov LMI above is non-empty if and only if all the eigenvalues of $\mathrm M$ have strictly negative real parts.
Thus, if the following semidefinite program (SDP) is infeasible
$$\begin{array}{ll} \text{minimize} & \langle \mathrm O_n, \mathrm X \rangle\\ \text{subject to} & \begin{bmatrix} \mathrm X & \mathrm O_n\\ \mathrm O_n & -\mathrm M^{\top} \mathrm X - \mathrm X \, \mathrm M\end{bmatrix} \succeq \mathrm O_{2n}\end{array}$$
then $\mathrm M$ has at least one eigenvalue with nonnegative real part.
• Thanks very much for explaining these details. If no such $X$ exists, then $M$ would have at least one eigenvalue with nonnegative real part. Is there a way to see that $M$ has an eigenvalue with a positive real part? – jtg Feb 13 '17 at 23:16
• @jtg Use $\tilde{\mathrm M} := \mathrm M - \epsilon \mathrm I_n$ instead of $\mathrm M$, where $\epsilon > 0$ is small. If the Lyapunov LMI holds, then all the eigenvalues of $\tilde{\mathrm M}$ have negative real parts, i.e., all the eigenvalues of $\mathrm M$ have real parts strictly less than $\epsilon$. If the SDP is infeasible, then all the eigenvalues of $\mathrm M$ have real parts equal to or greater than $\epsilon$. – Rodrigo de Azevedo Feb 13 '17 at 23:29
• That's a great idea, thanks for explaining this. I'll accept this as the answer as soon as I work out the rest of the details for the matrices in my original post. – jtg Feb 13 '17 at 23:40
• @jtg The LMI then becomes $$\begin{bmatrix} \mathrm X & \mathrm O_n\\ \mathrm O_n & 2 \epsilon \mathrm X -\mathrm M^{\top} \mathrm X - \mathrm X \, \mathrm M\end{bmatrix} \succeq \mathrm O_{2n}$$ – Rodrigo de Azevedo Feb 13 '17 at 23:48
Suppose that $Y_{ij} = 1$ for some $i, j$. Construct a vector $x$ with $n$ coordinates such that $x_t = 1+\frac{\delta}{n}$ if $t = i$ and $x_t = 1$ otherwise.
Note that $\frac{(-Zx)_t}{x_t} > n$ for each $1 \leq t \leq n$. By the Collatz-Wielandt formula, the Perron-Frobenius eigenvalue of $-Z$ exceeds $n$.
Since the trace of $-Z$ is $n$, the sum of the eigenvalues of $-Z$ is $n$. Thus $-Z$ has an eigenvalue with negative real part, so $Z$ has an eigenvalue with positive real part.
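A quick numerical sanity check of this conclusion (a sketch; the particular random $Y$ and the values n = 6, delta = 0.5 are arbitrary choices, not from the post):

```
% Check: Z = -X - delta*Y has an eigenvalue with positive real part
% whenever Y (0-1 entries, zero diagonal) has at least one 1.
n = 6; delta = 0.5;
rng(1);                       % reproducible example
X = ones(n);                  % all-ones matrix
Y = double(rand(n) > 0.5);    % random 0-1 matrix
Y(1:n+1:end) = 0;             % force zero diagonal
Z = -X - delta*Y;
max(real(eig(Z)))             % observed to be positive
```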
https://tex.stackexchange.com/questions/188704/weird-ifnum-with-compact-syntax?noredirect=1

# weird ifnum with compact syntax [duplicate]
In some packages like tabularx, I find the following syntax in many places
{\ifnum0=`}\fi
or even
\ifnum0=`{\fi}
But I can't figure out what this can be useful for. Is it a register of some kind? Or is it a TeX hack to generate an error on a specific occasion?
Ah, the \ifnum funky brace groups beloved of TeX \halign programmers :-)
A backtick followed by a character is part of the syntax for a number in TeX:
125
is a decimal,
"7D
is hex, and
`}
is the character code of the specified character (which is also 125, as it happens).
So....
{\ifnum0=`}\fi
the inner \ifnum is testing whether 0=`}, i.e. 0=125, which is false, so when expanded this is equivalent to {, and so starts a brace group. However, if the tokens are not being expanded and TeX is just looking for matching {} pairs, then it sees that as a matched pair, so you can go
\def\foo{ {\ifnum0=`}\fi }
but
\def\foo{ { }
is an error (or at least the definition does not stop at that }).
Usually you can use the implicit brace groups \bgroup and \egroup to get an unmatched {, but some constructs demand an explicit { token, and so this trick (explained by Knuth in The TeXbook) comes in useful.
Usually if you find that an environment that uses & to separate alignment cells does not work in a nested alignment it is because the author forgot to use these groups in the definition.
• I suppose that the second version I provided is for getting the matching `}`. "Funky brace groups", you said? :) – M'vy Jul 9 '14 at 13:23
• @M'vy Yes, the second one is the 'matching pair' here – Joseph Wright Jul 9 '14 at 13:26
https://nrich.maths.org/788

### Odd Differences
The diagram illustrates the formula: 1 + 3 + 5 + ... + (2n - 1) = n². Use the diagram to show that any odd number is the difference of two squares.
### Factorial
How many zeros are there at the end of the number which is the product of the first hundred positive integers?
### Rachel's Problem
Is it true that $99^n$ has 2n digits and $999^n$ has 3n digits? Investigate!
# Triangular Triples
##### Age 14 to 16 Challenge Level:
Three numbers a, b and c are a Pythagorean triple if $a^2+ b^2= c^2$. The triangular numbers are:
$\frac{1\times 2}{2}, \frac{2\times 3}{2}, \frac{3\times 4}{2}, \frac{4\times 5}{2}$
Show that 8778, 10296 and 13530 are three triangular numbers and that they form a Pythagorean triple.
[In fact, these are the ONLY known set of three triangular numbers that form a Pythagorean triple.]
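A quick computational check of the claim (a sketch, not part of the original problem page):

```
% Verify 8778, 10296, 13530 are triangular and form a Pythagorean triple.
t = [8778 10296 13530];
n = (-1 + sqrt(1 + 8*t))/2      % solve n(n+1)/2 = t: gives 132 143 164
t(1)^2 + t(2)^2 - t(3)^2        % equals 0, so a^2 + b^2 = c^2
```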
http://www.mathworks.com/help/optim/ug/writing-objective-functions.html?nocookie=true
## Writing Objective Functions
### Types of Objective Functions
Many Optimization Toolbox™ solvers minimize a scalar function of a multidimensional vector. The objective function is the function the solvers attempt to minimize. Several solvers accept vector-valued objective functions, and some solvers use objective functions you specify by vectors or matrices.
| Objective Type | Solvers | How to Write Objectives |
| --- | --- | --- |
| Scalar | `fmincon`, `fminunc`, `fminbnd`, `fminsearch`, `fseminf`, `fzero` | Writing Scalar Objective Functions |
| Nonlinear least squares | `lsqcurvefit`, `lsqnonlin` | Writing Vector and Matrix Objective Functions |
| Multivariable equation solving | `fsolve` | Writing Vector and Matrix Objective Functions |
| Multiobjective | `fgoalattain`, `fminimax` | Writing Vector and Matrix Objective Functions |
| Linear programming | `linprog` | Writing Objective Functions for Linear or Quadratic Problems |
| Mixed-integer linear programming | `intlinprog` | Writing Objective Functions for Linear or Quadratic Problems |
| Linear least squares | `lsqlin`, `lsqnonneg` | Writing Objective Functions for Linear or Quadratic Problems |
| Quadratic programming | `quadprog` | Writing Objective Functions for Linear or Quadratic Problems |
### Writing Scalar Objective Functions
#### Function Files
A scalar objective function file accepts one input, say `x`, and returns one scalar output, say `f`. The input `x` can be a scalar, vector, or matrix. A function file can return more outputs (see Including Derivatives).
For example, suppose your objective is a function of three variables, x, y, and z:
f(x) = 3*(x – y)^4 + 4*(x + z)^2 / (1 + x^2 + y^2 + z^2) + cosh(x – 1) + tanh(y + z).
1. Write this function as a file that accepts the vector `xin` = [x;y;z] and returns f:
```
function f = myObjective(xin)

f = 3*(xin(1)-xin(2))^4 + 4*(xin(1)+xin(3))^2/(1+norm(xin)^2) ...
    + cosh(xin(1)-1) + tanh(xin(2)+xin(3));
```
2. Save it as a file named `myObjective.m` to a folder on your MATLAB® path.
3. Check that the function evaluates correctly:
```
myObjective([1;2;3])

ans =
    9.2666
```
For information on how to include extra parameters, see Passing Extra Parameters. For more complex examples of function files, see Minimization with Gradient and Hessian Sparsity Pattern or Minimization with Bound Constraints and Banded Preconditioner.
Local Functions and Nested Functions. Functions can exist inside other files as local functions or nested functions. Using local functions or nested functions can lower the number of distinct files you save. Using nested functions also lets you access extra parameters, as shown in Nested Functions.
For example, suppose you want to minimize the `myObjective.m` objective function, described in Function Files, subject to the `ellipseparabola.m` constraint, described in Nonlinear Constraints. Instead of writing two files, `myObjective.m` and `ellipseparabola.m`, write one file that contains both functions as local functions:
```
function [x fval] = callObjConstr(x0,options)
% Using a local function for just one file

if nargin < 2
    options = optimoptions('fmincon','Algorithm','interior-point');
end

[x fval] = fmincon(@myObjective,x0,[],[],[],[],[],[], ...
    @ellipseparabola,options);

function f = myObjective(xin)
f = 3*(xin(1)-xin(2))^4 + 4*(xin(1)+xin(3))^2/(1+sum(xin.^2)) ...
    + cosh(xin(1)-1) + tanh(xin(2)+xin(3));

function [c,ceq] = ellipseparabola(x)
c(1) = (x(1)^2)/9 + (x(2)^2)/4 - 1;
c(2) = x(1)^2 - x(2) - 1;
ceq = [];
```
Solve the constrained minimization starting from the point `[1;1;1]`:
```
[x fval] = callObjConstr(ones(3,1))

Local minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in
feasible directions, to within the default value of the function tolerance,
and constraints are satisfied to within the default value of the
constraint tolerance.

x =
    1.1835
    0.8345
   -1.6439

fval =
    0.5383
```
#### Anonymous Function Objectives
Use anonymous functions to write simple objective functions. For more information about anonymous functions, see What Are Anonymous Functions? in the MATLAB Programming Fundamentals documentation. Rosenbrock's function is simple enough to write as an anonymous function:
`anonrosen = @(x)(100*(x(2) - x(1)^2)^2 + (1-x(1))^2);`
Check that `anonrosen` evaluates correctly at `[-1 2]`:
```
anonrosen([-1 2])

ans =
   104
```
Minimizing `anonrosen` with `fminunc` yields the following results:
```
options = optimoptions(@fminunc,'Algorithm','quasi-newton');
[x fval] = fminunc(anonrosen,[-1;2],options)

Local minimum found.

Optimization completed because the size of the gradient is less than the
default value of the function tolerance.

x =
    1.0000
    1.0000

fval =
   1.2262e-10
```
#### Including Derivatives
For `fmincon` and `fminunc`, you can include gradients in the objective function. You can also include Hessians, depending on the algorithm. The Hessian matrix is H_{i,j}(x) = ∂²f/∂x_i∂x_j.
The following table shows which algorithms can use gradients and Hessians.
| Solver | Algorithm | Gradient | Hessian |
| --- | --- | --- | --- |
| `fmincon` | `active-set` | Optional | No |
| `fmincon` | `interior-point` | Optional | Optional separate function (see Hessian) |
| `fmincon` | `sqp` | Optional | No |
| `fmincon` | `trust-region-reflective` | Required | Optional |
| `fminunc` | `trust-region` | Required | Optional |
| `fminunc` | `quasi-newton` | Optional | No |
Benefits of Including Derivatives. If you do not provide gradients, solvers estimate gradients via finite differences. If you provide gradients, your solver need not perform this finite difference estimation, so it can save time and be more accurate. Furthermore, without a supplied Hessian, solvers use an approximate Hessian, which can be far from the true Hessian. Providing a Hessian can yield a solution in fewer iterations.
For constrained problems, providing a gradient has another advantage. A solver can reach a point `x` such that `x` is feasible, but, for this `x`, finite differences around `x` always lead to an infeasible point. Suppose further that the objective function at an infeasible point returns a complex output, `Inf`, `NaN`, or an error. In this case, a solver can fail or halt prematurely. Providing a gradient allows a solver to proceed. To obtain this benefit, you might also need to include the gradient of a nonlinear constraint function, and set the `GradConstr` option to `'on'`. See Nonlinear Constraints.
Choose Input Hessian for interior-point fmincon. The `fmincon` `interior-point` algorithm has many options for selecting an input Hessian. For syntax details, see Hessian. Here are the options, along with estimates of their relative characteristics.
| Hessian | Relative Memory Usage | Relative Efficiency |
| --- | --- | --- |
| `'bfgs'` (default) | High (for large problems) | High |
| `'lbfgs'` | Low to Moderate | Moderate |
| `'fin-diff-grads'` | Low | Moderate |
| `'user-supplied'` with `'HessMult'` | Low (can depend on your code) | Moderate |
| `'user-supplied'` with `'HessFcn'` | ? (depends on your code) | High (depends on your code) |
Use the default `'bfgs'` Hessian unless you run out of memory or want more efficiency; in those cases, consider one of the lower-memory choices in the table above, such as `'lbfgs'`, or supply your own Hessian.
The reason `'lbfgs'` has only moderate efficiency is twofold. It has relatively expensive Sherman-Morrison updates. And the resulting iteration step can be somewhat inaccurate due to the `'lbfgs'` limited memory.
The reason `'fin-diff-grads'` and `HessMult` have only moderate efficiency is that they use a conjugate gradient approach. They accurately estimate the Hessian of the objective function, but they do not generate the most accurate iteration step. For more information, see fmincon Interior Point Algorithm, and its discussion of the LDL approach and the conjugate gradient approach to solving Equation 6-52.
How to Include Derivatives.
1. Write code that returns:
• The objective function (scalar) as the first output
• The gradient (vector) as the second output
• Optionally, the Hessian (matrix) as the third output
2. Set the `GradObj` option to `'on'` with `optimoptions`.
3. Optionally, set the `Hessian` option to `'on'` or `'user-supplied'`.
For the `fmincon` `interior-point` solver, set the `Hessian` option to `'user-supplied'` and set the `'HessFcn'` option to `@hessianfcn`, where `hessianfcn` is a function that computes the Hessian of the Lagrangian. For details, see Hessian. For an example, see fmincon Interior-Point Algorithm with Analytic Hessian.
4. Optionally, check if your gradient function matches a finite-difference approximation. See Checking Validity of Gradients or Jacobians.
Tip For most flexibility, write conditionalized code. Conditionalized means that the number of function outputs can vary, as shown in the following example. Conditionalized code does not error depending on the value of the `GradObj` or `Hessian` option. Unconditionalized code requires you to set these options appropriately.
For example, consider Rosenbrock's function
$f(x) = 100(x_2 - x_1^2)^2 + (1 - x_1)^2,$

which is described and plotted in Solve a Constrained Nonlinear Problem. The gradient of f(x) is

$\nabla f(x) = \begin{bmatrix} -400(x_2 - x_1^2)x_1 - 2(1 - x_1) \\ 200(x_2 - x_1^2) \end{bmatrix},$

and the Hessian H(x) is

$H(x) = \begin{bmatrix} 1200x_1^2 - 400x_2 + 2 & -400x_1 \\ -400x_1 & 200 \end{bmatrix}.$
`rosenthree` is an unconditionalized function that returns the Rosenbrock function with its gradient and Hessian:
```
function [f g H] = rosenthree(x)
% Calculate objective f, gradient g, Hessian H
f = 100*(x(2) - x(1)^2)^2 + (1-x(1))^2;
g = [-400*(x(2)-x(1)^2)*x(1)-2*(1-x(1));
    200*(x(2)-x(1)^2)];
H = [1200*x(1)^2-400*x(2)+2, -400*x(1);
    -400*x(1), 200];
```
`rosenboth` is a conditionalized function that returns whatever the solver requires:
```
function [f g H] = rosenboth(x)
% Calculate objective f
f = 100*(x(2) - x(1)^2)^2 + (1-x(1))^2;

if nargout > 1 % gradient required
    g = [-400*(x(2)-x(1)^2)*x(1)-2*(1-x(1));
        200*(x(2)-x(1)^2)];

    if nargout > 2 % Hessian required
        H = [1200*x(1)^2-400*x(2)+2, -400*x(1);
            -400*x(1), 200];
    end
end
```
`nargout` checks the number of arguments that a calling function specifies. See Find Number of Function Arguments in the MATLAB Programming Fundamentals documentation.
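For example, the value of `nargout` inside `rosenboth` is set by how the caller requests outputs:

```
f = rosenboth([-1;2]);          % nargout is 1: only f is computed
[f,g] = rosenboth([-1;2]);      % nargout is 2: f and g are computed
[f,g,H] = rosenboth([-1;2]);    % nargout is 3: f, g, and H are computed
```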
The `fminunc` solver, designed for unconstrained optimization, allows you to minimize Rosenbrock's function. Tell `fminunc` to use the gradient and Hessian by setting `options`:
```
options = optimoptions(@fminunc,'Algorithm','trust-region',...
    'GradObj','on','Hessian','on');
```
Run `fminunc` starting at `[-1;2]`:
```
[x fval] = fminunc(@rosenboth,[-1;2],options)

Local minimum found.

Optimization completed because the size of the gradient is less than
the default value of the function tolerance.

x =
    1.0000    1.0000

fval =
   1.9310e-017
```
If you have a Symbolic Math Toolbox™ license, you can calculate gradients and Hessians automatically, as described in Symbolic Math Toolbox Calculates Gradients and Hessians.
### Writing Vector and Matrix Objective Functions
Some solvers, such as `fsolve` and `lsqcurvefit`, have objective functions that are vectors or matrices. The main difference in usage between these types of objective functions and scalar objective functions is the way to write their derivatives. The matrix of first-order partial derivatives of a vector-valued or matrix-valued function is called a Jacobian; the vector of first-order partial derivatives of a scalar function is called a gradient.
#### Jacobians of Vector Functions
If x is a vector of independent variables, and F(x) is a vector function, the Jacobian J(x) is
${J}_{ij}\left(x\right)=\frac{\partial {F}_{i}\left(x\right)}{\partial {x}_{j}}.$
If F has m components, and x has k components, J is an m-by-k matrix.
For example, if
$F\left(x\right)=\left[\begin{array}{c}{x}_{1}^{2}+{x}_{2}{x}_{3}\\ \mathrm{sin}\left({x}_{1}+2{x}_{2}-3{x}_{3}\right)\end{array}\right],$
then J(x) is
$J\left(x\right)=\left[\begin{array}{ccc}2{x}_{1}& {x}_{3}& {x}_{2}\\ \mathrm{cos}\left({x}_{1}+2{x}_{2}-3{x}_{3}\right)& 2\mathrm{cos}\left({x}_{1}+2{x}_{2}-3{x}_{3}\right)& -3\mathrm{cos}\left({x}_{1}+2{x}_{2}-3{x}_{3}\right)\end{array}\right].$
The function file associated with this example is:
```
function [F jacF] = vectorObjective(x)
F = [x(1)^2 + x(2)*x(3);
    sin(x(1) + 2*x(2) - 3*x(3))];
if nargout > 1 % need Jacobian
    jacF = [2*x(1),x(3),x(2);
        cos(x(1)+2*x(2)-3*x(3)),2*cos(x(1)+2*x(2)-3*x(3)), ...
        -3*cos(x(1)+2*x(2)-3*x(3))];
end
```
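As a sketch of how a solver consumes this Jacobian (the starting point here is an arbitrary choice), setting the `Jacobian` option to `'on'` tells `fsolve` to request the second output of `vectorObjective` instead of estimating it by finite differences; note this particular F has more variables than equations, so fsolve falls back to the Levenberg-Marquardt algorithm with a warning:

```
options = optimoptions(@fsolve,'Jacobian','on');
x = fsolve(@vectorObjective,[1;1;1],options);
```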
#### Jacobians of Matrix Functions
The Jacobian of a matrix F(x) is defined by changing the matrix to a vector, column by column. For example, rewrite the matrix
$F=\left[\begin{array}{cc}{F}_{11}& {F}_{12}\\ {F}_{21}& {F}_{22}\\ {F}_{31}& {F}_{32}\end{array}\right]$
as a vector f:
$f=\left[\begin{array}{c}{F}_{11}\\ {F}_{21}\\ {F}_{31}\\ {F}_{12}\\ {F}_{22}\\ {F}_{32}\end{array}\right].$
The Jacobian of F is defined as the Jacobian of f,
${J}_{ij}=\frac{\partial {f}_{i}}{\partial {x}_{j}}.$
If F is an m-by-n matrix, and x is a k-vector, the Jacobian is an mn-by-k matrix.
For example, if
$F\left(x\right)=\left[\begin{array}{cc}{x}_{1}{x}_{2}& {x}_{1}^{3}+3{x}_{2}^{2}\\ 5{x}_{2}-{x}_{1}^{4}& {x}_{2}/{x}_{1}\\ 4-{x}_{2}^{2}& {x}_{1}^{3}-{x}_{2}^{4}\end{array}\right],$
then the Jacobian of F is
$J\left(x\right)=\left[\begin{array}{cc}{x}_{2}& {x}_{1}\\ -4{x}_{1}^{3}& 5\\ 0& -2{x}_{2}\\ 3{x}_{1}^{2}& 6{x}_{2}\\ -{x}_{2}/{x}_{1}^{2}& 1/{x}_{1}\\ 3{x}_{1}^{2}& -4{x}_{2}^{3}\end{array}\right].$
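One way to sanity-check a hand-computed Jacobian like this is to compare it with a forward-difference approximation of the column-stacked function. A minimal sketch for the F above; the test point `x0` and step `h` are arbitrary choices:

```
% Forward-difference Jacobian of the column-stacked vector f = F(:).
Ffun = @(x) [x(1)*x(2),       x(1)^3 + 3*x(2)^2;
             5*x(2) - x(1)^4, x(2)/x(1);
             4 - x(2)^2,      x(1)^3 - x(2)^4];
x0 = [1; 2];
h = 1e-7;
F0 = Ffun(x0);
f0 = F0(:);                          % stack F column by column
J = zeros(numel(f0), numel(x0));
for k = 1:numel(x0)
    e = zeros(size(x0));
    e(k) = h;
    Fk = Ffun(x0 + e);
    J(:,k) = (Fk(:) - f0)/h;         % forward difference on f
end
J                                    % compare against the 6-by-2 matrix above
```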
#### Jacobians with Matrix-Valued Independent Variables
If x is a matrix, define the Jacobian of F(x) by changing the matrix x to a vector, column by column. For example, if
$X=\left[\begin{array}{cc}{x}_{11}& {x}_{12}\\ {x}_{21}& {x}_{22}\end{array}\right],$
then the gradient is defined in terms of the vector
$x=\left[\begin{array}{c}{x}_{11}\\ {x}_{21}\\ {x}_{12}\\ {x}_{22}\end{array}\right].$
With
$F=\left[\begin{array}{cc}{F}_{11}& {F}_{12}\\ {F}_{21}& {F}_{22}\\ {F}_{31}& {F}_{32}\end{array}\right],$
and with f the vector form of F as above, the Jacobian of F(X) is defined as the Jacobian of f(x):
${J}_{ij}=\frac{\partial {f}_{i}}{\partial {x}_{j}}.$
If F is an m-by-n matrix and x is a j-by-k matrix, then the Jacobian is an mn-by-jk matrix.
### Writing Objective Functions for Linear or Quadratic Problems
The following solvers handle linear or quadratic objective functions:
• `linprog` and `intlinprog`: minimize
`f'x = f(1)*x(1) + f(2)*x(2) + ... + f(n)*x(n)`.
Input the vector `f` for the objective. See the examples in Linear Programming and Mixed-Integer Linear Programming.
• `lsqlin` and `lsqnonneg`: minimize
∥`Cx - d`∥.
Input the matrix `C` and the vector `d` for the objective. See Linear Least Squares with Bound Constraints.
• `quadprog`: minimize
`1/2 * x'Hx + f'x = 1/2 * (x(1)*H(1,1)*x(1) + 2*x(1)*H(1,2)*x(2) + ... + x(n)*H(n,n)*x(n)) + f(1)*x(1) + f(2)*x(2) + ... + f(n)*x(n)`.
Input both the vector `f` and the symmetric matrix `H` for the objective. See Quadratic Programming.
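For instance, a minimal unconstrained sketch with made-up data: minimizing $x_1^2 + x_2^2 - 2x_1 - 5x_2$ corresponds to

```
H = [2 0; 0 2];        % symmetric Hessian of the quadratic form
f = [-2; -5];
x = quadprog(H,f)      % returns [1; 2.5]
```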
### Maximizing an Objective
All solvers attempt to minimize an objective function. If you have a maximization problem, that is, a problem of the form
$\underset{x}{\mathrm{max}}f\left(x\right),$
then define g(x) = –f(x), and minimize g.
For example, to find the maximum of tan(cos(x)) near x = 5, evaluate:
```
[x fval] = fminunc(@(x)-tan(cos(x)),5)

Local minimum found.

Optimization completed because the size of the gradient is less than
the default value of the function tolerance.

x =
    6.2832

fval =
   -1.5574
```
The maximum is 1.5574 (the negative of the reported `fval`), and occurs at x = 6.2832. This answer is correct since, to five digits, the maximum is tan(1) = 1.5574, which occurs at x = 2π = 6.2832.
http://mathhelpforum.com/differential-equations/161959-complex-variable.html | # Math Help - Complex variable...
1. ## Complex variable...
I have problems solving: $\frac{dz}{dt}= \overline{z}e^{it}$
Separating variables gives: $\frac{dz}{\overline{z}}=e^{it}dt$
But unfortunately, ...I'm not sure if we're able to integrate $\overline{z}$ w.r.t $z$
2. Just a shot in the dark here, but what about this:
$\dfrac{dz}{\bar{z}}=\dfrac{z\,dz}{z\bar{z}}=\dfrac {z\,dz}{|z|^{2}}?$
3. Thanks for the suggestion... but then again: integrating $\frac{z}{|z|^2}$ w.r.t. $z$ is also a problem I wouldn't know how to solve.
The actual thing I'm trying to show here is that not all solutions to this equation are bounded; the question asks to solve the equation explicitly.
But I've no clue how. However, we may be able to use that $|\frac{dz}{dt}|=|z|$ for all $t$, and then try to show that all solutions except the origin are unbounded.
(edit: I meant to show that not all solutions to this problem are bounded)
4. $|z|^2$ is a constant.
So you should get
$\displaystyle \frac{1}{|z|^2} \int{z\,dz} = \int{e^{it}\,dt}$
5. I don't fully understand. I agree if you said: $\int \frac{z}{|z|^2}dz=\int e^{it}dt$
I don't see why you could just treat $|z|^2$ as a constant. It's a real number depending on z. So why is it you can treat it as such?
$\int \frac{z}{|z|^2}dz = \frac{1}{|z|^2}\int zdz$...is this really true?
6. I'm with dinkydoe on this one. I would agree, from the DE, that
$\displaystyle\left|\frac{1}{\bar{z}}\,\frac{dz}{dt}\right|=\left|\frac{z}{|z|^{2}}\,\frac{dz}{dt}\right|=\frac{1}{|z|^{2}}\left|z\,\frac{dz}{dt}\right|$
is a constant. But I don't see how you can get that $|z|^{2}$ is a constant.
7. Ok, so my professor hinted to something like a substitution $z= r(t)e^{i\phi}$. Then $\overline{z}= r(t)e^{-i\phi}$
So we have $\frac{d}{dt}[r(t)e^{i\phi}] = e^{it}r(t)e^{-i\phi}$
So... $r'(t)= r(t)e^{(t-2\phi)i}\Longrightarrow r(t)= e^{-ie^{(t-2\phi)i}}$
And finally $z= e^{[\phi-e^{(t-2\phi)i}]i}$..
But i'm not even sure if this is a valid way of arguing...Furthermore, doesn't this generate only bounded solutions.
8. Yeah, I was thinking of a substitution somewhat like that, but I ran into difficulties. That one is better, I think. Query: does your solution satisfy the original DE?
I'm not so sure that this generates only bounded solutions. You don't know that the exponent there is purely imaginary, do you?
http://papers.neurips.cc/paper/4195-a-two-stage-weighting-framework-for-multi-source-domain-adaptation | # NIPS Proceedings
## A Two-Stage Weighting Framework for Multi-Source Domain Adaptation
### Abstract
Discriminative learning when training and test data belong to different distributions is a challenging and complex task. Oftentimes we have very few or no labeled data from the test or target distribution but may have plenty of labeled data from multiple related sources with different distributions. The difference in distributions may be in both marginal and conditional probabilities. Most of the existing domain adaptation work focuses on the marginal probability distribution difference between the domains, assuming that the conditional probabilities are similar. However, in many real world applications, conditional probability distribution differences are as commonplace as marginal probability differences. In this paper we propose a two-stage domain adaptation methodology which combines weighted data from multiple sources based on marginal probability differences (first stage) as well as conditional probability differences (second stage), with the target domain data. The weights for minimizing the marginal probability differences are estimated independently, while the weights for minimizing conditional probability differences are computed simultaneously by exploiting the potential interaction among multiple sources. We also provide a theoretical analysis on the generalization performance of the proposed multi-source domain adaptation formulation using the weighted Rademacher complexity measure. Empirical comparisons with existing state-of-the-art domain adaptation methods using three real-world datasets demonstrate the effectiveness of the proposed approach.
http://mathhelpforum.com/calculus/30253-fluid-pressure.html | # Math Help - Fluid Pressure
1. ## Fluid Pressure
Good Morning!
I could use some help with this fluid pressure/force problem.
A trough 10 ft long has a trapezoidal cross section that is 2 ft wide at the bottom, 4 feet wide at the top, and 3 feet high. If the trough is filled with oil (density 50 pounds per cubic foot) find the force exerted by the oil on one end of the trough.
I know that F=ma, W= Int(Force Equation) and the basic physics stuff. I'm just having trouble visualizing this and getting the integral all set up. I tried drawing a picture, but I don't think it helped me much...
Thanks in advance for any help you can provide!
2. Let $p(y)$ be the pressure exerted on the wall at a depth of $y$. Consider the infinitesimal strip at $y$ with height $dy$ and width equal to the width $w(y)$ of the trough at that depth. Then the force exerted on that strip is the area of the strip times the pressure exerted by the oil: $p(y)w(y)dy$.
Add up the force on all these strips (integrate from $y=3$ to $y=0$ or $y=0$ to $y=3$ depending on how you think about the problem) and you should get the total force on the wall.
3. ## Still confused...
I'm still really confused, I keep getting really outlandish answers and I don't think the picture I drew is helping me figure out the integral. Thanks in advance...
4. Maybe this will help. Note that the two triangles are similar.
using proportions we get $\frac{3-x}{3}=\frac{y}{1}$ or solving for y
$y=\frac{3-x}{3}$
so the width of each strip is $2+2y=2+2 \cdot \frac{3-x}{3}$
the height of each strip is dx and the length is 10, so
$Volume=l \times w \times h$
so
$dV=10(2+2 \cdot \frac{3-x}{3})dx$
so the Volume is given by
$V=\int_0^310(2+2 \cdot \frac{3-x}{3})dx$
To finish, multiply the volume by the density of the oil per cubic ft.
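Note that multiplying the volume by the density gives the weight of the oil, not the force on an end. To answer the original question, carry out post #2's setup (a sketch, measuring depth $y$ down from the surface, so the width there is $w(y) = 2 + 2\cdot\frac{3-y}{3} = 4 - \frac{2y}{3}$):

$F = \int_0^3 50\,y\,w(y)\,dy = 50\int_0^3 \left(4y - \frac{2y^2}{3}\right)dy = 50\left[2y^2 - \frac{2y^3}{9}\right]_0^3 = 50(18 - 6) = 600 \text{ lb}$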
http://mathhelpforum.com/advanced-statistics/138496-expected-return-variance-confidence.html | Expected Return, Variance and Confidence
Here is the exercise:
Matt has $14,000 to invest and decides to put $1,000 in each of 14 stocks picked at random from a large group listed on the stock exchange. The mean return of the stocks in the group is 10% per year and the variance of the returns of the stocks in the group is 4% per year.
a. Calculate expected return and variance
b. Calculate 90% confidence interval for the portfolio returns
I know how to calculate the expected value E(X) = x1 P(X=x1) + x2 P(X=x2) + ..., but I'm not sure in this case... is it just 1000(.10) + 1000(.10) + 1000(.10) + ... ??
Greatly appreciate some help
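A sketch of one standard reading of this problem, assuming the 14 stock returns are independent and the portfolio return is approximately normal (reading "variance 4%" as 0.04 in return units):

Your sum is on the right track: the expected dollar return is $14 \times 1000 \times 0.10 = 1400$ dollars per year, i.e., an expected portfolio return of $E(R) = 10\%$ for the equally weighted portfolio $R = \frac{1}{14}\sum_{i=1}^{14} R_i$. Independence gives

$\operatorname{Var}(R) = \frac{0.04}{14} \approx 0.00286,$

so a 90% confidence interval for the portfolio return is $0.10 \pm 1.645\sqrt{0.00286} \approx 0.10 \pm 0.088$, roughly $(1.2\%,\ 18.8\%)$.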
http://mathhelpforum.com/calculus/149548-evaluating-partial-derivative.html | # Math Help - Evaluating Partial Derivative
1. ## Evaluating Partial Derivative
The question is:
Evaluate $f_{xy}(0, 0)$ and $f_{yx}(0, 0)$ for $f(x, y) = xy\frac{x^2-y^2}{x^2+y^2}$.
What would you say about Clairaut's theorem?
I found fxy(x,y) but then I realized... wouldn't this be undefined if you evaluate at f(0,0)?
Thanks.
2. If you read the text of Clairaut's theorem carefully, you might see what the problem is getting at. What did you get for evaluating the mixed partial derivatives at the origin?
3. Originally Posted by Ackbeet
If you read the text of Clairaut's theorem carefully, you might see what the problem is getting at. What did you get for evaluating the mixed partial derivatives at the origin?
For $f_{xy}=\frac{x^8-y^8}{(x^2+y^2)^4}$ Is it not undefined if evaluated at the origin? Does this mean that the theorem doesn't apply?
4. Hmm. Not sure you took the derivative correctly. Just to be clear, the function is defined by
$f(x,y)=xy\frac{x^{2}-y^{2}}{x^{2}+y^{2}}.$
Is that correct? If so, what do you get, precisely, for $f_{x}(x,y)$? And then what do you get for $f_{xy}(x,y)$? Please show your steps.
5. $f(x,y) = \frac{x^3y - xy^3}{x^2+y^2}$
$f_x(x,y) = \frac{(3x^2y-y^3)(x^2+y^2) - 2x(x^3y-xy^3)}{(x^2+y^2)^2} = \frac{x^4y+4x^2y^3-y^5}{(x^2+y^2)^2}$
$f_{xy}(x,y) = \frac{(x^4+12y^2x^2-5y^4)(x^2+y^2)^2 - 2(x^2+y^2)(2y(x^4y+4x^2y^3-y^5)}{(x^2+y^2)^4}$
$= \frac{(x^8+14x^6y^2+20x^4y^4+2x^2y^6-5y^8) - (4x^6y^2+20x^4y^4+14x^2y^6-4y^8)}{(x^2+y^2)^4}$
Oh, I guess I messed up here (assuming I did everything above correctly, anyways)
$= \frac{x^8+10x^6y^2-12x^2y^6-y^8}{(x^2+y^2)^4}$
But still, wouldn't the denominator become 0 at the origin?
6. Yes, the denominator will become 0 at the origin. This means Clairaut's theorem does not apply. However, I think you'll find that the mixed partial derivatives have the same form. So what is that telling you?
7. Hmm... Does this mean the results of the theorem can still occur when the theorem itself doesn't apply?
8. Yes. Think about the statement, "If A then B." This is the format of most theorems. B can be true whether or not A is or not. The situation you cannot have is when A is true and B is false. Any other truth combination of A and B can still hold.
9. Originally Posted by Ackbeet
Yes. Think about the statement, "If A then B." This is the format of most theorems. B can be true whether or not A is or not. The situation you cannot have is when A is true and B is false. Any other truth combination of A and B can still hold.
Ah, it's nice and clear now. Thanks a lot.
10. You're very welcome. Have a good one!
https://de.mathworks.com/help/fixedpoint/ug/implement-hardware-efficient-complex-partial-systolic-matrix-solve-using-q-less-qr-decomposition-with-forgetting-factor.html | # Implement Hardware-Efficient Complex Partial-Systolic Matrix Solve Using Q-less QR Decomposition with Forgetting Factor
This example shows how to use the hardware-efficient Complex Partial-Systolic Matrix Solve Using Q-less QR Decomposition with Forgetting Factor block.
### Q-less QR Decomposition with Forgetting Factor
The Complex Partial-Systolic Matrix Solve Using Q-less QR Decomposition with Forgetting Factor block implements the following recursion to compute the upper-triangular factor R of continuously streaming 1-by-n row vectors A(k,:) using a forgetting factor $\alpha$. It's as if matrix A is infinitely tall. The forgetting factor, in the range $0 < \alpha < 1$, keeps the recursion from integrating without bound.
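A sketch of the update, consistent with the weighting used in the verification at the end of this example (see the block reference page for the exact equation): each new row $A(k,:)$ is folded into the previous factor as

$R_k = \operatorname{qr}\!\left(\begin{bmatrix}\alpha R_{k-1}\\ A(k,:)\end{bmatrix}\right),$

where $\operatorname{qr}(\cdot)$ here denotes the upper-triangular factor of the QR decomposition and $\alpha$ is the forgetting factor.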
### Forward and Backward Substitution
When an upper triangular factor is ready, then forward and backward substitution are computed with the current input B to produce output X.
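In MATLAB terms, a minimal sketch of this solve step (assuming `R` holds the current upper-triangular factor):

```
% Solve (R'*R)*X = B in two triangular solves.
Y = R' \ B;   % forward substitution: R' is lower triangular
X = R \ Y;    % backward substitution: R is upper triangular
```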
### Define System Parameters
n is the length of the row vectors A(k,:), the number of rows in B, and the number of rows and columns in R.
n = 5;
p is the number of columns in B
p = 1;
m is the effective numbers of rows of A to integrate over.
m = 100;
Use the fixed.forgettingFactor function to compute the forgetting factor as a function of the number of rows that you are integrating over.
forgettingFactor = fixed.forgettingFactor(m)
forgettingFactor = 0.9950
precisionBits defines the number of bits of precision required for the QR Decomposition. Set this value according to system requirements.
precisionBits = 24;
In this example, complex-valued matrices A and B are constructed such that the magnitude of the real and imaginary parts of their elements is less than or equal to one, so the maximum possible absolute value of any element is $\sqrt{2}$. Your own system requirements will define what those values are. If you don't know what they are, and A and B are fixed-point inputs to the system, then you can use the upperbound function to determine the upper bounds of the fixed-point types of A and B.
max_abs_A is an upper bound on the maximum magnitude element of A.
max_abs_A = sqrt(2);
max_abs_B is an upper bound on the maximum magnitude element of B.
max_abs_B = sqrt(2);
### Select Fixed-Point Types
Use the fixed.complexQlessQRMatrixSolveFixedpointTypes function to compute fixed-point types.
T = fixed.complexQlessQRMatrixSolveFixedpointTypes(m,n,max_abs_A,max_abs_B,precisionBits);
T.A is the fixed-point type computed for transforming A to R in-place so that it does not overflow.
T.A
ans =
[]
          DataTypeMode: Fixed-point: binary point scaling
            Signedness: Signed
            WordLength: 31
        FractionLength: 24
T.B is the type computed for B so that it does not overflow.
T.B
ans =
[]
          DataTypeMode: Fixed-point: binary point scaling
            Signedness: Signed
            WordLength: 27
        FractionLength: 24
T.X is the type computed for the output X so that there is a low probability of overflow.
T.X
ans =
[]
          DataTypeMode: Fixed-point: binary point scaling
            Signedness: Signed
            WordLength: 75
        FractionLength: 24
### Define Simulation Parameters
Create random matrix A to contain a specified number of inputs, and n-by-p random matrix B.
numInputs is the number of input rows A(k,:) for this example.
numInputs = 500;
rng('default')
[A,B] = fixed.example.complexRandomQlessQRMatrices(numInputs,n,p);
Cast the inputs to the types determined by fixed.complexQlessQRMatrixSolveFixedpointTypes.
A = cast(A,'like',T.A); B = cast(B,'like',T.B);
Use the fixed.extractNumericType function to extract a numerictype object to use as an input parameter to the block.
OutputType = fixed.extractNumericType(T.X)
OutputType =
          DataTypeMode: Fixed-point: binary point scaling
            Signedness: Signed
            WordLength: 75
        FractionLength: 24
Cast the forgetting factor to a fixed-point type with the same word length as A and best-precision scaling.
forgettingFactor = fi(forgettingFactor,1,T.A.WordLength);
Select a stop time for the simulation that is long enough to process all the inputs from A.
stopTime = 2*(2*numInputs + n)*T.A.WordLength;
### Open the Model
model = 'ComplexPartialSystolicSolveQlessQRForgettingFactorModel';
open_system(model);
### Set Variables in the Model Workspace
Use the helper function setModelWorkspace to add the variables defined above to the model workspace.
fixed.example.setModelWorkspace(model,'A',A,'B',B,'n',n,'p',p,...
    'forgettingFactor',forgettingFactor,'OutputType',OutputType,...
    'regularizationParameter',0,...
    'stopTime',stopTime);
### Simulate the Model
out = sim(model);
### Verify the Accuracy of the Output
Define matrix $A_k$ as the first $k$ rows of $A$, weighted by powers of the forgetting factor (consistent with the verification code below):

$A_k = \begin{bmatrix} \alpha^{k}A(1,:) \\ \alpha^{k-1}A(2,:) \\ \vdots \\ \alpha\,A(k,:) \end{bmatrix}.$

Then, using the formula for the computation of the $k$th output $X_k$, and the fact that $R_k'R_k = A_k'A_k$, you can show that

$A_k'A_k X_k = B.$

So to verify the output, the difference between $A_k'A_k X_k$ and $B$ should be small.
Choose the last output of the simulation.
X = double(out.X(:,:,end));
Synchronize the last output X with the input by finding the number of inputs that produced it.
A = double(A);
B = double(B);
alpha = double(forgettingFactor);
relative_errors = nan(1,numInputs);
for k = 1:numInputs
    A_k = alpha.^(k:-1:1)' .* A(1:k,:);
    relative_errors(k) = norm(A_k'*A_k*X - B)/norm(B);
end
k is the number of inputs A(k,:) that produced the last X.
k = find(relative_errors==min(relative_errors))
k = 493
Verify that $A_k'A_k X = B$ holds with a small relative error.
A_k = alpha.^(k:-1:1)' .* A(1:k,:);
relative_error = norm(A_k'*A_k*X - B)/norm(B)
relative_error = 4.1169e-05
Suppress mlint warnings in this file.
%#ok<*NOPTS>
https://www.physicsforums.com/threads/charging-by-contact.155217/ | # Charging by contact
1. Feb 7, 2007
### slickvic
Suppose you let identical pith balls come in contact to make q1=q2. Would the charges be equal if the pith balls were of different size?
2. Feb 8, 2007
### arunma
You know, I vaguely remember doing this problem way back in freshman physics. Just a guess on my part, but I think that putting the balls in contact would make the charge density uniform. If my assumption is correct, then when you separate the balls, the smaller one would contain less charge.
3. Feb 8, 2007
### mdelisio
I'm not real sure about pith, but I can answer for metal balls...
Putting the balls in contact forces their potential to be the same. The problem is easier if the balls are separated by a distance large compared to their size. Briefly connecting them with a wire would force the potential at each ball to be the same. In this case the ratio of charges would be equal to the ratio of the radii.
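The ratio at the end of the previous paragraph follows from equating the potentials of two well-separated spheres:

$V_1 = \frac{q_1}{4\pi\varepsilon_0 r_1} = \frac{q_2}{4\pi\varepsilon_0 r_2} = V_2 \quad\Longrightarrow\quad \frac{q_1}{q_2} = \frac{r_1}{r_2}$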
4. Feb 8, 2007
### arunma
Are you sure it wouldn't be equal to the ratio of the cube of their radii?
I'm asking because I'm also considering lower-dimensional problems. What if, instead of spheres, we were talking about disks, or straight rods? Well, maybe I should work the problem for myself and get back to you guys.
http://math.stackexchange.com/users/34397/parth-kohli?tab=activity&sort=all&page=7 | Parth Kohli
- Apr 19: answered "Is the following statement true? Why?"
- Apr 19: awarded Cleanup
- Apr 19: commented on "With the binomial expansion of $(3+x)^4$ express $(3 - \sqrt 2)^4$ in the form of $p+q\sqrt 2$": @jack $[(3 - \sqrt{2})^2]^2 = [11 - 6\sqrt2]^2$ because $11 - 6\sqrt2 = (3 - \sqrt{2})^2$
- Apr 19: revised "With the binomial expansion of $(3+x)^4$ express $(3 - \sqrt 2)^4$ in the form of $p+q\sqrt 2$" (rolled back to a previous revision)
- Apr 19: answered "With the binomial expansion of $(3+x)^4$ express $(3 - \sqrt 2)^4$ in the form of $p+q\sqrt 2$"
- Apr 19: commented on the same question: forgetting $\binom{4}{k}$?
- Apr 19: answered "Derivative of $\sin(e^{-x})$"
- Apr 14: accepted an answer to "Factoring a long expression in the form $(a+b)^3 + (c - b)^3 - (c+b)^3$"
- Apr 14: commented: Nice alternative. Thanks.
- Apr 14: commented: @Easy wow, that is epic! Gotcha!
- Apr 14: commented: $= -(b+a)(3c^2 - 3bc + b^2 + ac - ba + 2ac + a^2)$
- Apr 14: commented: @Jay Ah, I didn't notice that.
- Apr 14: revised "Factoring a long expression in the form $(a+b)^3 + (c - b)^3 - (c+b)^3$" (deleted 1 character in body; edited title)
- Apr 14: commented: $(c - b)^3 - (a + b)^3 = \left(c - a - 2b\right)\left(c^2 - 2cb + ac - ba + bc + a^2 + 2ab + b^2 \right)$ Is that correct?
- Apr 14: asked "Factoring a long expression in the form $(a+b)^3 + (c - b)^3 - (c+b)^3$"
- Apr 13: accepted an answer to "$x = \sqrt[3]{3} + \frac{1}{\sqrt[3]{3}}$, what is $3x^3 - 9x$?"
- Apr 13: commented: Ah, so $3x^3 = 10 + 9x$ and I want the value of $3x^3 - 9x$, which gives me $10 + 9x - 9x = 10$. Great solution!
- Apr 13: asked "$x = \sqrt[3]{3} + \frac{1}{\sqrt[3]{3}}$, what is $3x^3 - 9x$?"
- Apr 12: revised "The inverse function of $e^{x^2}$" (added 198 characters in body)
- Apr 12: commented on "The inverse function of $e^{x^2}$": Brian is right as always. I have a tendency to neglect things.
http://www.chegg.com/homework-help/questions-and-answers/mno2-s-4hcl-aq-mncl2-aq-cl2-g-2h2o-l-reaction-243-g-mno2-580-g-hcl-react-form-chlorine-gas-q3283548 | MnO2(s) + 4HCl(aq) → MnCl2(aq) + Cl2(g) + 2H2O(l)

In the above reaction, 2.43 g of MnO2 and 5.80 g of HCl react to form chlorine gas. I know I asked this question before, but I keep getting told to stop asking, and I still do not understand how to do it.

a. How many moles of Cl2 can be produced from the given mass of MnO2?
b. How many moles of Cl2 can be produced from the given mass of HCl?
c. Which reactant is the limiting reactant?
d. What is the maximum amount of Cl2 that can be produced by this reaction?
e. Calculate the amount of excess reactant.

Here is the answer I got earlier, but I still do not understand how to use it:

"The mole ratio of MnO2 to Cl2 is 1:1, so first we calculate the number of moles of MnO2 = 12/(55 + 16 + 16) = 12/87 = 0.1379 moles. The same number of moles of Cl2 will be formed. At S.T.P., 1 mole of Cl2 will occupy 22.4 L, so 0.1379 moles will occupy 3.0897 L. But this is at S.T.P., so we convert it to the given conditions using PV/T = constant: 1 × 3.0897/273 = 0.95 × V/298, so V = 3.55 L."

I finally get the start of parts a and b (both = 0.1379 moles), but after that I got confused. What is the S.T.P. volume calculation used for? Can you separate the answer by parts and explain each step? I am still really confused on parts c, d, and e. Please do not just repost the same answer; I am trying to learn how to do this, and I have been trying to work it out all night. My teacher made this an open class question, not a homework question, but I need to learn this for a test.
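A sketch of the calculation the parts are asking for, using standard molar masses (MnO2 ≈ 86.9 g/mol, HCl ≈ 36.5 g/mol, Cl2 ≈ 70.9 g/mol); note the quoted answer's 12 g and gas-volume steps appear to belong to a different version of the problem:

a. $n(\mathrm{MnO_2}) = 2.43/86.9 \approx 0.0280$ mol; the 1:1 ratio gives $0.0280$ mol of $\mathrm{Cl_2}$.
b. $n(\mathrm{HCl}) = 5.80/36.5 \approx 0.159$ mol; the 4:1 ratio gives $0.159/4 \approx 0.0398$ mol of $\mathrm{Cl_2}$.
c. $\mathrm{MnO_2}$ is the limiting reactant, since $0.0280 < 0.0398$.
d. Maximum $\mathrm{Cl_2}$ is $0.0280$ mol $\approx 0.0280 \times 70.9 \approx 1.98$ g.
e. HCl consumed is $4 \times 0.0280 = 0.112$ mol, so the excess is $0.159 - 0.112 \approx 0.047$ mol $\approx 1.7$ g of HCl.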
https://mathoverflow.net/questions/285604/estimation-of-a-combinatorial-sum-when-n-is-large/285624 | # Estimation of a combinatorial sum when $n$ is large
Suppose $c,t$ are such that, $0< c< 1$ constant and $cn\leq t \leq n$. I want to have an estimation of
$\sum _{i=0}^{cn} {cn\choose {i}}{(1-c)n \choose t-i} 2^{t-i}$
when n goes to infinity.
Can I bound it by $2^{c'n}$ for some $0<c'<\log_2(3)$?
I have no idea how to do that. Is there any hint?
Thanks!
• If $c=1/2$, and $i=n/4$, the corresponding summand is about $2^{n+t-n/4}$, which is much more than $2^{c'n}$ for $c'<1$. – Fedor Petrov Nov 9 '17 at 9:10
• In general, combinatorial sums such as this one, with all terms positive, can be estimated to within a factor of the length of the sum by just approximating how large an individual summand can get. – Noam D. Elkies Nov 9 '17 at 15:19
Here is a special case where your [original] estimate [of $c'<1$] does not work. Let $c\le 1/2$, $cn\le t\le (1-c)n$, and assume that $cn$ and $t$ are integers. Let us get rid of of the second factorial. Then your sum would be $2^{t-cn}3^{cn}$. So, it cannot be bounded by $2^{c'n}$ for $c'<1$, nor can your original sum.
On the other hand, a very rough estimate (replacing $2^{t-i}$ with $2^t$) gives an upper bound of $\binom{n}{t}2^t$ whose $n$th root even in the worst case of $t=2n/3$ approaches $3$. So, I guess your $c'\le\log_2{3}$, certainly less than $2$.
• @user115608 This is more likely. Try letting $t$ and $m=cn$ be integers first. – Alexander Burstein Nov 9 '17 at 19:09
• @user115608 I edited my answer now as well. It looks like $c'\le\log_2{3}$. – Alexander Burstein Nov 10 '17 at 2:50
• sorry, I remembered something, and now I know it is certainly less than $3^n$, but $\log_2(3)$ is not enough for me, so I edited again. – user115608 Nov 10 '17 at 15:15
• @user115608 See the other answer for why $c'<\log_2 3$. – Alexander Burstein Nov 11 '17 at 22:02
Of course, you may estimate $\binom{cn}i$ as $2^{cn}$ and $\binom{(1-c)n}{t-i}2^{t-i}$ as $(1+2)^{(1-c)n}$, totally you get $(2^c3^{1-c})^n$, the exponent is strictly less than 3.
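Spelled out, the two estimates combine via the binomial theorem:

$\sum_{i=0}^{cn}\binom{cn}{i}\binom{(1-c)n}{t-i}2^{t-i} \le 2^{cn}\sum_{j \ge 0}\binom{(1-c)n}{j}2^{j} = 2^{cn}(1+2)^{(1-c)n} = \left(2^{c}3^{1-c}\right)^{n}.$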
https://socialsci.libretexts.org/Courses/College_of_the_Canyons/COMS_120%3A_Small_Group_Communication_(Osborn)/00%3A_Front_Matter/01%3A_TitlePage
# TitlePage
Communication 120
https://calculus.subwiki.org/wiki/First_derivative_test_is_inconclusive_for_function_whose_derivative_has_ambiguous_sign_around_the_point | # First derivative test is inconclusive for function whose derivative has ambiguous sign around the point
This article describes a situation, or broad range of situations, where a particular test or criterion is inconclusive, i.e., it does not work as intended to help us determine what we would like to determine. This is either because one of the hypotheses for the test fails or we land up in an inconclusive branch of the test.
The test is the first derivative test. See more inconclusive cases for first derivative test | conclusive cases for first derivative test
## Statement
### Goal of statement
The goal of this statement is to identify a type of situation where the first derivative test is inconclusive.
### One-sided version
Suppose $f$ is a real-valued function of one variable and $c$ is a point in the domain of $f$.
| Continuity and differentiability assumption | Hypothesis on sign of derivative $f'$ | Conclusion |
| --- | --- | --- |
| $f$ is left continuous at $c$ and differentiable on the immediate left of $c$ | $f'$ has oscillating sign on the immediate left of $c$, i.e., for any $\delta > 0$, $f'(x)$ takes both positive and negative values for $x \in (c - \delta, c)$. | $f$ may have a local maximum from the left at $c$, a local minimum from the left at $c$, or neither. |
| $f$ is right continuous at $c$ and differentiable on the immediate right of $c$ | $f'$ has oscillating sign on the immediate right of $c$, i.e., for any $\delta > 0$, $f'(x)$ takes both positive and negative values for $x \in (c, c + \delta)$. | $f$ may have a local maximum from the right at $c$, a local minimum from the right at $c$, or neither. |
### Two-sided version
Suppose $f$ is a real-valued function of one variable and $c$ is a point in the domain of $f$.
| Continuity and differentiability assumption | Sign of $f'$ on immediate left | Sign of $f'$ on immediate right | Conclusion |
| --- | --- | --- | --- |
| $f$ is continuous at $c$ and differentiable on the immediate left and immediate right of $c$ | oscillatory | oscillatory | $f$ may have a local maximum, local minimum, or neither. |
| $f$ is continuous at $c$ and differentiable on the immediate left and immediate right of $c$ | oscillatory | positive | $f$ has a strict local minimum from the right. Hence, overall, it cannot have a local maximum at $c$. It could have a local minimum or no local extremum. |
| $f$ is continuous at $c$ and differentiable on the immediate left and immediate right of $c$ | oscillatory | negative | $f$ has a strict local maximum from the right. Hence, overall, it cannot have a local minimum at $c$. It could have a local maximum or no local extremum. |
| $f$ is continuous at $c$ and differentiable on the immediate left and immediate right of $c$ | positive | oscillatory | $f$ has a strict local maximum from the left. Hence, overall, it cannot have a local minimum at $c$. It could have a local maximum or no local extremum. |
| $f$ is continuous at $c$ and differentiable on the immediate left and immediate right of $c$ | negative | oscillatory | $f$ has a strict local minimum from the left. Hence, overall, it cannot have a local maximum at $c$. It could have a local minimum or no local extremum. |
### Relation with critical points
In all the two-sided cases, the point $c$ under consideration must be a critical point for the function (i.e., either $f'(c) = 0$ or $f'(c)$ does not exist). Thus, we could add, to each case, the additional condition that $c$ be a critical point for the function, and this will not affect the strength of the statements.
## Example
### Example of two-sided local minimum despite oscillatory sign of derivative around the point
The function for this example is:

$f(x) := \left\lbrace\begin{array}{rl} |x|\left(2 + \sin\left(\frac{1}{x}\right)\right), & x \ne 0 \\ 0, & x = 0 \end{array}\right.$
We note the following:
| Assertion | Explanation |
| --- | --- |
| $f$ has a strict local and absolute minimum at 0 | For $x \ne 0$, we have $f(x) > 0 = f(0)$. This is because $\lvert x\rvert > 0$ and $2 + \sin(1/x) \in [1,3]$, so the product is also positive. |
| $f$ is continuous at 0 | We can see this using the pinching theorem (by pinching the function between $\lvert x\rvert$ and $3\lvert x\rvert$) or more directly by noting that as $x \to 0$, $\lvert x\rvert \to 0$ and $2 + \sin(1/x)$ is bounded between the finite values 1 and 3. |
| $f'$ has oscillatory sign on the immediate left of 0 | For $x < 0$, we have $f(x) = -x(2 + \sin(1/x))$. The derivative is $f'(x) = -2 - \sin(1/x) + (1/x)\cos(1/x)$. The part $-2 - \sin(1/x)$ is bounded, but the part $(1/x)\cos(1/x)$ oscillates between large-magnitude positive and negative values as $x \to 0^-$. In particular, $f'$ does not have constant sign on the immediate left of 0. |
| $f'$ has oscillatory sign on the immediate right of 0 | For $x > 0$, we have $f(x) = x(2 + \sin(1/x))$. The derivative is $f'(x) = 2 + \sin(1/x) - (1/x)\cos(1/x)$. The part $2 + \sin(1/x)$ is bounded, but the part $-(1/x)\cos(1/x)$ oscillates between large-magnitude positive and negative values as $x \to 0^+$. In particular, $f'$ does not have constant sign on the immediate right of 0. |
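As a quick check that 0 is also a critical point here (consistent with the remark on critical points above): the difference quotient $\frac{f(x) - f(0)}{x - 0} = \frac{|x|}{x}\left(2 + \sin\frac{1}{x}\right)$ takes values in $[1,3]$ for $x > 0$ and in $[-3,-1]$ for $x < 0$, so it has no limit as $x \to 0$ and $f'(0)$ does not exist.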
### Example of oscillatory sign of derivative where the function does not have a local extremum from either side
An example is:
$f(x) := \left\lbrace\begin{array}{rl} x \sin(1/x), & x \ne 0 \\ 0, & x = 0 \end{array}\right.$
| Assertion | Construction |
| --- | --- |
| $f$ does not have a local extremum at 0 from either side | This is because $f$ takes both positive and negative values as $x \to 0^+$. Also, $f$ takes both positive and negative values as $x \to 0^-$. |
| $f$ is continuous at 0 | We can see this using the pinching theorem or more directly by noting that as $x \to 0$, the factor $x \to 0$ and the factor $\sin(1/x)$ is bounded between the finite values -1 and 1. |
| $f'$ has oscillatory sign to the immediate left of 0 | The derivative is $f'(x) = \sin(1/x) - (1/x)\cos(1/x)$. The $\sin(1/x)$ part is bounded, but the part $-(1/x)\cos(1/x)$ oscillates between large-magnitude positive and negative values as $x \to 0^-$. In particular, $f'$ does not have constant sign on the immediate left of 0. |
| $f'$ has oscillatory sign to the immediate right of 0 | The derivative is $f'(x) = \sin(1/x) - (1/x)\cos(1/x)$. The $\sin(1/x)$ part is bounded, but the part $-(1/x)\cos(1/x)$ oscillates between large-magnitude positive and negative values as $x \to 0^+$. In particular, $f'$ does not have constant sign on the immediate right of 0. |
### Variations that cover all cases
The above two examples can be modified to produce examples for all the cases mentioned in the statement. To convert local minimum to local maximum, multiply the whole function by $-1$. To obtain one-sided behavior, restrict the analysis to one side. Also, use piecewise combinations with functions that are nice on the other side to obtain other examples for two-sided behavior.
In all cases, we take the point $c = 0$ and the value $f(0) = 0$ for convenience in all examples. This allows us to easily make two-sided combinations.
| One-sided requirement | Example function (definition only on appropriate side) |
| --- | --- |
| Oscillatory sign of derivative on left, but function has strict local maximum from left | $f(x) := \left\lbrace\begin{array}{rl} x\left(2 + \sin\left(\frac{1}{x}\right)\right), & x < 0 \\ 0, & x = 0 \end{array}\right.$ |
| Oscillatory sign of derivative on left, but function has strict local minimum from left | $f(x) := \left\lbrace\begin{array}{rl} (-x)\left(2 + \sin\left(\frac{1}{x}\right)\right), & x < 0 \\ 0, & x = 0 \end{array}\right.$ |
| Oscillatory sign of derivative on left, and function has neither local maximum nor local minimum from left | $f(x) := \left\lbrace\begin{array}{rl} x \sin(1/x), & x < 0 \\ 0, & x = 0 \end{array}\right.$ |
| Oscillatory sign of derivative on right, but function has strict local maximum from right | $f(x) := \left\lbrace\begin{array}{rl} (-x)\left(2 + \sin\left(\frac{1}{x}\right)\right), & x > 0 \\ 0, & x = 0 \end{array}\right.$ |
| Oscillatory sign of derivative on right, but function has strict local minimum from right | $f(x) := \left\lbrace\begin{array}{rl} x\left(2 + \sin\left(\frac{1}{x}\right)\right), & x > 0 \\ 0, & x = 0 \end{array}\right.$ |
| Oscillatory sign of derivative on right, and function has neither local maximum nor local minimum from right | $f(x) := \left\lbrace\begin{array}{rl} x \sin(1/x), & x > 0 \\ 0, & x = 0 \end{array}\right.$ |
http://mathhelpforum.com/algebra/98118-help.html | # Math Help - help
1. ## help
If $a_1 = 2$ and $a_{n+1} = (a_n - 1)^2$, then find the value of $a_{17}$ ?
I got the answer zero; is it correct?
2. Originally Posted by bluffmaster.roy.007
If $a_1 = 2$ and $a_{n+1} = (a_n - 1)^2$, then find the value of $a_{17}$?
I got the answer zero; is it correct?
If $a_1 = 2$ and $a_{n+1} = (a_n - 1)^2$, then find the value of $a_{17}$?
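For the record, a direct computation confirms this: $a_1 = 2$, $a_2 = (2-1)^2 = 1$, $a_3 = (1-1)^2 = 0$, $a_4 = (0-1)^2 = 1$, $a_5 = 0$, and from there the sequence alternates $1, 0, 1, 0, \ldots$, with $a_n = 0$ for every odd $n \ge 3$. Since 17 is odd, $a_{17} = 0$.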
https://socratic.org/questions/what-is-the-distance-between-13-11-and-22-4
# What is the distance between (13,-11) and (22,-4)?
Dec 20, 2015
$\sqrt{130}$ units
#### Explanation:
The distance between two points can be calculated with the formula:
$d = \sqrt{{\left({x}_{2} - {x}_{1}\right)}^{2} + {\left({y}_{2} - {y}_{1}\right)}^{2}}$
where:
$d =$distance
$\left({x}_{1} , {y}_{1}\right) = \left(13 , - 11\right)$
$\left({x}_{2} , {y}_{2}\right) = \left(22 , - 4\right)$
Substitute your known values into the distance formula to find the distance between the two points:
$d = \sqrt{{\left({x}_{2} - {x}_{1}\right)}^{2} + {\left({y}_{2} - {y}_{1}\right)}^{2}}$
$d = \sqrt{{\left(\left(22\right) - \left(13\right)\right)}^{2} + {\left(\left(- 4\right) - \left(- 11\right)\right)}^{2}}$
$d = \sqrt{{\left(9\right)}^{2} + {\left(7\right)}^{2}}$
$d = \sqrt{81 + 49}$
$d = \sqrt{130}$
$\therefore$, the distance between the two points is $\sqrt{130}$ units.
http://mathhelpforum.com/statistics/226796-calculating-probability-making-correct-choice.html | # Math Help - calculating probability of making the correct choice
1. ## calculating probability of making the correct choice
So an annoying question is floating around where four answers are given, asking the probability of picking the correct one, with the following answers provided:
a) 25%
b) 50%
c) 60%
d) 25%
I came up with a question to better visualize this and then I tried to solve that instead.
We have spheres of 3 colours. Red, Green and Blue.
There are 10 Red, 5 Green and 10 Blue spheres.
Assuming a colour is chosen at random, and then a sphere is chosen at random, what is the probability of picking the correct colour sphere?
The way I approach this is, we have a 1/3 chance of picking each colour and then either a 5/25 or 10/25 chance of picking the right sphere.
This gives me a probability of picking the right sphere 1/3 of the time. That does not make sense to me since there is a clear bias toward picking a Red or Green sphere. In fact, since the probabilities of picking a sphere always add up to one, changing the amount of spheres does not affect my answer at all. This further seems to indicate that my thought process is wrong. What is the correct way to approach this?
Help appreciated.
Thanks,
Chris
2. ## Re: calculating probability of making the correct choice
Originally Posted by ffezz
Simplify it even further. Consider just two colors, blue and yellow. As you said, we select one of these randomly (1/2, 1/2).
Let there be N balls altogether and Np of them blue. The probability of selecting a blue ball is thus p, and a yellow one (1-p).
Given your setup, the probability of selecting the correct color ball is $\frac{1}{2}p + \frac{1}{2}(1-p) = \frac{1}{2}$ for every $p$ with $0 \leq p \leq 1$.
At the extremes of p you have 50% chance of either making a certainly correct choice or a certainly incorrect choice.
3. ## Re: calculating probability of making the correct choice
Interesting Probability Sheet.xls
This is quite interesting. So because we pick a sphere colour at random, our chance of picking a matching colour sphere is the probability of each colour, regardless of the actual distribution of spheres. If we have 998 red, 1 blue and 1 green, we still expect to pick the right sphere 1/3 of the time. Is there a theorem/law for this? There surely must be.
Ps. I just did a 1000 instance test using Excel, with 9999998 reds, 1 blue and 1 green, and the result concurred with this. Included is the Excel sheet.
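For readers without the spreadsheet, here is a rough Python equivalent of the simulation the poster describes (the counts and trial number come from the post; everything else is my own sketch):

```python
import random

# Counts from the poster's Excel test: 9,999,998 red, 1 blue, 1 green.
counts = {"red": 9_999_998, "blue": 1, "green": 1}
colors = list(counts)
weights = list(counts.values())

trials = 1000
hits = 0
for _ in range(trials):
    guess = random.choice(colors)                               # colour named uniformly
    ball = random.choices(colors, weights=weights, k=1)[0]      # ball drawn by frequency
    if guess == ball:
        hits += 1

print(hits / trials)   # hovers around 1/3, whatever the counts are
```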
4. ## Re: calculating probability of making the correct choice
Originally Posted by ffezz
$P(red\ ball) = \dfrac{10}{25} = \dfrac{2}{5}.$
$P(green\ ball) = \dfrac{5}{25} = \dfrac{1}{5}.$
$P(blue\ ball) = \dfrac{10}{25} = \dfrac{2}{5}.$
$P(red) = P(green) = P(blue) = \dfrac{1}{3}.$
$P(ball\ chosen\ has\ same\ color\ as\ color\ chosen) =$
$P(red\ ball\ chosen\ and\ red\ chosen) + P(green\ ball\ chosen\ and\ green\ chosen) + P(blue\ ball\ chosen\ and\ blue\ chosen) =$
$\dfrac{2}{5} * \dfrac{1}{3} + \dfrac{1}{5} * \dfrac{1}{3} + \dfrac{2}{5} * \dfrac{1}{3} = \dfrac{1}{3} * \dfrac{2 + 1 + 2}{5} = \dfrac{1}{3}.$
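The same law-of-total-probability computation can be done exactly with Python's fractions module (an illustration added here, not part of the thread):

```python
from fractions import Fraction

ball_probs = [Fraction(10, 25), Fraction(5, 25), Fraction(10, 25)]  # red, green, blue
colour_prob = Fraction(1, 3)                                        # colour named uniformly

print(sum(colour_prob * p for p in ball_probs))   # 1/3
```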
http://tex.stackexchange.com/tags/latex-kernel/hot | Tag Info
Penalties are the main value that TeX tries to minimise when line or page breaking. They may be inserted explicitly (\penalty125 means that the penalty for breaking at that point is 125). Some penalties are built into the TeX system and inserted automatically; LaTeX, for example, sets these default values for built-in penalties: \linepenalty=10 the penalty ...
latex.ltx says \newdimen\z@ \z@=0pt % can be used both for 0pt and 0 so, as it says, it is a short (and efficient) way of getting 0. You should always have a copy of the latex source file latex.ltx in a text editor window while reading package code:-), or perhaps, if you prefer, the typeset version of that, without the comments being removed, source2e.pdf, ...
I now compiled such a document listing all internal macros of LaTeX2e which are also useful for package authors. The work title is "List of internal LaTeX2e Macros useful to Package Authors" and I'm planning to release it on CTAN 'soon' (e.g. as 'macros2e') when its out of the beta stage. The beta release can be found on my website. There is also a feedback ...
\z@ is a LaTeX “constant” that's defined to be zero. Package developers can use it to assign or test against the value 0 and it can also replace a length of 0pt. Similar constants are \@ne (one) \tw@ (two) and so on. Due to the @ they can only be used in packages or between \makeatletter and \makeatother.
This is a very important function in the LaTeX kernel. The macro \@ifnextchar takes three arguments. The first one should be a single token, usually [ but not necessarily. When the input stream has the following tokens \@ifnextchar<token>{<true>}{<false>} TeX will look at the next token (skipping spaces) and compare it to the ...
Penalties are used by TeX for controlling the line and page break routines. Some of them are inserted implicitly, others can be added by the user (usually via macros). A penalty issued in horizontal or math mode will influence line breaking, one issued in vertical mode will influence page breaking decisions. The list of "implicit penalties" can be found in ...
\@ifnextchar is a LaTeX conditional that peeks ahead at the following character. So, \@ifnextchar[ looks ahead to see if the following character in the input stream is a [ (opening left bracket). If this is true, then it executes the immediately following token, otherwise, it skips it and executes the token following that. The first token, executed upon a ...
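A minimal self-contained example of the pattern these two answers describe (my own illustration, not taken from either answer):

```latex
\documentclass{article}
\makeatletter
% \greet[name] prints "Hello, name!"; plain \greet defaults to "world".
\newcommand\greet{\@ifnextchar[{\greet@opt}{\greet@opt[world]}}
\def\greet@opt[#1]{Hello, #1!}
\makeatother
\begin{document}
\greet \par      % Hello, world!
\greet[TeX]      % Hello, TeX!
\end{document}
```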
The output routine is called either by TeX's normal page-breaking mechanism, or by a macro putting a penalty of < or = -10000 in the output list. These large penalties communicate with the OTR. For example a penalty of -10001 is a clearpage, whereas a -10004 is a float insertion etc. Information on LaTeX output routine is very hard to find - and guessing ...
The actual command, including its formal definition posted by @JoshLee is contained in the LaTeX 2e source, section 61.2 Sectioning (p 283). It forms part of ltsect.dtx, the bundle containing all sectioning commands for LaTeX. It even includes a pseudo-code interpretation of the actual macro. Here is an extract: The ...
You are missing two very important \expandafters. The normally used, "correct" code is: \def\ifeq#1#2{% \ifx#1#2\relax \expandafter\@firstoftwo \else \expandafter\@secondoftwo \fi } The difference to a macro which uses #3 and #4 is that the if-statement is fully processed before the first or second of the next two arguments is processed. This ...
This is the kind of question that is fairly difficult to answer after 20 years. But basically because it was not considered so important. One has to remember that the implementation of 2e was guided by space and speed restrictions, so initially we spent an enormous amount of time optimizing the kernel for speed and space. 2e introduced a lot of new code ...
The use of scratch registers and macros in TeX/LaTeX date back to the time when it was absolutely essential to conserve memory consumption, because TeX's memory (both in terms of token/macro memory as well as number of available registers) was very limited and one could easily run out of space just by loading a few packages on top of the main format. Traces ...
The LaTeX kernel allocates some scratch registers and defines a scratch conditional. The complete list is \newcount\@tempcnta \newcount\@tempcntb \newif\if@tempswa \newdimen\@tempdima \newdimen\@tempdimb \newdimen\@tempdimc \newbox\@tempboxa \newskip\@tempskipa \newskip\@tempskipb \newtoks\@temptokena Notice that for the first two \newcount is used, ...
Any command may be used as an environment, and if surrounding large blocks of text it is often convenient, so {\small zzz\par} and \begin{small}% zzzz \end{small} are more or less equivalent. Note however that unlike \small, spaces after \begin{small} are not dropped due to normal TeX tokenization rules. Also you almost always need a \par or blank ...
\@ifnextchar\bgroup See the definition of \input in the kernel: \def\input{\@ifnextchar\bgroup\@iinput\@@input} where \@iinput is able to cope with \input{file} and \@@input is an alias for the primitive \input so \input file will be processed as well. \@ifnextchar has three arguments: (1) the token to be looked for, (2) what to do if it's found, and ...
Actually, the “LaTeXbook” (properly “LaTeX. A Document Preparation System”, by Leslie Lamport) endorses the use of such environments: at the end of page 27 we find: Every declaration has a corresponding environment of the same name (minus the \ character). Typing \begin{em} ... \end{em} is equivalent to typing {\em ... }. In particular, the ...
It was when the code was written, but is not now (in my opinion). The current LaTeX2e kernel was released in 1992 and carries forward a lot of material from LaTeX2.09. Even with these optimisations and the old 'autoload' system, there were a lot of systems that LaTeX was too big for on release. So looked at in the early 1990s this was entirely sensible. I'd ...
This is abbreviated notation for a 1pt dimension, as included in latex.ltx and therefore common to all LaTeX documents: \newdimen\p@ \p@=1pt % this saves macro space and time \newdimen\z@ \z@=0pt % can be used both for 0pt and 0 As such, you can use it in calculation with dimensions, such that 60\p@ translates to 60 times 1pt, or 60pt. In a similar ...
Let's assume you have \include{fileA} \include{fileB} If there is no \clearpage when fileA ends and TeX starts reading fileB, there might be a \write relative to fileA pending and it will get lost: the \write commands relative to \label are performed at shipout, not immediately. When the next shipout occurs, the fileA.aux file will have already been ...
A macro is any control sequence (or active character) defined with \def, \gdef, \edef or \xdef. TeX macros support up to nine arguments, which contradicts your statement about it not having the notion of arguments to control sequences. The most common usage of arguments is in the “undelimited” form; say that you do \def\foo#1{--#1--} so \foo takes an ...
You can use any LaTeX command defined by \newcommand and wrap a \begin{}...\end{} pair around it; however, it's not recommended, since this is not an environment. The interesting thing is, however, that grouping works anyway, but this is a consequence of \begin...\end. There are no fontsize environments like \begin{small} etc, as there aren't ...
\@let@token is assigned by \futurelet to the next token after the \@ifnextchar, i.e. it is the next character. The \futurelet\@let@token\@ifnch code means "assign the next token to \@let@token and then process \@ifnch". Inside \@ifnch the \@let@token macro is tested if it is equal to the (first token of the) first argument of \@ifnextchar, i.e. \reserved@d ...
I guess texdoc source2e is the answer to many of your questions. Both the macro's you mentioned are discussed in the manual, which is basically a documented source of LaTeX2e. As for learning these commands, it's reading existing packages, reading the documentation and reading tex.stackexchange.com. At least it is for me :).
Since the fixltx2e package ends with \MakeRobust\( \MakeRobust\) \MakeRobust\[ \MakeRobust\] \MakeRobust\makebox \MakeRobust\savebox \MakeRobust\framebox \MakeRobust\parbox \MakeRobust\rule \MakeRobust\raisebox it is probably safe to say that it was simply a mistake that was left around because the kernel can't change at this point. It's also worth ...
\p@ is a LaTeX2e kernel dimension, equal to 1 pt. It is used as this saves some tokens in the kernel, and also makes it possible to write things like 0\p@, which TeX interprets as 0 times 1 pt. So written out 'long hand' the definition is '1pt'. (That token-saving was really important when LaTeX2e was written: \p@ is one token, 1pt is three. Today, it is ...
The correct syntax for \ifeq should be \def\ifeq#1#2{% \ifx#1#2\relax \expandafter\@firstoftwo \else \expandafter\@secondoftwo \fi } Martin has explained well the reasons for preferring this approach than the "four argument" one. What about \@firstofone? Well it may be used for stripping a pair of braces around an argument, for example. Another ...
\documentclass{book}
\usepackage{caption}
\makeatletter
\def\InFloat{\ifnum\@floatpenalty<0\relax in float \else outside float \fi}
\makeatother
\begin{document}
\begin{figure}\relax
[figure]: \InFloat
\end{figure}
\begin{center}
[center]: \InFloat
\end{center}
\captionsetup{type=figure}
\begin{center}
[center]: \InFloat
\end{center}
...
It essentially checks if the token after \CJ@title is [ or not, in order to pass to the main command the same argument twice, if [ is not found. The command \CJ@title must be defined by \def\CJ@title[#1]#2{...} so that, with a call such as \title{xyz} the expansions will be (successively) \@dblarg\CJ@title{xyz} ...<some complex action>... ...
These are LaTeX kernel macros that are associated with environments. In simple terms anything that is enclosed with a \begin{foo}...\end{foo} is an environment. For example a figure or a table. Every time you insert a table a counter is incremented. This counter, let us call it foo, has an associated macro named \p@foo. This macro expands to a printed ...
The definition is found in latex.ltx: % latex.ltx, line 481: \def\m@th{\mathsurround\z@} OK, it's just an abbreviation for \mathsurround\z@ which in turn is an abbreviation for \mathsurround=0pt Now, what's \mathsurround? The syntax says it's a parameter, which has a length as value. It contains the amount of blank space that's inserted at either side ...
https://math-faq.com/how-long-would-it-take-to-double-the-population/ | # How Long Would It Take to Double the Population?
In a previous FAQ, I looked at an equation for modeling the growth of the population on Gilligan’s Island,
A = 7 (1+0.02)^n
This equation assumed that the initial population was 7 in 1964 and grew by 2% in each year after that. That means that if you put in n = 14, you will have multiplied the initial population by 1.02 a total of 14 times (1978).
How many years would you need to go to double the original population?
Since the original population was 7, double that number is 14. So if I put that number into A, I need to solve
14 = 7 (1.02)^n
for the number of years n after 1964.
Start by dividing both sides by 7 to isolate the piece with the exponent in it,
2 = (1.02)^n
Now take the logarithm of both sides,
log(2) = log((1.02)^n)
The reason we need to use logarithms is that a power inside a logarithm may be moved out front as a multiplier,
log(2) = n log(1.02)
Now divide both sides by log(1.02),
log(2)/log(1.02) = n
Using a calculator to compute the logs gives a value of n of about 35. So in 1964 + 35, or 1999, the population would have increased from 7 to 14 according to the equation.
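The same arithmetic in Python, for anyone who wants to check the rounding:

```python
import math

n = math.log(2) / math.log(1.02)
print(n)               # about 35.003 years
print(7 * 1.02 ** 35)  # about 14.0, so the population has doubled 35 years on
```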
http://thousandfold.net/cz/2012/07/03/problem-another-inequality/ | # Problem: another inequality
Let $$\mat{F},\mat{G}$$ be positive definite matrices (do they have to be definite?) and $$0 \leq p \leq 2.$$ Show that
$\tr\left(\mat{F}^{p/2} - \mat{G}^{p/2}\right) \leq \frac{p}{2} \tr\left((\mat{F} - \mat{G})\mat{G}^{p/2 - 1}\right).$
I’m working on it. See if you can get a proof before I do 🙂
• Pete
Move terms around, differentiate with respect to F (the derivative is 0 when F=G). Use concavity of tr(F^(p/2)) to show it's optimal. I think F can be semidefinite.
• swiftset
That seems to work. Good job Peter! You win a pumpkin.
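For readers who want the concavity step spelled out, here is a compact version of Pete's argument in LaTeX (my notation, not the post's; it assumes the standard facts that $t \mapsto t^{q}$ is operator concave for $0 \le q \le 1$, and that the gradient of $F \mapsto \operatorname{tr}(F^{q})$ at $G$ is $qG^{q-1}$):

```latex
% First-order condition for the concave map f(F) = tr(F^q), q = p/2:
% f(F) <= f(G) + <grad f(G), F - G> for positive definite F and G, i.e.
% (needs amsmath for \operatorname)
\[
  \operatorname{tr}\bigl(F^{p/2}\bigr) - \operatorname{tr}\bigl(G^{p/2}\bigr)
  \;\le\; \frac{p}{2}\,\operatorname{tr}\!\bigl((F-G)\,G^{p/2-1}\bigr).
\]
% F may be taken merely positive semidefinite by continuity, but G should stay
% definite so that G^{p/2-1} makes sense when p < 2.
```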
https://chesterrep.openrepository.com/handle/10034/6981/browse?view=list&rpp=20&etal=-1&sort_by=1&type=title&starts_with=H&order=ASC
• #### Halanay-type theory in the context of evolutionary equations with time-lag
We consider extensions and modifications of a theory due to Halanay, and the context in which such results may be applied. Our emphasis is on a mathematical framework for Halanay-type analysis of problems with time lag and simulations using discrete versions or numerical formulae. We present selected (linear and nonlinear, discrete and continuous) results of Halanay type that can be used in the study of systems of evolutionary equations with various types of delayed argument, and the relevance and application of our results is illustrated, by reference to delay-differential equations, difference equations, and methods.
• #### High order algorithms for numerical solution of fractional differential equations
In this paper, two novel high order numerical algorithms are proposed for solving fractional differential equations where the fractional derivative is considered in the Caputo sense. The total domain is discretized into a set of small subdomains and then the unknown functions are approximated using the piecewise Lagrange interpolation polynomial of degree three and degree four. The detailed error analysis is presented, and it is analytically proven that the proposed algorithms are of orders 4 and 5. The stability of the algorithms is rigorously established and the stability region is also achieved. Numerical examples are provided to check the theoretical results and illustrate the efficiency and applicability of the novel algorithms.
• #### A high order numerical method for solving nonlinear fractional differential equation with non-uniform meshes
We introduce a high-order numerical method for solving nonlinear fractional differential equations with non-uniform meshes. We first transform the fractional nonlinear differential equation into the equivalent Volterra integral equation. Then we approximate the integral by using the quadratic interpolation polynomials. On the first subinterval $[t_{0}, t_{1}]$, we approximate the integral with the quadratic interpolation polynomials defined on the nodes $t_{0}, t_{1}, t_{2}$ and in the other subinterval $[t_{j}, t_{j+1}], j=1, 2, \dots N-1$, we approximate the integral with the quadratic interpolation polynomials defined on the nodes $t_{j-1}, t_{j}, t_{j+1}$. A high-order numerical method is obtained. Then we apply this numerical method with the non-uniform meshes with the step size $\tau_{j}= t_{j+1}- t_{j}= (j+1) \mu$ where $\mu= \frac{2T}{N (N+1)}$. Numerical results show that this method with the non-uniform meshes has a higher convergence order than the standard numerical methods obtained by using the rectangle and the trapezoid rules with the same non-uniform meshes.
• #### High-Order Numerical Methods for Solving Time Fractional Partial Differential Equations
In this paper we introduce a new numerical method for solving time fractional partial differential equations. The time discretization is based on Diethelm's method where the Hadamard finite-part integral is approximated by using the piecewise quadratic interpolation polynomials. The space discretization is based on the standard finite element method. The error estimates with the convergence order O(τ^(3−α) + h^2), 0 < α < 1, are obtained.
• #### A high-order scheme to approximate the Caputo fractional derivative and its application to solve the fractional diffusion wave equation
A new high-order finite difference scheme to approximate the Caputo fractional derivative $\frac{1}{2} \big ( \, _{0}^{C}D^{\alpha}_{t}f(t_{k})+ \, _{0}^{C}D^{\alpha}_{t}f(t_{k-1}) \big ), k=1, 2, \dots, N,$ with the convergence order $O(\Delta t^{4-\alpha}), \, \alpha\in(1,2)$ is obtained when $f^{\prime \prime \prime} (t_{0})=0$, where $\Delta t$ denotes the time step size. Based on this scheme we introduce a finite difference method for solving fractional diffusion wave equation with the convergence order $O(\Delta t^{4-\alpha} + h^2)$, where $h$ denotes the space step size. Numerical examples are given to show that the numerical results are consistent with the theoretical results.
• #### A higher order numerical method for time fractional partial differential equations with nonsmooth data
Gao et al. (2014) introduced a numerical scheme to approximate the Caputo fractional derivative with the convergence rate $O(k^{3-\alpha}), 0< \alpha <1$ by directly approximating the integer-order derivative with some finite difference quotients in the definition of the Caputo fractional derivative, see also Lv and Xu (2016), where $k$ is the time step size. Under the assumption that the solution of the time fractional partial differential equation is sufficiently smooth, Lv and Xu (2016) proved by using the energy method that the corresponding numerical method for solving time fractional partial differential equation has the convergence rate $O(k^{3-\alpha}), 0< \alpha <1$ uniformly with respect to the time variable $t$. However, in general the solution of the time fractional partial differential equation has low regularity and in this case the numerical method fails to have the convergence rate $O(k^{3- \alpha}), 0 < \alpha <1$ uniformly with respect to the time variable $t$. In this paper, we first obtain a similar approximation scheme to the Riemann-Liouville fractional derivative with the convergence rate $O(k^{3- \alpha}), 0 < \alpha <1$ as in Gao et al. (2014) by approximating the Hadamard finite-part integral with the piecewise quadratic interpolation polynomials. Based on this scheme, we introduce a time discretization scheme to approximate the time fractional partial differential equation and show by using Laplace transform methods that the time discretization scheme has the convergence rate $O(k^{3- \alpha}), 0 < \alpha <1$ for any fixed $t_{n}>0$ for smooth and nonsmooth data in both homogeneous and inhomogeneous cases. Numerical examples are given to show that the theoretical results are consistent with the numerical results.
• #### Higher order numerical methods for solving fractional differential equations
In this paper we introduce higher order numerical methods for solving fractional differential equations. We use two approaches to this problem. The first approach is based on a direct discretisation of the fractional differential operator: we obtain a numerical method for solving a linear fractional differential equation with order 0 < α < 1. The order of convergence of the numerical method is O(h^(3−α)). Our second approach is based on discretisation of the integral form of the fractional differential equation and we obtain a fractional Adams-type method for a nonlinear fractional differential equation of any order α >0. The order of convergence of the numerical method is O(h^3) for α ≥ 1 and O(h^(1+2α)) for 0 < α ≤ 1 for sufficiently smooth solutions. Numerical examples are given to show that the numerical results are consistent with the theoretical results.
• #### Higher Order Time Stepping Methods for Subdiffusion Problems Based on Weighted and Shifted Grünwald–Letnikov Formulae with Nonsmooth Data
Two higher order time stepping methods for solving subdiffusion problems are studied in this paper. The Caputo time fractional derivatives are approximated by using the weighted and shifted Grünwald-Letnikov formulae introduced in Tian et al. [Math. Comp. 84 (2015), pp. 2703-2727]. After correcting a few starting steps, the proposed time stepping methods have the optimal convergence orders $O(k^2)$ and $O(k^3)$, respectively for any fixed time $t$ for both smooth and nonsmooth data. The error estimates are proved by directly bounding the approximation errors of the kernel functions. Moreover, we also present briefly the applicabilities of our time stepping schemes to various other fractional evolution equations. Finally, some numerical examples are given to show that the numerical results are consistent with the proven theoretical results.
• #### High‐order ADI orthogonal spline collocation method for a new 2D fractional integro‐differential problem
We use the generalized L1 approximation for the Caputo fractional derivative, the second-order fractional quadrature rule approximation for the integral term, and a classical Crank-Nicolson alternating direction implicit (ADI) scheme for the time discretization of a new two-dimensional (2D) fractional integro-differential equation, in combination with a space discretization by an arbitrary-order orthogonal spline collocation (OSC) method. The stability of a Crank-Nicolson ADI OSC scheme is rigorously established, and an error estimate is also derived. Finally, some numerical tests are given.
• #### How do numerical methods perform for delay differential equations undergoing a Hopf bifurcation?
This paper discusses the numerical solution of delay differential equations undergoing a Hopf bifurcation. Three distinct and complementary approaches to the analysis are presented.
• #### Identification of the initial function for discretized delay differential equations
In the present work, we analyze a discrete analogue for the problem of the identification of the initial function for a delay differential equation (DDE) discussed by Baker and Parmuzin in 2004. The basic problem consists of finding an initial function that gives rise to a solution of a discretized DDE, which is a close fit to observed data.
• #### Identification of the initial function for nonlinear delay differential equations
We consider a 'data assimilation problem' for nonlinear delay differential equations. Our problem is to find an initial function that gives rise to a solution of a given nonlinear delay differential equation, which is a close fit to observed data. A role for adjoint equations and fundamental solutions in the nonlinear case is established. A 'pseudo-Newton' method is presented. Our results extend those given by the authors in [(C. T. H. Baker and E. I. Parmuzin, Identification of the initial function for delay differential equation: Part I: The continuous problem & an integral equation analysis. NA Report No. 431, MCCM, Manchester, England, 2004.), (C. T. H. Baker and E. I. Parmuzin, Analysis via integral equations of an identification problem for delay differential equations. J. Int. Equations Appl. (2004) 16, 111–135.)] for the case of linear delay differential equations.
• #### An implicit finite difference approximation for the solution of the diffusion equation with distributed order in time
In this paper we are concerned with the numerical solution of a diffusion equation in which the time order derivative is distributed over the interval [0,1]. An implicit numerical method is presented and its unconditional stability and convergence are proved. A numerical example is provided to illustrate the obtained theoretical results.
• #### Introducing delay dynamics to Bertalanffy's spherical tumour growth model
We introduce delay dynamics to an ordinary differential equation model of tumour growth based upon von Bertalanffy's growth model, a model which has received little attention in comparison to other models, such as Gompertz, Greenspan and logistic models. Using existing, previously published data sets we show that our delay model can perform better than delay models based on a Gompertz, Greenspan or logistic formulation. We look for replication of the oscillatory behaviour in the data, as well as a low error value (via a Least-Squares approach) when comparing. We provide the necessary analysis to show that a unique, continuous, solution exists for our model equation and consider the qualitative behaviour of a solution near a point of equilibrium.
• #### Malliavin Calculus for the stochastic Cahn-Hilliard/Allen-Cahn equation with unbounded noise diffusion
The stochastic partial differential equation analyzed in this work is motivated by a simplified mesoscopic physical model for phase separation. It describes pattern formation due to adsorption and desorption mechanisms involved in surface processes, in the presence of a stochastic driving force. This equation is a combination of Cahn-Hilliard and Allen-Cahn type operators with a multiplicative, white, space-time noise of unbounded diffusion. We apply Malliavin calculus, in order to investigate the existence of a density for the stochastic solution u. In dimension one, according to the regularity result in [5], u admits continuous paths a.s. Using this property, and inspired by a method proposed in [8], we construct a modified approximating sequence for u, which properly treats the new second order Allen-Cahn operator. Under a localization argument, we prove that the Malliavin derivative of u exists locally, and that the law of u is absolutely continuous, establishing thus that a density exists.
• #### Mathematical modelling and numerical simulations in nerve conduction
In the present work we analyse a functional differential equation, sometimes known as the discrete FitzHugh-Nagumo equation, arising in nerve conduction theory.
• #### Mathematical models of DNA methylation dynamics: Implications for health and ageing
DNA methylation status is a key epigenetic process which has been intimately associated with gene regulation. In recent years growing evidence has associated DNA methylation status with a variety of diseases including cancer, Alzheimer's disease and cardiovascular disease. Moreover, changes to DNA methylation have also recently been implicated in the ageing process. The factors which underpin DNA methylation are complex, and remain to be fully elucidated. Over the years mathematical modelling has helped to shed light on the dynamics of this important molecular system. Although the existing models have contributed significantly to our overall understanding of DNA methylation, they fall short of fully capturing the dynamics of this process. In this paper we develop a linear and nonlinear model which captures more fully the dynamics of the key intracellular events which characterise DNA methylation. In particular the outcomes of our linear model result in gene promoter specific methylation levels which are more biologically plausible than those revealed by previous mathematical models. In addition, our non-linear model predicts DNA methylation promoter bistability which is commonly observed experimentally. The findings from our models have implications for our current understanding of how changes to the dynamics which underpin DNA methylation affect ageing and health.
https://www.calctool.org/atmospheric-thermodynamics/air-pressure-at-altitude | # Air Pressure at Altitude Calculator
Created by Krishna Nelaturu
Last updated: Jun 24, 2022
Welcome to our air pressure at altitude calculator, which can help you calculate the atmospheric pressure at any elevation on Earth and temperature. In this article, we aim to break down this confounding concept by addressing some fundamentals:
• Why the atmospheric pressure at an altitude is different from air pressure at sea level?
• What is the atmospheric pressure formula?
• How to calculate atmospheric pressure with height?
Similar to how air pressure changes with altitude, we know that the pressure in a liquid changes with depth. Find out why using our hydrostatic pressure calculator.
🔎 In our discussion, we use the terms "air pressure", "atmospheric pressure" and "barometric pressure" as synonyms of each other.
## Why atmospheric pressure varies with altitude?
Atmospheric pressure is the pressure exerted by the weight of the Earth's atmosphere. It varies with altitude because the weight of air present above the point of measurement changes. And since air is denser at sea level than at higher altitudes, you can understand that the air pressure decreases as the altitude increases.
Air pressure is also proportional to the temperature. At higher temperatures, the air particles move faster and bump into each other more frequently, increasing the air pressure. Humidity is inversely proportional to the air pressure.
Ever wonder why air pressure at a higher elevation affects the boiling point? Learn more with our boiling point calculator.
## Atmospheric pressure formula to calculate barometric pressure
The atmospheric pressure at altitude is given by the following exponential formula:
$p = p_0 \cdot \text{exp} \left( -\frac{gM(h-h_0)}{RT}\right)$
where:
• $p$ is the atmospheric pressure at altitude $h$;
• $p_0$ is the atmospheric pressure at reference altitude $h_0$;
• $g$ is the gravitational acceleration, equal to 9.81 m/s² on Earth;
• $M$ is the molar mass of dry air, equal to 0.02896968 kg/mol;
• $R$ is the universal gas constant, equal to 8.31432 N·m/(mol·K); and
• $T$ is the temperature at the altitude $h$ in Kelvins.
If we choose sea level as the reference altitude, $h_0 = 0$, then $p_0$ is equal to the air pressure at sea level, 101.32 kPa. Rewriting our initial equation, we obtain the following atmospheric pressure formula:
$p = p_0 \cdot \text{exp} \left( -\frac{gMh}{RT}\right)$
It is vital to get the air temperature right here. You can use our temperature at altitude calculator if you need help.
## How do you calculate atmospheric pressure with height?
Using the pressure altitude formula we introduced in the previous section, let us calculate the air pressure for the cruising altitude of commercial flights, $35000 \text{ft}$ above sea level, and air temperature of $-51\degree \text{C}$.
\begin{align*} p &= 101.32 \cdot \text{exp} \left( -\frac{gM \cdot 35000\cdot 0.3048}{R\cdot (-51 + 273.15)}\right)\\ &= 101.32 \cdot \text{exp} \left( -\frac{gM \cdot 10668}{R\cdot 222.15}\right)\\ &= 19.6438 \text{ kPa} \end{align*}
The air pressure at this altitude is less than 20% of the air pressure at sea level. Understanding such drastic differences is what enables the safe design of aircraft!
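The worked example is easy to script. A small Python sketch of the same formula, with the constants exactly as given in this article:

```python
import math

G = 9.81           # gravitational acceleration, m/s^2
M = 0.02896968     # molar mass of dry air, kg/mol
R = 8.31432        # universal gas constant, N*m/(mol*K)

def pressure_kpa(h_m, temp_k, p0_kpa=101.32):
    """Barometric formula with sea level as the reference altitude."""
    return p0_kpa * math.exp(-G * M * h_m / (R * temp_k))

h = 35000 * 0.3048                    # 35,000 ft in metres
print(pressure_kpa(h, -51 + 273.15))  # about 19.64 kPa
```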
## How to use this air pressure at altitude calculator?
This air pressure at altitude calculator is a powerful tool at your disposal to calculate the barometric pressure at any altitude:
1. Provide the value of air pressure at sea level.
2. Enter the altitude at which you want to calculate the air pressure.
3. Input the air temperature at this altitude, and this tool will calculate the air pressure at this altitude.
You can do more - this tool is versatile enough for you to enter any three of the four unknowns and thereby calculate the fourth.
http://mathhelpforum.com/algebra/19075-equation.html | 1. Equation
I have two questions:
The first is that I can't seem to get the answer to the attached equation. The book says x = 1, or x = 1/2. Though this is probably because of my second question:
The second question is, is there a method to work out how to factorise? Sometimes it seems impossible by looking at the numbers. Is there more than one method to do this?
Thanks.
Let me just go through that working again:
$\frac{1}{x+1}-\frac{1}{2x+1}-\frac{1}{6}=0$
$6(2x+1)-6(x+1)-(x+1)(2x+1)=0$
$12x+6-(6x+6)-(2x^2+x+2x+1)=0$
$12x+6-6x-6-2x^2-x-2x-1=0$
$-2x^2+3x-1=0$
$2x^2-3x+1=0$
$2x^2-x-2x+1=0$
$x(2x-1)-(2x-1)=0$
$(x-1)(2x-1)=0$
$x=1$ or $x=\frac{1}{2}$
Some extra info:
If you look at the discriminant, $\Delta=b^2-4ac$, of a quadratic expression $ax^2+bx+c$, you can work out:
1. How many solutions it has
2. Whether or not it can be factored
For 1.
If $\Delta>0$, there are 2 real solutions.
If $\Delta = 0$, there is 1 real solution.
If $\Delta < 0$, there are no real solutions.
For 2.
If $\Delta$ is a perfect square, that is, if $\Delta=n^2,n \in Z$, then the expression can be factored, else it can't be.
For your original incorrect expression $2x^2-6x+1=0$, $\Delta=(-6)^2-4 \times 2 \times 1=28$, and since 28 isn't a perfect square it can't be factorised.
Whereas $2x^2-3x+1$ has discriminant $\Delta = (-3)^2-4 \times 2 \times 1=1$, which is a perfect square so it can.
There are other methods of finding roots: the first is completing the square, and the second is the quadratic formula.
The quadratic formula is very useful and you probably should remember it. For any expression $ax^2+bx+c=0$, the solutions are
$x=\frac{-b \pm \sqrt{b^2-4ac}}{2a}=\frac{-b \pm \sqrt{\Delta}}{2a}$
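As a quick check on both the discriminant test and the formula, a few lines of Python (added for illustration, not from the thread):

```python
import math

def solve_quadratic(a, b, c):
    disc = b * b - 4 * a * c          # discriminant b^2 - 4ac
    if disc < 0:
        return []                     # no real solutions
    root = math.sqrt(disc)
    return [(-b + root) / (2 * a), (-b - root) / (2 * a)]

print(solve_quadratic(2, -3, 1))   # [1.0, 0.5]; disc = 1, a perfect square
print(solve_quadratic(2, -6, 1))   # irrational roots; disc = 28 is not
```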
3. Completing the square
In the attached I don't understand the last line. Could you explain it please?
How can we replace the first 2 terms in that way? I can't quite see it.
Thank you.
4. $x^2+6x$
$=x^2+6x+\left(\frac{6}{2}\right)^2-\left(\frac{6}{2}\right)^2$
$=(x+3)^2-9$
To form a square with the first two terms of a quadratic expression, add and subtract the square of half the coefficient of the x term (line 2). Then factorise (line 3).
In general, for a monic quadratic:
$x^2+bx$
$=x^2+bx+\left(\frac{b}{2} \right)^2-\left(\frac{b}{2} \right)^2$
$=\left(x+\frac{b}{2} \right)^2-\left(\frac{b}{2} \right)^2$
If the leading coefficient is $a \neq 1$, factor it out first: $ax^2+bx = a\left(x+\frac{b}{2a}\right)^2-\frac{b^2}{4a}$.
http://mathhelpforum.com/advanced-applied-math/4935-42-0-kg-block-ice.html | # Math Help - A 42.0 kg block of ice
1. ## A 42.0 kg block of ice
A 42.0 kg block of ice slides down a plane with an incline of 34 degrees. Assuming friction is negligible, what is the acceleration of the block down the incline?
2. Originally Posted by Refujoi
Set up your Free-Body Diagram. I am going to assume the incline is down and to the right. For a coordinate system I am going to say that +x is down the incline and +y is perpendicular to and coming out of the plane of the incline.
There are two forces in the FBD: the weight, w, acting straight downward, and the normal force, N, acting in the +y direction.
The only force having a component in the x direction is the weight, so:
$\sum F_x = w \sin \theta = ma$
where $\theta$ is the angle of incline. (Note that it is not cosine!)
Since w = mg we may solve this equation for a and we get
$a = g \sin \theta$.
(Note that the acceleration is less than g as it should be.)
-Dan
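Numerically (a quick sketch, taking g = 9.8 m/s²):

```python
import math

g = 9.8                     # m/s^2
theta = math.radians(34)    # angle of incline
print(g * math.sin(theta))  # about 5.48 m/s^2; the 42.0 kg mass cancels out
```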
http://www.chemeurope.com/en/encyclopedia/Compton_edge.html
Compton edge
In spectrophotometry, the Compton edge is a feature of the spectrograph that results from the Compton scattering in the scintillator or detector. When a gamma-ray scatters off the scintillator but escapes, only a fraction of its energy is registered by the detector. This leads to a spectrum of gamma-rays in the data that is not really there. The highest energy that occurs from this process is the Compton edge.
Background
In a Compton scattering process, an incident photon collides with an electron in the scintillator. The amount of energy exchanged varies with angle, and is given by the formula:
$\frac{1}{E^\prime} - \frac{1}{E} = \frac{1}{m_e c^2}\left(1-\cos \theta \right)$
or $E^\prime = \frac{E}{1 + \frac{(1 - \cos \theta)E}{m_e c^2}}$
• E is the energy of the incident photon.
• E' is the energy of the outgoing photon, which escapes the detector.
• me is the mass of the electron.
• c is the speed of light.
• θ is the angle of deflection for the photon.
The amount of energy transferred to the scintillator varies with the angle of deflection. As θ approaches zero, none of the energy is transferred. The maximum amount of energy is transferred when θ approaches 180 degrees.
$E_T = E - E^\prime$
$E_{Compton} = E_T (max) = E \frac{2E}{m_e c^2 + 2E}$
It is impossible for the photon to transfer any more energy via this process, hence there is a sharp cutoff at this energy giving rise to the name Compton edge.
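Plugging the edge formula into Python, using the 662 keV gamma line of Cs-137 as a standard test value (the script is an added illustration, not part of the original article):

```python
ME_C2 = 511.0   # electron rest energy m_e c^2 in keV

def compton_edge(e_kev):
    """Maximum energy transferred to the electron (theta = 180 degrees)."""
    return e_kev * 2 * e_kev / (ME_C2 + 2 * e_kev)

print(compton_edge(662.0))   # about 477.7 keV, the familiar Cs-137 Compton edge
```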
https://www.physicsforums.com/threads/need-help-daughter-has-trig-exam-tomorrow.172862/ | # Need help -daughter has trig exam tomorrow
1. Jun 4, 2007
### crm
1. The problem statement, all variables and given/known data
A surveyor on the ground takes two readings of the angle of elevation of the top of a tower. From two points 150' apart, the measures are 50 and 70 degrees. Find the tower's height to the nearest foot.
2. Relevant equations
3. The attempt at a solution
2. Jun 4, 2007
### ice109
law of cosines
3. Jun 4, 2007
### crm
can you work through to the solution?
4. Jun 4, 2007
### crm
My daughter says that to use the law of cosines you need to know two sides and an angle. In the above problem, we only know one side. Any comments?
5. Jun 4, 2007
### chroot
Staff Emeritus
Have you drawn a picture? You cannot just go blindly applying rules without understanding the problem.
If you draw a picture, you'll see that there are two triangles. You know all the angles of both, and one side of one of them. From there, you can use the law of sines to find any other side you want.
- Warren
6. Jun 4, 2007
### crm
Yes, we have drawn a picture, but we do not know any sides. We know that the entire length of the two triangles together is 150, but how do we determine the length of the bases? Are they halved? and if so, how do you know? Because if they are not halved then there is no way to find the side....
7. Jun 4, 2007
### chroot
Staff Emeritus
Draw a small right triangle. This represents the measurement made by the surveyor when he's close to the tower. He measures the angle to be 70 degrees in this position. Since it's a right triangle, you know all the angles of that triangle. You do not know any sides.
Imagine that the surveyor walks 150 feet backward away from the tower. When he looks at the tower again, he measures an angle of 50 degrees. Draw another triangle adjacent to this one. It will not be a right triangle. The base of this triangle is 150 feet.
(The entire length of the triangles together is not 150 feet; only the base of this second triangle is 150 feet. You do not know the sum of the bases.)
Simple inspection shows that you know all of the angles of this second triangle, and you know one of its sides. You can use the law of sines to find any other side. For example, use the law of sines to find the hypotenuse of the first triangle. From there, use the law of sines again to find the height of the tower.
- Warren
Last edited: Jun 4, 2007
8. Jun 4, 2007
### crm
Okay, so the first triangle is a right triangle with the angle at the top being 50 degrees and the angle at the bottom being 40 degrees. The second triangle has a 20 degree angle at the top (70 - 50) and angles at the bottom of 140 and 20 degrees. And the length at the bottom is 150 feet.
We solve for the longest side with the formula 150/ sin 20 = X/sin 140, or 282. We then solve for the height of the tower with the formula 282/sin 90 = X/sin 20, or 96.
Unfortunately, the answer key says the correct answer is 360, so we're way off.
Any further thoughts?
9. Jun 4, 2007
### chroot
Staff Emeritus
Your first triangle is incorrect. The 50 degree angle is at the bottom. The question states "angles of elevation" are 50 and 70 degrees. An angle of elevation means an angle relative the horizontal.
- Warren
10. Jun 4, 2007
### Sleek
[Sleek attached a diagram of the two elevation triangles.]
Last edited by a moderator: May 2, 2017
11. Jun 4, 2007
### chroot
Staff Emeritus
Thanks for the diagram, Sleek. I didn't have time to make one.
- Warren
12. Jun 5, 2007
### crm
Warren and Sleek:
Thank you so much!
crm
13. Jun 5, 2007
### TheoMcCloskey
crm -- just curious, what answer did you get --
I worked this out two ways using a figure very similar to Sleek's and came up with 315.7 feet - not 360 feet.
I use tangents, eg, (ref Sleek's figure)
Tan(70 deg) = h / x
Tan(50 deg) = h / (x+150)
Solve for h
$$h = \frac{T_{50} \cdot T_{70}}{T_{70} - T_{50}} \cdot 150$$
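Evaluating that expression numerically (a Python sketch, added for illustration):

```python
import math

t50 = math.tan(math.radians(50))
t70 = math.tan(math.radians(70))
print(150 * t50 * t70 / (t70 - t50))   # about 315.7 feet
```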
14. Jun 5, 2007
### chroot
Staff Emeritus
I also got 315.7 feet, by the way (checked in several different ways).
- Warren
15. Jun 6, 2007
### crm
Warren and Sleek: Thank you so much for your help on Monday night. I found the diagram and the explanation of the angles of elevation. I found them when I got up Tuesday morning and provided them to my daughter. She reviewed this prior to her exam. She later advised that she had an almost identical question in her exam and was able to handle it. Your help made it possible. I'll ask her what answer she got when she worked through the problem, and let you know. crm
16. Jun 6, 2007
### chroot
Staff Emeritus
Glad to know we could help, crm.
- Warren
http://pnylab.com/papers/vp3/main.html
AMS-DIMACS Book Chapter (to appear) and Proc. 11th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA) 2000
# Locally Lifting the Curse of Dimensionality for Nearest Neighbor Search
Peter N. Yianilos
Abstract: Our work gives a positive result for nearest neighbor search in high dimensions. It establishes that radius-limited search is, under particular circumstances, free of the *curse of dimensionality*. It further illuminates the nature of the curse, and may therefore someday contribute to improved general purpose algorithms for high dimensions and for general metric spaces.
We consider the problem of nearest neighbor search in the Euclidean hypercube $[-1,+1]^d$ with uniform distributions, and the additional natural assumption that the nearest neighbor is located within a constant fraction $R$ of the maximum interpoint distance in this space, i.e. within distance $2 R \sqrt{d}$ of the query.
We introduce the idea of *aggressive pruning* and give a family of practical algorithms, an idealized analysis, and a description of experiments. Our main result is that search complexity, measured in terms of $d$-dimensional inner product operations, is i) strongly sublinear with respect to the data set size $n$ for moderate $R$, and ii) asymptotically, and as a practical matter, independent of dimension.
Given a random data set, a random query within distance $2 R \sqrt{d}$ of some database element, and a randomly constructed data structure, the search succeeds with a specified probability, which is a parameter of the search algorithm. On average a search performs $\approx n^\rho$ distance computations, where $n$ is the number of points in the database and $\rho < 1$ is calculated in our analysis. Linear and near-linear space structures are described, and our algorithms and analysis are free of large hidden constants, i.e. the algorithms perform far less work than exhaustive search, both in theory and in practice.
Keywords: Nearest neighbor search, Vantage point tree (vp-tree), Kd-tree, Computational geometry, Metric space.
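The abstract's setting is easier to see with code. Below is a minimal, generic sketch of radius-limited search in a vantage-point tree using standard triangle-inequality pruning; it is not the paper's aggressive-pruning algorithm, and every name in it is illustrative.

```python
import math
import random

def dist(p, q):
    # Euclidean distance, e.g. between points of the hypercube [-1, +1]^d
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

class VPNode:
    def __init__(self, points):
        self.vantage = points[0]
        rest = points[1:]
        self.mu = self.inside = self.outside = None
        if not rest:
            return
        dists = sorted(dist(self.vantage, p) for p in rest)
        self.mu = dists[len(dists) // 2]  # median distance splits the children
        near = [p for p in rest if dist(self.vantage, p) <= self.mu]
        far = [p for p in rest if dist(self.vantage, p) > self.mu]
        self.inside = VPNode(near) if near else None
        self.outside = VPNode(far) if far else None

def radius_search(node, query, radius, out):
    if node is None:
        return
    d = dist(query, node.vantage)
    if d <= radius:
        out.append(node.vantage)
    if node.mu is None:
        return
    # Triangle inequality: skip a child when no point stored under it
    # can possibly lie within `radius` of the query.
    if d - radius <= node.mu:
        radius_search(node.inside, query, radius, out)
    if d + radius > node.mu:
        radius_search(node.outside, query, radius, out)

# Usage: uniform points in [-1, +1]^d and a query limited to radius 2R·sqrt(d),
# mirroring the abstract's setup.
d, n, R = 16, 1000, 0.1
points = [tuple(random.uniform(-1, 1) for _ in range(d)) for _ in range(n)]
tree = VPNode(points)
hits = []
radius_search(tree, points[0], 2 * R * math.sqrt(d), hits)
```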
https://philpapers.org/rec/KIRTCR
# The contraction rule and decision problems for logics without structural rules
Studia Logica 50 (2): 299–319 (1991)
# Abstract
This paper shows the role of the contraction rule in decision problems for logics weaker than intuitionistic logic that are obtained by deleting some or all of the structural rules. It is well known that for such a predicate logic $L$, if $L$ does not have the contraction rule then it is decidable. In this paper, it will be shown first that the predicate logic $FL_{ec}$, with the contraction and exchange rules but without the weakening rule, is undecidable, while the propositional fragment of $FL_{ec}$ is decidable. On the other hand, it will be remarked that logics without the contraction rule are still decidable even if the language contains function symbols.
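For readers unfamiliar with the terminology, the three structural rules at issue can be written in standard sequent notation (my rendering, not the paper's own presentation):

$$\frac{\Gamma, A, A \vdash C}{\Gamma, A \vdash C}\ (\text{contraction}) \qquad \frac{\Gamma, A, B, \Delta \vdash C}{\Gamma, B, A, \Delta \vdash C}\ (\text{exchange}) \qquad \frac{\Gamma \vdash C}{\Gamma, A \vdash C}\ (\text{weakening})$$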