https://www.shaalaa.com/question-bank-solutions/solve-x-x-3-x-4-x-5-x-6-10-3-x-4-6-solutions-quadratic-equations-factorization_1885
# Solve for x: (x-3)/(x-4)+(x-5)/(x-6)=10/3; x!=4,6 - Mathematics

#### Solution

(x-3)/(x-4) + (x-5)/(x-6) = 10/3

[(x-3)(x-6) + (x-5)(x-4)] / ((x-4)(x-6)) = 10/3

(x^2-9x+18 + x^2-9x+20) / ((x-4)(x-6)) = 10/3

(2x^2-18x+38) / ((x-4)(x-6)) = 10/3

2(x^2-9x+19) / (x^2-10x+24) = 10/3

(x^2-9x+19) / (x^2-10x+24) = 5/3

3(x^2-9x+19) = 5(x^2-10x+24)

3x^2-27x+57 = 5x^2-50x+120

2x^2-23x+63 = 0

(x-7)(2x-9) = 0

x = 7 or x = 9/2

Concept: Solutions of Quadratic Equations by Factorization
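As a sanity check (not part of the original solution), the two roots can be verified symbolically; a minimal Python sketch using sympy:

```python
# Verify the roots of (x-3)/(x-4) + (x-5)/(x-6) = 10/3 (requires sympy).
from sympy import symbols, Eq, Rational, solve

x = symbols("x")
equation = Eq((x - 3)/(x - 4) + (x - 5)/(x - 6), Rational(10, 3))
print(solve(equation, x))  # roots 9/2 and 7 (order may vary)
```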
2021-10-27 19:32:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3086954951286316, "perplexity": 6430.011565926652}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588242.22/warc/CC-MAIN-20211027181907-20211027211907-00533.warc.gz"}
https://www.gamedev.net/forums/topic/656652-a-rose-by-any-other-name/
# A Rose by Any Other Name...

I figure, it's my turn to try my hand at a coding horror. This one had me stumped for a couple better-spent-elsewhere hours:

```
menu_prompt:
    call WriteString
    call WriteChar
    call Crlf
    cmp  al, 'p'           ; If 'p' was entered...
    je   print             ; ...print nodes
    ...
    mov  edx, OFFSET error ; Print invalid option message
    call WriteString
print:
    push cur
    call Print
    ...
done:
    call WaitMsg           ; hold display window open
    exit
main ENDP

.data
...
.code
...
Print PROC
    ...
Print ENDP
```

I was trying to figure out why the values of the cur and head pointers weren't being stored on the stack before the call to the Print procedure; the program repeatedly generated a memory access violation when it tried to dereference the pointer, and I'd be sitting there rooting around the stack frame, and the values are nowhere to be found, like they've never been pushed on the stack before the call. Huh. Visual inspection of the code provided no answers. If you see it already, you're better read than I am on this assembler.

The answer? (MASM transforms all identifiers to uppercase by default, for case-insensitivity! Thus, my je statement was just jumping to the Print procedure, without pushing its parameters on the stack, instead of jumping to the print label. I have no idea why the assembler wouldn't raise an error or a warning for that, but I've learned my lesson [just started using MASM, in particular, not long ago].)

##### Share on other sites

Writing assembly in this day and age might be the real WTF

It's for a class I have to take; I'd really rather not have to take it, but it's required.

##### Share on other sites

It's for a class I have to take; I'd really rather not have to take it, but it's required.

At least it's x86 assembly; the assembly class I had taught us 8085 assembly.

##### Share on other sites

Writing assembly in this day and age might be the real WTF

Hey, it's not entirely useless if you want to play with microcontrollers. (Ok, most of them have C compilers these days...ahem...)

##### Share on other sites

Assuming that case matters and that you can safely use "Print" and "print" seems to be a bigger WTF to me. Most of the world has long since moved on from those days, and even in places where it still does matter (such as C) seeing this kind of thing would make me wince. The sooner that the rest of the world wises up and ditches case-sensitivity the better. I long for the day when I'm writing or reading code and I no longer have to fret over whether it was really intended to use "E" or "e" (simplistic examples to illustrate a point) in the current block (particularly if they're the same type).

##### Share on other sites

Yeah, except when you are debugging someone else's application, like, I don't know, something which has been written by 20 unpaid undergraduates over the course of 5 years (true story, which I guess is more common than I think). Have fun with that in a case-sensitive language. Because, yes, you'll find plenty of Objects, oBjects, ObJeCts and so on (not to mention my all-time favorite: little i and big I as loop counters in the same complex nested loop) littered all over the code base, making the whole thing a mess to debug.

I never run into that problem. Constant? All upper, with underscores. Composite type? Title case. Function or variable? Camel case.
Dealing with others' code is behind a layer, so that the differing styles are contained.

So in fact you run into this problem all the time, but you're so used to working with it through a convention that you don't notice it. And hoping that everyone (current and future), everywhere, is using the exact same convention when you're coding with others. :p

##### Share on other sites

So in fact you run into this problem all the time, but you're so used to working with it through a convention that you don't notice it. And hoping that everyone (current and future), everywhere, is using the exact same convention when you're coding with others.

On the contrary. This is making use of an available feature to one's advantage. Code using consistent capitalization rules to distinguish identifier families (such as constants, function names, variables...) is (usually) much easier to read and understand than if this is not encoded at all, or encoded using something like "Hungarian" notation (which is an abomination; think of the LPWTF stuff in the Windows headers, for example). Using capitalization to your advantage is in my opinion a very good compromise between "simply not knowing" and "knowing, but having to parse unintelligible letter salad". Consider something like szName (which is already not that great compared to name, if you ask me; it's harder to read than necessary, impossible to pronounce, and what if you want to change the type one day?). Now you also wish to encode the fact that this is a local variable and not a function or class. So it will become something like varSzName, or szName_var. It's not improving the legibility (and intelligibility) or pronounceability of your code.

##### Share on other sites

Sorry, I may explain myself poorly sometimes; English isn't my primary language and I forget that my jokes tend to fall flat. Yes, I agree with that: having some sort of convention (details varying from coder to coder) to name and differentiate variables / functions / ... is of course a good thing, no matter if the language is case-sensitive or not. I am not disputing that in the slightest.

The point I was trying to raise, poorly I guess, is that in an environment with multiple coders over a long period of time, case-sensitive languages have a tendency (in my experience, which is not that big, true) to lead to the amusing stuff I talked about in the first part of my post. It's not the fault of the language itself, but that combined with "meh" coders makes the work of the debugger (me, in that case) tenfold harder, as I have to check the syntax and logic (that's standard) and also triple-check that each of those successive coders, who were no longer there, was really calling Save and not save or some variant from a past year used for a totally different purpose. But hey, how could he know that someone else, 2 years earlier, used that name to save something in a part of the code he never touched? The example is a bit extreme: I spent 6 months fixing a very (very) large code-base that was full of this kind of stuff, so I may be kinda biased. Edited by SerialKicked
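A tiny Python sketch of the kind of collision being described (hypothetical names, not from the thread): a case-sensitive language happily keeps both spellings alive, and nothing warns you when you call the wrong one.

```python
# Two near-identical names coexisting in a case-sensitive language (illustrative only).
def Save(data):
    """Original helper: persists data."""
    return f"persisted {data}"

def save(data):
    """Added years later for a different purpose."""
    return f"cached {data}"

# Both are valid; calling the wrong one runs without complaint.
print(Save("report"))  # persisted report
print(save("report"))  # cached report
```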
2018-03-23 18:59:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2675876319408417, "perplexity": 1551.135902187308}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648431.63/warc/CC-MAIN-20180323180932-20180323200932-00650.warc.gz"}
http://tech.bluesmoon.info/2010/11/submitting-cross-domain-soap-requests.html
# The other side of the moon

/bb|[^b]{2}/ Never stop Grokking

## Thursday, November 04, 2010

### Submitting cross-domain SOAP requests from the browser without XForms

Web service calls from a web page to a back-end service can easily be made using XHR as long as the service runs on the same domain that the page is served from. For cross-domain requests, however, we have a problem. Typical methods of doing cross-domain requests require script nodes, a server-side proxy, a Flash-based transport, or submitting a hidden form to an iframe target. While the server-side proxy and Flash-based transport both add an external dependency, the script node can only make a GET request, and the hidden form approach can only send URL-encoded key/value pairs... that is, until we try a bit of trickery.

```html
<form id="soap" method="POST" action="[SOAP entry point URL]" enctype="text/plain">
<textarea name="<?xml version">"1.0" encoding="UTF-8"?>
[SOAP message here]
</textarea>
</form>
<script>
document.getElementById("soap").submit();
</script>
```

And that's it. The key elements are the form's enctype and the textarea's name. In particular, you set the form's enctype attribute to text/plain. This makes sure that none of the data is URL encoded. Then comes a clever trick that works well with XML documents. Set the text field's name to <?xml version, i.e., the starting text of an XML document. Omit the = sign and set the value to everything else. When the form is submitted, the browser sends form fields as key=value, one on each line (that's how text/plain works). In this case, it sends the following:

```xml
<?xml version="1.0" encoding="UTF-8"?>
[SOAP message here]
```

Which essentially submits the SOAP payload to the web service.

#### Caveats

Naturally, not all browsers work alike. For this particular example, all WebKit-based browsers are broken. They don't handle an enctype of text/plain correctly. Chrome, Safari and Konqueror all set the Content-type header to text/plain, but the actual submitted data is URL encoded. This is consistent with research done by Gregory Fleischer and Bug #20795 filed on WebKit. Firefox (as far back as Netscape 4, IIRC, probably earlier), IE (6 and above) and Opera handle it correctly.

#### C-Surfing on SOAP

There are security concerns with this approach as well, and in my opinion they are bigger than any benefit this might bring. An attacker can use this method to CSRF your SOAP-based web services. Given this, it's a good idea to make sure that all your web service calls also have some kind of nonce or token that can only be generated if the request originated from your site.

Anonymous: This is awesome... a great finding @senthil_hi

सत्य प्रकाश: So, the bug is not a bug in WebKit browsers. They are avoiding a security issue!

Anonymous: Totally cool... thanks!

Karolis Tezmo: Magic! The thing I was looking for!
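As an aside (not from the original post), the key=value assembly is easy to mimic; a minimal Python sketch, with a made-up SOAP envelope:

```python
# Sketch: how a browser assembles a text/plain form body as "name=value".
# The field name supplies the start of the XML declaration; the value supplies the rest.
field_name = '<?xml version'
field_value = '"1.0" encoding="UTF-8"?>\n<soap:Envelope>...</soap:Envelope>'

body = f"{field_name}={field_value}"
print(body)
# <?xml version="1.0" encoding="UTF-8"?>
# <soap:Envelope>...</soap:Envelope>
```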
2017-06-22 18:32:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3076433837413788, "perplexity": 5899.262609395011}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128319688.9/warc/CC-MAIN-20170622181155-20170622201155-00587.warc.gz"}
https://mathoverflow.net/questions/138247/prove-that-sqrt2-sqrt2-is-an-irrational-number-without-using-a-theorem
# Prove that ${\sqrt2}^{\sqrt2}$ is an irrational number without using a theorem

Prove that ${\sqrt2}^{\sqrt2}$ is an irrational number without using the Gel'fond-Schneider theorem. We know that ${\sqrt2}^{\sqrt2}$ is a transcendental number by the Gel'fond-Schneider theorem. I've tried to prove that ${\sqrt2}^{\sqrt2}$ is an irrational number without using the Gel'fond-Schneider theorem, but I'm facing difficulty. I need your help. This question has been asked previously on math.SE without receiving any answers.

• This is the relevant MSE thread. Jul 31 '13 at 14:40
• You want something a bit more precise. For example, I expect you do not want to deduce this from Kuzmin's result preceding Gel'fond-Schneider. Jul 31 '13 at 14:42
• @Andres Caicedo: Thank you very much for the good information. As you wrote, the answer on your page is not what I want. Jul 31 '13 at 14:55
• What leads you to expect that this should be possible? There's only one context in which I've seen a discussion of proving something about $\sqrt 2^{\sqrt 2}$ without using Gelfond-Schneider; see math.hmc.edu/funfacts/ffiles/30002.3-5.shtml for example. Jul 31 '13 at 15:28
• $\left(\sqrt2^\sqrt2\right)^2=2^\sqrt2$ appears to be irrational, and it looks like an easier thing to prove... Jul 31 '13 at 15:43

You do not need to use the Gelfond-Schneider theorem; you can just repeat one of the well-known easy proofs of that theorem. For example, a much stronger theorem is proved in an Appendix to Lang's "Algebra"; the proof is only 4 pages long. The proof uses only elementary linear algebra, some calculus and first notions of Galois theory. That proof can be significantly shortened if you want to prove irrationality only. Edit: I have written a more or less complete proof here. Galois theory is not needed there (except for the fact that the norm of an integral element is an integer), but one needs the maximum modulus principle for analytic functions. I am sure that can be avoided also. From calculus, one needs the Taylor formula (no integration is required).

• That theorem also uses a little bit of complex analysis --- the statement of the theorem mentions poles of meromorphic functions. It also mentions transcendence degree, though perhaps you count that among the "first notions of Galois theory." You also need to know about integral closures, free modules, dual basis, maximum modulus principle, .... Aug 1 '13 at 3:07
• @GerryMyerson: All that is not needed if you deal with the function $2^x$, and only that function is needed for the question. I think the whole proof would just be 2 pages long. The idea is very straightforward (I had that theorem on my oral qual exam for graduate school 35 years ago). – user6976 Aug 1 '13 at 3:16
• @Mark Sapir. A brilliant proof and a clear flow of ideas. The only problem is that almost every particular formula has a misprint in it, starting with $k<r$, which should be $k<n$, and ending with the outlandish choice of $R$ at the culmination moment. If you could correct all these stupid typos that severely restrict the list of potential readers of your nice opus, I'll make all graduate students in my class read it :-). Sep 5 '13 at 1:08
• @fedja: Thanks, but the proof is Lang's (possibly with a co-author), and your students should read Lang. In fact the main ideas go back to Hermite and it would be useful for students to compare Hermite's proofs with Lang's. The misprints are the result of me trying to use LaTeX in WordPress (compared to MO, this turned out to be difficult). – user6976 Sep 5 '13 at 1:43
• @Mark Sapir Archimedes, Mark, Archimedes! The guy knew everything, just said half of it, wrote a quarter of it, and the monks took care of destroying most of his writings ;). Yes, they should read Lang, Hermite, etc., but after they see that the stuff is neat and not over their heads, and your proof is a perfect preparatory text for that (minus misprints, of course). :-). Sep 5 '13 at 2:08
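One comment above reduces the problem in a single line; written out (this is the thread's observation, not a complete proof):

$$x = \sqrt{2}^{\sqrt{2}} \in \mathbb{Q} \;\Longrightarrow\; x^2 = \left(\sqrt{2}^{\sqrt{2}}\right)^2 = 2^{\sqrt{2}} \in \mathbb{Q},$$

so a proof that $2^{\sqrt{2}}$ is irrational would immediately give the irrationality of $\sqrt{2}^{\sqrt{2}}$.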
2021-10-27 11:03:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7323361039161682, "perplexity": 433.5844962551137}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588113.25/warc/CC-MAIN-20211027084718-20211027114718-00452.warc.gz"}
https://brilliant.org/problems/easy-55/
# Easy Algebra Level pending

$\begin{eqnarray} \pi &=& 3.141592\ldots \quad \quad \text{(Pi)} \\ \phi &=& 1.618033\ldots \quad \quad \text{(Golden ratio)} \\ \gamma &=& 0.577215\ldots \quad\quad \text{(Euler Constant)} \\ e &=& 2.718282\ldots \quad \quad \text{(Natural constant)} \end{eqnarray}$

Which of these numbers is the largest?

a. $$\pi^e$$
b. $$e^\pi$$
c. $$e^\gamma$$
d. $$\pi^\phi$$
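The comparison can be checked numerically; a quick Python sketch (not part of the problem page):

```python
# Numerically compare the four candidate powers from the problem.
import math

phi = (1 + math.sqrt(5)) / 2   # golden ratio
gamma = 0.577215664901532      # Euler-Mascheroni constant (truncated)

candidates = {
    "pi^e": math.pi ** math.e,
    "e^pi": math.e ** math.pi,
    "e^gamma": math.e ** gamma,
    "pi^phi": math.pi ** phi,
}
for name, value in sorted(candidates.items(), key=lambda kv: kv[1], reverse=True):
    print(name, round(value, 6))
# e^pi (about 23.1407) comes out largest, ahead of pi^e (about 22.4592)
```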
2017-12-14 10:20:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5067392587661743, "perplexity": 6684.507693750994}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948543611.44/warc/CC-MAIN-20171214093947-20171214113947-00333.warc.gz"}
https://mathscholar.org/page/2/
## PiDay 2020: A catalogue of formulas involving pi, with analysis

I have prepared a new paper containing a catalogue of 72 summation formulas, integral formulas and iterative algorithms for Pi. The catalogue contains both classical and modern formulas, ranging from Archimedes' 2200-year-old algorithm to intriguing formulas found by Ramanujan and the quadratic, cubic, quartic and nonic algorithms of Jonathan Borwein and Peter Borwein, which multiply the number of correct digits with each iteration by two, three, four and nine, respectively. The catalogue of formulas and iterative algorithms is followed by results of carefully designed computer implementations, which enable one to compare the relative speed of these formulas. Continue reading PiDay 2020: A catalogue of formulas involving pi, with analysis

## Pi Day 2020: A new crossword puzzle

Yes, it is that time of year — Pi Day (March 14, or 3/14 in North American month/day date notation) is here. So in honor of the occasion, I have constructed a new crossword puzzle — see below. This puzzle honors several of the key figures through history who have made significant contributions to the theory and computation of Pi. This puzzle conforms to the New York Times crossword conventions. As for difficulty level, it would be comparable to the NYT Tuesday or Wednesday puzzles (the NYT puzzles are graded each week from Monday [easiest] to Saturday [most difficult]). Continue reading Pi Day 2020: A new crossword puzzle

## Universe or multiverse? The war rages on

Credit: Quanta Magazine

Introduction: A growing controversy over the multiverse and the anthropic principle has exposed a major fault line in modern physics and cosmology. Some researchers see the multiverse and the anthropic principle as inevitable; others see them as an abdication of empirical science. The controversy spans quantum mechanics, inflationary Big Bang cosmology, string theory, supersymmetry and, more generally, the proper roles of experimentation and mathematical theory in modern science. The "many worlds interpretation" of quantum mechanics: Since the 1930s, when physicists first developed the mathematics behind quantum mechanics, researchers have found that this theory appears to Continue reading Universe or multiverse? The war rages on

## Do probability arguments refute evolution?

Introduction: Both traditional creationists and intelligent design writers have invoked probability arguments in criticisms of biological evolution. They argue that certain features of biology are so fantastically improbable that they could never have been produced by a purely natural, "random" process, even assuming the billions of years of history asserted by geologists and astronomers. They often equate the hypothesis of evolution to the absurd suggestion that monkeys randomly typing at a typewriter could compose a selection from the works of Shakespeare, or that an explosion in an aerospace equipment yard could produce a working 747 airliner [Dembski1998; Foster1991; Hoyle1981; Continue reading Do probability arguments refute evolution?

Factorization and cryptography: Until a few decades ago, number theory, namely the study of prime numbers, factorization and other features of the integers, was widely regarded as the epitome of pure mathematics, completely divorced from considerations of practical utility. This sentiment was expressed most memorably by British mathematician G.H.
Hardy (best known for mentoring Ramanujan and for results on the Riemann zeta function), who wrote in his book A Mathematician's Apology (1941), "I have never done anything 'useful'. No discovery of mine has made, or is likely to make, directly or indirectly, for good or ill, the least difference to the amenity of the world."

## Jim Simons: The man who solved the market

Gregory Zuckerman, author of The Greatest Trade Ever, has published a new book highlighting the life and work of Jim Simons, who, at the age of 40, walked away from a very successful career as a research mathematician and cryptologist to try his hand at the financial markets, and ultimately revolutionized the field. Zuckerman's new book is titled The Man Who Solved the Market: How Jim Simons Launched the Quant Revolution. Simons' background hardly suggested that he would one day lead one of the most successful, if not the most successful, quantitative hedge fund operation in Continue reading Jim Simons: The man who solved the market

## The scientific debate is over: it is time to act on climate change

Credit: IPCC

The facts of climate change: At this point in time, the basic facts of climate change are not disputable in the least. Careful planet-wide observations by NASA and others have confirmed that 2018 was the fourth-warmest year in recorded history. The only warmer years were 2016, 2017 and 2015, respectively, and 18 of the 19 warmest years in history have occurred since 2001. Countless observational studies and supercomputer simulations have confirmed both the fact of warming and the conclusion that this warming is principally due to human activity. These studies and computations have been scrutinized in great

## Quantum supremacy has been achieved; or has it?

IBM's "Q" quantum computer; courtesy IBM

For at least three decades, teams of researchers have been exploring quantum computers for real-world applications in scientific research, engineering and finance. Researchers have dreamed of the day when quantum computers would first achieve "supremacy" over classical computers, in the sense of a quantum computer solving a particular problem faster than any present-day or soon-to-be-produced classical computer system. In a Nature article dated 23 October 2019, researchers at Google announced that they have achieved exactly this. Google researchers employed a custom-designed quantum processor, named "Sycamore," consisting of programmable quantum Continue reading Quantum supremacy has been achieved; or has it?

## Pi as the limit of n-sided circumscribed and inscribed polygons

Credit: Ancient Origins

Introduction: In a previous Math Scholar blog, we presented Archimedes' ingenious scheme for approximating $\pi$, based on an analysis of regular circumscribed and inscribed polygons with $3 \cdot 2^k$ sides, using modern mathematical notation and techniques. One motivation for both the previous blog and this blog is to respond to some recent writers who reject basic mathematical theory and the accepted value of $\pi$, claiming instead that they have found $\pi$ to be a different value. For example, one author asserts that $\pi = 17 - 8 \sqrt{3} = 3.1435935\ldots$ Continue reading Pi as the limit of n-sided circumscribed and inscribed polygons

## New paper proves 80-year-old approximation conjecture

Figure: $\log_{10}$ of the error of a continued fraction approximation of Pi to k terms

Approximation of real numbers by rationals: The question of finding rational approximations to real numbers was first explored by the Greek scholar Diophantus of Alexandria (c. 201-285 CE), and continues to fascinate mathematicians today, in a field known as Diophantine approximations. It is easy to see that any real number can be approximated to any desired accuracy by simply taking the sequence of approximations given by the decimal digits out to some point, divided by the appropriate power of ten. Continue reading New paper proves 80-year-old approximation conjecture
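As a side illustration (not from the article), the rational approximations the excerpt alludes to can be generated from the continued fraction of $\pi$ using the standard convergent recurrence; a short Python sketch, assuming the leading continued fraction terms of $\pi$:

```python
# Convergents of pi's continued fraction [3; 7, 15, 1, 292, ...] via the
# standard recurrence h_n = a_n*h_{n-1} + h_{n-2}, k_n = a_n*k_{n-1} + k_{n-2}.
import math
from fractions import Fraction

terms = [3, 7, 15, 1, 292, 1, 1, 1, 2, 1]  # leading terms of pi's continued fraction

h_prev, k_prev = 1, 0
h, k = terms[0], 1
print(Fraction(h, k), abs(h / k - math.pi))
for a in terms[1:]:
    h, h_prev = a * h + h_prev, h
    k, k_prev = a * k + k_prev, k
    print(Fraction(h, k), abs(h / k - math.pi))  # 22/7, 333/106, 355/113, ...
```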
2020-10-23 02:53:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5971124768257141, "perplexity": 2456.8300623209}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107880519.12/warc/CC-MAIN-20201023014545-20201023044545-00034.warc.gz"}
https://www.math.columbia.edu/~dejong/wordpress/?m=201102
# Descent of locally free modules

Locally free modules do not satisfy descent for fpqc coverings. I have an example involving a countable "product" of affine curves, which I will upload to the stacks project soon. But what about fppf descent? Suppose A —> B is a faithfully flat ring map of finite presentation. Let M be an A-module such that M ⊗_A B is free. Is M a locally free A-module? (By this I mean locally free on the spectrum of A.) It turns out that if A is Noetherian, then the answer is yes. This follows from the results of Bass in his paper on "big" projective modules. But in general I don't know the answer. If you do know the answer, or have a reference, please email me.

# Nonzero kernel

Another fun algebra lemma: If R is a ring and φ : M —> N is a map of finite free R-modules with rank(M) > rank(N), then the kernel of φ is not zero.

# Finite fibres

Suppose that f : X —> Y is a morphism of projective varieties and y is a point of Y such that there are only finitely many points x_1, …, x_r in X mapping to y. Then there exists an affine open neighborhood V of y in Y such that f^{-1}(V) —> V is finite. How do you prove this? Here is a fun argument. First you prove that f is a projective morphism, and hence we can generalize the statement to arbitrary projective morphisms. This is good because then we can localize on Y and reach the situation where Y is affine. In this case X is quasi-projective and we can find an affine open U of X containing x_1, …, x_r, see Lemma Tag 01ZY. Then f(X \ U) is closed and does not contain y. Hence we can find a principal open V of Y such that f^{-1}(V) \subset U. In particular f^{-1}(V) = U ∩ f^{-1}(V) is a principal open of U, whence affine. Now f^{-1}(V) —> V is a projective morphism of affines. There is a cute argument proving that a universally closed morphism of affines is an integral morphism, see Lemma Tag 01WM. Finally, an integral morphism of finite type is finite. Of course, the same thing is true for proper morphisms… see Lemma Tag 02UP.

# Dimension of varieties

This semester I am continuing my course on algebraic geometry. I wanted to list here the steps I used to get a useful dimension theory for varieties so that the next time I teach I can look it up:

1. Prove going up for finite ring maps (done last semester).
2. For a finite surjective morphism of schemes X —> Y you prove that dim(X) = dim(Y) using going up and the fact that the fibres are discrete.
3. Prove the Krull Hauptidealsatz: In a Noetherian ring a prime minimal over a principal ideal has height at most 1. For a proof see [E, page 232].
4. Generalize to longer sequences: In a Noetherian ring a prime minimal over (f_1, …, f_r) has height at most r. For a proof see [E, page 233].
5. If A is a Noetherian local ring and x ∈ m_A then dim(A/xA) ∈ {dim(A), dim(A) – 1} and is equal to dim(A) – 1 if and only if x is not contained in any of the minimal prime ideals of A. In particular if x is a nonzero divisor then dim(A/xA) = dim(A) – 1.
6. Prove that if A is a Noetherian local ring, then dim(A) is equal to the minimal number of elements generating an ideal of definition.
7. If Z is irreducible closed in a Noetherian scheme X show that codim(Z, X) is the dimension of O_{X, ξ} where ξ is the generic point of Z.
8. A closed subvariety Z of an affine variety X has codimension 1 if and only if it is an irreducible component of V(f) for some nonzero f ∈ Γ(X, O_X).
9. Prove Noether normalization.
10. If Z is a closed subvariety of X of codimension 1 show that trdeg_k k(Z) = trdeg_k k(X) – 1.
This you do using Tate's argument which you can find in Mumford's red book: Namely you first do a Zariski shrinking to get to the situation where Z = V(f). Then you choose a finite dominant map Π : X —> A^d_k by Noether normalization. Then you let g = Nm(f) and you show that V(g) = Π(V(f)). Hence k(Z) is a finite extension of k(V(g)) and it is easy to show that k(V(g)) has transcendence degree d – 1. At this point you know that if you have ANY maximal chain of irreducible subvarieties {x} = X_0 ⊂ X_1 ⊂ X_2 ⊂ … ⊂ X_d = X, then the transcendence degree drops by exactly 1 in each step. Therefore we see that not only is the dimension equal to the transcendence degree of the function field, but also each maximal chain has the same length. This implies that dim(Z) + codim(Z, X) = dim(X) for any irreducible closed subvariety Z and in particular it implies that dim(O_{X, x}) = dim(X) for each closed point x ∈ X. Let me know if I neglected to mention a "biggish" step in the outline above. What is missing in this account of the theory is the link between the dimension of a Noetherian local ring A and the degree of the Hilbert polynomial of the graded algebra Gr_{m_A}(A). Which is just so cool! Oh well, you can't do everything…

[E] Eisenbud, Commutative Algebra with a View Toward Algebraic Geometry.

# Conditions on diagonal not needed

In a recent contribution of Jonathan Wang to the stacks project we find the following criterion of algebraicity of stacks (see Lemma Tag 05UL): If X is a stack in groupoids over (Sch/S)_{fppf} such that there exists an algebraic space U and a morphism u : U —> X which is representable by algebraic spaces, surjective, and smooth, then X is an algebraic stack. In other words, you do not need to check that the diagonal is representable by algebraic spaces. The analogues of this statement for algebraic spaces are Lemma Tag 046K (for etale maps) and Theorem Tag 04S6 (for smooth maps). The quoted result is closely related to the statement that the stack associated to a smooth groupoid in algebraic spaces is an algebraic stack (Theorem Tag 04TK). Namely, given u : U —> X as above you can construct a groupoid by taking R = U x_X U and show that X is equivalent to [U/R] as a stack. But somehow the statements have different flavors. Finally, the result as quoted above is often how one comes about it in moduli theory: Namely, given a moduli stack M we often already have a scheme U and a representable smooth surjective morphism u : U —> M. Please try this out on your favorite moduli problem!

# Universal flattening

In this post I talked a bit about flattening of morphisms. Meanwhile I have written some more about this in the stacks project which led to a change in definitions. Namely, I have formally introduced the following terminology:

1. Given a morphism of schemes X —> S we say there exists a universal flattening of X if there exists a monomorphism of schemes S' —> S such that the base change X_{S'} of X is flat over S' and such that for any morphism of schemes T —> S we have that X_T is flat over T if and only if T —> S factors through S'.
2. Given a morphism of schemes X —> S we say there exists a flattening stratification of X if there exists a universal flattening S' —> S and moreover S' is isomorphic as an S-scheme to the disjoint union of locally closed subschemes of S.

Of course the definition of "having a flattening stratification" is a bit nonsensical, since we really want to know how to "enumerate" the locally closed subschemes so obtained.
Please let me know if you think this terminology isn't suitable. Perhaps the simplest case where a universal flattening doesn't exist is the immersion of A^1 – {0} into A^2. Currently the strongest existence result in the stacks project is (see Lemma Tag 05UH): If f : X —> S is of finite presentation and X is S-pure then a universal flattening S' —> S of X exists. Note that the assumptions hold if f is proper and of finite presentation. It is much easier to prove that a flattening stratification exists if f is projective and of finite presentation and I strongly urge the reader to always use the result on projective morphisms, and only use the result quoted above if absolutely necessary.

PS: I recently received a preprint by Andrew Kresch where, besides other results, he gives examples of cases where the universal flattening exists (he calls this the "flatification") but where there does not exist a flattening stratification.

# A challenge

Here is a challenge to any commutative algebraist out there. Give a direct algebraic proof of the following statement (see Lemma Tag 05U9): Let A —> B be a local ring homomorphism which is essentially of finite type. Let N be a finite type B-module. Let M be a flat A-module. Let u : N —> M be an A-module map such that N/m_AN —> M/m_AM is injective. Then u is A-universally injective, N is a B-module of finite presentation, and N is flat as an A-module.

To my mind it is at least conceivable that there is a direct proof of this (not using the currently used technology). It wouldn't directly imply all the wonderful things proved by Raynaud and Gruson but it would go a long way towards verifying some of them. In particular, it would give an independent proof of the following result (see Theorem Tag 05UA): Let f : X —> S be a finite type morphism of schemes. Let x ∈ X with s = f(x) ∈ S. Suppose that X is flat over S at all points x' ∈ Ass(X_s) which specialize to x. Then X is flat over S at x. This result is used in an essential way in the main result on universal flattening which I will explain in the next blog post.

# Purity

Let f : X —> S be a morphism of finite type. The relative assassin Ass(X/S) of X/S is the set of points x of X which are associated points of their fibres. So if f has reduced fibres or if f has fibres which are S_1, then these are just the generic points of the fibres, but in general there may be more. If T —> S is a morphism of schemes then it isn't quite true that Ass(X_T/T) is the inverse image of Ass(X/S), but it is almost true, see Remark Tag 05KL.

Definition: We say X is S-pure if for any x ∈ Ass(X/S) the image of the closure of {x} is closed in S, and if the same thing remains true after any etale base change.

Clearly if f is proper then X is pure over S. If f is quasi-finite and separated then X is S-pure if and only if X is finite over S (see Lemma Tag 05K4). It turns out that if S is Noetherian, then purity is preserved under arbitrary base change (see Lemma Tag 05J8), but in general this is not true (see Lemma Tag 05JK). If f is flat with geometrically irreducible (nonempty) fibres, then X is S-pure (see Lemma Tag 05K5). A key algebraic result is the following statement: Let A —> B be a flat ring map of finite presentation. Then B is projective as an A-module if and only if Spec(B) is pure over Spec(A), see Proposition Tag 05MD. The current proof involves several bootstraps and starts with proving the result in case A —> B is a smooth ring map with geometrically irreducible fibres.
I challenge any commutative algebraist to prove this statement without using the language of schemes. You will find another challenge in the next post.

# Update

This morning I finished incorporating the material from sections 1 through 4 of the paper by Raynaud and Gruson into the stacks project. Most of it is in the chapter entitled More on Flatness. There is a lot of very interesting stuff contained in this chapter and I will discuss some of those results in the following blog posts. Note that I previously blogged about this paper here, here, here, here, here, here, and here. It turns out that it was kind of a mistake to do this, as the payoff wasn't as great as I had hoped for. Moreover, I don't think you are going to find the chapter easy to read. So the benefit of having done this is mainly that I now understand this material very well, but I'm not sure if it is going to help anyone else. Maybe the lesson is that I should stick to the strategy I have used in the past: only prove those statements that are actually needed to build foundations for algebraic stacks. This will sometimes require us to go back and generalize previous results but (1) we can do this as the stacks project is a "live" book, and (2) it is probably a good idea to rewrite earlier parts in order to improve them anyway.

The long(ish) term plan for what I want to work on for the stacks project now is the following: I will first add a discussion of Hilbert schemes/spaces/stacks parameterizing finite closed subschemes/spaces/stacks. I will prove just enough so I can prove this theorem of Artin: A stack which has a flat and finitely presented cover by a scheme is an algebraic stack. A preview for the argument is a write-up of Bhargav Bhatt you can find here. Curiously, Artin's result for algebraic spaces is already in the stacks project: It is Theorem TAG 04S6. It was proved by a completely different method, namely using a Keel-Mori type argument whose punch line is explained on the blog here.

# Update

In the last two and a half weeks I've updated the material on derived categories and derived functors. You can now find this material in a new chapter entitled Derived Categories. The original exposition defined the bounded below derived category as the homotopy category of bounded below complexes of injectives. This is actually a very good way to think about derived categories if you are mainly interested in computing cohomology of sheaves on spaces and/or sites. On the other hand, it does not tell you which problem derived functors really solve. Let's discuss this a bit more in the setting of sheaves of modules on a ringed space (X, O_X). I will assume you know how to define cohomology of sheaves by injective resolutions and left derived functors by projective resolutions, and that you have heard that D(A) is complexes up to quasi-isomorphism, but you don't yet know exactly why one makes this choice.

Let F : Mod(O_X) —> A be a right exact functor from the abelian category of O_X-modules into an abelian category A. The category Mod(O_X) usually does not have enough projectives. Hence it wouldn't work to define the bounded above derived category in terms of bounded above complexes of projectives. You could still make this definition but there wouldn't be a functor from the category of modules into it and hence it wouldn't suffice to compute left derived functors of F. In fact, what should be the "left derived functors" of F in this setting? Grothendieck, Verdier, and Deligne's solution is the following: Let M be an O_X-module.
Consider the category of all resolutions … —> K^{-1} —> K^0 —> M —> 0 where K^i is an arbitrary O_X-module. For any such resolution we can consider the complex F(K^*) = ( … —> F(K^{-1}) —> F(K^0) —> 0 ) in the abelian category A. We say that LF is defined at M if and only if the system of all F(K^*) is essentially constant up to quasi-isomorphism, i.e., essentially constant in the bounded above derived category D^-(A). If one can choose K^* so that F(K^*) is actually equal to this essentially constant value, then one says that K^* computes LF(M). These definitions are motivated by the case where there do exist enough projectives: in that case one shows that given a projective resolution P^* there always exists a map P^* —> K^*, hence the system is essentially constant with value F(P^*).

We say an object M is left acyclic for F if M computes LF. Note that this makes sense without knowing that LF is everywhere defined! It turns out that LF is defined for any M which has a resolution K^* where all K^n are left acyclic for F and that in this case F(K^*) is the value of LF(M) in D^-(A). For example, why is one allowed to use bounded above flat resolutions to compute Tor? The reason is that flat modules are left acyclic for tensoring with a sheaf (this is not a triviality — it is something you have to prove; hint: use Lemma Tag 05T9).

I started rewriting the material on derived categories because I gave 2 lectures about derived categories and derived functors in my graduate student seminar, and I wanted to understand the details. Let me know if you find any typos, errors, or lack of clarity. Also, there is still quite a bit missing, for example a discussion of derived categories of dg-modules would be cool.
2018-12-16 17:02:52
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.881662905216217, "perplexity": 416.8407022281657}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376827963.70/warc/CC-MAIN-20181216165437-20181216191437-00385.warc.gz"}
https://martinapugliese.gitbook.io/tales-of-science-and-data/probability-statistics-and-data-analysis/methods-theorems-and-laws/monte-carlo
# The Monte Carlo method

## What is

The Monte Carlo is a probability-based method, very popular in Physics circles, to perform numerical estimations of quantities. It relies on the very simple idea of repeated random sampling and is often used to estimate integrals. In practice, what you do is draw a lot of random numbers and observe how many of them respect a certain property, a property chosen so that this count gives you an estimate of the quantity you're looking for, which is difficult to calculate analytically.

The method is a very simple and elegant one, and this is why it's one of the silver bullets of Physics. It was proposed by S. Ulam while working on military-related projects at the Los Alamos labs in the 1940s and became a big contributor to the work of the Manhattan Project; see this historical review paper for a dive into those times. The name is a clear reference to the casino in Monaco.

Monte Carlo estimations are regularly used in problems of

• optimisation
• numerical integration
• drawing from a probability distribution

The basic idea can be visualised by imagining that we can estimate the surface area of a lake by throwing stones across and counting the number of those which fall in the lake with respect to those that fall outside. The Monte Carlo method is fundamentally based on the law of large numbers (see page).

## Numerical integration: an example estimating $\pi$

A notebook with the code presented here can be seen here. This is a pedagogical example, cited in many places, for instance Wikipedia. Because of the relation linking $\pi$ to the area $A$ of a circle of radius $r$, namely $A = \pi r^2$, we can easily estimate $\pi$ by considering a quarter-circle inscribed in a square. With radius 1, the quarter-circle has area $\pi/4$ while the square has area 1, so the ratio of the two areas is $\pi/4$; multiplying an estimate of this ratio by 4 is how we estimate $\pi$. For other examples, see the reference below.

```python
import numpy as np

def f_quartercircle(x):
    """Quarter of a circle in the first quadrant."""
    return np.sqrt(1 - x**2)

# get 1000 numbers between 0 and 1, equally spaced, and the quarter circle function on them
x = np.linspace(0, 1, num=1000)
y = [f_quartercircle(item) for item in x]

# loop over the number of extracted points, and extract them uniformly between 0 and 1, both x and y
for n in [100, 1000, 5000, 10000, 20000, 30000, 50000]:
    points = np.random.uniform(0, 1, size=(n, 2))  # n points in the plane, randomly (uniformly) extracted in [0,1]
    under_points = []
    over_points = []
    # select if point is below or above the circle
    for point in points:
        if point[1] <= f_quartercircle(point[0]):
            under_points.append(point)
        else:
            over_points.append(point)
    # estimate pi as the ratio of the number of points below the circle to the total
    est_pi = float(len(under_points)) / n * 4
    # compute the relative error with respect to the real pi
    rel_err = abs(float(est_pi - np.pi)) / np.pi
    print('Estimated pi at %d points: %f, with relative error %f' % (n, est_pi, rel_err))
```

This will print the estimation of $\pi$ with (on average) decreasing relative error.

```python
import matplotlib.pyplot as plt

plt.figure(figsize=(10, 10))
plt.title('Final estimation plot')
plt.plot(x, y, color='r', lw=3)
plt.plot([point[0] for point in under_points], [point[1] for point in under_points], 'x', alpha=0.5)
plt.plot([point[0] for point in over_points], [point[1] for point in over_points], 'x', alpha=0.5)
plt.show()
```

Monte-Carloing pi

## Distribution sampling

Using Monte Carlo for distribution sampling means doing this:

1. generate some independent datasets under the condition of interest
2. compute the numerical value of the estimator for each dataset, that is, the test statistic
3. compute summary statistics over all the computed test statistics

If for instance we want to estimate the mean and standard deviation of a random variable $Y$, and we have a sample of $N$ values of it, the sample mean and the sample standard deviation are effectively Monte Carlo estimations of those, which by the law of large numbers we expect to converge to the population ones when $N$ is big enough.

## References

1. Stan Ulam, John Von Neumann and the Monte Carlo method, Los Alamos Science special issue, 1987
2. Wikipedia on the Monte Carlo method
3. Some useful slides on the method, with examples, from the University of Geneva
2022-12-03 13:33:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 13, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6392093896865845, "perplexity": 1311.9261085055327}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710931.81/warc/CC-MAIN-20221203111902-20221203141902-00840.warc.gz"}
https://www.findfilo.com/maths-question-answers/sides-of-triangleabc-ab-7-cm-bc-5cm-ca-6cm-a-pole-9pu
Sides of triangle ABC: AB = 7 cm, BC = 5 cm, CA = 6 cm. A pole stands at | Filo

class 12 Math Calculus Curve Sketching

Sides of triangle ABC: AB = 7 cm, BC = 5 cm, CA = 6 cm. A pole stands at the midpoint of side [missing]. The angle of elevation of the pole from vertex [missing] is [missing]; find the height of the pole. (a) (b) (c) (d)
2021-08-03 10:50:31
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9185046553611755, "perplexity": 4957.337995286301}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154457.66/warc/CC-MAIN-20210803092648-20210803122648-00039.warc.gz"}
https://artofproblemsolving.com/wiki/index.php?title=1996_AJHSME_Problems/Problem_1&oldid=80808
# 1996 AJHSME Problems/Problem 1

## Problem

How many positive factors of 36 are also multiples of 4? $\text{(A)}\ 2 \qquad \text{(B)}\ 3 \qquad \text{(C)}\ 4 \qquad \text{(D)}\ 5 \qquad \text{(E)}\ 6$

## Solution

The factors of $36$ are $1, 2, 3, 4, 6, 9, 12, 18,$ and $36$. The multiples of $4$ up to $36$ are $4, 8, 12, 16, 20, 24, 28, 32$ and $36$. Only $4, 12$ and $36$ appear on both lists, so the answer is $3$, which is option $\boxed{B}$.

## See also

1996 AJHSME (Problems • Answer Key • Resources)
Preceded by 1995 AJHSME Last Question
Followed by Problem 2
1 • 2 • 3 • 4 • 5 • 6 • 7 • 8 • 9 • 10 • 11 • 12 • 13 • 14 • 15 • 16 • 17 • 18 • 19 • 20 • 21 • 22 • 23 • 24 • 25
All AJHSME/AMC 8 Problems and Solutions

The problems on this page are copyrighted by the Mathematical Association of America's American Mathematics Competitions.
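For a quick programmatic confirmation (not part of the original solution), a one-liner in Python:

```python
# Factors of 36 that are also multiples of 4.
print([d for d in range(1, 37) if 36 % d == 0 and d % 4 == 0])  # [4, 12, 36] -> 3 of them
```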
2020-12-04 02:33:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 12, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8763694167137146, "perplexity": 4271.538095238793}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141733120.84/warc/CC-MAIN-20201204010410-20201204040410-00072.warc.gz"}
https://friedlander.io/publications/2009-optimizing-costly-objectives/
# Optimizing Costly Functions with Simple Constraints: A Limited-Memory Projected Quasi-Newton Algorithm

M. Schmidt, E. van den Berg, M. P. Friedlander, K. Murphy

Proceedings of the 12th International Conference on Artificial Intelligence and Statistics, 2009

## Abstract

An optimization algorithm for minimizing a smooth function over a convex set is described. Each iteration of the method computes a descent direction by minimizing, over the original constraints, a diagonal plus low-rank quadratic approximation to the function. The quadratic approximation is constructed using a limited-memory quasi-Newton update. The method is suitable for large-scale problems where evaluation of the function is substantially more expensive than projection onto the constraint set. Numerical experiments on one-norm regularized test problems indicate that the proposed method is competitive with state-of-the-art methods such as bound-constrained L-BFGS and orthant-wise descent. We further show that the method generalizes to a wide class of problems, and substantially improves on state-of-the-art methods for problems such as learning the structure of Gaussian graphical models and Markov random fields.

## BiBTeX

```bibtex
@InProceedings{SchmidtBergFriedlanderMurphy:2009,
  author    = {M. Schmidt and E. van den Berg and M. P. Friedlander and K. Murphy},
  title     = {Optimizing Costly Functions with Simple Constraints: A Limited-Memory Projected Quasi-Newton Algorithm},
  booktitle = {Proceedings of The Twelfth International Conference on Artificial Intelligence and Statistics (AISTATS) 2009},
  pages     = {456-463},
  year      = 2009,
  editor    = {D. van Dyk and M. Welling},
  volume    = 5,
  address   = {Clearwater Beach, Florida},
  month     = {April},
}
```
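As background for the "projection onto the constraint set" step the abstract emphasizes, here is a deliberately simplified sketch: plain projected gradient descent on a box constraint, not the paper's PQN algorithm, with all names illustrative.

```python
# Minimal sketch (NOT the paper's method): projected gradient descent on a box,
# illustrating the cheap-projection / expensive-function setting described above.
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]^n."""
    return np.clip(x, lo, hi)

def projected_gradient(f_grad, x0, lo, hi, step=0.1, iters=100):
    x = x0.copy()
    for _ in range(iters):
        g = f_grad(x)                       # one (possibly costly) gradient evaluation
        x = project_box(x - step * g, lo, hi)  # one cheap projection
    return x

# Example: minimize ||x - c||^2 over [0, 1]^2.
c = np.array([1.5, -0.5])
x_star = projected_gradient(lambda x: 2 * (x - c), np.zeros(2), 0.0, 1.0)
print(x_star)  # -> [1., 0.]
```

The paper's contribution, per the abstract, is to replace the raw gradient step with a diagonal-plus-low-rank quasi-Newton model minimized over the same constraints, so that each expensive function evaluation is used much more effectively.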
2019-11-20 22:45:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.483652263879776, "perplexity": 1907.5720788289636}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670635.48/warc/CC-MAIN-20191120213017-20191121001017-00360.warc.gz"}
https://calendar.math.illinois.edu/?year=2018&month=10&day=03&interval=day
Department of Mathematics

# Seminar Calendar

Seminar calendar for events the day of Wednesday, October 3, 2018. Questions regarding events or the calendar should be directed to Tori Corkery.

Wednesday, October 3, 2018

4:00 pm in 245 Altgeld Hall, Wednesday, October 3, 2018

#### Connecting the upper half plane, geodesic flows, and continued fractions

###### Claire Merriman (UIUC Math)

Abstract: Continued fractions are frequently studied in number theory, but they can also be described geometrically. I will talk about continued fraction expansions as dynamical systems, and connect this symbolic system to tessellations. The first part will focus on the "regular" or "simple" continued fractions, where all of the numerators are 1. Then, I will show what happens when all of the numerators are $\pm 1$ and the denominators are all even or all odd.
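Not part of the talk: a minimal Python sketch of the dynamical-systems view the abstract mentions, where the digits of a regular continued fraction are read off by iterating the Gauss map T(x) = 1/x - floor(1/x).

import math

def cf_digits(x, n):
    # First n regular continued fraction digits of x in (0, 1):
    # at each step emit floor(1/x), then apply T(x) = 1/x - floor(1/x).
    digits = []
    for _ in range(n):
        if x == 0:
            break
        a = math.floor(1 / x)
        digits.append(a)
        x = 1 / x - a
    return digits

# The fractional part of the golden ratio has all partial quotients equal to 1.
print(cf_digits((math.sqrt(5) - 1) / 2, 8))  # [1, 1, 1, 1, 1, 1, 1, 1]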
2018-09-21 02:32:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4046914279460907, "perplexity": 242.8528761853046}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156724.6/warc/CC-MAIN-20180921013907-20180921034307-00086.warc.gz"}
https://mdolab-pygeo.readthedocs-hosted.com/en/latest/DVGeometryMulti.html
# DVGeometryMulti class pygeo.parameterization.DVGeoMulti.DVGeometryMulti(comm=mpi4py.MPI.COMM_WORLD, checkDVs=True, debug=False, isComplex=False)[source] A class for manipulating multiple components using multiple FFDs and handling design changes near component intersections. Parameters commMPI.IntraComm, optional The communicator associated with this geometry object. checkDVsbool, optional Flag to check whether there are duplicate DV names in or across components. debugbool, optional Flag to generate output useful for debugging the intersection setup. isComplexbool, optional Flag to use complex variables for complex step verification. addComponent(comp, DVGeo, triMesh=None, scale=1.0, bbox=None, pointSetKwargs=None)[source] Method to add components to the DVGeometryMulti object. Parameters compstr The name of the component. DVGeoDVGeometry The DVGeometry object defining the component FFD. triMeshstr, optional Path to the triangulated mesh file for this component. scalefloat, optional A multiplicative scaling factor applied to the triangulated mesh coordinates. Useful for when the scales of the triangulated and CFD meshes do not match. bboxdict, optional Specify a bounding box that is different from the bounds of the FFD. The keys can include xmin, xmax, ymin, ymax, zmin, zmax. If any of these are not provided, the FFD bound is used. pointSetKwargsdict, optional Keyword arguments to be passed to the component addPointSet call for the triangulated mesh. addIntersection(compA, compB, dStarA=0.2, dStarB=0.2, featureCurves=None, distTol=1e-14, project=False, marchDir=1, includeCurves=False, intDir=None, curveEpsDict=None, trackSurfaces=None, excludeSurfaces=None, remeshBwd=True, anisotropy=[1.0, 1.0, 1.0])[source] Method that defines intersections between components. Parameters compAstr The name of the first component. compBstr The name of the second component. dStarAfloat, optional Distance from the intersection over which the inverse-distance deformation is applied on compA. dStarBfloat, optional Distance from the intersection over which the inverse-distance deformation is applied on compB. featureCurveslist or dict, optional Points on feature curves will remain on the same curve after deformations and projections. Feature curves can be specified as a list of curve names. In this case, the march direction for all curves is marchDir. Alternatively, a dictionary can be provided. In this case, the keys are the curve names and the values are the march directions for each curve. See marchDir for the definition of march direction. distTolfloat, optional Distance tolerance to merge nearby nodes in the intersection curve. projectbool, optional Flag to specify whether to project points to curves and surfaces after the deformation step. marchDirint, optional The side of the intersection where the feature curves are remeshed. The sign determines the direction and the value (1, 2, 3) specifies the axis (x, y, z). If remeshBwd is True, the other side is also remeshed. In this case, the march direction only serves to define the ‘free end’ of the feature curve. If None, the entire curve is remeshed. This argument is only used if a list is provided for featureCurves. includeCurvesbool, optional Flag to specify whether to include features curves in the inverse-distance deformation. intDirint, optional If there are multiple intersection curves, this specifies which curve to choose. The sign determines the direction and the value (1, 2, 3) specifies the axis (x, y, z). 
For example, -1 specifies the intersection curve as the one that is further in the negative x-direction. curveEpsDictdict, optional Required if using feature curves. The keys of the dictionary are the curve names and the values are distances. All points within the specified distance from the curve are considered to be on the curve. trackSurfacesdict, optional Points on tracked surfaces will remain on the same surfaces after deformations and projections. The keys of the dictionary are the surface names and the values are distances. All points within the specified distance from the surface are considered to be on the surface. excludeSurfacesdict, optional Points on excluded surfaces are removed from the intersection computations. The keys of the dictionary are the surface names and the values are distances. All points within the specified distance from the surface are considered to be on the surface. remeshBwdbool, optional Flag to specify whether to remesh feature curves on the side opposite that which is specified by the march direction. anisotropylist of float, optional List with three entries specifying scaling factors in the [x, y, z] directions. The factors multiply the [x, y, z] distances used in the curve-based deformation. Smaller factors in a certain direction will amplify the effect of the parts of the curve that lie in that direction from the points being warped. This tends to increase the mesh quality in one direction at the expense of other directions. This can be useful when the initial intersection curve is skewed. addPointSet(points, ptName, compNames=None, comm=None, applyIC=False, **kwargs)[source] Add a set of coordinates to DVGeometryMulti. This is the main way that geometry, in the form of a coordinate list, is manipulated. Parameters pointsarray, size (N,3) The coordinates to embed. These coordinates should all be inside at least one FFD volume. ptNamestr A user supplied name to associate with the set of coordinates. This name will need to be provided when updating the coordinates or when getting the derivatives of the coordinates. compNameslist, optional A list of component names that this point set should be added to. To ease bookkeeping, an empty point set with ptName will be added to components not in this list. If a list is not provided, this point set is added to all components. commMPI.IntraComm, optional Comm that is associated with the added point set. Does not work now, just added to be consistent with the API of other DVGeo types. applyICbool, optional Flag to specify whether this point set will follow the updated intersection curve(s). This is typically only needed for the CFD surface mesh. addVariablesPyOpt(optProb, globalVars=True, localVars=True, sectionlocalVars=True, ignoreVars=None, freezeVars=None, comps=None)[source] Add the current set of variables to the optProb object. Parameters optProbpyOpt_optimization class Optimization problem definition to which variables are added. globalVarsbool Flag specifying whether global variables are to be added. localVarsbool Flag specifying whether local variables are to be added. ignoreVarslist of strings List of design variables the user doesn’t want to use as optimization variables. freezeVarslist of string List of design variables the user wants to add as optimization variables, but to have the lower and upper bounds set at the current variable. This effectively eliminates the variable, but the variable is still part of the optimization. compslist List of components we want to add the DVs of.
If no list is provided, we will add DVs from all components. getDVGeoDict()[source] Return a dictionary of component DVGeo objects. getLocalIndex(iVol, comp)[source] Return the local index mapping that points to the global coefficient list for a given volume. Parameters iVolint Index specifying the FFD volume. compstr Name of the component. getNDV()[source] Return the number of DVs. getValues()[source] Generic routine to return the current set of design variables. Values are returned in a dictionary format that would be suitable for a subsequent call to setDesignVars(). Returns dvDictdict Dictionary of design variables. getVarNames(pyOptSparse=False)[source] Return a list of the design variable names. This is typically used when specifying a wrt= argument for pyOptSparse. Examples >>> optProb.addCon(.....wrt=DVGeo.getVarNames()) pointSetUpToDate(ptSetName)[source] This is used externally to query if the object needs to update its point set or not. When update() is called with a point set, the self.updated value for pointSet is flagged as True. We reset all flags to False when design variables are set because nothing (in general) will be up to date anymore. Here we just return that flag. Parameters ptSetNamestr The name of the pointset to check. setDesignVars(dvDict)[source] Standard routine for setting design variables from a design variable dictionary. Parameters dvDictdict Dictionary of design variables. The keys of the dictionary must correspond to the design variable names. Any additional keys in the dictionary are simply ignored. totalSensitivity(dIdpt, ptSetName, comm=None, config=None)[source] This function computes sensitivity information. Specifically, it computes the following: $$\frac{dX_{pt}}{dX_{DV}}^T \frac{dI}{dX_{pt}}$$ Parameters dIdptarray of size (Npt, 3) or (N, Npt, 3) This is the total derivative of the objective or function of interest with respect to the coordinates in ‘ptSetName’. This can be a single array of size (Npt, 3) or a group of N vectors of size (N, Npt, 3). If you have many to do, it is faster to do many at once. ptSetNamestr The name of the set of points we are dealing with. commMPI.IntraComm, optional The communicator to use to reduce the final derivative. If comm is None, no reduction takes place. configstr or list, optional Define what configurations this design variable will be applied to. Use a string for a single configuration or a list for multiple configurations. The default value of None implies that the design variable applies to ALL configurations. Returns dIdxDictdict The dictionary containing the derivatives, suitable for pyOptSparse. Notes The child and nDVStore options are only used internally and should not be changed by the user. update(ptSetName, config=None)[source] This is the main routine for returning coordinates that have been updated by design variables. Multiple configs are not supported. Parameters ptSetNamestr Name of point set to return. This must match one of those added in an addPointSet() call.
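A hypothetical end-to-end sketch of how these calls fit together, built only from the signatures documented above. The FFD and triangulated-mesh file names, the DVGeometry construction, and the top-level DVGeometry import are illustrative assumptions, not part of this reference.

import numpy as np
from pygeo import DVGeometry  # import path assumed
from pygeo.parameterization.DVGeoMulti import DVGeometryMulti

# Component FFDs (hypothetical files).
DVGeoWing = DVGeometry("wing_ffd.xyz")
DVGeoFuse = DVGeometry("fuse_ffd.xyz")

DVGeo = DVGeometryMulti()
DVGeo.addComponent("wing", DVGeoWing, triMesh="wing.cgns")  # mesh files assumed
DVGeo.addComponent("fuse", DVGeoFuse, triMesh="fuse.cgns")
DVGeo.addIntersection("fuse", "wing", dStarA=0.2, dStarB=0.2, project=True)

# Embed a surface point set and pull back the updated coordinates.
pts = np.zeros((10, 3))  # placeholder coordinates inside the FFDs
DVGeo.addPointSet(pts, "cfd_surf", applyIC=True)
newPts = DVGeo.update("cfd_surf")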
2022-12-09 21:41:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3051919639110565, "perplexity": 1943.046814796518}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711552.8/warc/CC-MAIN-20221209213503-20221210003503-00502.warc.gz"}
https://zbmath.org/?q=ai%3Aschroder.karl-heinz+se%3A2413
## Semidirect products of locally convex algebras and the three-space-problem. (English) Zbl 0995.46032

An algebra $$A$$ (over $$\mathbb R$$ or $$\mathbb C$$) containing an ideal $$C$$ and a subalgebra $$B$$ such that $$C\cap B= \{0\}$$ and $$A= C+ B$$ is called the semi-direct product of $$C$$ and $$B$$. (A concrete illustration, not from the review: in the algebra of upper triangular $$2\times 2$$ matrices, the strictly upper triangular matrices form an ideal and the diagonal matrices a subalgebra, with zero intersection and sum the whole algebra.) If $${\mathcal T}$$ is a locally convex topology on $$A$$ and the map $$(c,b)\mapsto c+b$$ is a homeomorphism from $$C\times B$$ onto $$A$$, then the preceding terminology is further amplified with the adjective topological. The authors present a method for constructing such algebras and they show that if both $$C$$ and $$B$$ are locally $$m$$-convex, then so is $$A$$. However, they also construct an example in which $${\mathcal T}$$ is a Banach-space topology, and the ideal $$C$$ in the subspace topology inherited from $${\mathcal T}$$ and the quotient space $$A/C$$ in the quotient topology induced by $${\mathcal T}$$ are both Banach algebras, but $$(A,{\mathcal T})$$ is not a Banach algebra: multiplication in it is not $${\mathcal T}$$-continuous.

### MSC:

46H10 Ideals and subalgebras
46H05 General theory of topological algebras
2022-08-12 02:15:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6191846132278442, "perplexity": 150.62373015125024}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571538.36/warc/CC-MAIN-20220812014923-20220812044923-00174.warc.gz"}
https://www.stat20.org/3-generalization/18-hypothesis-tests/notes.html
# Hypothesis Testing

Measuring the consistency between a model and data

March 17, 2023

Classical statistics features two primary methods for using a sample of data to make an inference about a more general process. The first is the confidence interval, which expresses the uncertainty in an estimate of a population parameter. The second classical method of generalization is the hypothesis test. The hypothesis test takes a more active approach to reasoning: it posits a specific explanation for how the data could be generated, then evaluates whether or not the observed data is consistent with that model. The hypothesis test is one of the most common statistical tools in the social and natural sciences, but the reasoning involved can be counter-intuitive. Let's introduce the logic of a hypothesis test by looking at another criminal case that drew statisticians into the mix.

## Example: The United States vs Kristen Gilbert

In 1989, fresh out of nursing school, Kristen Gilbert got a job at the VA Medical Center in Northampton, Massachusetts, not far from where she grew up1. Within a few years, she became admired for her skill and competence. Gilbert's skill was on display whenever a "code blue" alarm was sounded. This alarm indicates that a patient has gone into cardiac arrest and must be addressed quickly by administering a shot of epinephrine to restart the heart. Gilbert developed a reputation for her steady hand in these crises. By the mid-1990s, however, the other nurses started to grow suspicious. There seemed to be a few too many code blues, and a few too many deaths, during Gilbert's shifts. The staff brought their concerns to the VA administration, who brought in a statistician to evaluate the data.

### The Data

The data that the VA provided to the statistician contained the number of deaths at the medical center over the previous 10 years, broken out by the three shifts of the day: night, daytime, and evening. As part of the process of exploratory data analysis, the statistician constructed a plot. This visualization reveals several striking trends. Between 1990 and 1995, there were dramatically more deaths than the years before and after that interval. Within that time span, it was the evening shift that had most of the deaths. The exception is 1990, when the night and daytime shifts had the most deaths. So when was Gilbert working? She began working in this part of the hospital in March 1990 and stopped working in February 1996. Her shifts throughout that time span? The evening shifts. The one exception was 1990, when she was assigned to work the night shift. This evidence is compelling in establishing an association between Gilbert and the increase in deaths. When the district attorney brought a case against Gilbert in court, this was the first line of evidence they provided. In a trial, however, there is a high burden of proof. Could there be an alternative explanation for the trend found in this data?

### The role of random chance

Suppose for a moment that the occurrence of deaths at the hospital had nothing to do with Gilbert being on shift. In that case we would expect that the proportion of shifts with a death would be fairly similar when comparing shifts where Gilbert was working and shifts where she was not. But we wouldn't expect those proportions to be exactly equal. It's reasonable to think that a slightly higher proportion of Gilbert's shifts could have had a death just due to random chance, not due to anything malicious on her part.
So just how different were these proportions in the data? The plot above shows data from 1,641 individual shifts, on which three different variables were recorded: the shift number, whether or not there was a death on the shift, and whether or not Gilbert was working that shift. Here are the first 10 observations.

library(tidyverse)
set.seed(3224)
code_blue <- read_csv("https://www.dropbox.com/s/yj3grtilupyj9pv/code_blue.csv?dl=1") %>%
  slice_sample(prop = 1)
code_blue

# A tibble: 1,641 × 3
   shift death staff
   <dbl> <chr> <chr>
 1   626 no    no_gilbert
 2   590 no    no_gilbert
 3  1209 no    no_gilbert
 4  1122 no    no_gilbert
 5   622 no    no_gilbert
 6  1536 no    no_gilbert
 7  1472 no    no_gilbert
 8   214 no    gilbert
 9   277 yes   no_gilbert
10  1332 no    no_gilbert
# … with 1,631 more rows

Using this data frame, we can calculate the sample proportion of shifts where Gilbert was working (257) that had a death (40) and compare them to the sample proportion of shifts where Gilbert was not working (1384) that had a death (34).

$\hat{p}_{gilbert} - \hat{p}_{no\_gilbert} = \frac{40}{257} - \frac{34}{1384} = .155 - .024 = .131$

A note on notation: it's common to use $$\hat{p}$$ ("p hat") to indicate that a proportion has been computed from a sample of data.

A difference of .131 seems dramatic, but is that within the bounds of what we might expect just due to chance? One way to address this question is to phrase it as: if in fact the probability of a death on a given shift is independent of whether or not Gilbert is on the shift, what values would we expect for the difference in observed proportions? We can answer this question by using simulation. To simulate a world in which deaths are independent of Gilbert, we can

1. Shuffle (or permute) the values in the death variable in the data frame to break the link between that variable and the staff variable.
2. Calculate the resulting difference in proportion of deaths in each group.

The rationale for shuffling values in one of the columns is that if in fact those two columns are independent of one another, then it was just random chance that led to a value of one variable landing in the same row as the value of the other variable. It could just as well have been a different pairing. Shuffling captures another example of the arbitrary pairings that we could have observed if the two variables were independent of one another2. By repeating steps 1 and 2 many many times, we can build up the full distribution of the values that this difference in proportions could take.

library(infer)
null <- code_blue %>%
  specify(response = death, explanatory = staff, success = "yes") %>%
  hypothesize(null = "independence") %>%
  generate(reps = 5000, type = "permute") %>%
  calculate(stat = "diff in props")

null %>%
  ggplot(aes(x = stat)) +
  geom_histogram(col = "white", bins = 23) +
  theme_bw() +
  labs(x = "Difference in Proportions")

As expected, in a world where these two variables are independent of one another, we would expect a difference in proportions around zero. Sometimes, however, that statistic might reach values of +/- .01 or .02 or rarely .03. In the 5,000 simulated statistics shown above, however, none of them reached beyond +/- .06. So if that's the range of statistics we would expect in a world where random chance is the only mechanism driving the difference in proportions, how does it compare to the world that we actually observed? The statistic that we observed in the data was .131, more than twice the value of the most extreme statistic observed above.
To put that into perspective, we can plot the observed statistic as a vertical line on the same plot.

null %>%
  ggplot(aes(x = stat)) +
  geom_histogram(col = "white", bins = 23) +
  theme_bw() +
  labs(x = "Difference in Proportions") +
  annotate("segment", x = .131, xend = .131, y = 0, yend = 600, color = "tomato", lwd = 1.5)

The method used above shows that the chance of observing a difference of .131 is incredibly unlikely if in fact deaths were independent of Gilbert being on shift. On this point, the statisticians on the case agreed that they could rule out random chance as an explanation for this difference. Something else must have been happening.

## Elements of a Hypothesis Test

The logic used by the statisticians in the Gilbert case is an example of a hypothesis test. There are a few key components common to every hypothesis test, so we'll lay them out one-by-one. A hypothesis test begins with the assertion of a null hypothesis.

Null Hypothesis: A description of the chance process for generating data. Sometimes referred to as $$H_0$$ ("H naught").

It is common for the null hypothesis to be that nothing interesting is happening or that it is business as usual, a hypothesis that statisticians try to refute with data. In the Gilbert case, this could be described as "The occurrence of a death is independent of the presence of Gilbert" or "The probability of death is the same whether or not Gilbert is on shift" or "The difference in the probability of death is zero, when comparing shifts where Gilbert is present to shifts where Gilbert is not present". Importantly, the null model describes a possible state of the world, therefore the latter two versions are framed in terms of parameters ($$p$$ for proportions) instead of observed statistics ($$\hat{p}$$). The hypothesis that something indeed is going on is usually framed as the alternative hypothesis.

Alternative Hypothesis: The assertion that a mechanism other than the null hypothesis generated the data. Sometimes referred to as $$H_A$$ ("H-A").

In the Gilbert case, the corresponding alternative hypothesis is that "The occurrence of a death is dependent on the presence of Gilbert" or "The probability of death is different whether or not Gilbert is on shift" or "The difference in the probability of death is non-zero, when comparing shifts where Gilbert is present to shifts where Gilbert is not present".

In order to determine whether the observed data is consistent with the null hypothesis, it is necessary to compress the data down into a single statistic.

Test Statistic: A numerical summary of the observed data that bears on the null hypothesis. Under the null hypothesis it has a sampling distribution (also called a "Null Distribution").

In Gilbert's case, a difference in two proportions, $$\hat{p}_1 - \hat{p}_2$$, is a natural test statistic and the observed test statistic was .131. It's not enough, though, to just compute the observed statistic. We need to know how likely this statistic would be in a world where the null hypothesis is true. This probability is captured in the notion of a p-value.

p-value: The probability of a test statistic as rare or even more rare than the one observed under the assumptions of the null hypothesis.

If the p-value is high, then the data is consistent with the null hypothesis. If the p-value is very low, however, then the statistic that was observed would be very unlikely in a world where the null hypothesis was true. As a consequence, the null hypothesis can be rejected as a reasonable model for the data.
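Though these notes work in R throughout, the recipe is small enough to sketch in a few lines of any language. Here is a hedged Python version, not from the original notes, that rebuilds the Gilbert permutation p-value from the counts reported above (257 Gilbert shifts with 40 deaths, 1384 other shifts with 34 deaths); the placement of the deaths within each group is a synthetic assumption.

import numpy as np

rng = np.random.default_rng(0)
g = np.zeros(1641, dtype=bool); g[:257] = True                     # Gilbert on shift
x = np.zeros(1641, dtype=bool); x[:40] = True; x[300:334] = True   # death on shift

def diff_in_props(x, g):
    # Proportion of deaths on Gilbert shifts minus the proportion elsewhere.
    return x[g].mean() - x[~g].mean()

obs = diff_in_props(x, g)  # ~ .131, matching the observed statistic
null = np.array([diff_in_props(x, rng.permutation(g)) for _ in range(5000)])
p_value = np.mean(np.abs(null) >= abs(obs))  # two-sided; comes out 0 here
print(obs, p_value)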
The p-value can be estimated using the proportion of statistics from the simulated null distribution that are as or more extreme than the observed statistic. In the simulation for the Gilbert case, there were 0 statistics greater than .131, so the estimated p-value is zero.

## What a p-value is not

The p-value has been called the most used as well as the most abused tool in statistics. Here are three common misinterpretations to be wary of.

1. The p-value is the probability that the null hypothesis is true (FALSE!) This is one of the most common confusions about p-values. Graphically, a p-value corresponds to the area in the tail of the null distribution that is more extreme than the observed test statistic. That null distribution can only be created if you assume that the null hypothesis is true. The p-value is fundamentally a conditional probability of observing the statistic (or more extreme) given the null hypothesis is true. It is flawed reasoning to start with an assumption that the null hypothesis is true and arrive at a probability of that same assumption.

2. A very high p-value suggests that the null hypothesis is true (FALSE!) This interpretation is related to the first one but can lead to particularly wrongheaded decisions. One way to keep your interpretation of a p-value straight is to recall the distinction made in the US court system. A trial proceeds under the assumption that the defendant is innocent. The prosecution presents evidence of guilt. If the evidence is convincing the jury will render a verdict of "guilty". If the evidence is not convincing (that is, the p-value is high) then the jury will render a verdict of "not guilty" - not a verdict of "innocent". Imagine a setting where the prosecution has presented no evidence at all. That by no means indicates that the defendant is innocent, just that there was insufficient evidence to establish guilt.

3. The p-value is the probability of the data (FALSE!) This statement has a semblance of truth to it but is missing an important qualifier. The probability is calculated based on the null distribution, which requires the assumption that the null hypothesis is true. It's also not quite specific enough. Most often p-values are calculated as probabilities of test statistics, not probabilities of the full data sets.

Another more basic check on your understanding of a p-value: a p-value is a (conditional) probability, therefore it must be a number between 0 and 1. If you ever find yourself computing a p-value of -6 or 3.2, be sure to pause and revisit your calculations!

## One test, many variations

The hypothesis testing framework laid out above is far more general than just this particular example from the case of Kristen Gilbert where we computed a difference in proportions and used shuffling (aka permutation) to build the null distribution. Below are just a few different research questions that could be addressed using a hypothesis test.

• Pollsters have surveyed a sample of 200 voters ahead of an election to assess their relative support for the Republican and Democratic candidate. The observed difference in those proportions is .02. Is this consistent with the notion of evenly split support for the two candidates, or is one decidedly in the lead?

• Brewers have tapped 7 barrels of beer and measured the average level of a compound related to the acidity of the beer as 610 parts per million. The acceptable level for this compound is 500 parts per million.
Is this average of 610 consistent with the notion that the average of the whole batch of beer (many hundreds of barrels) is at the acceptable level of this compound?

• A random sample of 40 users of a food delivery app were randomly assigned two different versions of a menu where they entered the amount of their tip: one with the tip amount in ascending order, the other in descending order. The average tip amount of those with the menu in ascending order was found to be $3.87 while the average tip of the users in the descending order group was $3.96. Could this difference in averages be explained by chance?

Although the contexts of these problems are very different, as are the types of statistics they've calculated, they can still be characterized as a hypothesis test by asking the following questions:

1. What is the null hypothesis used by the researchers?
2. What is the value of the observed test statistic?
3. How did researchers approximate the null distribution?
4. What was the p-value, what does it tell us and what does it not tell us?

## Summary

In classical statistics there are two primary tools for assessing the role that random variability plays in the data that you have observed. The first is the confidence interval, which quantifies the amount of uncertainty in a point estimate due to the variability inherent in drawing a small random sample from a population. The second is the hypothesis test, which posits a specific model by which the data could be generated, then assesses the degree to which the observed data is consistent with that model.

The hypothesis test begins with the assertion of a null hypothesis that describes a chance mechanism for generating data. A test statistic is then selected that corresponds to that null hypothesis. From there, the sampling distribution of that statistic under the null hypothesis is approximated through a computational method (such as using permutation, as shown here) or one rooted in probability theory (such as the Central Limit Theorem). The final result of the hypothesis test procedure is the p-value, which is approximated as the proportion of the null distribution that is as or more extreme than the observed test statistic. The p-value measures the consistency between the null hypothesis and the observed test statistic and should be interpreted carefully.

A postscript on the case of Kristen Gilbert. Although the hypothesis test ruled out random chance as the reason for the spike in deaths under her watch, it didn't rule out other potential causes for that spike. It's possible, after all, that the shifts Gilbert was working happened to be the times of day when cardiac arrests are more common. For this reason, the statistical evidence was never presented to the jury, but the jury nonetheless found her guilty based on other evidence presented in the trial.

## The Ideas in Code

A hypothesis test using permutation can be implemented by introducing one new step into the process used for calculating a bootstrap interval. The key distinction is that in a hypothesis test the researcher puts forth a model for how the data could be generated. That is the role of hypothesize().

#### hypothesize()

A function to place before generate() in an infer pipeline where you can specify a null model under which to generate data. The one necessary argument is

• null: the null hypothesis. Options include "independence" and "point".
The following example implements a permutation test under the null hypothesis that there is no relationship between the body mass of penguins and their sex.

library(tidyverse)
library(stat20data)
library(infer)
penguins %>%
  specify(response = body_mass_g, explanatory = sex) %>%
  hypothesize(null = "independence")

Response: body_mass_g (numeric)
Explanatory: sex (factor)
Null Hypothesis: independence
# A tibble: 333 × 2
   body_mass_g sex
         <dbl> <fct>
 1        3750 male
 2        3800 female
 3        3250 female
 4        3450 female
 5        3650 male
 6        3625 female
 7        4675 male
 8        3200 female
 9        3800 male
10        4400 male
# … with 323 more rows

Observe:

• The output is the original data frame with new information appended to describe what the null hypothesis is for this data set.
• There are other forms of hypothesis tests that you will see involving a "point" null hypothesis. Those require adding additional arguments to hypothesize().

#### Calculating an observed statistic

Let's say for this example you select as your test statistic a difference in means, $$\bar{x}_{female} - \bar{x}_{male}$$. While you can use tools you know - group_by() and summarize() - to calculate this statistic, you can also recycle much of the code that you'll use to build the null distribution with infer.

obs_stat <- penguins %>%
  specify(response = body_mass_g, explanatory = sex) %>%
  calculate(stat = "diff in means")
obs_stat

Response: body_mass_g (numeric)
Explanatory: sex (factor)
# A tibble: 1 × 1
   stat
  <dbl>
1 -683.

#### Calculating the null distribution

To generate a null distribution of the kind of differences in means that you'd observe in a world where body mass had nothing to do with sex, just add the hypothesis with hypothesize() and the generation mechanism with generate().

null <- penguins %>%
  specify(response = body_mass_g, explanatory = sex) %>%
  hypothesize(null = "independence") %>%
  generate(reps = 500, type = "permute") %>%
  calculate(stat = "diff in means")
null

Response: body_mass_g (numeric)
Explanatory: sex (factor)
Null Hypothesis: independence
# A tibble: 500 × 2
   replicate   stat
       <int>  <dbl>
 1         1  -59.9
 2         2 -125.
 3         3   68.9
 4         4   37.1
 5         5  129.
 6         6   36.5
 7         7   -7.08
 8         8   60.8
 9         9  -16.7
10        10  -63.8
# … with 490 more rows

Observe:

• The output data frame has reps rows and 2 columns: one indicating the replicate and the other with the statistic (a difference in means).

## Footnotes

1. This case study appears in Statistics in the Courtroom: United States v. Kristen Gilbert by Cobb and Gelbach, published in Statistics: A Guide to the Unknown by Peck et al.
2. The technical notion that motivates the use of shuffling is a slightly more general notion than independence called exchangeability. The distinction between these two related concepts is a topic in a course in probability.
2023-03-30 08:49:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5739740133285522, "perplexity": 996.2048565497711}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949107.48/warc/CC-MAIN-20230330070451-20230330100451-00458.warc.gz"}
https://socratic.org/questions/circle-a-has-a-center-at-1-8-and-an-area-of-18-pi-circle-b-has-a-center-at-8-1-a
# Circle A has a center at (1, 8) and an area of 18 pi. Circle B has a center at (8, 1) and an area of 45 pi. Do the circles overlap?

Nov 21, 2016

The circles overlap

#### Explanation:

Circle A: $\text{area} = \pi r_A^2 = 18\pi$, so $r_A = \sqrt{18} = 3\sqrt{2}$

Circle B: $\text{area} = \pi r_B^2 = 45\pi$, so $r_B = \sqrt{45} = 3\sqrt{5}$

The distance between the centers of the circles $A(x_A, y_A)$ and $B(x_B, y_B)$ is

$d = \sqrt{(x_B - x_A)^2 + (y_B - y_A)^2}$

$d = \sqrt{(8-1)^2 + (1-8)^2} = \sqrt{7^2 + 7^2} = 7\sqrt{2} \approx 9.9$

The sum of the radii is $r_A + r_B = 3\sqrt{2} + 3\sqrt{5} \approx 10.95$

Therefore, $d \le r_A + r_B$, so the circles overlap.

The equations of the circles are

$(x-1)^2 + (y-8)^2 = 18$

$(x-8)^2 + (y-1)^2 = 45$

graph{((x-1)^2+(y-8)^2-18)((x-8)^2+(y-1)^2-45)=0 [-22.45, 23.15, -5.73, 17.09]}
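Not part of the original answer: a quick numerical check of the same comparison in Python.

import math

rA = math.sqrt(18)             # 3*sqrt(2) ~ 4.243
rB = math.sqrt(45)             # 3*sqrt(5) ~ 6.708
d = math.dist((1, 8), (8, 1))  # 7*sqrt(2) ~ 9.899
print(d < rA + rB)             # True -> the circles overlap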
2022-07-06 14:16:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 12, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7309931516647339, "perplexity": 3056.826700081376}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104672585.89/warc/CC-MAIN-20220706121103-20220706151103-00237.warc.gz"}
https://openstax.org/books/elementary-algebra-2e/pages/1-7-decimals
Elementary Algebra 2e

# 1.7 Decimals

### Learning Objectives

By the end of this section, you will be able to:

• Name and write decimals
• Round decimals
• Add and subtract decimals
• Multiply and divide decimals
• Convert decimals, fractions, and percents

Be Prepared 1.7 A more thorough introduction to the topics covered in this section can be found in the Prealgebra chapter, Decimals.

### Name and Write Decimals

Decimals are another way of writing fractions whose denominators are powers of 10.

$0.1 = \frac{1}{10}$; 0.1 is “one tenth”
$0.01 = \frac{1}{100}$; 0.01 is “one hundredth”
$0.001 = \frac{1}{1,000}$; 0.001 is “one thousandth”
$0.0001 = \frac{1}{10,000}$; 0.0001 is “one ten-thousandth”

Notice that “ten thousand” is a number larger than one, but “one ten-thousandth” is a number smaller than one. The “th” at the end of the name tells you that the number is smaller than one.

When we name a whole number, the name corresponds to the place value based on the powers of ten. We read 10,000 as “ten thousand” and 10,000,000 as “ten million.” Likewise, the names of the decimal places correspond to their fraction values. Figure 1.14 shows the names of the place values to the left and right of the decimal point.

Figure 1.14 Place value of decimal numbers are shown to the left and right of the decimal point.

### Example 1.91

#### How to Name Decimals

Name the decimal 4.3.

Try It 1.181 Name the decimal: 6.7.

Try It 1.182 Name the decimal: 5.8.

We summarize the steps needed to name a decimal below.

### How To

#### Name a Decimal.

1. Step 1. Name the number to the left of the decimal point.
2. Step 2. Write “and” for the decimal point.
3. Step 3. Name the “number” part to the right of the decimal point as if it were a whole number.
4. Step 4. Name the decimal place of the last digit.

### Example 1.92

Name the decimal: −15.571.

Try It 1.183 Name the decimal: −13.461.

Try It 1.184 Name the decimal: −2.053.

When we write a check we write both the numerals and the name of the number. Let’s see how to write the decimal from the name.

### Example 1.93

#### How to Write Decimals

Write “fourteen and twenty-four thousandths” as a decimal.

Try It 1.185 Write as a decimal: thirteen and sixty-eight thousandths.

Try It 1.186 Write as a decimal: five and ninety-four thousandths.

We summarize the steps to writing a decimal.

### How To

#### Write a decimal.

1. Step 1. Look for the word “and”—it locates the decimal point.
   • Place a decimal point under the word “and.” Translate the words before “and” into the whole number and place it to the left of the decimal point.
   • If there is no “and,” write a “0” with a decimal point to its right.
2. Step 2. Mark the number of decimal places needed to the right of the decimal point by noting the place value indicated by the last word.
3. Step 3. Translate the words after “and” into the number to the right of the decimal point. Write the number in the spaces—putting the final digit in the last place.
4. Step 4. Fill in zeros for place holders as needed.

### Round Decimals

Rounding decimals is very much like rounding whole numbers. We will round decimals with a method based on the one we used to round whole numbers.

### Example 1.94

#### How to Round Decimals

Round 18.379 to the nearest hundredth.

Try It 1.187 Round to the nearest hundredth: 1.047.

Try It 1.188 Round to the nearest hundredth: 9.173.

We summarize the steps for rounding a decimal here.
### How To

#### Round Decimals.

1. Step 1. Locate the given place value and mark it with an arrow.
2. Step 2. Underline the digit to the right of the place value.
3. Step 3. Is this digit greater than or equal to 5?
   • Yes—add 1 to the digit in the given place value.
   • No—do not change the digit in the given place value.
4. Step 4. Rewrite the number, deleting all digits to the right of the rounding digit.

### Example 1.95

Round 18.379 to the nearest ⓐ tenth ⓑ whole number.

Try It 1.189 Round 6.582 to the nearest ⓐ hundredth ⓑ tenth ⓒ whole number.

Try It 1.190 Round 15.2175 to the nearest ⓐ thousandth ⓑ hundredth ⓒ tenth.

### Add and Subtract Decimals

To add or subtract decimals, we line up the decimal points. By lining up the decimal points this way, we can add or subtract the corresponding place values. We then add or subtract the numbers as if they were whole numbers and then place the decimal point in the sum.

### How To

#### Add or Subtract Decimals.

1. Step 1. Write the numbers so the decimal points line up vertically.
2. Step 2. Use zeros as place holders, as needed.
3. Step 3. Add or subtract the numbers as if they were whole numbers. Then place the decimal point in the answer under the decimal points in the given numbers.

### Example 1.96

Add: 23.5 + 41.38.

Try It 1.191 Add: 4.8 + 11.69.

Try It 1.192 Add: 5.123 + 18.47.

### Example 1.97

Subtract: 20 − 14.65.

Try It 1.193 Subtract: 10 − 9.58.

Try It 1.194 Subtract: 50 − 37.42.

### Multiply and Divide Decimals

Multiplying decimals is very much like multiplying whole numbers—we just have to determine where to place the decimal point. The procedure for multiplying decimals will make sense if we first convert them to fractions and then multiply. So let’s see what we would get as the product of decimals by converting them to fractions first. We will do two examples side-by-side. Look for a pattern! (In the original, each example is worked in three rows: convert to fractions, multiply, convert back to decimals.)

Notice, in the first example, we multiplied two numbers that each had one digit after the decimal point and the product had two decimal places. In the second example, we multiplied a number with one decimal place by a number with two decimal places and the product had three decimal places. We multiply the numbers just as we do whole numbers, temporarily ignoring the decimal point. We then count the number of decimal places in the factors and that sum tells us the number of decimal places in the product.

The rules for multiplying positive and negative numbers apply to decimals, too, of course! When multiplying two numbers,

• if their signs are the same the product is positive.
• if their signs are different the product is negative.

When we multiply signed decimals, first we determine the sign of the product and then multiply as if the numbers were both positive. Finally, we write the product with the appropriate sign.

### How To

#### Multiply Decimals.

1. Step 1. Determine the sign of the product.
2. Step 2. Write in vertical format, lining up the numbers on the right. Multiply the numbers as if they were whole numbers, temporarily ignoring the decimal points.
3. Step 3. Place the decimal point. The number of decimal places in the product is the sum of the number of decimal places in the factors.
4. Step 4. Write the product with the appropriate sign.
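Not part of the OpenStax text: the place-counting rule can be checked with exact decimal arithmetic in Python, here using the numbers from Example 1.98 below.

from decimal import Decimal

# One decimal place times three decimal places gives four decimal places;
# the factors' signs differ, so the product is negative.
product = Decimal("-3.9") * Decimal("4.075")
print(product)  # -15.8925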
### Example 1.98

Multiply: (−3.9)(4.075).

Try It 1.195 Multiply: −4.5(6.107).

Try It 1.196 Multiply: −10.79(8.12).

In many of your other classes, especially in the sciences, you will multiply decimals by powers of 10 (10, 100, 1000, etc.). If you multiply a few products on paper, you may notice a pattern relating the number of zeros in the power of 10 to the number of decimal places we move the decimal point to the right to get the product.

### How To

#### Multiply a Decimal by a Power of Ten.

1. Step 1. Move the decimal point to the right the same number of places as the number of zeros in the power of 10.
2. Step 2. Add zeros at the end of the number as needed.

### Example 1.99

Multiply 5.63 ⓐ by 10 ⓑ by 100 ⓒ by 1,000.

Try It 1.197 Multiply 2.58 ⓐ by 10 ⓑ by 100 ⓒ by 1,000.

Try It 1.198 Multiply 14.2 ⓐ by 10 ⓑ by 100 ⓒ by 1,000.

Just as with multiplication, division of decimals is very much like dividing whole numbers. We just have to figure out where the decimal point must be placed. To divide decimals, determine what power of 10 to multiply the denominator by to make it a whole number. Then multiply the numerator by that same power of 10. Because of the equivalent fractions property, we haven’t changed the value of the fraction! The effect is to move the decimal points in the numerator and denominator the same number of places to the right. For example:

$\frac{0.8}{0.4} = \frac{0.8(10)}{0.4(10)} = \frac{8}{4}$

We use the rules for dividing positive and negative numbers with decimals, too. When dividing signed decimals, first determine the sign of the quotient and then divide as if the numbers were both positive. Finally, write the quotient with the appropriate sign.

We review the notation and vocabulary for division: the dividend $a$ divided by the divisor $b$ equals the quotient $c$, written $a \div b = c$; in long-division form, the divisor $b$ sits outside, the dividend $a$ inside, and the quotient $c$ on top.

We’ll write the steps to take when dividing decimals, for easy reference.

### How To

#### Divide Decimals.

1. Step 1. Determine the sign of the quotient.
2. Step 2. Make the divisor a whole number by “moving” the decimal point all the way to the right. “Move” the decimal point in the dividend the same number of places—adding zeros as needed.
3. Step 3. Divide. Place the decimal point in the quotient above the decimal point in the dividend.
4. Step 4. Write the quotient with the appropriate sign.

### Example 1.100

Divide: −25.65 ÷ (−0.06).

Try It 1.199 Divide: −23.492 ÷ (−0.04).

Try It 1.200 Divide: −4.11 ÷ (−0.12).

A common application of dividing whole numbers into decimals is when we want to find the price of one item that is sold as part of a multi-pack. For example, suppose a case of 24 water bottles costs $3.99. To find the price of one water bottle, we would divide $3.99 by 24. We show this division in Example 1.101. In calculations with money, we will round the answer to the nearest cent (hundredth).

### Example 1.101

Divide: $3.99 ÷ 24.

Try It 1.201 Divide: $6.99 ÷ 36.

Try It 1.202 Divide: $4.99 ÷ 12.

### Convert Decimals, Fractions, and Percents

We convert decimals into fractions by identifying the place value of the last (farthest right) digit. In the decimal 0.03 the 3 is in the hundredths place, so 100 is the denominator of the fraction equivalent to 0.03.

$0.03 = \frac{3}{100}$

Notice, when the number to the left of the decimal is zero, we get a fraction whose numerator is less than its denominator. Fractions like this are called proper fractions.
The steps to take to convert a decimal to a fraction are summarized in the procedure box.

### How To

#### Convert a Decimal to a Proper Fraction.

1. Step 1. Determine the place value of the final digit.
2. Step 2. Write the fraction.
   • numerator—the “numbers” to the right of the decimal point
   • denominator—the place value corresponding to the final digit

### Example 1.102

Write 0.374 as a fraction.

Try It 1.203 Write 0.234 as a fraction.

Try It 1.204 Write 0.024 as a fraction.

We’ve learned to convert decimals to fractions. Now we will do the reverse—convert fractions to decimals. Remember that the fraction bar means division. So $\frac{4}{5}$ can be written $4 \div 5$ or $5\overline{)4}$. This leads to the following method for converting a fraction to a decimal.

### How To

#### Convert a Fraction to a Decimal.

To convert a fraction to a decimal, divide the numerator of the fraction by the denominator of the fraction.

### Example 1.103

Write $-\frac{5}{8}$ as a decimal.

Try It 1.205 Write $-\frac{7}{8}$ as a decimal.

Try It 1.206 Write $-\frac{3}{8}$ as a decimal.

When we divide, we will not always get a zero remainder. Sometimes the quotient ends up with a decimal that repeats. A repeating decimal is a decimal in which the last digit or group of digits repeats endlessly. A bar is placed over the repeating block of digits to indicate it repeats.

### Repeating Decimal

A repeating decimal is a decimal in which the last digit or group of digits repeats endlessly. A bar is placed over the repeating block of digits to indicate it repeats.

### Example 1.104

Write $\frac{43}{22}$ as a decimal.

Try It 1.207 Write $\frac{27}{11}$ as a decimal.

Try It 1.208 Write $\frac{51}{22}$ as a decimal.

Sometimes we may have to simplify expressions with fractions and decimals together.

### Example 1.105

Simplify: $\frac{7}{8} + 6.4$.

Try It 1.209 Simplify: $\frac{3}{8} + 4.9$.

Try It 1.210 Simplify: $5.7 + \frac{13}{20}$.

A percent is a ratio whose denominator is 100. Percent means per hundred. We use the percent symbol, %, to show percent.

### Percent

A percent is a ratio whose denominator is 100.

Since a percent is a ratio, it can easily be expressed as a fraction. Percent means per 100, so the denominator of the fraction is 100. We then change the fraction to a decimal by dividing the numerator by the denominator. For 6%, 78%, and 135%: write each as a ratio with denominator 100 to get $\frac{6}{100}$, $\frac{78}{100}$, and $\frac{135}{100}$; then change each fraction to a decimal by dividing the numerator by the denominator to get 0.06, 0.78, and 1.35. (Table 1.30)

Do you see the pattern? To convert a percent number to a decimal number, we move the decimal point two places to the left.

### Example 1.106

Convert each percent to a decimal: ⓐ 62% ⓑ 135% ⓒ 35.7%.

Try It 1.211 Convert each percent to a decimal: ⓐ 9% ⓑ 87% ⓒ 3.9%.

Try It 1.212 Convert each percent to a decimal: ⓐ 3% ⓑ 91% ⓒ 8.3%.

Converting a decimal to a percent makes sense if we remember the definition of percent and keep place value in mind. To convert a decimal to a percent, remember that percent means per hundred. If we change the decimal to a fraction whose denominator is 100, it is easy to change that fraction to a percent. For 0.83, 1.05, and 0.075: write each as a fraction to get $\frac{83}{100}$, $1\frac{5}{100}$, and $\frac{75}{1000}$; rewrite each with denominator 100 to get $\frac{83}{100}$, $\frac{105}{100}$, and $\frac{7.5}{100}$; then write each ratio as a percent: 83%, 105%, 7.5%. (Table 1.31)

Recognize the pattern? To convert a decimal to a percent, we move the decimal point two places to the right and then add the percent sign.
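Not part of the OpenStax text: Python's fractions module applies the same place-value reasoning automatically, which makes a handy check of the conversions above.

from fractions import Fraction

print(Fraction("0.374"))       # 187/500  (0.374 = 374/1000, reduced)
print(-5 / 8)                  # -0.625   (fraction to decimal: divide)
print(Fraction("0.51") * 100)  # 51       (decimal to percent: times 100)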
### Example 1.107

Convert each decimal to a percent: ⓐ 0.51 ⓑ 1.25 ⓒ 0.093.

Try It 1.213 Convert each decimal to a percent: ⓐ 0.17 ⓑ 1.75 ⓒ 0.0825.

Try It 1.214 Convert each decimal to a percent: ⓐ 0.41 ⓑ 2.25 ⓒ 0.0925.

### Section 1.7 Exercises

#### Practice Makes Perfect

Name and Write Decimals

In the following exercises, write as a decimal.

531. Twenty-nine and eighty-one hundredths 532. Sixty-one and seventy-four hundredths 533. Seven tenths 534. Six tenths 535. Twenty-nine thousandths 536. Thirty-five thousandths 537. Negative eleven and nine ten-thousandths 538. Negative fifty-nine and two ten-thousandths

In the following exercises, name each decimal.

539. 5.5 540. 14.02 541. 8.71 542. 2.64 543. 0.002 544. 0.479 545. −17.9 546. −31.4

Round Decimals

In the following exercises, round each number to the nearest tenth.

547. 0.67 548. 0.49 549. 2.84 550. 4.63

In the following exercises, round each number to the nearest hundredth.

551. 0.845 552. 0.761 553. 0.299 554. 0.697 555. 4.098 556. 7.096

In the following exercises, round each number to the nearest ⓐ hundredth ⓑ tenth ⓒ whole number.

557. 5.781 558. 1.6381 559. 63.479 560. 84.281

Add and Subtract Decimals

In the following exercises, add or subtract.

561. 16.92 + 7.56 562. 248.25 − 91.29 563. 21.76 − 30.99 564. 38.6 + 13.67 565. −16.53 − 24.38 566. −19.47 − 32.58 567. −38.69 + 31.47 568. 29.83 + 19.76 569. 72.5 − 100 570. 86.2 − 100 571. 15 + 0.73 572. 27 + 0.87 573. 91.95 − (−10.462) 574. 94.69 − (−12.678) 575. 55.01 − 3.7 576. 59.08 − 4.6 577. 2.51 − 7.4 578. 3.84 − 6.1

Multiply and Divide Decimals

In the following exercises, multiply.

579. (0.24)(0.6) 580. (0.81)(0.3) 581. (5.9)(7.12) 582. (2.3)(9.41) 583. (−4.3)(2.71) 584. (−8.5)(1.69) 585. (−5.18)(−65.23) 586. (−9.16)(−68.34) 587. (0.06)(21.75) 588. (0.08)(52.45) 589. (9.24)(10) 590. (6.531)(10) 591. (55.2)(1000) 592. (99.4)(1000)

In the following exercises, divide.

593. 4.75 ÷ 25 594. 12.04 ÷ 43 595. 117.25 ÷ 48 596. 109.24 ÷ 36 597. 0.6 ÷ 0.2 598. 0.8 ÷ 0.4 599. 1.44 ÷ (−0.3) 600. 1.25 ÷ (−0.5) 601. −1.75 ÷ (−0.05) 602. −1.15 ÷ (−0.05) 603. 5.2 ÷ 2.5 604. 6.5 ÷ 3.25 605. 11 ÷ 0.55 606. 14 ÷ 0.35

Convert Decimals, Fractions and Percents

In the following exercises, write each decimal as a fraction.

607. 0.04 608. 0.19 609. 0.52 610. 0.78 611. 1.25 612. 1.35 613. 0.375 614. 0.464 615. 0.095 616. 0.085

In the following exercises, convert each fraction to a decimal.

617. 17/20 618. 13/20 619. 11/4 620. 17/4 621. −310/25 622. −284/25 623. 15/11 624. 18/11 625. 15/111 626. 25/111 627. 2.4 + 5/8 628. 3.9 + 9/20

In the following exercises, convert each percent to a decimal.

629. 1% 630. 2% 631. 63% 632. 71% 633. 150% 634. 250% 635. 21.4% 636. 39.3% 637. 7.8% 638. 6.4%

In the following exercises, convert each decimal to a percent.

639. 0.01 640. 0.03 641. 1.35 642. 1.56 643. 3 644. 4 645. 0.0875 646. 0.0625 647. 2.254 648. 2.317

#### Everyday Math

649. Salary Increase Danny got a raise and now makes $58,965.95 a year.
Round this number to the nearest dollar, thousand dollars, and ten thousand dollars.

650. New Car Purchase: Selena’s new car cost $23,795.95. Round this number to the nearest dollar, thousand dollars, and ten thousand dollars.

651. Sales Tax: Hyo Jin lives in San Diego. She bought a refrigerator for $1,624.99, and when the clerk calculated the sales tax it came out to exactly $142.186625. Round the sales tax to the nearest penny and dollar.

652. Sales Tax: Jennifer bought a $1,038.99 dining room set for her home in Cincinnati. She calculated the sales tax to be exactly $67.53435. Round the sales tax to the nearest penny and dollar.

653. Paycheck: Annie has two jobs. She gets paid $14.04 per hour for tutoring at City College and $8.75 per hour at a coffee shop. Last week she tutored for 8 hours and worked at the coffee shop for 15 hours. How much did she earn? If she had worked all 23 hours as a tutor instead of working both jobs, how much more would she have earned?

654. Paycheck: Jake has two jobs. He gets paid $7.95 per hour at the college cafeteria and $20.25 at the art gallery. Last week he worked 12 hours at the cafeteria and 5 hours at the art gallery. How much did he earn? If he had worked all 17 hours at the art gallery instead of working both jobs, how much more would he have earned?

#### Writing Exercises

655.

656. Explain how you write “three and nine hundredths” as a decimal.

657. Without solving the problem “44 is 80% of what number,” think about what the solution might be. Should it be a number that is greater than 44 or less than 44? Explain your reasoning.

658. When the Szetos sold their home, the selling price was 500% of what they had paid for the house 30 years ago. Explain what 500% means in this context.

#### Self Check

After completing the exercises, use this checklist to evaluate your mastery of the objectives of this section. What does this checklist tell you about your mastery of this section? What steps will you take to improve?
http://www.dpreview.com/members/8133626077/comments
On Nikon Df Review preview (1628 comments in total):

smphoto58: “I was not aware of this body until today. This is an answer to my prayers. I HATE menus, always have. I have been using a D700 for the last 4 years and I STILL cannot get used to them. Nor do I like command dials either. I began using Nikon equipment in 1974 with the F2 and I still have it along with the MD2/MB1. I would bet I have put close to a mile of film through it since then. Since then I have added an F4, Nikkormat FT3 and Nikonos V. My D700 is used in manual about 99% of the time. For me it is nothing more than a digital version of my F2 or F4. All of my Nikkors, from 16mm f/2.8 fisheye to 600mm f/4, are AI/AIS, so this fits perfectly. Although Nikon does not offer interchangeable focusing screens, focusingscree.com does. I ordered a microprism screen from them before my D700 ever arrived. Thank you Nikon for catering to your long time loyal followers!”

Nikon has never been able to make practical interfaces on digital cameras: too confusing, too many menus. I sold my Nikon and bought a Sony; the interface is so elegant and straightforward I have never opened the manual.

Posted on Jan 16, 2015 at 02:11 UTC

On Video preview of the Canon Powershot G7 X article (77 comments in total):

Nice, but I think Sony trumps them in two areas: video and panoramas. From my experience no one is as good at panoramas as Sony.

Posted on Sep 17, 2014 at 09:03 UTC

I have both Photoshop and PaintShop Pro. I confess I prefer the Corel user interface much more than any Adobe products. All of the Photoshop products feel like their interface was lifted from the 1980s Adobe Illustrator, which no mortal could decipher. I find myself using PaintShop for this reason; its image browsing functions better. However, X2 was better than X6. My only complaint is about plug-in compatibility.

Posted on Aug 29, 2014 at 10:21 UTC

On Nikon 1 V3 First Impressions Review preview (432 comments in total):

$1200 to get a camera that is too big to put in your pocket, has few lenses, and costs as much as an APS-C DSLR; what is wrong with this picture? Gluing together technology is a nice science experiment but bad product development. There is effectively no price-point space between DSLRs and point-and-shoot cameras to slide in another alternative. To make this camera financially interesting it would need to be in the $500 range.

Posted on Jul 1, 2014 at 09:41 UTC

On Sony Cyber-shot DSC-RX100 III First Impressions Review preview (2962 comments in total):

TFD: “Nice camera, however long past pocketable. I have several Sony micro Cybershot cameras that are pocketable and I use them this way, and yes I understand the image quality trade-off over my A77. For me, once a camera does not fit in my pocket, meaning using a bag or a strap, I pretty much do not care if it weighs 2 lbs. or 5 lbs. Given the RX’s questionable pocketability, if I was going to buy one I think it would be the RX10. My back pocket is already reserved for my iPhone.”

Side by side the S90 is rather a lot smaller than the RX: the S90 is 1.1 inches thick, the RX100 III is 1.6 inches; that makes it 45% thicker.

Posted on May 27, 2014 at 09:38 UTC

On Sony Cyber-shot DSC-RX100 III First Impressions Review preview (2962 comments in total):

Nice camera, however long past pocketable.
I have several Sony micro Cybershot cameras that are pocketable and I use them this way, and yes I understand the image quality trade-off over my A77. For me, once a camera does not fit in my pocket, meaning using a bag or a strap, I pretty much do not care if it weighs 2 lbs. or 5 lbs. Given the RX’s questionable pocketability, if I was going to buy one I think it would be the RX10.

Posted on May 21, 2014 at 09:28 UTC

On Canon PowerShot G1 X Mark II Review preview (689 comments in total):

forpetessake: “‘24-120mm equivalent F2.0-3.9 lens’ When manufacturers lie it’s called an advertisement. But of all sites DPR should know better than repeating the lies and leading the ignorant readers astray. The lens is 12.5-62.5mm F2.0-3.9, not the stated equivalent. The FF equivalent lens would be 24-120mm F3.8-7.5 -- a big difference.”

Technically, the f-number $N$ is given by $N = \frac{f}{D}$, where $f$ is the focal length and $D$ is the diameter of the entrance pupil, and it is independent of the imaging area.

Posted on May 8, 2014 at 10:24 UTC

On Hands on with the Pentax 645Z article (660 comments in total):

37,000 vs 8,500 is not an order of magnitude.

Posted on Apr 15, 2014 at 06:39 UTC

On CP+ 2014: Hands-on with Canon PowerShot G1 X Mark II article (200 comments in total):

Kind of big; might just as well buy a DSLR.

Posted on Feb 13, 2014 at 10:46 UTC

On Nikon Coolpix P600, P530, S9700 go big on zoom range article (45 comments in total):

I wonder if Nikon ever fixed their awful user interface; they were (are) the kings of death by a thousand menus.

Posted on Feb 7, 2014 at 09:15 UTC

On Fujifilm teases upcoming SLR-style X system camera article (919 comments in total):

I hope there is an optional, retro flashbulb flash using #5 bulbs.

Posted on Jan 20, 2014 at 13:57 UTC

Why not just get Canon or Nikon to make cameras with a built-in cell phone.

Posted on Jan 14, 2014 at 09:51 UTC

On Nikon Df Review preview (1628 comments in total):

They should have built it with a mechanical shutter, a mechanical self timer, and match-needle exposure. An optional external flashbulb flash using number 5 flashbulbs :) Retro technology; gee, if it was cheap and cheerful that would be great, not at these prices.

Posted on Dec 20, 2013 at 21:31 UTC

On Nikon Df preview (2816 comments in total):

TFD: “Is there an optional retro flashbulb flash taking big #5 flash bulbs? I hope it has an analogue match-needle exposure meter and a mechanical shutter; it cannot be retro without the sound of the mechanical shutter on long exposures. Of course the original Nikon F’s were manual focus, so not sure that autofocus should be available on a retro camera; all you should need is a split screen viewfinder and your eye.”

For me retro technology is not logical. No one is going to buy a tube TV or an x286 computer. The Df is just a silly marketing scheme, so given that it is a marketing scheme, why not go all the way back; why not just let form overtake function.

Posted on Nov 25, 2013 at 16:34 UTC

On Nikon Df preview (2816 comments in total):

Is there an optional retro flashbulb flash taking big #5 flash bulbs? I hope it has an analogue match-needle exposure meter and a mechanical shutter; it cannot be retro without the sound of the mechanical shutter on long exposures.
Of course the original Nikon F’s were manual focus, so not sure that autofocus should be available on a retro camera; all you should need is a split screen viewfinder and your eye.

Posted on Nov 25, 2013 at 12:21 UTC

You must have an awfully big pocket.

Posted on Nov 20, 2013 at 16:40 UTC

On Hands-on with the retro Nikon Df article (230 comments in total):

I guess it should really have match-needle metering, a mechanical shutter, and a mechanical self timer. Maybe a hotshoe-mounted flash with #5 flash bulbs. Just too weird, retro cameras; why not retro TVs? How about selling a 27 inch tube TV for $3000, or maybe a PC with a 286 processor and a 10M hard drive.

Posted on Nov 6, 2013 at 11:18 UTC

On Retro Nikon 'DF' emerges from the shadows article (1396 comments in total):

I guess when you lack any new ideas you dig up the old ones. I guess to be complete it should have lenses with aperture rings and manual focus only. Of course, given Nikon’s poor user interface, perhaps a shutter speed dial IS a step forward... This is just another blatant marketing attempt, like Fuji’s retro-looking, non-rangefinder rangefinder cameras, to eke out sales in a stagnant market, especially as the mirrorless cameras are not flying off the shelves.

Posted on Nov 3, 2013 at 17:24 UTC

While I admire all these new niche cameras, each with their quirky feature sets and small collection of lenses, their price points always leave me cold. Especially when you look at the lens prices: what is the point of an interchangeable-lens camera if there are few lenses, no third-party lenses, and they are more expensive than their SLR peers? $1400 would buy you a Canon 70D, a Nikon 7100, or a Sony A77. You would have 10 times more lens options, including third-party lenses.

Posted on Oct 18, 2013 at 11:10 UTC

On Nikon 1 AW1 preview (588 comments in total):

Not quite sure I understand the fuss about interchangeable-lens cameras that have no/few actual lenses to change, for which you get to pay a premium price. Not sure I would be submerging this camera either.

Posted on Sep 20, 2013 at 07:09 UTC
https://stdworkflow.com/447/python-matplotlib-subscript-font-setting-problem
# Python matplotlib subscript font setting problem

created at 07-29-2021

## Subscript font

The Python code used (df_up is a pandas DataFrame read from an Excel file):

```python
df_up = pd.read_excel(xlsxFilename)
df_up.index = [r'Q$_\mathrm{Oct}$', r'Q$_\mathrm{Nov}$', r'Q$_\mathrm{Dec}$', r'Q$_\mathrm{Jan}$']
```

Wrapping the subscript in \mathrm is the first step toward rendering it in the desired upright font. If the details are unclear, you can copy the code above directly into your own script.

## Global font settings

At this point you may find that, even after the step above, the figure still looks unsatisfactory: the subscript is typeset by matplotlib's mathtext engine, so it does not yet match the rest of the labels. To make the subscript text also look like Times New Roman, set:

```python
plt.rcParams['mathtext.fontset'] = 'stix'  # most similar to Times New Roman
```

Or (the author did not use this, but you can try):

```python
import matplotlib.pyplot as plt
from matplotlib import rcParams

config = {
    "font.family": 'serif',
    "font.size": 20,
    "mathtext.fontset": 'stix',
    "font.serif": ['SimSun'],
}
rcParams.update(config)
```

Or:

```python
plt.rcParams['font.family'] = "Times New Roman"
plt.rcParams["mathtext.fontset"] = "dejavuserif"
plt.rc('text', usetex=True)
```

## Debugging

If you hit this error:

```
RuntimeError: Failed to process string with tex because latex could not be found
```

First step:

```
pip install latex
```

Second step: the main cause of this problem is a missing latex, dvipng, or ghostscript installation. Run any one of the following three commands (Anaconda):

```
conda install -c conda-forge jupyter_latex_envs
conda install -c conda-forge/label/cf201901 jupyter_latex_envs
conda install -c conda-forge/label/cf202003 jupyter_latex_envs
```

Third step: if it still does not work after the attempts above, download ProTeXt and install only MiKTeX.

Fourth step: test again.
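To tie the pieces together, here is a minimal, self-contained sketch (my own, not from the original post) that combines the \mathrm subscripts with the STIX mathtext setting; the month labels mirror the post's example, and the data values are placeholders:

```python
import matplotlib.pyplot as plt
import pandas as pd

# Serif text, with mathtext rendered in STIX (closest to Times New Roman).
plt.rcParams['font.family'] = 'serif'
plt.rcParams['mathtext.fontset'] = 'stix'

# Upright (non-italic) subscripts via \mathrm, as in the post.
labels = [r'Q$_\mathrm{Oct}$', r'Q$_\mathrm{Nov}$', r'Q$_\mathrm{Dec}$', r'Q$_\mathrm{Jan}$']
values = pd.Series([1.0, 1.4, 0.9, 1.2], index=labels)  # placeholder data

ax = values.plot(kind='bar', rot=0)
ax.set_ylabel('Discharge')
plt.tight_layout()
plt.show()
```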
https://brilliant.org/problems/intersecting-planes/
# Intersecting planes

For what value of $k$ do the following sets of planes intersect in a line?

$3x - y + z = 0$
$kx + 2y - z = 0$
$4x + y + z = 0$

The answer can be represented as $-\frac{a}{b}$. Express the answer as $a+b$.
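One way to check a candidate value of $k$ (this sketch is mine, not part of the problem page): all three planes pass through the origin, so they share a whole line exactly when the coefficient matrix is singular, which we can solve for symbolically:

```python
import sympy as sp

k = sp.symbols('k')
# Coefficient matrix of the three homogeneous plane equations.
A = sp.Matrix([[3, -1,  1],
               [k,  2, -1],
               [4,  1,  1]])
# det(A) = 0 gives a nontrivial common solution space; since the first and
# third rows are independent (rank 2), that space is exactly a line.
print(sp.solve(sp.Eq(A.det(), 0), k))  # [-5/2]
```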
http://www.menahrs.com/8y8xkrs/euclidean-normalization-of-a-vector-a6f805
# Euclidean normalization of a vector

The Euclidean norm (also called the L2 norm or 2-norm) of a vector $x \in \mathbb{R}^n$ is

$\|x\|_2 = \sqrt{\sum_{i=1}^{n} x_i^2}.$

It measures the length (magnitude) of the vector: for example, $\|(2,2,2)\|_2 = \sqrt{12} \approx 3.46$ is the distance from the initial point (the origin) to the point $(2,2,2)$ in 3-D space, and more generally $\|x - y\|_2$ is the Euclidean distance between two points. Two other common vector norms are the L1 norm, $\|x\|_1 = \sum_i |x_i|$, the sum of the magnitudes of the components, and the maximum (L-infinity) norm, $\|x\|_\infty = \max_i |x_i|$. The matrix analogue of the Euclidean norm is the Frobenius norm: for $A \in \mathbb{R}^{m \times n}$ with elements $a_{ij}$, it is defined as $\|A\|_F \triangleq \sqrt{\sum_i \sum_j |a_{ij}|^2}$.

To normalize a (non-zero) vector, divide it by its Euclidean norm: $\hat{x} = x / \|x\|_2$. The result is a unit vector, a vector of length 1 pointing in the same direction as $x$. Normalizing every sample in a dataset this way turns all the vectors into points on the unit sphere, removing differences in overall magnitude while keeping direction. (A different kind of normalization, the z-score, instead subtracts the mean and divides by the standard deviation along a chosen dimension.) The squared Euclidean norm, $x^\mathsf{T} x$, is widely used in machine learning partly because it can be calculated with a single vector operation and avoids the square root.

The Euclidean norm also appears in least-squares problems. For $z \in \mathbb{R}^n$ and $H \in \mathbb{R}^{n \times m}$, there is always a unique vector $\hat{x}$ of minimal Euclidean norm that minimizes $\|Hx - z\|_2$; it satisfies $H\hat{x} = \hat{z}$, where $\hat{z}$ is the projection of $z$ onto the range $R(H)$.
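A minimal numpy sketch of these norms and of normalizing a vector (illustrative, with a guard for the zero vector):

```python
import numpy as np

x = np.array([2.0, 2.0, 2.0])

l1   = np.linalg.norm(x, 1)        # sum of magnitudes: 6.0
l2   = np.linalg.norm(x)           # Euclidean norm: sqrt(12) ≈ 3.464
linf = np.linalg.norm(x, np.inf)   # maximum element magnitude: 2.0

# Normalize: divide the (non-zero) vector by its Euclidean norm.
x_hat = x / l2 if l2 > 0 else x
print(x_hat, np.linalg.norm(x_hat))  # unit vector, norm 1.0
```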
https://www.cell.com/cell-systems/fulltext/S2405-4712(19)30031-6?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS2405471219300316%3Fshowall%3Dtrue
Article | Volume 8, Issue 2, P97–108.e16, February 27, 2019

# Quantifying Drug Combination Synergy along Potency and Efficacy Axes

Open Archive. Published: February 20, 2019

## Highlights

• MuSyC is a synergy framework applicable to any metric of drug combination effect
• Unlike other methods, MuSyC decouples synergy of potency and efficacy
• It subsumes traditional synergy methods, resolving ambiguities and biases in the field
• MuSyC reveals optimal co-targeting strategies in NSCLC and melanoma

## Summary

Two goals motivate treating diseases with drug combinations: reduce off-target toxicity by minimizing doses (synergistic potency) and improve outcomes by escalating effect (synergistic efficacy). Established drug synergy frameworks obscure such distinction, failing to harness the potential of modern chemical libraries. We therefore developed multi-dimensional synergy of combinations (MuSyC), a formalism based on a generalized, multi-dimensional Hill equation, which decouples synergistic potency and efficacy. In mutant-EGFR-driven lung cancer, MuSyC reveals that combining a mutant-EGFR inhibitor with inhibitors of other kinases may result only in synergistic potency, whereas synergistic efficacy can be achieved by co-targeting mutant-EGFR and epigenetic regulation or microtubule polymerization. In mutant-BRAF melanoma, MuSyC determines whether a molecular correlate of BRAFi insensitivity alters a BRAF inhibitor’s potency, efficacy, or both. These findings showcase MuSyC’s potential to transform the enterprise of drug-combination screens by precisely guiding translation of combinations toward dose reduction, improved efficacy, or both.

## Introduction

Recent decades have witnessed an exponential expansion of available drugs for the treatment of diseases (Gong et al., Compound libraries: recent advances and their applications in drug discovery). This expansion has been concomitant with an evolving understanding of disease complexity—complexity commonly necessitating combination therapy (He et al., Combination therapeutics in complex diseases). However, clinical applications of combination therapy are often limited by tolerable dose ranges, and, therefore, it is desirable to identify combinations that enable dose reduction (Tallarida, Quantitative methods for assessing drug synergism), i.e., synergistic potency. Additionally, combining drugs does not guarantee a priori an increase in efficacy over the single agents, and, therefore, it is desirable to identify combinations with effects greater than what is achievable with either drug alone (Foucquier and Guedj, Analysis of drug combinations: current methodological landscape), i.e., synergistic efficacy. To assess a combination’s performance toward these goals, several drug synergy metrics have been proposed (Foucquier and Guedj, Analysis of drug combinations: current methodological landscape). The roots of current synergy metrics can be traced back to either Loewe, who advanced the dose additivity principle (Loewe and Muischnek, Über Kombinationswirkungen), or Bliss, who first described the multiplicative survival principle (Bliss, The toxicity of poisons applied jointly). Nearly a century later, methods to quantify drug synergy continue to appear (Chou and Talalay,
Analysis of combined drug effects: a new look at a very old problem; Wennerberg, Aittokallio, and Tang, Searching for drug synergy in complex dose–response landscapes using an interaction potency model; Twarog et al., BRAID: a unifying paradigm for the analysis of combined drug action; Zimmer et al., Prediction of multidimensional drug dose responses based on measurements of drug pairs; Schindler, Theory of synergistic effects: Hill-type response surfaces as “null-interaction” models for mixtures) based on these two principles. However, none of these methods distinguish between synergistic potency and synergistic efficacy. Instead, they either make no distinction or tacitly assume the only form of synergism is through potency. Nevertheless, this distinction is essential to arrive at an unambiguous definition of synergy and properly rationalize the deployment of drug combinations, e.g., in personalized medicine. Indeed, conflating them may mislead drug combination discovery efforts. For instance, a search for improved efficacy based on traditional synergy frameworks may be confounded by an inability to sort out synergistically potent combinations. To address this critical shortcoming and resolve these two independent types of synergy, herein we propose a synergy framework termed multi-dimensional synergy of combinations (MuSyC), which is based on a two-dimensional (2D) extension of the Hill equation derived from mass action kinetics. The 2D Hill equation extends dose-response curves to dose-response surfaces. MuSyC distinguishes between synergistic potency and synergistic efficacy based on parameters in the 2D Hill equation. These synergy parameters are extensions of standard pharmacologic measures of potency and efficacy and define a dose-response surface onto which changes in potency and efficacy are orthogonal. We visualize synergy of potency and efficacy on drug synergy diagrams (DSDs), which globally stratify drug combinations along orthogonal axes of synergy, facilitating comparisons between the synergistic profiles of many combinations. To demonstrate the value of MuSyC, we investigate a panel of anti-cancer compounds in combination with a third-generation mutant-EGFR inhibitor, osimertinib, in EGFR-mutant non-small-cell lung cancer (NSCLC). We find that drugs targeting epigenetic regulators or microtubule polymerization are synergistically efficacious with osimertinib. In contrast, drugs co-targeting kinases in the MAPK pathway affect potency, not efficacy, of osimertinib. These conclusions have implications for drug combination deployment in NSCLC, where increasing the efficacy of EGFR inhibitors has historically relied on trial and error, with no overarching principles to guide development (Schiffmann et al., Epigenetic therapy approaches in non-small cell lung cancer: update and perspectives). We also apply MuSyC to study the well-established, clinically relevant combination targeting RAF and MEK in BRAF-mutant melanoma (Long et al., Combined BRAF and MEK inhibition versus BRAF inhibition alone in melanoma). We find this combination to be synergistically efficacious, though, in several cases, at the cost of potency.
We then identify NADPH oxidase 5 (NOX5) as a previously unsuspected molecular determinant of sensitivity to BRAF inhibition (BRAFi) in BRAF-mutant melanoma. Applying MuSyC, we find that NOX5 expression levels affect BRAFi efficacy but not potency. In direct comparisons, we found that traditional synergy frameworks are biased and ambiguous even for the most synergistically efficacious of the NSCLC and melanoma combination studies, leading to misclassifications of combination synergy. We further show how MuSyC addresses and corrects these problems by generalizing the traditional models.

## Results

### 2D Hill Equation Decouples Synergy of Efficacy from Synergy of Potency

The dose-effect relationship of a single drug is traditionally quantified by the Hill equation, which contains parameters describing efficacy (Emax) and potency (EC50) of a dose-response curve (see STAR Methods for equation derivation; Table 1 for definitions) (Figure 1A). The Hill equation is derived from a phenomenological 2-state model of drug effect (Figure S1A). Therefore, to characterize the dose-effect relationship for drug combinations, we extended this model to a 4-state model (Figure S1B) to derive a 2D generalization of the Hill equation, using principles of mass action kinetics (see STAR Methods). The 2D Hill equation parameterizes a dose-response surface (Figure 1B; Table S1 for parameter descriptions) (Greco et al., The search for synergy: a critical review from a response surface perspective), a 2D extension of 1D dose-response curves (Figure 1A). In this equation, the changes in the efficacy and potency resulting from the combination are quantified by parameters for synergistic efficacy, denoted by β, and synergistic potency, denoted by α (Table S1). These parameters govern the shape of the dose-response surface and can capture complex patterns in experimental data.

Table 1. Key Definitions

| Term | Definition |
| --- | --- |
| Potency | The amount of drug required to produce a specified effect. A highly potent drug is active at low concentrations. Classically quantified as the required concentration to achieve half the maximal effect (EC50). |
| Efficacy | The degree to which a drug can produce a beneficial effect. Classically quantified as the maximal effect (Emax). |
| Synergistic potency | The magnitude of the change in the drug potency, owing to the presence of another drug. |
| Synergistic efficacy | The percent change in the maximal efficacy of the combination compared to the most efficacious single agent. |

The parameter β is defined as the percent increase in a drug combination’s effect beyond the most efficacious single drug. For instance, in the case of synergistic efficacy (β > 0), the effect at the maximum concentration of both drugs (E3) exceeds the maximum effect of either drug alone (E1 or E2) (Figure 1C, quadrants I and II). For antagonistic efficacy (β < 0) (Figure 1C, quadrants III and IV), at least one or both drugs are more efficacious as single agents than in combination (see Video S1 for an animated example of how the dose-response surface changes as a function of β). The parameter α quantifies how the effective dose of one drug is altered by the presence of the other. In the case of synergistic potency (α > 1), the EC50 (denoted C in Figure 1B) decreases because of the addition of the other drug (Figure 1C, quadrants I and IV), corresponding to an increase in potency.
In the case of antagonistic potency (0 ≤ α < 1), the EC50 of the drug increases as a result of the other drug (Figure 1C, quadrants II and III), corresponding to a decrease in potency (see Video S1 for an animated example of how the dose-response surface changes as a function of α). Since each drug can modulate the effective dose of the other independently (Zimmer et al., Prediction of multidimensional drug dose responses based on measurements of drug pairs), the 2D Hill equation contains two α values (α1 and α2) (Figure S1B, bottom and right edges of surface). This separation of α values in the 2D Hill equation makes it possible for a given drug combination to have synergism of potency in one direction (α1 > 1) and antagonism of potency in the other (α2 < 1), or vice versa (see Figure S1C for example surfaces). Both MuSyC parameters for synergy of efficacy (β) and synergy of potency (α) correspond to geometric transformations of the dose-response surface, analogous to the parameters for efficacy (Emax) and potency (EC50) that transform the single-drug dose-response curve in classic pharmacology.

We surveyed eight synergy methods to understand how they might account for these distinct types of synergy, including traditional methods of Bliss, Loewe, and highest single agent (HSA) (Pharmacology), as well as more recent frameworks including the combination index (CI) (Chou and Talalay, Analysis of combined drug effects: a new look at a very old problem), Zimmer et al.’s equivalent dose model, Schindler’s PDE-Hill model (Schindler, Theory of synergistic effects: Hill-type response surfaces as “null-interaction” models for mixtures), ZIP (Wennerberg, Aittokallio, and Tang, Searching for drug synergy in complex dose–response landscapes using an interaction potency model), and BRAID (Twarog et al., BRAID: a unifying paradigm for the analysis of combined drug action). We find Bliss, Loewe, HSA, PDE-Hill, ZIP, and BRAID conflate synergy of efficacy and potency (Figures S2A–S2F) so that a drug combination with high synergistic potency scores identical to a combination with high synergistic efficacy (Figure S2A). This conflation, even in methods classically regarded as quantifying exclusively changes in efficacy, such as HSA, underscores the necessity of considering the entire topology of the dose-response surface in order to decouple synergistic efficacy from synergistic potency. In other methods (equivalent dose and CI), only synergistic potency is tacitly assumed by asserting the maximal effect of each drug and of the combination is equal to zero (Figures S2G–S2J). (See STAR Methods, section Methods Details, subsection Comparison to alternative synergy models, for a case-by-case comparison of MuSyC with other synergy frameworks.) By using the Hill equation as the basis for MuSyC, the metric of drug effect is not bounded to range between 0 and 1, as is the case for Bliss, CI, and the equivalent dose model, providing a greater versatility for application to other systems. Indeed, the challenges in applying prior synergy frameworks to our recently proposed metric of drug effect, the drug-induced proliferation (DIP) rate (Harris et al.,
An unbiased metric of antiproliferative drug effect in vitro), provided the initial impetus for developing this framework.

In summary, the 2D Hill equation enables a formalism, termed MuSyC, in which synergistic efficacy and synergistic potency are orthogonal and quantified by the parameters β and α, respectively. We have provided an interactive MuSyC demo (see Data and Software Availability section in STAR Methods) to facilitate an intuitive understanding of the relationship between different parameter values and the shape of the dose-response surface.

### MuSyC Quantifies Synergy of Potency and Efficacy in a Drug Combination Screen

We applied MuSyC to evaluate the synergistic potency and efficacy of a 64-drug panel (see Table S2 for drugs, drug classes, nominal targets, and tested concentration ranges) in combination with osimertinib, a mutant EGFR-tyrosine kinase inhibitor recently approved for first-line treatment of EGFR-mutant NSCLC (Soria et al., Osimertinib in untreated EGFR-mutated advanced non-small-cell lung cancer). The selected drugs span a diverse array of cellular targets that can be broadly grouped into four categories: kinases, receptors and channels, epigenetic regulators, and mitotic checkpoints (Figure 2D), each with several sub-categories. The combinations were tested in PC9 cells, a canonical model of EGFR-mutant NSCLC (Jia et al., Next-generation sequencing of paired tyrosine kinase inhibitor-sensitive and -resistant EGFR mutant lung cancer cell lines identifies spectrum of DNA changes associated with drug resistance), using a high-throughput, in vitro, drug-screening assay (Figure 2A). We quantified drug effect using the DIP rate metric (Harris et al., An unbiased metric of antiproliferative drug effect in vitro), a metric that avoids temporal biases characteristic of traditional endpoint assays (see Quantification and Statistical Analysis section in STAR Methods). To fit the resulting dose-response surfaces, we developed a Bayesian fitting algorithm, using a particle swarm optimizer (PSO) to seed priors for a Markov chain Monte Carlo (MCMC) optimization (Figures S3A and S3B; Quantification and Statistical Analysis section in STAR Methods). The algorithm also accounts for non-optimal drug dosage selection, since dose ranges that are insufficient to observe saturating effects—owing to limited solubility or potency of the drug—result in a commensurate increase in the uncertainty of MuSyC’s synergy parameters (Figures S3C–S3E). Applying this algorithm, we extracted synergy parameters (α1, α2, and βobs) from fitted surfaces for all osimertinib combinations (βobs is the observed synergistic efficacy at the maximum tested dose range) (see STAR Methods, section Methods Details). The drug panel displays wide ranges of efficacy (E2) and potency (C) for single agents (Figure S4A). The efficacy and potency of the single agents have no relationship with the synergistic efficacy and synergistic potency when combined with osimertinib (p value > 0.2) (Figure S4B), confirming MuSyC’s synergy parameters are independent of single agents’ dose-response curves and therefore, as expected, cannot be predicted from the single-agent pharmacologic profiles.
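To build intuition for the kind of surface being fit, here is a simplified toy sketch with MuSyC-like parameters. It is my own illustration, not the paper's exact 2D Hill equation (whose full form is in their STAR Methods): α rescales each drug's EC50 in the presence of the other, and β raises the ceiling at saturating doses of both drugs.

```python
import numpy as np

def hill_f(d, c, h):
    """Fractional Hill response in [0, 1) with EC50 c and Hill slope h."""
    d = np.asarray(d, dtype=float)
    return d**h / (c**h + d**h)

def toy_surface(d1, d2, e1, e2, c1, c2, h1, h2, alpha, beta):
    """Illustrative two-drug dose-response surface (not MuSyC's equation).

    alpha: fold-change of each drug's EC50 at a saturating dose of the other
           (alpha > 1: synergistic potency; alpha < 1: antagonistic potency).
    beta:  fractional boost of the combination's maximal effect over the
           strongest single agent (beta > 0: synergistic efficacy)."""
    c1_eff = c1 * (c2 + d2) / (c2 + alpha * d2)  # EC50 of drug 1 shifts with d2
    c2_eff = c2 * (c1 + d1) / (c1 + alpha * d1)  # and vice versa
    f1, f2 = hill_f(d1, c1_eff, h1), hill_f(d2, c2_eff, h2)
    e3 = max(e1, e2) * (1.0 + beta)              # ceiling at saturating doses
    # Mixture over "affected by drug 1 only / drug 2 only / both" states:
    return e1 * f1 * (1 - f2) + e2 * f2 * (1 - f1) + e3 * f1 * f2

# Edges reduce to single-drug Hill curves; the far corner approaches e3:
d = np.logspace(-2, 2, 5)
print(toy_surface(d, 0.0, 1.0, 0.8, 1.0, 1.0, 1, 1, alpha=2.0, beta=0.5))
print(toy_surface(100.0, 100.0, 1.0, 0.8, 1.0, 1.0, 1, 1, alpha=2.0, beta=0.5))
```

Note that in this toy, larger values mean stronger effect; with the DIP rate used in the paper, stronger effects correspond to lower (more negative) rates.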
Inspection of dose-response surfaces from this combination screen highlights the significance of resolving synergistic potency and efficacy. For instance, the dose-response surface for the osimertinib combination with M344 (a histone deacetylase [HDAC] inhibitor) exhibits synergistic efficacy (βobs = 1.25 ± 0.03, reflecting a 125% increase in efficacy over osimertinib alone) (Figures 2B and 2E). However, this improved efficacy comes at the cost of potency (log(α2) = −0.90 ± 0.01), as observed in the shift in the EC50 of osimertinib in the presence of 1 µM M344 (Figure 2B; red to purple dotted line). In contrast, ceritinib, an ALK inhibitor with off-target effects on IGF1R (Shaw et al., Ceritinib in ALK-rearranged non–small-cell lung cancer), increases osimertinib’s potency (log(α2) = 6.25 ± 0.50) (Figure 2C; green to orange dotted line) at 4 µM (maximal tested concentration) but with inconsequential improvement of efficacy (βobs = 0.28 ± 0.003).

To visualize synergy globally, we plotted drug combinations on DSDs, with observed synergistic efficacy (βobs) and potency on the vertical and horizontal axes, respectively (Figure 2E). These DSDs reveal distinguishing trends between the four drug categories tested. Within the mitotic checkpoint drugs, tubulin destabilizers (including vindesine and vinorelbine) showed an upward shift along the axis of synergistic efficacy (Figure 2E). The marginal distribution confirmed this trend in comparison to all the drugs (Figure 2F, blue versus black vertical distributions). Similar results were obtained for the histone deacetylase inhibitor (HDACi) subgroup within the epigenetic regulators (Figures 2E and 2F). As expected, we observed limited synergistic or antagonistic efficacy for drugs targeting G-protein-coupled receptors (GPCRs) (Figures 2E and 2F; red versus black distributions). We also observed limited synergistic efficacy in directly co-targeting kinases in the MAPK pathway, suggesting this may be an unproductive avenue in EGFR-mutant NSCLC (Figures 2E and 2F; purple to black comparison along vertical axis). In summary, by quantifying synergy of potency separate from synergy of efficacy, MuSyC reveals drug-class trends, which can be used to guide subsequent screens and drug combination deployment in NSCLC.

### MuSyC Validates Co-targeting RAF and MEK in BRAF-Mutant Melanoma

The NSCLC drug screen (Figure 2) suggests combinations targeting molecules within the same signaling pathway may not be productive avenues for increasing efficacy. However, a combination used clinically in BRAF-mutant melanoma co-targets the kinases BRAF and MEK in the MAPK pathway (Long et al., Combined BRAF and MEK inhibition versus BRAF inhibition alone in melanoma; Eroglu and Ribas, Combination therapy with BRAF and MEK inhibitors for melanoma: latest evidence and place in therapy). To investigate this combination in more detail, we screened a panel of 8 BRAFV600-mutant melanoma cell lines (see Paudel et al., A nonquiescent “idling” population state in drug-treated, BRAF-mutated melanoma, for cell-line information) against 16 BRAFi/MEKi combinations (see Table S2 for drug information and tested dose ranges).
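As a reminder of how a β value like those in Figure 3A is read, here is a hedged numeric sketch. It assumes one plausible reading of the paper's definition, the fractional gain of the combination's maximal effect past the strongest single agent, measured relative to the untreated effect; the exact formula is in their STAR Methods.

```python
def beta(e0, e1, e2, e3):
    """Sketch of synergistic efficacy for a DIP-rate-like metric, where a
    lower rate means a stronger effect: e0 = untreated, e1/e2 = each single
    agent at its maximum dose, e3 = the combination at maximum doses.

    This is an illustration of the concept, not the paper's exact definition."""
    e_best = min(e1, e2)                # strongest single agent (lowest rate)
    return (e_best - e3) / (e0 - e_best)

# beta = 1.0 reads as "the combination doubles the best single agent's effect":
print(beta(e0=0.03, e1=0.01, e2=0.02, e3=-0.01))  # 1.0
```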
Based on the mean βobs across cell lines, all 16 combinations were synergistically efficacious (Figures 3A and S5C), indicating MuSyC would have identified this treatment strategy prospectively. In contrast, conventional methods produce ambiguous results (Figure S6, top 3 panels in each cell line group), such that this combination strategy could not have been identified. Furthermore, MuSyC detected variations in synergistic efficacy between cell lines (Figures 3A and S5C), underscoring its sensitivity and pointing to heterogeneous, cell-intrinsic mechanisms modulating the efficacy of combined BRAF and MEK inhibition. In particular, A2058 displayed low average synergistic efficacy, suggesting that its canonical insensitivity to BRAFi does not depend on MEK reactivation but rather on an altered metabolic phenotype ( • Parmenter T.J. • Kleinschmidt M. • Kinross K.M. • Bond S.T. • Li J. • Rao A. • Sheppard K.E. • Hugo W. • Pupo G.M. • et al. Response of BRAF-mutant melanoma to BRAF inhibition is mediated by a network of transcriptional regulators of glycolysis. , • Hardeman K.N. • Peng C. • Paudel B.B. • Meyer C.T. • Luong T. • Tyson D.R. • Young J.D. • Quaranta V. • Fessel J.P. Dependence on glycolysis sensitizes BRAF-mutated melanomas for increased response to targeted BRAF inhibition. ). MuSyC also provides information on synergistic potency for these combinations. A clinically deployed combination (dabrafenib and trametinib) is synergistically efficacious but antagonistically potent in all cell lines except one (Figure S5), a trade-off that may be relevant in the clinic. Together, MuSyC analyses of the NSCLC and melanoma combination screens indicate that the magnitude of a drug combination’s synergistic efficacy depends upon the oncogenetic context, i.e., co-targeting within the MAPK pathway may work for mutant-BRAF melanoma but not for mutant-EGFR NSCLC.
### MuSyC Reveals Whether Molecular Correlates of Insensitivity Alter Synergistic Efficacy or Potency
While drug combinations are commonly identified from top-down approaches, e.g., high-throughput drug screens, others, including BRAFi/MEKi, were discovered from a bottom-up approach by investigating molecular correlates of insensitivity. However, these molecular correlates may alter either the potency or the efficacy of the primary drug (or both). MuSyC can distinguish among these possibilities, enabling an informed choice between improving either efficacy or potency. As an example, we looked for molecular correlates of BRAFi insensitivity between subclones of a BRAF-mutant melanoma cell line (SKMEL5) with differential sensitivity to BRAFi (Figure 4A). Specifically, we quantified gene expression using RNA sequencing (RNA-seq) and identified the top 200 differentially expressed genes (DEGs) (FDR < 0.001; see STAR Methods section Quantification and Statistical Analysis). This gene set was significantly enriched in processes, cellular components, and molecular functions relating to metabolism (Figure 4B), aligning with previous reports on the relationship between altered metabolism and resistance to BRAFi ( • Parmenter T.J. • Kleinschmidt M. • Kinross K.M. • Bond S.T. • Li J. • Rao A. • Sheppard K.E. • Hugo W. • Pupo G.M. • et al. Response of BRAF-mutant melanoma to BRAF inhibition is mediated by a network of transcriptional regulators of glycolysis. , • Hardeman K.N. • Peng C. • Paudel B.B. • Meyer C.T. • Luong T. • Tyson D.R. • Young J.D. • Quaranta V. • Fessel J.P.
Dependence on glycolysis sensitizes BRAF-mutated melanomas for increased response to targeted BRAF inhibition. ). We computed the correlation of the 200 DEGs’ expression to BRAFi sensitivity across a 10-cell-line panel (see STAR Methods) using expression data from • Subramanian A. • Narayan R. • Corsello S.M. • Peck D.D. • Natoli T.E. • Lu X. • Gould J. • Davis J.F. • Tubelli A.A. • Asiedu J.K. • et al. A next generation connectivity map: L1000 platform and the first 1,000,000 profiles. . NOX5 stood out as one of five genes with a significant, positive correlation with BRAFi insensitivity (Pearson r = 0.65; p value = 0.042) (Figures 4C and 4D; Table S3 for quantification of BRAFi insensitivity and Table S4 for genes correlated with BRAFi insensitivity) and was significantly up-regulated in the BRAFi-insensitive subclone (SC10) compared with the sensitive subclone (SC01) (Figure 4E). Previously unconsidered, NOX5 is an interesting target because of its convergent regulation of metabolic and redox signaling at mitochondria ( • Lu W. • Hu Y. • Chen G. • Chen Z. • Zhang H. • Wang F. • Feng L. • Pelicano H. • Wang H. • Keating M.J. • et al. Novel role of NOX in supporting aerobic glycolysis in cancer cells with mitochondrial dysfunction and as a potential target for cancer therapy. ), processes significantly enriched in the DEGs (Figure 4B). To study NOX5’s contribution to the potency or efficacy of BRAFi, we tested PLX4720 in combination with a NOX5 inhibitor, DPI ( • Jaquet V. • Marcoux J. • Forest E. • Leidal K.G. • McCormick S. • Westermaier Y. • Perozzo R. • Plastre O. • Fioraso-Cartier L. • Diebold B. • et al. NADPH oxidase (NOX) isoforms are inhibited by celastrol with a dual mode of action. ), in a panel of 7 melanoma cell lines selected based on differential NOX5 expression. We found that synergistic efficacy correlated with NOX5 expression (Pearson r = 0.77; p value = 0.043) (Figures 4G and 4H); however, synergistic potency did not (Pearson r = 0.01; p value = 0.96) (Figures 4G and 4I). Of note, A2058, well known for its resistance to BRAFi, exhibited the highest NOX5 expression among the cell lines and the highest synergistic efficacy (βobs = 1.42 ± 0.05) (Figure 4F), greater than that of all tested MEKi/BRAFi combinations (Figure 3A). Taken together, these results suggest that co-targeting NOX5 in BRAF-mutant melanoma could improve outcomes for patients whose tumors harbor a distinct metabolic program for which NOX5 is a biomarker. Furthermore, this study demonstrates the utility of MuSyC for distinguishing a molecular constituent’s role in modulating the potency or efficacy of a drug.
### MuSyC Generalizes Traditional Synergy Metrics and Removes Biases and Ambiguities
To investigate how results from MuSyC compare with the most frequently used synergy metrics, we calculated synergy using Loewe additivity, CI, and Bliss on data from the NSCLC (Figure 2) and melanoma (Figure 3A) screens. Loewe synergy was calculated directly from the DIP rate data, while CI and Bliss, which require percent metrics, were calculated from 72-h percent viability ( • Barretina J. • Caponigro G. • Stransky N. • Venkatesan K. • Margolin A.A. • Kim S. • Wilson C.J. • Lehár J. • Kryukov G.V. • Sonkin D. • et al. The Cancer Cell Line Encyclopedia enables predictive modelling of anticancer drug sensitivity. ) imputed from the growth curves (see STAR Methods section Quantification and Statistical Analysis).
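For orientation before the comparisons that follow: once drug effect is expressed as an unaffected (viability) fraction, the per-dose Bliss calculation reduces to a few lines. This is a minimal sketch with invented numbers, not the screen's analysis code.

```python
import numpy as np

# Hypothetical single-agent unaffected (viability) fractions
u1 = np.array([1.00, 0.80, 0.50, 0.30])   # doses of drug 1 alone
u2 = np.array([1.00, 0.70, 0.40])         # doses of drug 2 alone

# Bliss null surface: independent action implies U_12 = U_1 * U_2
u12_expected = np.outer(u1, u2)

# Hypothetical measured combination viabilities (same shape as u12_expected)
u12_measured = np.clip(u12_expected - 0.05, 0.0, 1.0)

# Positive excess = synergy by Bliss; negative = antagonism (one value per dose pair)
bliss_excess = u12_expected - u12_measured
print(bliss_excess)
```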
Unlike MuSyC, these metrics are evaluated at every concentration, resulting in dose-dependent distributions of synergy (Figures 5A and S6) that commonly result in an ambiguous classification of a combination. By the median of each distribution, none of the metrics can statistically discriminate between the MuSyC DSD quadrants (Figures 5A and S6; Kruskal-Wallis p value > 0.05). Examining the models underlying these metrics revealed several limitations and biases accounting for their ambiguity. For Loewe additivity, synergy is undefinable for many tested concentrations, as Loewe cannot be calculated at combination conditions with effects exceeding the maximum effect of the weaker drug ( • Foucquier J. • Guedj M. Analysis of drug combinations: current methodological landscape. ). This is particularly limiting for synergistically efficacious combinations, which, by definition, achieve greater effect than either drug alone. In the NSCLC screen, because osimertinib alone was not sufficient to achieve a negative DIP rate (i.e., a regressing population), Loewe is undefinable for all conditions where the DIP rate was less than zero (Figure 5B). For conditions where Loewe is defined, Loewe additivity has been reported to be most appropriate for combinations of mutually exclusive inhibitors ( • Chou T.C. • Talalay P. Quantitative analysis of dose-effect relationships: the combined effects of multiple drugs or enzyme inhibitors. ). Accordingly, we found Loewe emerges from MuSyC as a special case under the conditions of both α1 = α2 = 0 (i.e., the drugs are mutually exclusive) and h1 = h2 = 1 (see STAR Methods, section Methods Details, subsection 2.1). If the condition h1 = h2 = 1 is not satisfied (Figure 5C), MuSyC predicts that when the geometric mean of the Hill slopes is less than 1 ($\sqrt{h_1 h_2} < 1$), the linear model of Loewe will overestimate synergy, and when $\sqrt{h_1 h_2} > 1$, Loewe will underestimate synergy (Figure 5C). Correspondingly, we found the median value of Loewe synergy was negatively correlated with the geometric mean of the Hill coefficients in both the NSCLC and melanoma screens (Figure 5D, Spearman r = −0.51 and −0.41; p value = 1e−3 and 8e−4, respectively); that is, the synergy of a combination according to Loewe additivity could be estimated from the Hill slope of a single drug alone, in contrast to MuSyC, where synergistic potency and efficacy are decoupled from the single drug’s pharmacologic profile (Figure S3B). CI is a special case of Loewe additivity that adds the condition that E0 = 1, E1 = E2 = E3 = 0, such that the drug effect is equated with percent inhibition ( • Chou T.-C. • Talalay P. Analysis of combined drug effects: a new look at a very old problem. ). The condition on effect range assumes all drugs achieve the same maximum effect, and thus, unlike Loewe additivity, CI’s range is not limited by the weaker drug. However, in percent viability data, many drugs do not achieve 0% viability (e.g., methotrexate, which reaches a maximum effect of 52% viability) (Figure 5E). In these cases, fits for the single-drug dose-response curves used to calculate CI are poor (Figure 5E). CI is thus inappropriate for cell-based assays of drug effect where the correspondence between percent inhibition and cell viability is not one-to-one. Bliss, similar to CI, can only be applied to percent metrics with the condition E0 = 1, E1 = E2 = E3 = 0. As with CI, because most drugs in combination do not satisfy this condition, Bliss is also an inappropriate model to use.
However, if this condition is satisfied, Bliss emerges as a special case of MuSyC under the conditions α1 = α2 = 1 (see STAR Methods section Methods Details, subsection 2.2). In summary, MuSyC subsumes Loewe (and therefore CI) and Bliss into a single framework satisfying both the dose additivity and the multiplicative survival principles under certain conditions. For combinations that do not satisfy these conditions, we show the traditional metrics lead to biased and ambiguous results, while MuSyC’s generality resolves these limitations. Specifically, these limitations are as follows: traditional methods cannot distinguish synergy of potency from synergy of efficacy (Figures 5A and S2); Loewe is undefined for combinations with synergistic efficacy (Figure 5B); Loewe (and by extension CI) contains an artificial bias toward synergy for drugs with Hill slopes much less than 1 (Figures 5C and 5D); and CI leads to poor fits because it disregards synergistic efficacy by assuming that the maximal effect of a drug reaches 0%, even when this is not the case (Figure 5E).
## Discussion
The goal of using synergistic drugs is to achieve more with less. It is therefore intuitive that two types of synergy exist: one corresponding to how much more is achievable (synergistic efficacy) and the other to how much less is required (synergistic potency). Finding such combinations is vital for optimizing therapeutic windows, as there exists a fundamental trade-off between clinical efficacy and tolerable doses. Diseases for which single-drug efficacy is sufficient would benefit from synergistically potent combinations to drive down toxicity and/or side effects. Diseases with treatments of insufficient efficacy are in pressing need of synergistically efficacious combinations in order to improve the depth and durability of response. By stratifying synergy along distinct axes of potency and efficacy using MuSyC, informed choices can be made about this trade-off. The distinction facilitates identifying drug-class trends that can be iteratively expanded in future screens to optimize synergistic efficacy or synergistic potency, whichever is desirable for a particular disease. In this respect, MuSyC provides a global view of the synergistic behavior of whole classes of drugs, e.g., from a high-throughput drug screen, via DSDs. In this work, MuSyC revealed a subclass of epigenetic regulators as potentially interesting targets for combination therapy in an EGFR-oncogene-addicted background. Epigenetic regulators have previously been suggested to prime NSCLC for sensitivity to EGFRi ( • Schiffmann I. • Greve G. • Jung M. • Lübbert M. Epigenetic therapy approaches in non-small cell lung cancer: update and perspectives. ), and the HDACi entinostat in combination with erlotinib (a first-generation EGFR-TKI) has been shown to increase overall survival in EGFR-mutant NSCLC cases with high expression of E-cadherin ( • Witta S.E. • Gemmill R.M. • Hirsch F.R. • Coldren C.D. • Hedman K. • Ravdel L. • Helfrich B. • Chan D.C. • Sugita M. • et al. Restoring E-cadherin expression increases sensitivity to epidermal growth factor receptor inhibitors in lung cancer cell lines. , • Witta S.E. • Jotte R.M. • Konduri K. • Neubauer M.A. • Spira A.I. • Ruxer R.L. • Varella-Garcia M. • Bunn P.A. • Hirsch F.R. Randomized phase II trial of erlotinib with and without entinostat in patients with advanced non-small-cell lung cancer who progressed on prior chemotherapy. ).
Consistent with this, we also observed that entinostat was synergistically efficacious with osimertinib (βobs = 0.84 ± 0.027) in PC9 cells, an E-cadherin high-expressing cell line ( • Shimoyama Y. • Nagafuchi A. • Fujita S. • Gotoh M. • Takeichi M. • Tsukita S. • Hirohashi S. Cadherin dysfunction in a human cancer cell line: possible involvement of loss of alpha-catenin expression in reduced cell-cell adhesiveness. ). As is typical of high-throughput screens, there were results of undetermined significance, including dronedarone (an anti-arrhythmic sodium channel inhibitor) and GW694590A (an anti-angiogenesis compound targeting the TIE2 receptor), which were, respectively, the most antagonistic and the most synergistically efficacious compounds in the receptors and channels drug class. Further studies are needed to verify these results. Nonetheless, MuSyC provides a quantitative foundation to further investigate unsuspected combinations. The global views provided by the MuSyC DSDs also reveal synergistic trends that vary according to disease context. For example, co-targeting the MAPK pathway in NSCLC or BRAF-mutant melanoma yields different outcomes: in the former, only synergistic potency is observed, while in the latter, synergistic efficacy, and sometimes potency, are registered. The disparity emphasizes that identifying synergistic trends requires data-driven metrics that distinguish between synergy of efficacy and potency. MuSyC dose-response surfaces facilitate evaluating how much significance a combination’s synergy should be assigned; that is, MuSyC’s synergy parameters quantify the relative increase in efficacy or potency of the combination with respect to the single agents, and therefore, the improvements should be interpreted in the context of the absolute potency and efficacy. This information is directly conveyed in the topology of the dose-response surface. As an example, in the NSCLC screen, the combination of osimertinib with quisinostat exhibited the greatest total efficacy. However, since quisinostat is already significantly efficacious on its own, that combination ranks lower than the M344-osimertinib combination along the axis of synergistic efficacy on a DSD. Thus, DSDs are useful for ranking relative increases in potency or efficacy, whereas surfaces convey the absolute efficacy and potency achieved by a combination. MuSyC is also useful for investigating a molecular species’ contribution to the potency and efficacy of a compound. Here, we demonstrated that NOX5 activity modulates the efficacy, but not the potency, of BRAFi. However, the NOX5i used, DPI, is known to have off-target effects ( • Altenhöfer S. • Kleikers P.W. • Wingler K. • Schmidt H.H. Evolution of NADPH oxidase inhibitors: selectivity and mechanisms for target engagement. ); therefore, further evidence for the role of NOX5 in BRAFi efficacy will require extending MuSyC to studies combining drugs and gene-silencing technology (e.g., RNAi or CRISPR). To fit the dose-response surface and extract synergy parameters, MuSyC utilizes a Bayesian approach combining PSO and a multi-tier MCMC walk in order to track uncertainty in the values for synergistic potency and efficacy. The sources of this uncertainty include noise, partial dose-response curves, and data density. A similar Bayesian approach was previously implemented for Loewe ( • Hennessey V.G. • Rosner G.L. • Bast Jr., R.C. • Chen M. A Bayesian approach to dose-response assessment and synergy and its application to in vitro dose-response studies. ).
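To make the two-stage fitting idea concrete, the sketch below fits a 1D Hill curve with a global optimizer (scipy's differential evolution standing in for PSO) and then runs a simple random-walk Metropolis sampler seeded at that optimum (standing in for the paper's multi-tier MCMC). It is a drastically simplified illustration on synthetic data, not the MuSyC fitting pipeline.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
sigma = 0.002  # assumed measurement noise (invented)

def hill(d, E0, Em, C, h):
    """4-parameter Hill curve (Equation 12, STAR Methods)."""
    return Em + (E0 - Em) * C**h / (C**h + d**h)

# Synthetic dose-response data standing in for measured DIP rates
doses = np.logspace(-3, 1, 12)
y = hill(doses, E0=0.04, Em=-0.02, C=0.1, h=1.0) + rng.normal(0, sigma, doses.size)

def sse(p):
    """Sum of squared errors for a parameter vector (E0, Em, C, h)."""
    return np.sum((y - hill(doses, *p))**2)

# Stage 1: global optimization to locate the mode of the likelihood
bounds = [(-0.1, 0.1), (-0.1, 0.1), (1e-4, 10), (0.1, 5)]
fit = differential_evolution(sse, bounds, seed=0)

# Stage 2: random-walk Metropolis seeded at the mode to sample uncertainty
samples, p = [], fit.x.copy()
for _ in range(5000):
    q = p + rng.normal(0, [1e-3, 1e-3, 5e-3, 5e-2])
    # Accept with log-probability = (SSE_p - SSE_q) / (2 * sigma^2)
    if np.log(rng.random()) < (sse(p) - sse(q)) / (2 * sigma**2):
        p = q
    samples.append(p.copy())

print("posterior mean (E0, Em, C, h):", np.mean(samples[1000:], axis=0))  # drop burn-in
```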
Loewe additivity and Bliss independence have maintained dominance in the field, along with the related work of Chou and Talalay. Yet there is no consensus regarding the appropriate use of these methods because they are based on distinct foundational principles, often leading to incompatible results ( • Greco W. • Unkelbach H.-D. • Pöch G. • Sühnel J. • Kundi M. • Bödeker W. Consensus on concepts and terminology for combined-action assessment: the Saariselka Agreement. ). MuSyC removes these sources of confusion by unifying these methods into a consensus framework, within which Loewe and Bliss emerge as special cases. There has been much critical analysis over the past 25 years of the term “synergy” ( • Greco W. • Unkelbach H.-D. • Pöch G. • Sühnel J. • Kundi M. • Bödeker W. Consensus on concepts and terminology for combined-action assessment: the Saariselka Agreement. ), arguably rooted in the practice of defining synergy with respect to arbitrary expectations of drug additivity implicitly codified in previous methods’ foundational principles. In contrast, ambiguity about the meaning of synergy disappears in MuSyC because its synergy parameters relate directly to the textbook pharmacology concepts of efficacy and potency. Indeed, a major advance of MuSyC is the decisive shift toward synergy calculations directly related to an observable change in efficacy and/or potency. Thus, ambiguous questions such as “Is there synergy?” can be recast into more precise questions, such as “How much does the efficacy or potency of drug X change when drug Y is added?” Such precise language should promote a move away from arbitrary cutoffs for “significant synergy,” which are context dependent. While we focused on the DIP rate as our metric of effect, MuSyC may be applied to any quantifiable phenotype whose dose response can be fit by a Hill equation. In contrast, all other synergy models we surveyed impose strict constraints on the type and/or magnitude of the drug effect metric. Thus, MuSyC opens up the potential to study synergy of drug effects previously impossible to address by existing methods. Examples of such metrics include immune activation, growth in 3D culture, and second-messenger efflux. This flexibility is particularly critical in translating drug combinations to the clinic by using models of increasing complexity, such as organoids, which better represent the drug sensitivity of a patient ( • Jabs J. • Zickgraf F.M. • Park J. • Wagner S. • Jiang X. • Jechow K. • Kleinheinz K. • Toprak U.H. • Schneider M.A. • Meister M. • et al. Screening drug effects in patient-derived cancer cells links organoid responses to genome alterations. ). Indeed, that most clinical combinations can be explained by patient-to-patient variability ( • Palmer A.C. • Sorger P.K. Combination cancer therapy can confer benefit via patient-to-patient variability without drug additivity or synergy. ) is a strong rationale for translating combination screens to more complex, pre-clinical models. Subsequent work will be devoted to scaling the combination drug-screening pipeline developed here to pre-clinical experimental models of increasing complexity. In conclusion, we have presented MuSyC, a drug synergy framework that maintains a distinction between two intuitive types of pharmacological synergy and that may be applied to any drug effect metric. We showed this framework allows for a richer understanding of drug interactions, with practical, translational consequences.
We foresee this approach will streamline drug discovery pipelines and facilitate the deployment of precision approaches to therapeutic combinations.
## STAR★Methods
### Key Resources Table

| Reagent or Resource | Source | Identifier |
| --- | --- | --- |
| **Chemicals, Peptides, and Recombinant Proteins** | | |
| Trizol | Invitrogen | 15596026 |
| FBS | Gibco | 10437-028 |
| PBS | Corning | 21-040-CV |
| DMEM | Gibco | 11965-092 |
| RPMI | Corning | 10-040-CV |
| TryplE | Gibco | 12604-013 |
| DMEM/F12 | Gibco | 11330-032 |
| Sytox Green | ThermoFisher | S7020 |
| 5-Iodotubericidin | ENZO | EI-293 |
| Abexinostat (PCI-24781) | SelleckChem | S1090 |
| Acetylcysteine | SelleckChem | S1623 |
| Afatinib (BIBW2992) | LC Laboratories | A8644 |
| AG-879 | ENZO | EI-258 |
| Alisertib (MLN8237) | MedChemExpress | HY-10971 |
| Amiodarone HCl | SelleckChem | S1979 |
| Aprepitant | SelleckChem | S1189 |
| Bazedoxifene HCl | SelleckChem | S2128 |
| Beclomethasone dipropionate | Light Biologicals (NIH Clinical Collection II) | MZ-3012 |
| Bendroflumethiazide | Light Biologicals (NIH Clinical Collection II) | B-8008 |
| BML-259 | ENZO | EI-344 |
| Bosutinib (SKI-606) | LC Laboratories | B-1788 |
| Brigatinib (AP26113) | SelleckChem | S8229 |
| Buparlisib (BKM120, NVP-BKM120) | SelleckChem | S2247 |
| Cabozantinib | LC Laboratories | C-8901 |
| Carfilzomib | LC Laboratories | C-3022 |
| Carmustine | NCI Chemotherapeutic Agents Repository | 409962 |
| Cephalomannine | SelleckChem | S2408 |
| Ceritinib (LDK378) | SelleckChem | S7083 |
| Cisplatin | Sigma | 470306 |
| Cobimetinib | MedChemExpress | HY-13064 |
| Crizotinib | LC Laboratories | C-7900 |
| Dabrafenib | LC Laboratories | D-5678 |
| Dactolisib | LC Laboratories | N-4288 |
| Dasatinib | LC Laboratories | D-3307 |
| Docetaxel | SelleckChem | S1148 |
| Dronedarone HCl (Multaq) | SelleckChem | S2114 |
| Ensartinib (X-396) | SelleckChem | S8230 |
| Entinostat (MS-275) | SelleckChem | S1053 |
| Erlotinib | LC Laboratories | E-4007 |
| Foretinib (GSK1363089) | SelleckChem | S1111 |
| Gefitinib (ZD1839) | LC Laboratories | G-4408 |
| Givinostat (ITF2357) | SelleckChem | S2170 |
| GSK1751853A | GSK PKIS | N/A |
| GSK994854A | GSK PKIS | N/A |
| GW458787A | GSK PKIS | N/A |
| GW644007X | GSK PKIS | N/A |
| GW694590A | GSK PKIS | N/A |
| GW770249X (GW770249A) | GSK PKIS | N/A |
| Homoharringtonine (Omacetaxine mepesuccinate) | Sequoia Research Products Ltd. (NIH Clinical Collection II) | SRP02125h |
| Ivacaftor (VX-770) | SelleckChem | S1144 |
| (+)-JQ1 | SelleckChem | S7110 |
| Linsitinib (OSI-906) | SelleckChem | S1091 |
| LY294002 | ENZO | ST-420 |
| M344 | SelleckChem | S2779 |
| Methotrexate | MedChemExpress | OL-14377 |
| MG-132 | SelleckChem | S2619 |
| ML-9-HCl | ENZO | EI-153 |
| Mocetinostat (MGCD0103) | SelleckChem | S1122 |
| Naftopidil | SelleckChem | S2126 |
| Nateglinide | SelleckChem | S2489 |
| Nebivolol HCl | SelleckChem | S1549 |
| Olaparib (AZD2281, Ku-0059436) | LC Laboratories | O-9201 |
| Osimertinib (AZD9291) | SelleckChem | S7297 |
| Paclitaxel | Sigma | 17191 |
| Panobinostat | NCI Chemotherapeutic Agents Repository | 761190 |
| 2'-Amino-3'-methoxyflavone | LC Laboratories | P-4313 |
| Pimobendan | SelleckChem | S1550 |
| PLX-4720 | SelleckChem | S1152 |
| Ponatinib (AP24534) | LC Laboratories | P-7022 |
| PP2 | ENZO | EI-297 |
| Pracinostat (SB939) | SelleckChem | S1515 |
| Primaquine Diphosphate | SelleckChem | S4237 |
| Quercetin | ENZO | AC-1142 |
| Quisinostat (JNJ-26481585) | SelleckChem | S1096 |
| RAF-265 | MedChemExpress | HY-10248 |
| Rapamycin (Sirolimus) | SelleckChem | S1039 |
| SB-253226 | GSK PKIS | N/A |
| Selumetinib (AZD-6244) | LC Laboratories | S-4490 |
| SP 600125 | ENZO | EI-305 |
| Sunitinib Malate | SelleckChem | S1042 |
| TAK-632 | SelleckChem | S7291 |
| Tanespimycin (17-AAG) | SelleckChem | S1141 |
| Thioridazine hydrochloride | SelleckChem | S5563 |
| Trametinib | LC Laboratories | T8123 |
| AG-370 | ENZO | EI-229 |
| U-0126 | ENZO | EI-282 |
| Ulixertinib (BVD-523, VRT752271) | SelleckChem | S7854 |
| Vemurafenib (PLX4032) | SelleckChem | S1267 |
| Verteporfin | SelleckChem | S1787 |
| Vindesine | Sequoia Research Products Ltd. (NIH Clinical Collection) | SRP01038v |
| Vinorelbine Tartrate | SelleckChem | S4269 |
| ZM 447439 | SelleckChem | S1103 |
| **Critical Commercial Assays** | | |
| Tru-Seq stranded mRNA sample prep kit | Illumina | Cat # RS-122-2101 |
| Reverse Transcription Kit | QuantiTect | Cat # 205311 |
| IQ SYBR Green Supermix | BioRad | Cat # 170 |
| **Deposited Data** | | |
| Fitted combination surface plots | This paper | https://github.com/QuLab-VU/MuSyC_Cell.git; in folder(s): Code_Paper_Figures/Fig2(3)/html |
| Code for generating paper plots | This paper | https://github.com/QuLab-VU/MuSyC_Cell.git; in folder(s): Code_Paper_Figures/ |
| Table of fitted parameters for all experiments | This paper | https://github.com/QuLab-VU/MuSyC_Cell.git; in folder: Data; files: MasterResults.csv and MasterResults_plx_dpi_melPanel |
| RT-qPCR quantification of NOX5 expression | This paper | https://github.com/QuLab-VU/MuSyC_Cell.git; in file: Data/nox5Expr.csv |
| DIP rate calculations | This paper | https://github.com/QuLab-VU/MuSyC_Cell.git; in folder(s): Data; files: HTS018_rates, HTS022_timeavg_rates_sub2.csv, -03-27-2018-dpi+plx-cm_bp_timeavg_preCalcDIP_timSub.csv, dasatinib_osimertinib_cellavista_cm_8-24-17.csv, linsitinib_osimertinib_cellavista_cm_8-24-17.csv, HTS015_017_Combined.csv |
| cFP raw data | This paper | https://github.com/QuLab-VU/MuSyC_Cell.git; in folder(s): Data/SKMEL5_cFP |
| List of DEGs | This paper | https://github.com/QuLab-VU/MuSyC_Cell.git; in folder(s): Data/DEGs_GO_Analysis |
| Raw RNA-seq data for subclones | GEO | GEO: GSE122041 |
| This data is also available from Mendeley Data | This paper | https://doi.org/10.17632/n8bp8db5ff.1 |
| **Experimental Models: Cell Lines** | | |
| PC9-H2B.RFP | Tyson et al., Fractional proliferation: a method to deconvolve cell population dynamics from single-cell data (W. Pao at UPenn) | N/A |
| SKMEL5-H2B.RFP | Paudel et al., A nonquiescent “idling” population state in drug-treated, BRAF-mutated melanoma (ATCC) | HTB-70 |
| WM1799-H2B.RFP | Paudel et al. (M. Herlyn at Wistar Institute) | N/A |
| WM983B-H2B.RFP | Paudel et al. (M. Herlyn at Wistar Institute) | N/A |
| A375-H2B.RFP-FUCCI | Paudel et al. (ATCC) | CRL-1619 |
| SKMEL28-H2B.RFP-FUCCI | Paudel et al. (ATCC) | HTB-72 |
| WM2664-H2B.RFP | Paudel et al. (M. Herlyn at Wistar Institute) | N/A |
| A2058-H2B.RFP | Paudel et al. (ATCC) | CRL-11147 |
| SKMEL5.SC10-H2B.RFP | Paudel et al.; derived subclone from SKMEL5-H2B.RFP | N/A |
| SKMEL5.SC07-H2B.RFP | Paudel et al.; derived subclone from SKMEL5-H2B.RFP | N/A |
| SKMEL5.SC01-H2B.RFP | Paudel et al.; derived subclone from SKMEL5-H2B.RFP | N/A |
| **Oligonucleotides** | | |
| NOX5 forward primer: GGCTCAAGTCCTACCACTGGA | This paper | N/A |
| NOX5 reverse primer: GAACCGTGTACCCAGCCAAT | This paper | N/A |
| HPRT forward primer: TGCTCGAGATGTGATGAAGGAG | This paper | N/A |
| HPRT reverse primer: TGATGTAATCCAGCAGGTCAGC | This paper | N/A |
| 36B4 forward primer: CATGTTGCTGGCCAATAAGG | This paper | N/A |
| 36B4 reverse primer: TGGTGATACCTAAAGCCTGGAA | This paper | N/A |
| PGC1a forward primer: TGCCCTGGATTGTTGACATGA | This paper | N/A |
| PGC1a reverse primer: TTTGTCAGGCTGGGGGTAGG | This paper | N/A |
| **Recombinant DNA** | | |
| pHIV-H2B-mRFP plasmid (see Experimental Model and Subject Details) | Welm et al., Lentiviral transduction of mammary stem cells for analysis of gene function during development and cancer | Plasmid #18982 |
| **Software and Algorithms** | | |
| Scikit-image | van der Walt et al., scikit-image: image processing in Python | N/A |
| RabbitMQ/Celery | www.celeryproject.org | N/A |
| GNU parallel | Tange, GNU parallel: the command-line power tool | N/A |
| SciPy | Jones, Oliphant, and Peterson, SciPy: open source scientific tools for Python | N/A |
| Matplotlib | Hunter, Matplotlib: a 2D graphics environment | N/A |
| Pandas | McKinney (2010), Data structures for statistical computing in Python, Proceedings of the 9th Python in Science Conference, pp. 51–56 | N/A |
| NumPy | Oliphant, Guide to NumPy | N/A |
| PyMC3 | Salvatier, Wiecki, and Fonnesbeck, Probabilistic programming in Python using PyMC3 | N/A |
| HISAT2 | Kim and Salzberg, HISAT: a fast spliced aligner with low memory requirements | N/A |
| featureCounts | Liao, Smyth, and Shi, featureCounts: an efficient general purpose program for assigning sequence reads to genomic features | N/A |
| Bioconductor (R) | www.bioconductor.org | N/A |
| ENRICHR (R) | Kuleshov et al., Enrichr: a comprehensive gene set enrichment analysis web server 2016 update | N/A |
| DESeq2 | Love, Huber, and Anders, Moderated estimation of fold change and dispersion for RNA-seq data with DESeq2 | N/A |

### Contact for Reagent and Resource Sharing
Further information and requests for resources and reagents should be directed to and will be fulfilled by the Lead Contact, Vito Quaranta ([email protected]).
### Experimental Model and Subject Details
PC9 (previously PC-14, gender unknown) cells were obtained from W. Pao (U Penn.) and were cultured in RPMI 1640 medium containing 10% FBS at 37°C and 5% CO2. Cells were engineered to express histone 2B-mRFP via lenti-viral transfection using the pHIV-H2B-mRFP plasmid ( • Welm B.E. • Dijkgraaf G.J. • Bledau A.S. • Welm A.L. • Werb Z. Lentiviral transduction of mammary stem cells for analysis of gene function during development and cancer. ) as previously described ( • Tyson D.R. • Garbett S.P. • Frick P.L. • Quaranta V. Fractional proliferation: a method to deconvolve cell population dynamics from single-cell data. ).
A single-cell-derived clonal population, demonstrated to exhibit proliferation characteristic of the parental population, was then selected by limiting dilution. BRAFV600-mutant melanoma cell lines (A2058 (M), WM1799 (U), A375 (F), WM983B (M), SKMEL5 (F), SKMEL28 (M), WM2664 (F); M = male, F = female, U = unknown) were obtained from ATCC or M. Herlyn (Wistar Institute) (see Key Resources Table) and were cultured in DMEM medium containing 2 mM glutamine, 4.5 g/L glucose, 10% FBS, and no sodium pyruvate (catalog 11965-092) as previously described ( • Hardeman K.N. • Peng C. • Paudel B.B. • Meyer C.T. • Luong T. • Tyson D.R. • Young J.D. • Quaranta V. • Fessel J.P. Dependence on glycolysis sensitizes BRAF-mutated melanomas for increased response to targeted BRAF inhibition. ). SKMEL5.SC10, SKMEL5.SC07, and SKMEL5.SC01 are single-cell-derived subclones from SKMEL5 ( • Paudel B.P. • Harris L.A. • Hardeman K.N. • Abugable A.A. • Hayford C.E. • Tyson D.R. • Quaranta V. A nonquiescent “idling” population state in drug-treated, BRAF-mutated melanoma. ). Cell lines were tested for mycoplasma before each experiment.
### Methods Details
#### Key Equations
For the full derivation of these equations, see subsection 4, Derivation of Generalized 2-Dimensional Hill Equation. This section is meant to serve as a quick reference guide for the main equations used in the paper. If the behavior of the drugs in the model formulation of Figure S1B obeys detailed balance, then the effect of the combination (i.e., the height of the combination surface) is described by
$E_d = \frac{C_1^{h_1} C_2^{h_2} E_0 + d_1^{h_1} C_2^{h_2} E_1 + C_1^{h_1} d_2^{h_2} E_2 + (\alpha_2 d_1)^{h_1} d_2^{h_2} E_3}{C_1^{h_1} C_2^{h_2} + d_1^{h_1} C_2^{h_2} + C_1^{h_1} d_2^{h_2} + (\alpha_2 d_1)^{h_1} d_2^{h_2}},$ (Equation 1)
where Ed represents the expected effect for a given dose pair (d1, d2) and is specified with 9 parameters defined in Table S1. In addition, detailed balance enforces the constraint that
$\alpha_1^{h_2} = \alpha_2^{h_1}.$ (Equation 2)
α is a unitless scalar transforming dose d into an effective dose αd and is used to quantify synergistic potency in MuSyC. Synergistic efficacy (β) is calculated from E0, E1, E2, and E3. β is defined in Equation 3 and is interpreted as the percent increase in maximal efficacy of the combination over the most efficacious single agent. The observed β at the maximum of the tested concentrations is defined in Equation 4.
$\beta = \frac{\min(E_1, E_2) - E_3}{E_0 - \min(E_1, E_2)}$ (Equation 3)
$\beta_{obs} = \frac{\min[E_1(d_1^{max}), E_2(d_2^{max})] - E_3(d_1^{max}, d_2^{max})}{E_0 - \min[E_1(d_1^{max}), E_2(d_2^{max})]}$ (Equation 4)
Equation 1 can be re-written to include β explicitly by replacing E3 with $\min(E_1, E_2) - \beta\,(E_0 - \min(E_1, E_2))$, resulting in the following equation.
$E_d = \frac{C_1^{h_1} C_2^{h_2} E_0 + d_1^{h_1} C_2^{h_2} E_1 + C_1^{h_1} d_2^{h_2} E_2 + (\alpha_2 d_1)^{h_1} d_2^{h_2} \left[\min(E_1, E_2) - \beta\,(E_0 - \min(E_1, E_2))\right]}{C_1^{h_1} C_2^{h_2} + d_1^{h_1} C_2^{h_2} + C_1^{h_1} d_2^{h_2} + (\alpha_2 d_1)^{h_1} d_2^{h_2}}$ (Equation 5)
For drugs that do not follow detailed balance, we have derived a more general formulation with 12 parameters:
$E_d = \begin{bmatrix} E_0 & E_1 & E_2 & E_3 \end{bmatrix} \cdot \begin{bmatrix} -(r_1 d_1^{h_1} + r_2 d_2^{h_2}) & r_{-1} & r_{-2} & 0 \\ r_1 d_1^{h_1} & -(r_{-1} + r_2 (\alpha_1 d_2)^{h_2}) & 0 & r_{-2} \\ r_2 d_2^{h_2} & 0 & -(r_1 (\alpha_2 d_1)^{h_1} + r_{-2}) & r_{-1} \\ 1 & 1 & 1 & 1 \end{bmatrix}^{-1} \cdot \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}$ (Equation 6)
where again E3 can be replaced to include β explicitly. Because we do not know a priori whether combinations will follow detailed balance, we use an information-theoretic approach to pick the best model for the data. We have defined six tiers of model complexity, and the best model is selected by minimizing the deviance information criterion. (See the Quantification and Statistical Analysis section, subsection 1, Fitting Dose-Response Surfaces, for a description of the fitting algorithm and Table S5 for a description of the model tiers.)
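As a quick illustration of how Equations 3 and 4 are applied once a surface has been fit, the snippet below computes β and an approximate βobs from a set of fitted parameters. All numbers are invented, and the helper names are ours rather than from the MuSyC release.

```python
def beta(E0, E1, E2, E3):
    """Synergistic efficacy (Equation 3): fractional gain in maximal
    effect of the combination over the strongest single agent."""
    return (min(E1, E2) - E3) / (E0 - min(E1, E2))

def hill_1d(d, E0, Emax, C, h):
    """4-parameter Hill curve (Equation 12) for a single agent."""
    return Emax + (E0 - Emax) * C**h / (C**h + d**h)

# Hypothetical fitted parameters (DIP rates) and maximum tested doses
E0, E1, E2, E3 = 0.04, 0.01, 0.02, -0.03
C1, C2, h1, h2 = 0.1, 0.5, 1.2, 0.9
d1max, d2max = 10.0, 10.0

print("beta =", beta(E0, E1, E2, E3))

# beta_obs (Equation 4) uses the single-agent effects evaluated at the
# maximum tested doses and the combination effect at (d1max, d2max)
E1_obs = hill_1d(d1max, E0, E1, C1, h1)
E2_obs = hill_1d(d2max, E0, E2, C2, h2)
E3_obs = -0.025  # combination effect at max doses, hypothetical value
print("beta_obs =", (min(E1_obs, E2_obs) - E3_obs) / (E0 - min(E1_obs, E2_obs)))
```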
#### Comparison to Alternative Synergy Models
Several other methods for calculating synergy exist, including the long-standing traditional methods Loewe ( • Loewe S. • Muischnek H. Über Kombinationswirkungen. , • Loewe S. Versuch einer allgemeinen Pharmakologie der Arznei-Kombinationen. ), Bliss ( • Bliss C.I. The toxicity of poisons applied jointly. ), HSA (Gaddum, Pharmacology. , • Greco W.R. • Bravo G. • Parsons J.C. The search for synergy: a critical review from a response surface perspective. ), and CI ( • Chou T.C. • Talalay P. Quantitative analysis of dose-effect relationships: the combined effects of multiple drugs or enzyme inhibitors. ), as well as more recent methods such as ZIP ( • Wennerberg K. • Aittokallio T. • Tang J. Searching for drug synergy in complex dose–response landscapes using an interaction potency model. ), BRAID ( • Twarog N.R. • Stewart E. • Hammill C.V. • Shelat A.A. BRAID: A unifying paradigm for the analysis of combined drug action. ), the effective dose model ( • Zimmer A. • Katzir I. • Dekel E. • Mayo A.E. • Alon U. Prediction of multidimensional drug dose responses based on measurements of drug pairs. ), and Schindler’s Hill-PDE model ( • Schindler M. Theory of synergistic effects: hill-type response surfaces as “null-interaction” models for mixtures. ). All of these methods, as well as our own, define a null surface. Combinations with effects greater than or less than expected based on the null surface are deemed synergistic or antagonistic, respectively. These methods broadly use one of two approaches to quantify synergy. Loewe, Bliss, CI, HSA, Schindler’s Hill-PDE, and ZIP quantify synergy at every concentration based on how the experimentally measured response deviates from the null surface. BRAID, the effective dose model, and MuSyC provide equations with synergy parameters describing the entire surface, which are fit to experimental data using non-linear curve-fitting techniques. Here, we briefly compare our model to each of these others and show that our model (1) describes distinct combination surfaces, (2) results in synergy parameters which are straightforward to interpret, (3) is not restricted to a special class of effects with bounded scales, and (4) reduces to many of these other approaches in special cases, thereby unifying and generalizing seemingly disparate synergy principles.
#### The Dose Equivalence Principle: Loewe and CI
The first prevalent foundational principle, established by Loewe ( • Loewe S. • Muischnek H. Über Kombinationswirkungen. ) and subsequently expanded on by CI ( • Chou T.C. • Talalay P. Quantitative analysis of dose-effect relationships: the combined effects of multiple drugs or enzyme inhibitors. ), is the Dose Equivalence Principle. This principle states that for a given effect magnitude E achieved by dose x of drug X alone or dose y of drug Y alone, there exists a constant ratio R = y/x such that using $\Delta x$ less of drug X can always be compensated for by using $\Delta y = R\,\Delta x$ more of drug Y. Therefore, the null surface is only defined for combinations whose magnitude of effect is less than the weaker drug’s maximal effect. This is because, for combination effects greater than the maximal effect of the weaker drug, no amount of the weaker drug can compensate for reducing the dose of the stronger drug. The resulting null surfaces have linear isoboles. Our model recovers this under the constraint that the two drugs are maximally antagonistic.
This can be seen by setting α = 0 and reducing Equation 6 to
$(E - E_0) + (E - E_1)\left(\frac{d_1}{\Phi_1}\right)^{h_1} + (E - E_2)\left(\frac{d_2}{\Phi_2}\right)^{h_2} = 0.$
From this it is easy to see that when h1 = h2 = 1, iso-effect lines (contours of constant E) are the linear isoboles characteristic of the Loewe additivity and CI null models. However, even in this case MuSyC is not limited by the weaker drug, and can therefore extend Loewe’s isoboles to any combination of doses. The requirement that α = 0 means the Loewe and CI null models assume infinite potency antagonism (α1 = α2 = 0). Therefore, combinations with $0 < \alpha < 1$ may be deemed synergistic by Loewe or CI. However, these values directly reflect a decrease in potency, and our formulation accurately identifies this as antagonistic. Finally, their null model also ignores the possible effect of Hill slopes not equal to 1. For drugs with h < 1, they will tend to overestimate synergy, while drugs with h > 1 will lead to underestimated synergy (Figures 5C and 5D). Because their null model relies on such specific assumptions, which are not true for many drugs, it is generally impossible to know whether their results reflect true underlying synergy/antagonism or simply stem from an inappropriate null surface.
#### The Multiplicative Survival Principle: Bliss and Effective Dose Model
The other prevalent foundational synergy principle is multiplicative survival, described by Bliss ( • Bliss C.I. The toxicity of poisons applied jointly. ). Bliss’ null model assumes the probability of a cell being unaffected by drug 1 (U1) is independent of the probability of a cell being unaffected by drug 2 (U2). From this, the null surface states that the probability of being unaffected by both drug 1 and drug 2 in combination is $U_{1,2} = U_1 U_2$. When there is no potency synergy or antagonism, MuSyC reproduces this behavior in the following manner. Setting α1 = α2 = 1, consider the fraction of unaffected cells, U, for each drug in isolation:
$U_i = \frac{1}{1 + \left(\frac{d_i}{\Phi_i}\right)^{h_i}}.$
And for the two drugs in combination, solving Equation 67 for U we get
$U_{1,2} = \frac{1}{1 + \left(\frac{d_1}{\Phi_1}\right)^{h_1} + \left(\frac{d_2}{\Phi_2}\right)^{h_2} + \left(\frac{d_1}{\Phi_1}\right)^{h_1}\left(\frac{d_2}{\Phi_2}\right)^{h_2}}.$
From this, it is easy to verify that $U_{1,2} = U_1 U_2$, which is equivalent to Bliss independence. However, the Bliss method explicitly requires that the effect being measured in the combination surface be “percent affected,” such as percent of cells killed vs. percent of cells remaining. For drugs which induce different maximum effects, Bliss is unable to account for the difference between being affected by drug 1 (E1), drug 2 (E2), and/or both (E1,2), and may give unreliable results. Our model addresses this by decoupling the effect of a drug (E0, E1, E2, E3) from the “percent affected” by a drug (U, A1, A2, A1,2). If the effect itself is measuring percent (un)affected, that corresponds to the case where E0 = 1, E1 = E2 = E3 = 0, in which case MuSyC’s null model is identical to Bliss’. Zimmer et al. introduced the effective dose model ( • Zimmer A. • Katzir I. • Dekel E. • Mayo A.E. • Alon U. Prediction of multidimensional drug dose responses based on measurements of drug pairs. ) as a parameterized version of Bliss, and it shares the same null surface. However, while Bliss defines synergy at every concentration independently, the effective dose model introduces a parameter ai,j to quantify synergy, similar to MuSyC’s potency synergy (α). The ai,j parameter reflects how the presence of drug i modulates the potency of drug j.
However, like Bliss, the effective dose model can only be applied to drug responses where the measured drug effect is “percent affected,” thereby implicitly requiring that the maximum effect of both drugs, and of the combination, be 100% affected, which is commonly not observed in dose-response studies ( • Fallahi-Sichani M. • Heiser L.M. • Gray J.W. • Sorger P.K. Metrics other than potency reveal systematic variation in responses to cancer drugs. ).
#### ZIP
Like the effective dose model ( • Zimmer A. • Katzir I. • Dekel E. • Mayo A.E. • Alon U. Prediction of multidimensional drug dose responses based on measurements of drug pairs. ), as well as our potency synergy (α), ZIP ( • Wennerberg K. • Aittokallio T. • Tang J. Searching for drug synergy in complex dose–response landscapes using an interaction potency model. ) works by quantifying how one drug shifts the potency of the other. ZIP is formulated for arbitrary E0 and Emax; however, it assumes Emax is the same for both drugs, as well as for the combination (explicitly, E1 = E2 = E3). To identify potency shifts, the ZIP method fixes the concentration of one drug, then fits a Hill-equation dose response for the other drug. However, for combinations with efficacy synergy or antagonism, dose responses can have non-Hill, and even non-monotonic, shapes. In our data, several drugs displayed this behavior. Because our method accounts explicitly for efficacy synergy, our formulation is able to describe such complex drug combination surfaces where ZIP fails. Furthermore, ZIP calculates synergy at every concentration. This is similar to the approach taken by Bliss, Loewe, and CI, and can be used to find doses which “maximize” the observed synergy. However, quantifying synergy on a dose-by-dose basis confounds synergy of potency and efficacy, which emerge only on inspection of the global dose-response surface. Additionally, this dose-dependent synergy often leads to ambiguous results about whether a given combination is synergistic or not, as it synergizes at some concentrations and antagonizes at others (Figure 5A).
#### BRAID
Like ZIP, BRAID ( • Twarog N.R. • Stewart E. • Hammill C.V. • Shelat A.A. BRAID: A unifying paradigm for the analysis of combined drug action. ) assumes that each drug alone has a sigmoidal dose response, and constructs a Hill-like equation for the combination. This equation uses a single dose parameter κ which combines the doses of both individual drugs. To uniquely solve for κ, this formalism, like Loewe additivity, adds the constraint that a drug in combination with itself must be neither synergistic nor antagonistic. By adjusting κ, BRAID is able to fit complex drug combination surfaces, including non-monotonic responses. Because BRAID fits the whole combination surface using a single parameter, it can be used to make unambiguous statements about whether the combination is synergistic or antagonistic. Nevertheless, BRAID does not account for differences in synergy due to efficacy vs. potency, whereas we find many combinations that are synergistic with respect to one but antagonistic with respect to the other. Further, the biochemical interpretation of κ is not straightforward. And finally, the BRAID model is unable to fit combination surfaces with synergistic efficacy, as it assumes that the maximum effect of the combination is equal to the maximum effect of the stronger single drug.
#### Highest Single Agent (HSA)
HSA, originally proposed in Gaddum’s Pharmacology and later revived by Greco ( • Greco W.R. • Bravo G. • Parsons J.C.
The search for synergy: a critical review from a response surface perspective. ), is a simple heuristic which argues synergy is any combination effect which exceeds the effect of either single agent. While β is conceptually similar to HSA, β provides a global view of the possible increase in effect rather than a point-by-point dose comparison as done in HSA. Because HSA is calculated at every dose, it cannot distinguish between synergistically efficacious combinations and synergistically potent combinations, as both will increase the effect at intermediate doses (Figure S2). Additionally, as HSA is only defined on a dose-by-dose basis with no model fit, it is sensitive to the dose range selected.
#### Schindler 2D-Partial Differential Equation (PDE) Model
Schindler’s Hill PDE was derived to impute the dose-response surface from the single dose-response curves alone ( • Schindler M. Theory of synergistic effects: hill-type response surfaces as “null-interaction” models for mixtures. ). Therefore, it does not contain any fit parameters but rather defines a null surface for which synergy results in deviations from the surface. While Schindler did not specify how to account for these deviations, he postulated that some implementation of perturbation theory would be sufficient. Like CI and the effective dose model, Schindler’s framework requires effects in a range between 0 and 1, based on the assumption that the metric is a percent. Therefore, Schindler cannot be applied to data collected with other types of metrics (e.g., DIP rates). Additionally, Schindler’s maximum effect of the combination (E3) is set equal to the average of the single-drug maximal effects. This allows for smooth transitions between the two single dose-response curves but results in some non-intuitive solutions. For example, if drug 1 has a maximal effect of 50% and drug 2 has a maximal effect of 70%, the expected additive effect of the combination in the null model is 60%, which is less than the maximal effect of drug 2. Therefore, an effect of 65% in combination, though less than what is achievable with drug 2 alone, is designated synergistic by Schindler.
#### Sham Experiment
It is common for synergy metrics to examine the special case in which the two drugs being combined are actually the same drug, in a so-called sham experiment first postulated by Chou ( • Chou T.C. • Talalay P. Quantitative analysis of dose-effect relationships: the combined effects of multiple drugs or enzyme inhibitors. ). Famously, Loewe, Combination Index, and other methods based on the Dose Equivalence Principle are sham compliant, while Bliss and other methods based on the Multiplicative Survival Principle are not. Because our method distinguishes between two types of synergy, we tested sham compliance for each independently. It is immediately apparent that synergistic efficacy is sham compliant in all conditions. This can be observed by substituting E1 = E2 = E3 (the maximum effect of the drug remains constant) into the definition for β in Equation 3:
$\beta = \frac{\min(E_1, E_2) - E_3}{E_0 - \min(E_1, E_2)} = 0.$ (Equation 7)
To test the sham compliance of synergistic potency, we can write the full dose-response surface as a direct 2D extension of the 1D dose-response curve in Equation 12 by replacing d with d1 + d2.
$E_d = \frac{E_m (d_1 + d_2)^h + E_0 C^h}{(d_1 + d_2)^h + C^h}$ (Equation 8)
Our 2D generalization of Equation 12, given by Equation 1, can be rewritten for the case of 2 identical drugs by observing that C1 = C2 = C, h1 = h2 = h, and E1 = E2 = E3 = Em, resulting in
$E_d = \frac{C^{2h} E_0 + d_1^h C^h E_m + C^h d_2^h E_m + (\alpha_2 d_1 d_2)^h E_m}{C^{2h} + d_1^h C^h + C^h d_2^h + (\alpha_2 d_1 d_2)^h}.$ (Equation 9)
Setting Equations 8 and 9 equal to one another, we find
$\alpha_2^h = \frac{C^h \left[(d_1 + d_2)^h - d_1^h - d_2^h\right]}{(d_1 d_2)^h}.$ (Equation 10)
This equality is true when α2 = α1 = 0 and h = 1. This makes sense as our model reduces to Loewe additivity under those conditions, and Loewe additivity was developed to explicitly address the sham-combination case. In conclusion, MuSyC satisfies the sham experiment in all conditions where Loewe is the appropriate model.
#### One-Dimensional Sigmoidal Dose-Response Curve
In pharmacology, the effect of a drug is usually described by the Hill equation, which arises from the equilibrium of a reversible transformation between an unaffected population (U) and an affected population (A):
$U \underset{r_{-1}}{\overset{r_1 d^h}{\rightleftharpoons}} A$ (Equation 11)
Here, [d] is the concentration of the drug, h is the Hill slope, and r1 and r−1 are constants corresponding to its rate of action. Solving for the equilibrium results in
$\frac{\partial U}{\partial t} = A \cdot r_{-1} - U \cdot r_1 d^h \equiv 0$
$\frac{A}{U} = \frac{r_1 d^h}{r_{-1}}.$
When $d^h = \frac{r_{-1}}{r_1}$, half the population is affected, and half is unaffected (A = U). This dose is the EC50, denoted as $C^h = \frac{r_{-1}}{r_1}$. Adding the constraint that U + A = 1, which states that 100% of the population is either unaffected or affected, we find the classic Hill equation:
$U = \frac{C^h}{C^h + d^h}.$
If the unaffected and affected populations differ phenotypically by some arbitrary effect (e.g., proliferation rate), the observed effect over the whole population at dose d of some drug will be a weighted average of the two effects by the percent affected and unaffected. Namely,
$E_d = U \cdot E_0 + A \cdot E_m,$
where E0 is the effect characteristic of the unaffected population, and Em is the effect characteristic of the affected population. From this we find the final form of a 4-parameter sigmoidal equation describing dose response due to Hill kinetics:
$\frac{E_d - E_m}{E_0 - E_m} = \frac{C^h}{C^h + d^h}$ (Equation 12)
#### Extending the Mass Action Paradigm to a Simple Four-State Model Assuming Detailed Balance
Consider a cell type U that can transition into a “drugged” state A1 in the presence of drug d1 and into a different drugged state A2 in the presence of drug d2 (Figure S1B). We can write these transitions as
$U \underset{r_{-1}}{\overset{r_1 [d_1]}{\rightleftharpoons}} A_1$ (Equation 13)
$U \underset{r_{-2}}{\overset{r_2 [d_2]}{\rightleftharpoons}} A_2,$ (Equation 14)
where [di] denotes the concentration of drug di. At equilibrium, the forward and reverse rates of these processes are equal, i.e.,
$r_1 [d_1][U] = r_{-1} [A_1]$ (Equation 15)
$r_2 [d_2][U] = r_{-2} [A_2],$ (Equation 16)
where [Ai] is the population of cell state Ai. Defining Θx as the ratio of reverse to forward rate constants ($\Theta_x \equiv \frac{r_{-x}}{r_x}$) and assuming the system obeys detailed balance, we find
$\frac{[U]}{[A_1]} = \frac{\Theta_1}{[d_1]}$ (Equation 17)
$\frac{[U]}{[A_2]} = \frac{\Theta_2}{[d_2]}.$ (Equation 18)
Now assume that a fourth state exists, A1,2, corresponding to a “doubly” drugged state (Figure S1B). A1 cells can transition into this state in the presence of drug d2 and A2 cells can transition into this state in the presence of drug d1. We can write these processes as
$A_1 \underset{r_{-2}}{\overset{r_2 \alpha_1 [d_2]}{\rightleftharpoons}} A_{1,2}$ (Equation 19)
$A_2 \underset{r_{-1}}{\overset{r_1 \alpha_2 [d_1]}{\rightleftharpoons}} A_{1,2}.$ (Equation 20)
Note that without loss of generality, we set the forward rate constant for (19) equal to the same value in (14) multiplied by a factor α1 > 0. Similarly, the rate constant for (20) is the same as in (13) multiplied by a factor α2 > 0.
Here α represents how each drug potentiates the action of the other and can be interpreted as a change in the “effective” dose of one drug given the presence of the other. When α = 1, the effective dose of the first drug is unchanged by the presence of the second drug. When α < 1, more of the first drug is required to reach the same effective concentration in the presence of the second drug. Finally, when α > 1, the same concentration of the first drug is effectively increased by the second drug. Again asserting that the system obeys detailed balance at equilibrium, we have
$\frac{[A_1]}{[A_{1,2}]} = \frac{1}{\alpha_1} \frac{\Theta_2}{[d_2]}$ (Equation 21)
$\frac{[A_2]}{[A_{1,2}]} = \frac{1}{\alpha_2} \frac{\Theta_1}{[d_1]}.$ (Equation 22)
We can derive the relationship between the multiplicative factors α1 and α2 by rearranging Equation 17 as
$[U] = \frac{\Theta_1}{[d_1]} [A_1].$ (Equation 23)
Substituting for [A1] from Equation 21 gives
$[U] = \frac{1}{\alpha_1} \frac{\Theta_1}{[d_1]} \frac{\Theta_2}{[d_2]} [A_{1,2}].$ (Equation 24)
Substituting for [A1,2] from Equation 22 gives
$[U] = \frac{\alpha_2}{\alpha_1} \frac{\Theta_2}{[d_2]} [A_2].$ (Equation 25)
Finally, substituting for [A2] from Equation 18 gives
$[U] = \frac{\alpha_2}{\alpha_1} [U],$ (Equation 26)
i.e., α1 = α2 = α. Note this equality only holds for systems obeying detailed balance. In general, we do not assume this (see Section 4.4, Generalized Derivation without Assuming Detailed Balance), and α1 and α2 are independent (Figure S4). However, assuming detailed balance facilitates deriving a more intuitive form of the 2D Hill equation (Equation 1) compared to the full form (Equation 6). Now, we define the total cell count
$C_T \equiv [U] + [A_1] + [A_2] + [A_{1,2}].$ (Equation 27)
Substituting for [A1], [A2], and [A1,2] from Equations 17, 18, and 24, respectively, gives
$C_T = [U] + \frac{[d_1]}{\Theta_1} [U] + \frac{[d_2]}{\Theta_2} [U] + \alpha \frac{[d_1]}{\Theta_1} \frac{[d_2]}{\Theta_2} [U].$ (Equation 28)
Solving for [U] gives
$[U] = \frac{\Theta_1 \Theta_2 C_T}{\Theta_1 \Theta_2 + [d_1] \Theta_2 + \Theta_1 [d_2] + \alpha [d_1][d_2]}.$ (Equation 29)
Substituting Equation 29 into Equation 17 and rearranging gives
$[A_1] = \frac{[d_1] \Theta_2 C_T}{\Theta_1 \Theta_2 + [d_1] \Theta_2 + \Theta_1 [d_2] + \alpha [d_1][d_2]}.$ (Equation 30)
Similarly, from Equation 18 we get
$[A_2] = \frac{\Theta_1 [d_2] C_T}{\Theta_1 \Theta_2 + [d_1] \Theta_2 + \Theta_1 [d_2] + \alpha [d_1][d_2]},$ (Equation 31)
and from Equation 24
$[A_{1,2}] = \frac{\alpha [d_1][d_2] C_T}{\Theta_1 \Theta_2 + [d_1] \Theta_2 + \Theta_1 [d_2] + \alpha [d_1][d_2]}.$ (Equation 32)
As in the derivation of the 1D Hill equation, the measured effect ($E_d$) is then the relative proportion of cells in each state multiplied by the effect characteristic of that state, as in
$E_d = E_0 \cdot U + E_1 \cdot A_1 + E_2 \cdot A_2 + E_3 \cdot A_{1,2}.$ (Equation 33)
Here we define the effect of each state (E0, E1, E2, E3) to be the proliferation rate in the following way. We assume that cells in each state can divide and die at rates characteristic of the state, i.e.,
$C_i \xrightarrow{k_i^{div}} C_i + C_i$ (Equation 34)
$C_i \xrightarrow{k_i^{die}} \emptyset,$ (Equation 35)
where Ci is a specific state of the cell.
We define the drug-induced proliferation (DIP) rate for each state as the difference between the division and death rate constants, i.e.,
$k_i^{dip} \equiv k_i^{div} - k_i^{die}.$ (Equation 36)
Using Equation 27, the rate of change of the total cell population is
$\frac{dC_T}{dt} = \frac{d[U]}{dt} + \frac{d[A_1]}{dt} + \frac{d[A_2]}{dt} + \frac{d[A_{1,2}]}{dt}.$ (Equation 37)
From (13), (14), (19), (20), (34)–(36), we get
$\frac{dC_T}{dt} = k_0^{dip} [U] + k_1^{dip} [A_1] + k_2^{dip} [A_2] + k_3^{dip} [A_{1,2}].$ (Equation 38)
Substituting Equations 29, 30, 31, and 32 and rearranging, we get
$\frac{dC_T}{dt} = k_T^{dip} C_T,$ (Equation 39)
with
$k_T^{dip} \equiv \frac{\Theta_1 \Theta_2 k_0^{dip} + [d_1] \Theta_2 k_1^{dip} + \Theta_1 [d_2] k_2^{dip} + \alpha [d_1][d_2] k_3^{dip}}{\Theta_1 \Theta_2 + [d_1] \Theta_2 + \Theta_1 [d_2] + \alpha [d_1][d_2]}.$ (Equation 40)
Note that with a slight modification, Equation 40 can be written as
$k_T^{dip} = \frac{\Theta_1 k_0^{dip} + [d_1] k_1^{dip} + \frac{\Theta_1 [d_2]}{\Theta_2} k_2^{dip} + \frac{\alpha [d_1][d_2]}{\Theta_2} k_3^{dip}}{\Theta_1 + [d_1] + \frac{\Theta_1 [d_2]}{\Theta_2} + \frac{\alpha [d_1][d_2]}{\Theta_2}}.$ (Equation 41)
Therefore, if [d2] = 0 (i.e., single-drug treatment) we get
$k_T^{dip} = \frac{\Theta_1 k_0^{dip} + [d_1] k_1^{dip}}{\Theta_1 + [d_1]} = \frac{\Theta_1 k_0^{dip} + [d_1] k_1^{dip} + (\Theta_1 k_1^{dip} - \Theta_1 k_1^{dip})}{\Theta_1 + [d_1]} = \frac{(\Theta_1 + [d_1]) k_1^{dip} + \Theta_1 (k_0^{dip} - k_1^{dip})}{\Theta_1 + [d_1]} = k_1^{dip} + \frac{\Theta_1}{\Theta_1 + [d_1]} (k_0^{dip} - k_1^{dip}).$ (Equation 42)
Rearranging gives
$\frac{k_T^{dip} - k_1^{dip}}{k_0^{dip} - k_1^{dip}} = \frac{\Theta_1}{\Theta_1 + [d_1]}.$ (Equation 43)
Comparing to Equation 12, we see that Equation 43 is a one-dimensional sigmoidal dose-response curve with $E_d = k_T^{dip}$, $E_0 = k_0^{dip}$, $E_m = k_1^{dip}$, $C = \Theta_1$, and h = 1. By analogy, we surmise that Equation 40 is a 9-parameter, two-dimensional generalization of Equation 12, i.e.,
$E_d = \frac{C_1^{h_1} C_2^{h_2} E_0 + d_1^{h_1} C_2^{h_2} E_1 + C_1^{h_1} d_2^{h_2} E_2 + (\alpha_2 d_1)^{h_1} d_2^{h_2} E_3}{C_1^{h_1} C_2^{h_2} + d_1^{h_1} C_2^{h_2} + C_1^{h_1} d_2^{h_2} + (\alpha_2 d_1)^{h_1} d_2^{h_2}},$ (Equation 44)
with $E_d = k_T^{dip}$, $E_0 = k_0^{dip}$, $E_1 = k_1^{dip}$, $E_2 = k_2^{dip}$, $E_3 = k_3^{dip}$, $C_1 = \Theta_1$, $C_2 = \Theta_2$, h1 = 1, h2 = 1, and the additional parameter α2. Note that under the assumption of detailed balance we found α1 = α2 for the case when h = 1. Therefore, in the general case when h ≠ 1, $\alpha_1^{h_2} = \alpha_2^{h_1}$. Once Equation 44 has been fitted, α1 is thus uniquely determined by this constraint.
#### Four-state model with multiple steps between states
Let us assume that instead of occurring in a single step, the cell state transitions are h-step processes, i.e.,
$C_i \underset{r_{-x,1}}{\overset{\alpha_y r_{x,1} [d_x]}{\rightleftharpoons}} C_{ij}^1 \underset{r_{-x,2}}{\overset{\alpha_y r_{x,2} [d_x]}{\rightleftharpoons}} C_{ij}^2 \cdots \underset{r_{-x,h-1}}{\overset{\alpha_y r_{x,h-1} [d_x]}{\rightleftharpoons}} C_{ij}^{h-1} \underset{r_{-x,h}}{\overset{\alpha_y r_{x,h} [d_x]}{\rightleftharpoons}} C_j.$ (Equation 45)
Assuming that all steps are in rapid equilibrium, it is straightforward to show that
$\frac{[C_i]}{[C_j]} = \frac{\prod_{m=1}^{h} \Theta_{x,m}}{[\alpha_y d_x]^h},$ (Equation 46)
where $\Theta_{x,m} \equiv r_{-x,m}/r_{x,m}$. Defining $\Phi_x \equiv \sqrt[h]{\prod_{m=1}^{h} \Theta_{x,m}}$, Equation 46 can be written as
$\frac{[C_i]}{[C_j]} = \frac{\Phi_x^h}{[\alpha_y d_x]^h},$ (Equation 47)
which is the well-known Median-Effect Equation from Chou ( • Chou T.-C. • Talalay P. Analysis of combined drug effects: a new look at a very old problem. , • Chou T.C. • Talalay P. Quantitative analysis of dose-effect relationships: the combined effects of multiple drugs or enzyme inhibitors. , • Chou T.C. Drug combination studies and their synergy quantification using the Chou-Talalay method. ). Replacing reactions (13) and (14) with multi-step processes of the form (45) gives us
$\frac{[U]}{[A_1]} = \frac{\Phi_1^{h_1}}{[d_1]^{h_1}}$ (Equation 48)
$\frac{[U]}{[A_2]} = \frac{\Phi_2^{h_2}}{[d_2]^{h_2}}.$ (Equation 49)
Similarly, we replace reactions (19) and (20) with the same multi-step process, except with the rate constant for the $C_i \rightarrow C_{ij}^1$ transition (entry into the cascade) equal to $\alpha_y r_{x,1} [d_x]$, giving
$\frac{[A_1]}{[A_{1,2}]} = \frac{\Phi_2^{h_2}}{[\alpha_1 d_2]^{h_2}}$ (Equation 50)
$\frac{[A_2]}{[A_{1,2}]} = \frac{\Phi_1^{h_1}}{[\alpha_2 d_1]^{h_1}}.$ (Equation 51)
Note, we assume that the number of steps in the cascade (45) is dependent on the drug type (i.e., $U \rightarrow A_1$ and $A_2 \rightarrow A_{1,2}$, both driven by d1, take h1 steps, while $U \rightarrow A_2$ and $A_1 \rightarrow A_{1,2}$, both driven by d2, take h2 steps).
Using Equations 48, 49, 50, and 51 and again defining the total cell count $C_T$ as in Equation 27, we derive $[U]=\frac{\Phi_1^{h_1}\Phi_2^{h_2}C_T}{\Phi_1^{h_1}\Phi_2^{h_2}+[d_1]^{h_1}\Phi_2^{h_2}+\Phi_1^{h_1}[d_2]^{h_2}+[\alpha_2 d_1]^{h_1}[d_2]^{h_2}}$ (Equation 52) $[A_1]=\frac{[d_1]^{h_1}\Phi_2^{h_2}C_T}{\Phi_1^{h_1}\Phi_2^{h_2}+[d_1]^{h_1}\Phi_2^{h_2}+\Phi_1^{h_1}[d_2]^{h_2}+[\alpha_2 d_1]^{h_1}[d_2]^{h_2}}$ (Equation 53) $[A_2]=\frac{\Phi_1^{h_1}[d_2]^{h_2}C_T}{\Phi_1^{h_1}\Phi_2^{h_2}+[d_1]^{h_1}\Phi_2^{h_2}+\Phi_1^{h_1}[d_2]^{h_2}+[\alpha_2 d_1]^{h_1}[d_2]^{h_2}}$ (Equation 54) $[A_{1,2}]=\frac{[\alpha_2 d_1]^{h_1}[d_2]^{h_2}C_T}{\Phi_1^{h_1}\Phi_2^{h_2}+[d_1]^{h_1}\Phi_2^{h_2}+\Phi_1^{h_1}[d_2]^{h_2}+[\alpha_2 d_1]^{h_1}[d_2]^{h_2}}.$ (Equation 55) Therefore, in the same way that we arrived at Equation 40, we can derive $k_T^{dip}\equiv\frac{\Phi_1^{h_1}\Phi_2^{h_2}k_0^{dip}+[d_1]^{h_1}\Phi_2^{h_2}k_1^{dip}+\Phi_1^{h_1}[d_2]^{h_2}k_2^{dip}+[\alpha_2 d_1]^{h_1}[d_2]^{h_2}k_3^{dip}}{\Phi_1^{h_1}\Phi_2^{h_2}+[d_1]^{h_1}\Phi_2^{h_2}+\Phi_1^{h_1}[d_2]^{h_2}+[\alpha_2 d_1]^{h_1}[d_2]^{h_2}},$ (Equation 56) which is of the form of Equation 44 with $E_d=k_T^{dip}$, $E_0=k_0^{dip}$, $E_1=k_1^{dip}$, $E_2=k_2^{dip}$, $E_3=k_3^{dip}$, $C_1=\Phi_1$, and $C_2=\Phi_2$. From this it is clear the Hill coefficient (h) is related to the number of intermediate steps in the system. The derivation of Equation 56 assumes that the populations of all intermediate cell states $C_{ij}^{m}$ ($m\in\{1\ldots h-1\}$) in (45) are small ($\approx0$). (This is most evident in our use of Equation 27 for the total cell population, where we only consider the end states. However, it is also implicit in our use of Equation 46, which is used to derive Equations 52, 53, 54, and 55 that lead to Equation 56 via Equation 38. In other words, we are assuming that the intermediate states do not significantly contribute to the dynamics of the total cell population. Since it is not reasonable to assume that cells in these states do not divide and die, we must assume the percent occupancy of these states is near zero.) We can satisfy this assumption by requiring that all $r_{x,m}$, $r_{-x,m}\gg1$ $(m\in\{1\ldots h\})$ and $\Theta_{x,1}\gg\Theta_{x,2}\approx\ldots\approx\Theta_{x,h-1}\gg\Theta_{x,h}$. To see this, consider cell state U and all of its intermediate states between states A1 and A2. Let us define $U_T\equiv[U]+\sum_{m=1}^{h_1-1}[C_{01}^{m}]+\sum_{m'=1}^{h_2-1}[C_{02}^{m'}].$ (Equation 57) From (45), we see that $\frac{\Theta_{x,1}}{[d_x]}=\frac{[C_i]}{[C_{ij}^{1}]}$, $\frac{\Theta_{x,2}}{[d_x]}=\frac{[C_{ij}^{1}]}{[C_{ij}^{2}]}$, etc. Therefore, $U_T=[U]\left(1+\frac{[d_1]}{\Theta_{1,1}}+\frac{[d_1]^2}{\Theta_{1,1}\Theta_{1,2}}+\ldots+\frac{[d_1]^{h_1-1}}{\prod_{m=1}^{h_1-1}\Theta_{1,m}}+\frac{[d_2]}{\Theta_{2,1}}+\frac{[d_2]^2}{\Theta_{2,1}\Theta_{2,2}}+\ldots+\frac{[d_2]^{h_2-1}}{\prod_{m'=1}^{h_2-1}\Theta_{2,m'}}\right).$ (Equation 58) If $\Theta_{1,1}\gg1$ and $\Theta_{1,m}\approx1$ $(m\in\{2\ldots h_1-1\})$, and $\Theta_{2,1}\gg1$ and $\Theta_{2,m'}\approx1$ $(m'\in\{2\ldots h_2-1\})$, we get $U_T\approx[U]$, i.e., the populations of all intermediate states are ≈0. Now consider cell state A1 and all of its intermediate states between A1 and cell state A1,2. Similar to above, we have $A_1^T\equiv[A_1]+\sum_{m=1}^{h_2-1}[C_{13}^{m}]$ (Equation 59) and $A_1^T=[A_1]\left(1+\frac{[\alpha_1 d_2]}{\Theta_{2,1}}+\frac{[\alpha_1 d_2]^2}{\Theta_{2,1}\Theta_{2,2}}+\ldots+\frac{[\alpha_1 d_2]^{h_2-1}}{\prod_{m=1}^{h_2-1}\Theta_{2,m}}\right).$ (Equation 60) Thus, as before, if $\Theta_{2,1}\gg1$ and $\Theta_{2,m}\approx1$ $(m\in\{2\ldots h_2-1\})$ we have $A_1^T\approx[A_1]$. However, from (45) we also have $[A_1]=\frac{[d_1]}{\Theta_{1,h_1}}[C_{01}^{h_1-1}]=\frac{[d_1]^{h_1}}{\Theta_{1,1}\Theta_{1,2}\cdots\Theta_{1,h_1}}[U].$ (Equation 61) Since, from above, $\Theta_{1,1}\gg1$, we must require that $\Theta_{1,h_1}\ll1$, in order to offset the large value of $\Theta_{1,1}$, and that the remaining $\Theta_{1,m}$ neither inflate nor suppress the product $\prod_m\Theta_{1,m}$, to ensure that $[A_1]$ is not negligible relative to $[U]$. The latter condition means that $\Theta_{1,m}\approx1$ $(m\in\{2\ldots h_1-1\})$. Therefore, we have the condition that $\Theta_{1,1}\gg\Theta_{1,2}\approx\ldots\approx\Theta_{1,h_1-1}\gg\Theta_{1,h_1}$. Similarly, we can derive that $\Theta_{2,1}\gg\Theta_{2,2}\approx\ldots\approx\Theta_{2,h_2-1}\gg\Theta_{2,h_2}$ by considering cell state A2 and all of its intermediate states between A2 and cell state A1,2 (not shown).
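As a quick numerical check of Equations 46 and 47 (illustrative only; the per-step Θ values below are arbitrary and chosen to satisfy the ordering condition derived above), the h-step cascade collapses to a single Hill-type ratio once Φ is defined as the geometric mean of the per-step constants:

```python
import math

# Equation 46: [Ci]/[Cj] = (prod_m Theta_{x,m}) / (alpha_y * d_x)**h
# Equation 47: the same ratio written as (Phi_x / (alpha_y * d_x))**h,
# where Phi_x is the geometric mean of the per-step constants.
thetas = [50.0, 1.1, 0.9, 0.02]   # arbitrary: large, ~1, ..., small
h = len(thetas)
alpha_y, d_x = 1.0, 0.5

ratio_steps = math.prod(thetas) / (alpha_y * d_x) ** h        # Equation 46
phi = math.prod(thetas) ** (1.0 / h)                          # geometric mean
ratio_hill = (phi / (alpha_y * d_x)) ** h                     # Equation 47

assert math.isclose(ratio_steps, ratio_hill)
print(phi, ratio_steps)
```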
#### Generalized Derivation without Assuming Detailed Balance
More generally, if we do not assume detailed balance, the state occupancies of U, A1, A2, and A1,2 are defined by the following rate equations: $\frac{\partial U}{\partial t}=-U\cdot(r_1 d_1+r_2 d_2)+A_1\cdot r_{-1}+A_2\cdot r_{-2}$ (Equation 62) $\frac{\partial A_1}{\partial t}=-A_1\cdot(r_{-1}+\alpha_1 r_2 d_2)+U\cdot r_1 d_1+A_{1,2}\cdot r_{-2}$ (Equation 63) $\frac{\partial A_2}{\partial t}=-A_2\cdot(\alpha_2 r_1 d_1+r_{-2})+U\cdot r_2 d_2+A_{1,2}\cdot r_{-1}$ (Equation 64) $\frac{\partial A_{1,2}}{\partial t}=-A_{1,2}\cdot(r_{-1}+r_{-2})+A_1\cdot\alpha_1 r_2 d_2+A_2\cdot\alpha_2 r_1 d_1.$ (Equation 65) A final constraint is $U+A_1+A_2+A_{1,2}=C_T.$ (Equation 66) At equilibrium, Equations 62, 63, 64, and 65 must be equal to zero; however, the system only defines a rank-3 matrix, necessitating Equation 66. Thus we find $\begin{bmatrix}-(r_1d_1+r_2d_2)&r_{-1}&r_{-2}&0\\ r_1d_1&-(r_{-1}+r_2(\alpha_1 d_2))&0&r_{-2}\\ r_2d_2&0&-(r_1(\alpha_2 d_1)+r_{-2})&r_{-1}\\ 1&1&1&1\end{bmatrix}\cdot\begin{bmatrix}U\\A_1\\A_2\\A_{1,2}\end{bmatrix}=\begin{bmatrix}0\\0\\0\\C_T\end{bmatrix}.$ (Equation 67) Equations of the form $Y\cdot\vec{x}=\vec{b}$ can be solved as $\vec{x}=Y^{-1}\cdot\vec{b}$. Thus we find the expected effect, as in the 1D case, is the weighted average of the characteristic effect of each state, weighted by the state occupancy as governed by the 2D Hill equation: $E=\begin{bmatrix}E_0&E_1&E_2&E_3\end{bmatrix}\cdot Y^{-1}\cdot\begin{bmatrix}0\\0\\0\\1\end{bmatrix}.$ (Equation 68) This is derived assuming mass action from reaction rules of the form $U+d\rightarrow A_1+d$, $A_1\rightarrow U$. If instead we assume a multi-step transition as in Section 4.3, we can simply make the replacements $d_1\rightarrow d_1^{h_1}$, $d_2\rightarrow d_2^{h_2}$, $\alpha_2 d_1\rightarrow(\alpha_2 d_1)^{h_1}$, and $\alpha_1 d_2\rightarrow(\alpha_1 d_2)^{h_2}$ in Equation 68, resulting in $E_d=\begin{bmatrix}E_0&E_1&E_2&E_3\end{bmatrix}\cdot\begin{bmatrix}-(r_1d_1^{h_1}+r_2d_2^{h_2})&r_{-1}&r_{-2}&0\\ r_1d_1^{h_1}&-(r_{-1}+r_2(\alpha_1 d_2)^{h_2})&0&r_{-2}\\ r_2d_2^{h_2}&0&-(r_1(\alpha_2 d_1)^{h_1}+r_{-2})&r_{-1}\\ 1&1&1&1\end{bmatrix}^{-1}\cdot\begin{bmatrix}0\\0\\0\\1\end{bmatrix}.$ (Equation 69) Equation 69 has the following twelve explicit parameters: r1, r−1, r2, r−2, E0, E1, E2, E3, h1, h2, α1, and α2. A drug's EC50 (C in our derivation) is related to the transition rates (ri, r−i) and the Hill slope (hi) by $C_i^{h_i}=\frac{r_{-i}}{r_i}$ (i = 1 or 2).
#### Combination Experiments Protocol
Experiments were conducted in the Vanderbilt High Throughput Screening Facility. Cells were seeded at approximately 300 cells per well in 384-well plates and allowed to adhere overnight. A preliminary image of each plate was taken approximately 8 hours after seeding to verify sufficient numbers of cells for each experiment. Images were taken on either the ImageXpress Micro XL (Molecular Devices) or CellaVista. The matrix of drug concentrations was prepared using row-wise and column-wise serial 2X or 4X dilutions in 384-well plates using a Bravo Liquid Handling System (Agilent), or manually in 96-well plates. See Table S2 for dose ranges tested. After the cells were allowed to adhere overnight, medium containing drugs and 5 nM Sytox Green (to detect dead cells) was added (time = 0) and replaced after 72 hours. Images were obtained at intervals ranging from every 4 to 8 h, depending on the experiment, for >120 hours. Cell counts were determined using custom image-segmentation software developed in Python using the scikit-image package ( • van der Walt S. • Schönberger J.L. • Nunez-Iglesias J. • Boulogne F. • Warner J.D. • Yager N. • Gouillart E. • Yu T. • scikit-image contributors Scikit-image: image processing in Python. ) and run in parallel using RabbitMQ/Celery (http://www.celeryproject.org/).
#### RNA-seq of Melanoma Cell Lines
Total RNA was isolated from untreated SKMEL5 single-cell-derived sublines, each in triplicate, using the Trizol isolation method (Invitrogen) according to the manufacturer's instructions. RNA samples were submitted to Vanderbilt VANTAGE Core services for quality check, where mRNA enrichment and cDNA library preparation were done with the Illumina Tru-Seq stranded mRNA sample prep kit. Sequencing was done at paired-end 75 bp on the Illumina HiSeq 3000.
Reads were aligned to the GRCh38 human reference genome using HISAT2 ( • Kim D. • Salzberg S.L. HISAT: a fast spliced aligner with low memory requirements. ) and gene counts were obtained using featureCounts ( • Liao Y. • Smyth G.K. • Shi W. featureCounts: an efficient general purpose program for assigning sequence reads to genomic features. ). All downstream analyses were performed in R (https://www.r-project.org) using the Bioconductor framework (https://www.bioconductor.org).
#### RT-qPCR Quantification of NOX5 Expression
Total RNA was extracted using the Trizol isolation method (Invitrogen) according to the manufacturer's instructions. cDNA synthesis was performed with the QuantiTect Reverse Transcription Kit (Cat# 205311) from Qiagen. RT-qPCR was performed using the iQ SYBR Green Supermix from BioRad (Cat# 1708880). Amplifications were performed in a BioRad CFX96 Touch Real-Time PCR Detection System. All experiments were done in at least 3 technical replicates. Log2 transcript expression was normalized to SKMEL5 subline SC01. HPRT or 36B4 was used as the housekeeping gene in all experiments. Primers used in RT-qPCR are listed in the Key Resources Table.
### Quantification and Statistical Analysis
#### Fitting Dose-Response Surfaces
We developed a fitting algorithm, implemented in Python, to fit the combination experiments to the 2D Hill equation. The fitting is done in three steps: first, estimates of the single dose-response parameters (C1, C2, h1, h2, E0, E1, E2) are extracted from fits to the single dose-response curves using the Python implementation of Levenberg-Marquardt (LM) least-squares optimization (scipy.optimize.curve_fit). The fit uncertainty (σ) is then the square root of the diagonal of the covariance matrix, which is approximated as the inverse of the Hessian matrix (equal to $J^TJ$ in LM, where J is the Jacobian) at the solution. In the second step, a Particle Swarm Optimizer (PSO; 10,000 particles, 100 iterations) fits the full 2D Hill equation using the single-drug parameter fits and uncertainties as initial values and bounds (±2σ). In the last step, the PSO-optimized values are used to construct priors for a Metropolis-Hastings Markov chain Monte Carlo (MCMC) optimization (10,000 iterations). Convergence is tested by checking all parameters' Geweke Z-scores. If the Z-scores remain within (-2,2) over the sampling time frame, the optimization is considered to have converged (Figures S3D and S3E). We found it necessary to use both the PSO and MCMC in order to fit a wide range of dose-response surfaces (Figure S3). To test the sensitivity of our fitting algorithm, we generated in silico data for 125 different dose-response surfaces at different data densities. The data densities tested were square matrices of rank 5, 7, 10, 15, and 25. At each density, 25 different dose-response surfaces were sampled across a 5×5 grid of log(α) and β values ranging from [-2,2] and [-0.5,0.5], respectively. The parameters E0, the single-drug Hill slope, the EC50, and the maximal effect were held constant at 0.3, 1, 10e-5, and 0.0, respectively. Random noise equal to the average uncertainty in the DIP rate fits from the NSCLC screen (0.001) was added to the data. In all conditions, we observed that a PSO with 10,000 particles converged to a minimum in <60 iterations (Figure S3A). However, this minimum was not the optimal solution. The addition of an MCMC walk further improved the fits (Figure S3B) (PyMC3 package).
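To illustrate the shape of the first stage of this pipeline (the single-drug LM fits that seed the PSO bounds and MCMC priors), here is a heavily simplified Python sketch; the data values and starting guesses are illustrative placeholders, not the authors' code, and the PSO and PyMC3 stages are omitted for brevity:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill_1d(d, E0, Em, C, h):
    """1D Hill curve (cf. Equation 12): effect as a function of dose d."""
    return Em + (E0 - Em) * C**h / (C**h + d**h)

# Illustrative single-drug data: doses and measured DIP rates.
doses = np.array([0.01, 0.1, 1.0, 10.0, 100.0])
dip = np.array([0.029, 0.027, 0.010, -0.015, -0.019])

popt, pcov = curve_fit(hill_1d, doses, dip, p0=[0.03, -0.02, 1.0, 1.0])
sigma = np.sqrt(np.diag(pcov))   # fit uncertainty from the covariance

# These estimates (popt) and uncertainties (sigma) would seed the PSO
# bounds (popt +/- 2*sigma) and, later, the MCMC priors for the full
# 2D Hill fit.
print(dict(zip(["E0", "Em", "C", "h"], popt)))
```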
The MCMC walk calculates the posterior distribution for each parameter, from which each parameter's value (mean of the trace) and uncertainty (standard deviation of the trace) are calculated. Uncertainty in (E0, E1, E2, and E3) was propagated when calculating β using standard error propagation, where $E_x$ and $\sigma_{E_x}$ denote $\min(E_1,E_2)$ and $\sigma(\min(E_1,E_2))$, respectively. All other uncertainty propagations were handled with the Python package uncertainties ( • Lebigot E.O. Uncertainties: a Python package for calculations with uncertainties. ). By calculating the uncertainty in the synergy parameters from the posterior distributions, the significance of synergy can be assessed in an unbiased way. Multiple factors contribute to increasing uncertainty in the fitted parameters. Dose selection, an important consideration in all drug-response profiling, changes the certainty of the fits (Figures S3C–S3E). While we are unable to observe the saturating effects implicit in the model for some of our drug combinations (due to limited solubility or potency of the drug), by keeping careful account of the uncertainty in our synergy calculations we can still interpret the synergy of non-optimal dose regimes. To demonstrate this, we generated the same 25 dose-response surfaces with varying log(α) and β values ranging from [-2,2] and [-0.5,0.5], respectively, but at different coverage of the dose-response curve. The uncertainty in the synergy parameters increases for decreased dose range (Figures S3C and S3D). It is important to note that, in general, the uncertainty is a function of many aspects other than data density, including the Hill slope of the single-drug curves (high Hill slopes can result in higher uncertainty in log(α)), the noise of the experimental data, and the quality of the priors resulting from the single-drug fits. We posit that the rigorous approach taken here accounts for all these sources, resulting in a true estimate of confidence in a particular synergy value. To prevent overfitting the data, we defined six different model tiers with increasing degrees of freedom (Table S5). To select the correct model tier, we penalize models with higher degrees of freedom by selecting the model that minimizes the deviance information criterion (DIC) ( • Berg A. • Meyer R. • Yu J. Deviance information criterion for comparing stochastic volatility models. ). Fits from each nested tier are used to inform priors for subsequent tiers. Only drug combinations which converged to the full model (tier 5, with fits for all 12 parameters; Equation 69 in Section 4.4 of Method Details) were used for subsequent analysis. The MCMC optimization additionally allows for quantification of parameter confidence given the data. The following packages were used for fitting, data analysis, or visualization: GNU parallel ( • Tange O. GNU parallel: the command-line power tool. ), SciPy ( • Jones E. • Oliphant T. • Peterson P. SciPy: open source scientific tools for Python. ), NumPy ( • Oliphant T.E. Guide to NumPy. ), Pandas ( McKinney, W. (2010) Data structures for statistical computing in Python, Proceedings of the 9th Python in Science Conference, pp. 51–56. ), Matplotlib ( • Hunter J.D. Matplotlib: a 2D graphics environment. ), and PyMC3 ( • Salvatier J. • Wiecki T.V. • Fonnesbeck C. Probabilistic programming in Python using PyMC3. ).
#### Calculating the DIP Rate
Traditionally, the efficacy of an anti-proliferative compound is measured as the percent of viable cells (relative to control) after a treatment interval ( • Fallahi-Sichani M. • Heiser L.M. • Gray J.W. • Sorger P.K.
Metrics other than potency reveal systematic variation in responses to cancer drugs. ); however, it has recently been shown that this metric is subject to temporal biases ( • Hafner M. • Niepel M. • Chung M. • Sorger P.K. Growth rate inhibition metrics correct for confounders in measuring sensitivity to cancer drugs. ; • Harris L.A. • Frick P.L. • Garbett S.P. • Hardeman K.N. • Paudel B.B. • Lopez C.F. • Quaranta V. • Tyson D.R. An unbiased metric of antiproliferative drug effect in vitro. ). To address these biases, we previously developed an unbiased metric of drug effect termed the drug-induced proliferation (DIP) rate ( • Harris L.A. • Frick P.L. • Garbett S.P. • Hardeman K.N. • Paudel B.B. • Lopez C.F. • Quaranta V. • Tyson D.R. An unbiased metric of antiproliferative drug effect in vitro. ). The DIP rate is defined as the steady-state proliferation rate after drug equilibration. A positive DIP rate indicates an exponentially growing population, while a negative DIP rate indicates a regressing one. A rate of zero indicates a cytostatic effect, which may result from cells entering a non-dividing state or from balanced death and division ( • Paudel B.P. • Harris L.A. • Hardeman K.N. • Abugable A.A. • Hayford C.E. • Tyson D.R. • Quaranta V. A nonquiescent "idling" population state in drug-treated, BRAF-mutated melanoma. ). We used the available findDIP R package for calculating DIP rates from growth curves, which automatically selects the interval after drug equilibration (https://github.com/QuLab-VU/DIP_rate_NatMeth2016.git).
#### Calculating Loewe, CI, Bliss, and HSA
To compare our method to the prevailing methods for computing synergy, we calculated Loewe, CI, and Bliss for the data from the osimertinib screen in Figure 2 and the melanoma BRAF/MEK data of Figure 3. Loewe is agnostic to the effect metric, and so we applied it directly to the DIP rate. To calculate CI and Bliss, we imputed the percent viability at 72 hours from the DIP rate for each condition. Percent viability (PV) is defined as $PV=\frac{\text{Treated Cell Count (72 h)}}{\text{Control Cell Count (72 h)}}.$ (Equation 70) Estimates of percent viability are sensitive to even small differences between initial cell counts in the control and treated wells, due to exponential amplification ( • Harris L.A. • Frick P.L. • Garbett S.P. • Hardeman K.N. • Paudel B.B. • Lopez C.F. • Quaranta V. • Tyson D.R. An unbiased metric of antiproliferative drug effect in vitro. ). To correct for this bias, a 'matching' control cell count at 72 hours for each treated condition was calculated as $\text{Matched Control Count (72 h)}=\text{Treated Cell Count (0 h)}\cdot e^{\text{Control Growth Rate}\cdot 72\text{ h}},$ (Equation 71) where Control Growth Rate is the median of the fitted growth rates for all control wells. Because the automated microscope did not image all the conditions at exactly zero or 72 hours, we extrapolate and interpolate, respectively, the cell count at these times from the measured time series. The Bliss metric only requires marginal data. For each experiment individually, we calculated a Bliss score as $\text{Bliss}=PV_{1|d_1}\cdot PV_{2|d_2}-PV_{1,2|d_1,d_2}$ (Equation 72) where $PV_{i|d_i}$ is the percent viability measured for treatment with drug i alone at dose $d_i$, and $PV_{1,2|d_1,d_2}$ is the percent viability measured for the combination treatment. The first term corresponds to the expected viability, assuming independence, while the second term is the measured viability. By this definition, Bliss>0 is synergistic, and Bliss<0 is antagonistic.
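A minimal sketch of this imputation and Bliss calculation (illustrative only; the counts and growth rate are placeholders, not the authors' pipeline):

```python
import numpy as np

def percent_viability(treated_count_72, treated_count_0, control_growth_rate):
    """Equations 70-71: viability vs. a growth-matched virtual control."""
    matched_control_72 = treated_count_0 * np.exp(control_growth_rate * 72.0)
    return treated_count_72 / matched_control_72

def bliss(pv1, pv2, pv12):
    """Equation 72: expected-minus-observed viability; >0 is synergy."""
    return pv1 * pv2 - pv12

# Placeholder cell counts and a control growth rate (per hour).
pv1 = percent_viability(800.0, 300.0, 0.02)
pv2 = percent_viability(900.0, 300.0, 0.02)
pv12 = percent_viability(500.0, 300.0, 0.02)
print(bliss(pv1, pv2, pv12))
```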
Loewe and CI require parameterization of a 1D Hill equation for each drug alone: $E=E_{max}+\frac{E_0-E_{max}}{\left(\frac{d}{C}\right)^{h}+1}.$ (Equation 73) CI, as per standard calculations ( • Chou T.C. • Talalay P. Quantitative analysis of dose-effect relationships: the combined effects of multiple drugs or enzyme inhibitors. ), further requires that E0 = 1 and Emax = 0, and is fit to a linearized, log-transformed version of the Hill equation ( • Chou T.C. Drug combination studies and their synergy quantification using the Chou-Talalay method. ), which has been previously critiqued for artificial compression of uncertainty in experimental data, leading to poor model fits compared with nonlinear regression ( • Ashton J.C. Drug combination studies and their synergy quantification using the Chou–Talalay method—letter. ). CI dose-response curves were fit using the scipy.stats.linregress module. All data points with percent viability greater than 1 were excluded from the CI fit, as $\log\left(\frac{1-PV}{PV}\right)$ becomes complex. For some drugs, this left too few points to fit a line, such that CI was undefined for combinations with those drugs. For other drugs, the fitted Hill coefficient was negative, and likewise all CI values were undefined for those drugs. For Loewe, we used the single-drug parameters fit by MuSyC. From these parameterized Hill equations, Loewe and CI were calculated using $S=-\log\left(\frac{d_1}{D_1}+\frac{d_2}{D_2}\right),$ (Equation 74) where $D_i$ is the amount of drug i which, alone, achieves an effect equal to the combination effect, and is calculated from the Hill equation fit for that drug. We take the negative log to transform the synergy values to match Bliss, such that S>0 is synergistic, while S<0 is antagonistic. Because Loewe allows the two drugs to have different Emax values, Loewe synergy cannot be calculated for measurements which exceed the weaker drug's Emax, because no amount of the weaker drug alone would be sufficient to achieve that effect; therefore, those conditions are undefined. For calculating HSA ( • Gaddum J.H. Pharmacology. ), we calculate the difference between the observed effect at each combination concentration and the most efficacious single-agent effect at those doses. This difference is integrated across the surface to yield a single value for a particular combination.
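A sketch of computing the Loewe index (Equation 74) from fitted single-drug Hill parameters (illustrative only; the inverted Hill function and all parameter values below are assumptions, not the authors' exact code):

```python
import numpy as np

def hill(d, E0, Emax, C, h):
    """1D Hill curve (Equation 73)."""
    return Emax + (E0 - Emax) / ((d / C) ** h + 1.0)

def inverse_hill(E, E0, Emax, C, h):
    """Dose D_i at which a single drug alone achieves effect E."""
    return C * ((E0 - E) / (E - Emax)) ** (1.0 / h)

def loewe_synergy(d1, d2, E_combo, p1, p2):
    """Equation 74: S = -log(d1/D1 + d2/D2); S > 0 indicates synergy."""
    D1 = inverse_hill(E_combo, *p1)
    D2 = inverse_hill(E_combo, *p2)
    return -np.log(d1 / D1 + d2 / D2)

# Placeholder single-drug parameters (E0, Emax, C, h) and a combination
# measurement; undefined when E_combo exceeds the weaker drug's Emax.
p1, p2 = (1.0, 0.0, 1.0, 1.0), (1.0, 0.1, 2.0, 1.0)
print(loewe_synergy(d1=1.0, d2=1.0, E_combo=0.35, p1=p1, p2=p2))
```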
#### Fitting ZIP, BRAID, Schindler's Hill PDE, and Equivalent Dose Models
Theoretical dose-response surfaces with different combinations of synergistic potency and efficacy were generated and then fit to estimate synergy according to these methods (Figure S2). Both ZIP and BRAID were calculated using the R packages available for each method (ZIP's R code is in supplemental file 1 of the manuscript ( • Yadav B. • Wennerberg K. • Aittokallio T. • Tang J. Searching for drug synergy in complex dose–response landscapes using an interaction potency model. ), and BRAID's package is available from https://cran.r-project.org/web/packages/braidReports/braidReports.pdf). Schindler's Hill PDE model contains no fitting parameters, as the dose-response surface is derived purely from the marginal data. In fact, Schindler does not propose a method to estimate synergy from experimental data, but postulates that some implementation of perturbation theory could be used to fit experimental data ( • Schindler M. Theory of synergistic effects: hill-type response surfaces as "null-interaction" models for mixtures. ). Therefore, to calculate synergy for this model, we defined the sum of residuals between the null surface and the experimental data to be the metric of synergy. Finally, to fit Zimmer et al.'s Equivalent Dose Model, we used the curve_fit() function of the scipy.optimize package in Python. Specifically, the Equivalent Dose Model, equation 2 in ( • Zimmer A. • Katzir I. • Dekel E. • Mayo A.E. • Alon U. Prediction of multidimensional drug dose responses based on measurements of drug pairs. ), contains parameters C1, C2, a12, a21, h1, and h2, where the C parameters are the EC50s of the single agents, the ai,j parameters are the synergy values corresponding to a change in potency, and the h parameters are the Hill slopes of the single agents. In the model, there are no parameters for efficacy, because the drug effect is assumed to range between zero and one. When this is not true, the Equivalent Dose Model results in poor fits to the data (Figure S2), similar to CI.
#### Identifying DEGs for GO Enrichment Analysis
Differentially expressed genes (DEGs) were selected by ANOVA on baseline gene expression data for the three clones, based on a statistical cutoff from the likelihood ratio test (LRT) (p-values < 0.001). Functional enrichment analyses, including GO term enrichment and pathway enrichment analysis, were done using the CRAN package "enrichR" (https://cran.r-project.org/web/packages/enrichR/index.html), based on a web-based tool for analyzing gene sets and enrichment of common annotated biological functions ( • Kuleshov M.V. • Jones M.R. • Rouillard A.D. • Fernandez N.F. • Duan Q. • Wang Z. • Koplev S. • Jenkins S.L. • Jagodnik K.M. • Lachmann A. • et al. Enrichr: a comprehensive gene set enrichment analysis web server 2016 update. ). The enriched GO terms and enriched KEGG pathways were restricted to those with p-values, corrected for multiple testing, of less than 0.001. The top GO Biological Process terms included generation of precursor metabolites and energy, electron transport chain, inorganic cation transmembrane transport, and metabolic process. The top GO Molecular Function terms included inorganic cation transmembrane transporter activity, cofactor binding, NAD binding, and ATPase activity. The top GO Cellular Component term was the mitochondrial membrane. Top KEGG pathways enriched in the DEGs included metabolic pathways, oxidative phosphorylation, carbon metabolism, and the TCA cycle (Figure 3B). Overall, these enriched GO terms and pathways point toward differences in the regulators of metabolic function in the three subclones. This is consistent with previous reports suggesting that altered metabolism is implicated in drug sensitivity and melanoma resistance to BRAFi ( • Parmenter T.J. • Kleinschmidt M. • Kinross K.M. • Bond S.T. • Li J. • Rao A. • Sheppard K.E. • Hugo W. • Pupo G.M. • et al. Response of BRAF-mutant melanoma to BRAF inhibition is mediated by a network of transcriptional regulators of glycolysis. , • Hardeman K.N. • Peng C. • Paudel B.B. • Meyer C.T. • Luong T. • Tyson D.R. • Young J.D. • Quaranta V. • Fessel J.P. Dependence on glycolysis sensitizes BRAF-mutated melanomas for increased response to targeted BRAF inhibition. ). Correlation with BRAFi insensitivity was computed for each identified DEG according to the DIP rate at 8 µM PLX-4720 for a 10-cell-line panel (Table S3). Pair-wise comparisons of DEGs were performed on genes (after low-count genes were removed) using the DESeq2 pipeline ( • Love M.I. • Huber W. • Anders S. Moderated estimation of fold change and dispersion for RNA-seq data with DESeq2. ).
### Data and Software Availability
All raw cell counts, calculated DIP rates, DEGs between subclones, and expression data are available in the github repo https://github.com/QuLab-VU/MuSyC_Cell.git in the folder Data. Additionally, the repo contains all the code required to reproduce all the figures and supplemental figures from the data, found in the Code_Paper_Figures folder.
The subfolders Fig2 and Fig3 contain html folders with interactive plots of all the screened combinations. Open the .html files using a browser. The raw RNA-seq data are available from GEO at accession number GSE122041. The software for interactive manipulation of the different parameters, to study their contribution to the contours of the dose-response surface, is also available in the github repo in the folder MuSyC_App. This folder contains both the MATLAB source code and a compiled application for the different operating systems. A copy of the github repo at the time of publication is also available from Mendeley Data via the following DOI: https://doi.org/10.17632/n8bp8db5ff.
## Acknowledgments
The authors would like to thank Corey Hayford, Sarah Maddox, Carlos Lopez, and Chris Wright for insightful conversations; Jing Hao for help with experimental preparation and reagent procurement; and James Pino and Alexander Lubbock for the Python implementations of the PSO and findDIP libraries, respectively. Experiments were performed in the Vanderbilt HTS Core Facility, an institutionally supported core, with assistance provided by Debbie Mi, Page Vinson, and Joshua Bower. This work was supported by the following funding sources: C.T.M. was supported by the National Science Foundation (NSF) Graduate Research Fellowship Program (GRFP) [Award #1445197]. V.Q. was supported by the National Institutes of Health (NIH) (U54-CA217450, R01-186193, and U01-CA215845); J.B. was supported by the National Cancer Institute (NCI) (R50-CA211206). D.W. was supported by a Ruth L. Kirschstein National Research Service Award (2T32HL094296-06A1). C.M.L. was supported in part by the NIH (P30-CA086485, R01-CA121210, and P01-CA129243). C.M.L. was additionally supported by a V Foundation Scholar-in-Training Award, an AACR-Genentech Career Development Award, a Damon Runyon Clinical Investigator Award, a LUNGevity Career Development Award, and a Lung Cancer Foundation of America/International Association for the Study of Lung Cancer Lori Monroe Scholarship.
### Author Contributions
Conceptualization, C.T.M., D.J.W., L.A.H., D.R.T., and V.Q.; Methodology, C.T.M., D.R.T., and J.B.; Software, C.T.M. and D.J.W.; Investigation, C.T.M., D.W., B.B.P., K.N.H., D.R.T., L.A.H., and J.B.; Formal Analysis, C.T.M., D.J.W., B.B.P., and D.R.T.; Resources, D.R.T., J.B., and D.W.; Data Curation, C.T.M. and D.R.T.; Writing – Original Draft, C.T.M., D.J.W., L.A.H., and V.Q.; Writing – Review & Editing, C.T.M., D.J.W., B.B.P., D.R.T., C.M.L., D.W., J.B., L.A.H., and V.Q.; Visualization, C.T.M., D.J.W., and B.B.P.; Supervision, L.A.H., D.R.T., and V.Q.; Funding Acquisition, D.R.T. and V.Q.
### Declaration of Interests
The authors declare no competing interests.
## Supplemental Information
• Document S1. Figures S1–S6 and Tables S1–S5
## References
• Altenhöfer S. • Kleikers P.W. • Wingler K. • Schmidt H.H. Evolution of NADPH oxidase inhibitors: selectivity and mechanisms for target engagement. Antioxid. Redox Signal. 2015; 23: 406-427 • Ashton J.C. Drug combination studies and their synergy quantification using the Chou–Talalay method—letter. Cancer Res. 2015; 75: 2400 • Barretina J. • Caponigro G. • Stransky N. • Venkatesan K. • Margolin A.A. • Kim S. • Wilson C.J. • Lehár J. • Kryukov G.V. • Sonkin D. • et al. The Cancer Cell Line Encyclopedia enables predictive modelling of anticancer drug sensitivity. Nature. 2012; 483: 603-607 • Berg A. • Meyer R. • Yu J. Deviance information criterion for comparing stochastic volatility models. J. Bus. Econ. Stat.
2004; 22: 107-120 • Bliss C.I. The toxicity of poisons applied jointly. Ann. Appl. Biol. 1939; 26: 585-615 • Chou T.C. Drug combination studies and their synergy quantification using the Chou-Talalay method. Cancer Res. 2010; 70: 440-446 • Chou T.-C. • Talalay P. Analysis of combined drug effects: a new look at a very old problem. Trends Pharmacol. Sci. 1983; 4: 450-454 • Chou T.C. • Talalay P. Quantitative analysis of dose-effect relationships: the combined effects of multiple drugs or enzyme inhibitors. Adv. Enzyme Regul. 1984; 22: 27-55 • Eroglu Z. • Ribas A. Combination therapy with BRAF and MEK inhibitors for melanoma: latest evidence and place in therapy. Ther. Adv. Med. Oncol. 2016; 8: 48-56 • Fallahi-Sichani M. • Heiser L.M. • Gray J.W. • Sorger P.K. Metrics other than potency reveal systematic variation in responses to cancer drugs. Nat. Chem. Biol. 2013; 9: 708-714 • Foucquier J. • Guedj M. Analysis of drug combinations: current methodological landscape. Pharmacol. Res. Perspect. 2015; 3: e00149 • Gaddum J.H. Pharmacology. Oxford University Press, 1940 • Gong Z. • Hu G. • Li Q. • Liu Z. • Wang F. • Zhang X. • Xiong J. • Li P. • Xu Y. • Ma R. • et al. Compound libraries: recent advances and their applications in drug discovery. Curr. Drug Discov. Technol. 2017; 14: 216-228 • Greco W. • Unkelbach H.-D. • Pöch G. • Sühnel J. • Kundi M. • Bödeker W. Consensus on concepts and terminology for combined-action assessment: the Saariselka Agreement. Arch. Complex Environ. Stud. 1992; 4: 65-69 • Greco W.R. • Bravo G. • Parsons J.C. The search for synergy: a critical review from a response surface perspective. Pharmacol. Rev. 1995; 47: 331-385 • Hafner M. • Niepel M. • Chung M. • Sorger P.K. Growth rate inhibition metrics correct for confounders in measuring sensitivity to cancer drugs. Nat. Methods. 2016; 13: 521-527 • Hardeman K.N. • Peng C. • Paudel B.B. • Meyer C.T. • Luong T. • Tyson D.R. • Young J.D. • Quaranta V. • Fessel J.P. Dependence on glycolysis sensitizes BRAF-mutated melanomas for increased response to targeted BRAF inhibition. Sci. Rep. 2017; 7: 42604 • Harris L.A. • Frick P.L. • Garbett S.P. • Hardeman K.N. • Paudel B.B. • Lopez C.F. • Quaranta V. • Tyson D.R. An unbiased metric of antiproliferative drug effect in vitro. Nat. Methods. 2016; 13: 497-500 • He B. • Lu C. • Zheng G. • He X. • Wang M. • Chen G. • Zhang G. • Lu A. Combination therapeutics in complex diseases. J. Cell. Mol. Med. 2016; 20: 2231-2240 • Hennessey V.G. • Rosner G.L. • Bast Jr., R.C. • Chen M. A Bayesian approach to dose-response assessment and synergy and its application to in vitro dose-response studies. Biometrics. 2010; 66: 1275-1283 • Hunter J.D. Matplotlib: a 2D graphics environment. Comput. Sci. Eng. 2007; 9: 90-95 • Jabs J. • Zickgraf F.M. • Park J. • Wagner S. • Jiang X. • Jechow K. • Kleinheinz K. • Toprak U.H. • Schneider M.A. • Meister M. • et al. Screening drug effects in patient-derived cancer cells links organoid responses to genome alterations. Mol. Syst. Biol. 2017; 13: 955 • Jaquet V. • Marcoux J. • Forest E. • Leidal K.G. • McCormick S. • Westermaier Y. • Perozzo R. • Plastre O. • Fioraso-Cartier L. • Diebold B. • et al. NADPH oxidase (NOX) isoforms are inhibited by celastrol with a dual mode of action. Br. J. Pharmacol. 2011; 164: 507-520 • Jia P. • Jin H. • Xia J. • Ohashi K. • Liu L. • Pirazzoli V. • Dahlman K.B. • Politi K. • Michor F. • et al.
Next-generation sequencing of paired tyrosine kinase inhibitor-sensitive and -resistant EGFR mutant lung cancer cell lines identifies spectrum of DNA changes associated with drug resistance. Genome Res. 2013; 23: 1434-1445 • Jones E. • Oliphant T. • Peterson P. SciPy: open source scientific tools for Python. • Kim D. • Langmead B. • Salzberg S.L. HISAT: a fast spliced aligner with low memory requirements. Nat. Methods. 2015; 12: 357-360 • Kuleshov M.V. • Jones M.R. • Rouillard A.D. • Fernandez N.F. • Duan Q. • Wang Z. • Koplev S. • Jenkins S.L. • Jagodnik K.M. • Lachmann A. • et al. Enrichr: a comprehensive gene set enrichment analysis web server 2016 update. Nucleic Acids Res. 2016; 44: W90-W97 • Lebigot E.O. Uncertainties: a Python package for calculations with uncertainties. • Liao Y. • Smyth G.K. • Shi W. featureCounts: an efficient general purpose program for assigning sequence reads to genomic features. Bioinformatics. 2014; 30: 923-930 • Loewe S. • Muischnek H. Über Kombinationswirkungen. Arch. Exp. Pathol. Pharmakol. 1926; 114: 313-326 • Loewe S. Versuch einer allgemeinen Pharmakologie der Arznei-Kombinationen. Klin. Wochenschr. 1927; 6: 1078-1085 • Long G.V. • Stroyakovskiy D. • Gogas H. • Levchenko E. • de Braud F. • Larkin J. • Garbe C. • Jouary T. • Hauschild A. • Grob J.J. • et al. Combined BRAF and MEK inhibition versus BRAF inhibition alone in melanoma. N. Engl. J. Med. 2014; 371: 1877-1888 • Love M.I. • Huber W. • Anders S. Moderated estimation of fold change and dispersion for RNA-seq data with DESeq2. Genome Biol. 2014; 15: 550 • Lu W. • Hu Y. • Chen G. • Chen Z. • Zhang H. • Wang F. • Feng L. • Pelicano H. • Wang H. • Keating M.J. • et al. Novel role of NOX in supporting aerobic glycolysis in cancer cells with mitochondrial dysfunction and as a potential target for cancer therapy. PLoS Biol. 2012; 10: e1001326 • McKinney W. Data structures for statistical computing in Python. Proceedings of the 9th Python in Science Conference, 2010: 51–56 • Oliphant T.E. Guide to NumPy. CreateSpace Independent Publishing Platform, 2006 • Palmer A.C. • Sorger P.K. Combination cancer therapy can confer benefit via patient-to-patient variability without drug additivity or synergy. Cell. 2017; 171: 1678-1691.e13 • Parmenter T.J. • Kleinschmidt M. • Kinross K.M. • Bond S.T. • Li J. • Rao A. • Sheppard K.E. • Hugo W. • Pupo G.M. • et al. Response of BRAF-mutant melanoma to BRAF inhibition is mediated by a network of transcriptional regulators of glycolysis. Cancer Discov. 2014; 4: 423-433 • Paudel B.P. • Harris L.A. • Hardeman K.N. • Abugable A.A. • Hayford C.E. • Tyson D.R. • Quaranta V. A nonquiescent "idling" population state in drug-treated, BRAF-mutated melanoma. Biophys. J. 2018; 114: 1499-1511 • Salvatier J. • Wiecki T.V. • Fonnesbeck C. Probabilistic programming in Python using PyMC3. PeerJ Comput. Sci. 2016; 2: e55 • Schiffmann I. • Greve G. • Jung M. • Lübbert M. Epigenetic therapy approaches in non-small cell lung cancer: update and perspectives. Epigenetics. 2016; 11: 858-870 • Schindler M. Theory of synergistic effects: hill-type response surfaces as "null-interaction" models for mixtures. Theor. Biol. Med. Modell. 2017; 14: 15 • Shaw A.T. • Kim D.W. • Mehra R. • Tan D.S. • Felip E. • Chow L.Q. • Camidge D.R. • Vansteenkiste J. • Sharma S. • De Pas T. • et al. Ceritinib in ALK-rearranged non-small-cell lung cancer. N. Engl. J. Med. 2014; 370: 1189-1197 • Shimoyama Y. • Nagafuchi A. • Fujita S. • Gotoh M. • Takeichi M. • Tsukita S. • Hirohashi S.
Cadherin dysfunction in a human cancer cell line: possible involvement of loss of alpha-catenin expression in reduced cell-cell adhesiveness. Cancer Res. 1992; 52: 5770-5774 • Soria J.C. • Ohe Y. • Vansteenkiste J. • Reungwetwattana T. • Lee K.H. • Dechaphunkul A. • Imamura F. • Nogami N. • Kurata T. • et al. Osimertinib in untreated EGFR-mutated advanced non-small-cell lung cancer. N. Engl. J. Med. 2018; 378: 113-125 • Subramanian A. • Narayan R. • Corsello S.M. • Peck D.D. • Natoli T.E. • Lu X. • Gould J. • Davis J.F. • Tubelli A.A. • Asiedu J.K. • et al. A next generation connectivity map: L1000 platform and the first 1,000,000 profiles. Cell. 2017; 171: 1437-1452.e17 • Tallarida R.J. Quantitative methods for assessing drug synergism. Genes Cancer. 2011; 2: 1003-1008 • Tange O. GNU parallel: the command-line power tool. • Twarog N.R. • Stewart E. • Hammill C.V. • Shelat A.A. BRAID: A unifying paradigm for the analysis of combined drug action. Sci. Rep. 2016; 6: 25523 • Tyson D.R. • Garbett S.P. • Frick P.L. • Quaranta V. Fractional proliferation: a method to deconvolve cell population dynamics from single-cell data. Nat. Methods. 2012; 9: 923-928 • van der Walt S. • Schönberger J.L. • Nunez-Iglesias J. • Boulogne F. • Warner J.D. • Yager N. • Gouillart E. • Yu T. • scikit-image contributors Scikit-image: image processing in Python. PeerJ. 2014; 2: e453 • Welm B.E. • Dijkgraaf G.J. • Bledau A.S. • Welm A.L. • Werb Z. Lentiviral transduction of mammary stem cells for analysis of gene function during development and cancer. Cell Stem Cell. 2008; 2: 90-102 • Witta S.E. • Gemmill R.M. • Hirsch F.R. • Coldren C.D. • Hedman K. • Ravdel L. • Helfrich B. • Chan D.C. • Sugita M. • et al. Restoring E-cadherin expression increases sensitivity to epidermal growth factor receptor inhibitors in lung cancer cell lines. Cancer Res. 2006; 66: 944-950 • Witta S.E. • Jotte R.M. • Konduri K. • Neubauer M.A. • Spira A.I. • Ruxer R.L. • Varella-Garcia M. • Bunn P.A. • Hirsch F.R. Randomized phase II trial of erlotinib with and without entinostat in patients with advanced non-small-cell lung cancer who progressed on prior chemotherapy. J. Clin. Oncol. 2012; 30: 2248-2255 • Yadav B. • Wennerberg K. • Aittokallio T. • Tang J. Searching for drug synergy in complex dose–response landscapes using an interaction potency model. Comput. Struct. Biotechnol. J. 2015; 13: 504-513 • Zimmer A. • Katzir I. • Dekel E. • Mayo A.E. • Alon U. Prediction of multidimensional drug dose responses based on measurements of drug pairs. Proc. Natl. Acad. Sci. USA. 2016; 113: 10442-10447
https://www.risk.net/derivatives/2080443/repricing-cross-smile-analytic-joint-density
# Repricing the cross smile: an analytic joint density

When valuing a derivatives contract whose payout depends on two assets, the correlation between the random processes followed by those two assets must be taken into account. In most asset classes, there is no liquid instrument to determine that correlation. This makes the exposure to correlation hard to hedge, but straightforward from a modelling point of view since a single number, perhaps calculated from a historic time series of spot returns, can be used.
http://planetmath.org/EulersConstant
# Euler’s constant Euler’s constant $\gamma$ is defined by $\gamma=\lim_{n\rightarrow\infty}\;\left(1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+% \cdots+\frac{1}{n}-\ln{n}\right)$ or equivalently $\gamma=\lim_{n\rightarrow\infty}\;\sum_{i=1}^{n}\left[\frac{1}{i}-\ln\left(1+% \frac{1}{i}\right)\right]$ Euler’s constant has the value $0.57721566490153286060651209008240243104\ldots$ It is related to the gamma function by $\gamma=-\Gamma^{\prime}(1)$ It is not known whether $\gamma$ is rational or irrational. References. • Chris Caldwell - “Euler’s Constant”, http://primes.utm.edu/glossary/page.php/Gamma.htmlhttp://primes.utm.edu/glossary/page.php/Gamma.html Title Euler’s constant EulersConstant 2013-03-22 12:18:27 2013-03-22 12:18:27 akrowne (2) akrowne (2) 10 akrowne (2) Definition msc 40A25 Euler-Mascheroni constant Mascheroni constant
http://cvgmt.sns.it/paper/386/
# Homogenization of two-phase metrics and applications

created by ponsiglio on 16 Mar 2006
modified by davini on 11 Jan 2009

[BibTeX]

Published Paper
Inserted: 16 mar 2006
Last Updated: 11 jan 2009
Journal: J. Analyse Math.
Year: 2006

Abstract:

We consider two-phase metrics of the form $\phi(x,\xi):=\alpha\,\mathbf{1}_{B_\alpha}(x)\,|\xi|+\beta\,\mathbf{1}_{B_\beta}(x)\,|\xi|$, where $\alpha$, $\beta$ are fixed positive constants, and $B_\alpha$, $B_\beta$ are disjoint Borel sets whose union is $\mathbb{R}^N$, and we prove that they are dense in the class of symmetric Finsler metrics $\phi$ satisfying
$$\alpha|\xi|\le\phi(x,\xi)\le\beta|\xi|\quad\mbox{on }\mathbb{R}^N\times\mathbb{R}^N.$$
Then we study the closure $Cl(M_t^{\alpha,\beta})$ of the class $M_t^{\alpha,\beta}$ of two-phase periodic metrics with prescribed volume fraction $t$ of the phase $\alpha$. We do not have a complete answer to this problem at the moment: we give upper and lower bounds for the class $Cl(M_t^{\alpha,\beta})$, and we localize the problem, generalizing the bounds to the non-periodic setting. Finally, we apply our results to study the closure, in terms of $\Gamma$-convergence, of two-phase gradient constraints in composites of the type $f(x,Du)\le C(x)$, with $C(x)\in\{\alpha,\beta\}$ for almost every $x$.
http://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-6th-edition/chapter-8-section-8-1-solving-quadratic-equations-by-completing-the-square-exercise-set-page-482/7
## Intermediate Algebra (6th Edition)

$z=\pm\sqrt{10}$

Original equation: $3z^2-30=0$
Add 30 to both sides: $3z^2=30$
Divide both sides by 3: $z^2=10$
Take the square root of both sides: $z=\pm\sqrt{10}$
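A quick numeric check of the worked solution (illustrative):

```python
import math

# Both roots of 3z^2 - 30 = 0 should satisfy the original equation.
for z in (math.sqrt(10), -math.sqrt(10)):
    assert abs(3 * z**2 - 30) < 1e-9
print(math.sqrt(10))  # 3.1622776601683795
```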
http://www.tug.org/pipermail/texhax/2009-June/012671.html
# [texhax] fbox in align environment

Philip G. Ratcliffe philip.ratcliffe at uninsubria.it
Mon Jun 15 10:29:06 CEST 2009

> Say you have three equations in an align environment. You
> want to box only one of them, say the last one. How does one
> do this? I want all three equations aligned (at, say, the =
> sign). Fancybox.sty wasn't of much help in this regard. I can
> put the first two eqns in an align, and the third eqn in an
> fbox in the equation environment, but the alignment is disturbed.

If you haven't found anything better, here's a not very elegant solution:

\documentclass{article}
\usepackage{amsmath}
\newcommand\alignboxed[2]{\rlap{\boxed{#1#2}}\hphantom{#1\mkern6mu}}
\begin{document}
\begin{align}
f_1(x) &= a+bx, \\
f_2(x) &= a+bx+cx^2, \\
\alignboxed{f_3(x)}{= a+bx+cx^2+dx^3.}
\end{align}
\end{document}

This works similarly:

\documentclass{article}
\usepackage{amsmath}
\def\alignboxed#1&#2\endalignboxed{\rlap{\boxed{#1#2}}\hphantom{#1\mkern6mu}}
\begin{document}
\begin{align}
f_1(x) &= a+bx, \\
f_2(x) &= a+bx+cx^2, \\
\alignboxed f_3(x) &= a+bx+cx^2+dx^3. \endalignboxed
\end{align}
\end{document}

Cheers, Phil
https://cstheory.stackexchange.com/tags/comp-number-theory/hot
Tag Info

34 One non-answer to your question is that SQUARE-FREE (is a number square free) is itself not known to be in P, and computing the Möbius function would solve this problem (since a square free number has $\mu(n) \neq 0$).

27 Disclaimer: I'm not an expert in number theory. Short answer: If you're willing to assume "reasonable number-theoretic conjectures", then we can tell whether there is a prime in the interval $[n, n+\Delta]$ in time $\mathrm{polylog}(n)$. If you're not willing to make such an assumption, then there is a beautiful algorithm due to Odlyzko that achieves $n^{1/\ldots}$

19 Here is the construction of such a number. You can argue whether this means such a number is "known". Take any function $f$ from $\mathbb{N}$ to $\{1, 2, \ldots, 8\}$ where the $n$'th digit is not computable in $O(n)$ time. Such a function exists, for example, by the usual diagonalization technique. Interpret $f(n)$ as the $n$'th decimal digit of some ...

17 There is a combinatorial algorithm by Mahajan and Vinay that works over commutative rings: http://cjtcs.cs.uchicago.edu/articles/1997/5/contents.html

15 If you know the factorization of $m = p_1^{e_1} \cdots p_n^{e_n}$ you can compute modulo each $p_i^{e_i}$ separately and then combine the results using Chinese remaindering. If $e_i = 1$, then computing modulo $p_i^{e_i}$ is easy, since this is a field. For larger $e_i$, you can use Hensel lifting.

15 For another non-answer, you might be interested in Sarnak's conjecture (see e.g. http://gilkalai.wordpress.com/2011/02/21/the-ac0-prime-number-conjecture/, http://rjlipton.wordpress.com/2011/02/23/the-depth-of-the-mobius-function/, https://mathoverflow.net/questions/57543/walsh-fourier-transform-of-the-mobius-function), which basically states that Möbius ...

14 First of all, there is a formal definition of "quantum-NC", see QNC on the zoo. GCD is indeed a good candidate for a problem that could be shown to be in QNC, but it's not known to be in NC. However, finding a QNC algorithm for GCD is still an open problem. The feeling for which this is believed to be true comes from the fact that the Quantum Fourier ...

14 The following answer was originally posted as a comment on Gil's blog (1) Let $K=\mathbb{Q}(\alpha)$ be a number field, where we assume $\alpha$ has a monic minimal polynomial $f\in\mathbb{Z}[x]$. One can then represent elements of the ring of integers $\mathcal{O}_K$ as polynomials in $\alpha$ or in terms of an integral basis -- the two are equivalent. ...

12 More generally, for any constant $k\ge1$, there are transcendental numbers computable in polynomial time, but not in time $O(n^k)$. First, by the time hierarchy theorem, there exists a language $L_0\in\mathrm{E}$ not computable in time $O(2^{kn})$. We may assume $L\subseteq\{0,1\}^*$, and we may also assume that all strings $w\in L$ have length divisible by $\ldots$
Numerically, this gives you $2^{\lceil \log_2 v \rceil}$ ...

8 This language is in $\mathsf{LOGSPACE}$ via trial division. It is also known that logarithmic space is necessary ([1]). For a generalization to sparse sets, see bounded language complete for NSPACE(log n)?. For hardness in the binary case, see Are the problems PRIMES, FACTORING known to be P-hard? [1] J. Hartmanis, L. Berman, On tape bounds for single letter ...

8 I'll comment on why a relation as in the question $$(2^n)! = \sum_{k=0}^{m-1} a_k b_k^{c_k}$$ (for every $n$) helps factoring. I can't quite finish the argument, but maybe someone can. The first observation is that a relation as above (and more generally, the existence of poly-size arithmetic circuits for $(2^n)!$) gives a poly-size circuit for computing $\ldots$

8 TL;DR The decimal expansion of a fixed rational number is not pseudorandom in the cryptographic sense, but irrational numbers (are conjectured to) exhibit some weaker but interesting forms of pseudorandom behavior. Roughly speaking, a sequence $s \in \{0, \ldots, B\}^n$ is pseudorandom with respect to distinguishers $\cal A$, if it cannot be distinguished (...

7 Your problem seems a special case of the turnpike reconstruction problem (for which no polynomial time algorithm is known). See for example: Shiteng Chen, Zhiyi Huang, and Sampath Kannan, "Reconstructing Numbers from Pairwise Function Values". Abstract: The turnpike problem is one of the few natural problems that are neither known to be NP-complete nor ...

5 Sorry if this answer doesn't tell anything nontrivial, but you don't seem to imply these results in the question. Consider first the problem of computing a modular exponentiation $a^r \bmod m$. You say above that you can compute this by repeated squaring modulo $m$, and that this needs $O(\log r)$ multiplications. This is true, and it's certainly ...

5 The question of how to find computable substructures of algebraic structures was studied by Jens Blanck and myself in the paper "Canonical Effective Subalgebras of Classical Algebras as Constructive Metric Completions". There we give general conditions on what it means for a substructure of an algebraic structure to be computable. Let me give a summary, but ...

5 Some comments (not really an answer). Let's classify 32-bit integers $c$ as follows: Type X: $c$ (as a binary string) is a De Bruijn sequence (for all rotations, bits [27,31] are distinct). An example: 11111011100110101100010100100000 Type Y: bits [27,31] of $2^i \cdot c$ are distinct for $i = 0, 1, ..., 31$. This is what Leiserson et al. uses. Examples: ...

5 As mentioned by Daniel, you can find some information in the book A Course in Computational Algebraic Number Theory (link). In particular, there are several ways of representing elements of number fields. Let $K=Q[\xi]/\langle\varphi\rangle$ be a number field with $\varphi$ a degree-$n$ monic irreducible polynomial of $\mathbb{Z}[\xi]$. Let $\theta$ be any ...

5 Start by putting $A$ into Jordan normal form, i.e., write $A=PJP^{-1}$ where $J$ is the Jordan normal form and $P$ is a suitably chosen invertible matrix. Then $A^k = PJ^k P^{-1}$, so without loss of generality I only need to consider possibilities for $A$ that are already in Jordan normal form. For $2\times 2$ matrices, there are only three interesting ...

5 Update: The description below is for a different problem (in which you have all pairwise distances in a set rather than pairwise distances between two distinct sets). I'll leave it up anyway since it is closely related:
This problem is called the beltway problem, and is a special case of the general $d$-torus embedding problem. It is also closely related to ...

5 There are essentially only two algorithms that I'm aware of: Use repeated-squaring, along the lines you mentioned. Factor $n$ using a state-of-the-art algorithm, then use the Chinese remainder theorem. If $p$ is prime, you can compute $a^{b^c} \bmod p$ efficiently by computing $b^c \bmod p-1$ using fast exponentiation, call the result $d$, then computing $\ldots$

4 Here is a suggestion, for $K = 6$ and $N = 251$. We are given a list $a_i - b_j \pmod{N}$. Start by taking one of them, without loss of generality $a_1-b_1$. Without loss of generality $b_1=0$, and we obtain the value of $a_1$. Now take another one, and hope that it is of the form $a_2-b_1$ (this happens with probability $5/35 = 1/7$), and deduce $a_2$. At ...

4 The state of the art here is: We can decide primality in polynomial time, but the fastest, general-purpose algorithm to $\underline{\rm find}$ the factors of an n-bit composite integer takes time $\approx 2^{n^{1/3}\log^{2/3}n}$. More to your question, a primality test is the same thing as a compositeness test. Therefore, we can easily implement the '...

3 I think your question is closely related to the set reconciliation problem, which is solved in this paper: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.20.5338 The problem of set reconciliation is: given two sets $A, B \subseteq [n]$, find $A \setminus B$ and $B \setminus A$ with as little communication as possible. If $B = [n]$, then you just ...

3 Here's a different approach, based upon iteratively finding numbers that cannot appear among $\{a_1,\dots,a_6\}$. Call a set $A$ an over-approximation of the $a$'s if we know that $\{a_1,\dots,a_6\} \subseteq A$. Similarly, $B$ is an over-approximation of the $b$'s if we know that $\{b_1,\dots,b_6\} \subseteq B$. Obviously, the smaller $A$ is, the more ...

3 Here's an observation that I think gives you a foothold, possibly enough of one to solve the problem. Suppose we have four differences $a_1-b_1$, $a_1-b_2$, $a_2-b_1$, $a_2-b_2$ that arise as the pairwise differences between two $a$'s and two $b$'s. Call this a quartet of differences. Notice that we have a non-trivial relationship: $$(a_1-b_1)-(a_1-b_2) =\ldots$$

3 Yes, there are good (efficient) algorithms. This is completely solved, and the algorithms are widely used in the cryptographic community. If $\gcd(n,p-1)=1$, then everything is an $n$th residue. If $n$ divides $p-1$, then $a$ is an $n$th residue if and only if $a^{(p-1)/n} \equiv 1 \pmod p$. If $1<\gcd(n,p-1)<n$, $a$ is an $n$th residue if and only if ...

3 One other way to look at this, which brings in potentially all complexity classes above $\mathsf{E} = \mathsf{DTIME}(2^{O(n)})$, is to consider real numbers in their binary expansion. Any real number whose binary expansion doesn't end with $0^\infty$ or $1^\infty$ - i.e., which is not a dyadic rational - has a unique binary expansion. We can treat this ...

2 Paul Lemke, Steven S. Skiena, and Warren D. Smith, Reconstructing Sets From Interpoint Distances, gave a backtracking algorithm that runs in time $O(n^n \log n)$ for the beltway reconstruction problem. As far as I know, this is the best known. The exact complexity of the problem is not known. It is not known to be in $P$ and neither known to be $NP$-complete.
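As a concrete illustration of the repeated-squaring approach mentioned above (computing $a^{b^c} \bmod p$ for prime $p$ by reducing the exponent modulo $p-1$), here is a minimal Python sketch, assuming $\gcd(a,p)=1$ so Fermat's little theorem applies:

```python
# Compute a**(b**c) mod p for prime p using Fermat's little theorem:
# reduce the exponent b**c modulo p-1, then do one modular exponentiation.
# Assumes gcd(a, p) == 1 so that a**(p-1) == 1 (mod p).

def huge_pow_mod(a: int, b: int, c: int, p: int) -> int:
    d = pow(b, c, p - 1)      # b**c mod (p-1), via fast exponentiation
    return pow(a, d, p)       # a**d mod p

p = 1_000_000_007             # a well-known prime
assert huge_pow_mod(3, 2, 5, p) == pow(3, 2**5, p)
print(huge_pow_mod(3, 10, 18, p))
```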
2019-12-12 14:27:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9150084853172302, "perplexity": 452.9518607915095}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540543850.90/warc/CC-MAIN-20191212130009-20191212154009-00405.warc.gz"}
https://gecogedi.dimai.unifi.it/paper/325/
# Twistor interpretation of slice regular functions

created by altavilla on 11 Mar 2019 [BibTeX]

Published
Inserted: 11 mar 2019
Last Updated: 11 mar 2019
Journal: Journal of Geometry and Physics
Volume: 123
Pages: 184--208
Year: 2018
Doi: https://doi.org/10.1016/j.geomphys.2017.09.007
ArXiv: 1605.08656
PDF

Given a slice regular function $f:\Omega\subset\mathbb{H}\to \mathbb{H}$, with $\Omega\cap\mathbb{R}\neq \emptyset$, it is possible to lift it to a surface in the twistor space $\mathbb{CP}^{3}$ of $\mathbb{S}^4\simeq \mathbb{H}\cup \{\infty\}$ (see Gentili-Salamon-Stoppato). In this paper we show that the same result is true if one removes the hypothesis $\Omega\cap\mathbb{R}\neq \emptyset$ on the domain of the function $f$. Moreover, we find that if a surface $\mathcal{S}\subset\mathbb{CP}^{3}$ contains the image of the twistor lift of a slice regular function, then $\mathcal{S}$ has to be ruled by lines. Starting from these results, we find all the projective classes of algebraic surfaces up to degree 3 in $\mathbb{CP}^{3}$ that contain the lift of a slice regular function. In addition, we extend and further explore the so-called twistor transform, that is, a curve in $\mathbb{G}r_2(\mathbb{C}^4)$ which, given a slice regular function, returns the arrangement of lines on which its lift lies. With the explicit expression of the twistor lift and of the twistor transform of a slice regular function, we exhibit the set of slice regular functions whose twistor transform describes a rational line inside $\mathbb{G}r_2(\mathbb{C}^4)$, showing the role of slice regular functions not defined on $\mathbb{R}$. At the end, we study the twistor lift of a particular slice regular function not defined over the reals. This example shows the effectiveness of our approach and opens some questions.
2019-05-24 07:14:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5537152886390686, "perplexity": 264.53649907022174}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257553.63/warc/CC-MAIN-20190524064354-20190524090354-00278.warc.gz"}
http://www.chegg.com/homework-help/questions-and-answers/thin-uniform-rectangular-sign-hangs-vertically-doorof-shop-sign-hinged-stationary-horizont-q218021
A thin, uniform, rectangular sign hangs vertically above the door of a shop. The sign is hinged to a stationary horizontal rod along its top edge. The mass of the sign is 2.40 kg and its vertical dimension is 60.0 cm. The sign is swinging without friction, becoming a tempting target for children armed with snowballs. The maximum angular displacement of the sign is 25.0° on both sides of the vertical. At a moment when the sign is vertical and moving to the left, a snowball of mass 500 g, traveling horizontally with a velocity of 160 cm/s to the right, strikes perpendicularly the lower edge of the sign and sticks there.

(a) Calculate the angular speed of the sign immediately before the impact. rad/s
(b) Calculate its angular speed immediately after the impact. rad/s
(c) The spattered sign will swing up through what maximum angle? °
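The scraped page ends before the posted answer. As a sketch of the standard approach (our own working, not the missing Chegg solution): treat the sign as a thin plate pivoting about its top edge with $I = \tfrac{1}{3}ML^2$, use energy conservation for (a) and (c), and conservation of angular momentum about the hinge for (b); the numbers assume $g = 9.8~\mathrm{m/s^2}$.

import numpy as np

M, L = 2.40, 0.60            # sign mass (kg) and vertical dimension (m)
m, v = 0.500, 1.60           # snowball mass (kg) and speed (m/s)
theta0, g = np.radians(25.0), 9.8

I_sign = M * L**2 / 3        # thin plate about an axis along its top edge

# (a) energy conservation: (1/2) I w1^2 = M g (L/2) (1 - cos theta0)
w1 = np.sqrt(3 * g * (1 - np.cos(theta0)) / L)

# (b) angular momentum about the hinge: sign moves left, snowball right
L_tot = m * v * L - I_sign * w1
I_tot = I_sign + m * L**2    # snowball sticks at the lower edge
w2 = abs(L_tot) / I_tot

# (c) energy conservation for the combined body:
# (1/2) I_tot w2^2 = g L (M/2 + m) (1 - cos theta)
theta = np.degrees(np.arccos(1 - I_tot * w2**2 / (2 * g * L * (M / 2 + m))))
print(w1, w2, theta)         # roughly 2.1 rad/s, 0.3 rad/s, 3.6 degrees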
2014-08-20 15:01:24
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8287899494171143, "perplexity": 3030.1857887003607}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500809686.31/warc/CC-MAIN-20140820021329-00306-ip-10-180-136-8.ec2.internal.warc.gz"}
https://www.jiskha.com/display.cgi?id=1306543209
# math(easy integration)

find the integration of absolute x from -4 to 2? could anyone show me the way to get 10? thanks

• math(easy integration) -

Integrate x from 0 to +4 to get the absolute value where x is negative; add that to the integral of x from 0 to 2.

first part: 4^2/2 - 0^2/2 = 8
second part: 2^2/2 - 0^2/2 = 2
8 + 2 = 10

• math(easy integration) -

thanks a lot..
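Written out as a single display (our restatement of the answer above, using the symmetry $\int_{-4}^{0}|x|\,dx = \int_{0}^{4}x\,dx$):

$$\int_{-4}^{2} |x|\,dx = \int_{0}^{4} x\,dx + \int_{0}^{2} x\,dx = \frac{4^2}{2} + \frac{2^2}{2} = 8 + 2 = 10.$$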
2017-08-23 21:48:37
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8384919762611389, "perplexity": 2972.309840981325}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886124563.93/warc/CC-MAIN-20170823210143-20170823230143-00614.warc.gz"}
https://www.audiolabs-erlangen.de/content/05-fau/professor/00-mueller/02-teaching/2020w_mpa/MPAE_material/MPAE_MusicRep.html
# Exercise: Music Representations

## Prerequisites

To work on this exercise, you should first work through the Lecture on Music Representations. Furthermore, please take the opportunity to review your programming skills. Basic knowledge of Python is required for the remainder of this class. You should be confident in applying the concepts presented in units 1 to 5 of the Preparation Course Python (PCP). Finally, we will require you to run the PCP notebook environment and the Fundamentals of Music Processing (FMP) notebook environment. Follow the links and set them up on your local machine. Throughout the exercise, we will make use of the PCP and FMP notebooks. Please always feel encouraged to actively work through these notebooks, play around with different parameters, other input files etc.! That is the best way to learn.

### Task: Musical Notes and Pitches

Go through the FMP Notebook on Musical Notes and Pitches and understand the meaning of the parameters. In particular:

• Write a small function that generates a D major scale.
• More generally, write a small function that accepts a pitch class as input and outputs a major scale for the given pitch class.

### Task: Pitches and Center Frequencies

Go through the FMP notebook on Frequency and Pitch and write the following functions:

• A function that accepts a pitch as input and returns the corresponding frequency in a 12-tone equal tempered scale with reference pitch $p=69$ and reference tuning $440~\mathrm{Hz}$.
• A function that accepts a pitch as well as parameters $p, d\in\mathbb{N}$ and $\omega\in\mathbb{R}_{>0}$ as input and returns the corresponding frequency in a $d$-tone equal tempered scale with reference pitch $p$ and tuning frequency $\omega$.
• A function that accepts a pitch as well as parameters $p, c\in\mathbb{N}$ and $\omega\in\mathbb{R}_{>0}$ as input and returns the corresponding frequency in a scale with reference pitch $p$ and tuning frequency $\omega$ where the difference between the frequencies of two subsequent pitches of the scale is $c$ cents.

Understand the concept of Shepard tones by going through the FMP Notebook on Chroma and Shepard Tones. Answer the following questions:

Understand how to sonify pitch-based representations by going through the corresponding part in the FMP notebook on Sonification. The notebook on the Harmonic Series may be useful to understand why we superimpose certain sinusoids in the sonification. Then,

• Write a function that sonifies a D major scale using a sinusoidal model including harmonics.
• Recalling the Shepard glissando, extend your function to make the sonification sound like it "rises indefinitely".

### Optional Task: Bach's Neverending Canon

We will now employ the concepts you have learned to recreate the following YouTube video:

In [1]:

import IPython.display as ipd

Listen to the start and the end of the YouTube video. Even though the music seems to be "rising" constantly, the start and end match up perfectly! The piece by Johann Sebastian Bach that is being played here ends one whole tone above the tone where it started. Therefore, after six repetitions, the music arrives at an octave above the tone where it started. In the YouTube video, Shepard tones are additionally used to match the frequencies at the start and after six repetitions. This creates the impression of a "neverending canon".

• You can find a CSV-representation of Bach's piece here. Using the FMP notebook on CSV representations, load the file and display a piano roll representation of the piece.
• Sonify the piano roll representation using a sinusoidal model including harmonics.
• Transpose the piece six times, each time one whole tone higher than before. Remember that a whole tone corresponds to two MIDI pitches. Paste the six iterations of the piece together, one after the other. Visualize the result as a piano roll and sonify it.
• You should now have a version of the piece that ends one octave above where it started. Recalling the Shepard glissando, change your sonification so that the result "rises indefinitely".
• Save your result as a .wav-file, open it in a music player application and turn on the loop feature ;)

There are many resources on the web where you can download MIDI files, which encode symbolic music. See the FMP notebook on MIDI for more information on how to parse MIDI files. Find a MIDI of your favorite song and sonify it using the methods you have learned thus far.

### What is the difference between fundamental frequency and center frequency?

Center frequency: The frequency (given in Hz) of a pitch (given in MIDI note numbers or in musical notation). (The center frequency for a pitch depends on the scale and tuning system used. For Western music, we usually consider the 12-tone equal tempered scale with reference pitch $p=69$ and reference tuning $440~\mathrm{Hz}$. In that case, the center frequency of a pitch can be computed with the function $F_\mathrm{pitch}$ given in the FMP notebook on Frequency and Pitch.)

Fundamental frequency: The lowest active frequency when a note is sung or played on an instrument. (A musical sound consists of simultaneous oscillations at many frequencies. The notebook on the Harmonic Series and on Timbre might be helpful for understanding this.)

### What is the instantaneous frequency of a chirp signal?

In the FMP Notebook on Chroma and Shepard Tones, a chirp signal is generated and the following statement is made: "Here, note that the instantaneous frequency is given by the derivative of the sinusoidal's argument"

To understand this, let's start with a simple sinusoid of frequency $440$ Hz: $$\sin(2\pi \cdot 440 \cdot t)$$

In [2]:

import IPython.display as ipd
import numpy as np

dur = 1  # Seconds
Fs = 22050
freq = 440
N = int(dur * Fs)
t = np.arange(N) / Fs
x = np.sin(2 * np.pi * freq * t)
ipd.display(ipd.Audio(x, rate=Fs))

Here, one can think of the phase as changing with time, corresponding to the oscillations of the sine wave. However, the rate of change of the phase, i.e. the instantaneous frequency, stays constant. In general, a chirp is a signal whose frequency increases or decreases with time. In the notebook on Shepard Tones, we want to create a signal where the frequency is increasing exponentially. That corresponds to a linear increase in pitch. Let's say we want a sinusoid starting at frequency 440 Hz and increasing to 880 Hz over the course of one second. In other words, the frequency at time $t$ should be $$f(t) = 440 \cdot 2^t$$ $f(t)$ is also the rate of change of the phase of our signal. Thus, the phase should increase faster and faster, corresponding to faster and faster oscillations of the sine wave. To get the argument of the sinusoidal at time $t$ we need to integrate over this rate of change.
That way we obtain the formula of the exponential chirp: $$\sin(2\pi \cdot 440 \cdot 2^t / \log(2))$$

In [3]:

def generate_chirp_exp_octave(freq_start=440, dur=8, Fs=44100, amp=1):
    """Generate one octave of a chirp with exponential frequency increase

    Notebook: C1/C1S1_ChromaShepard.ipynb

    Args:
        freq_start: Start frequency of chirp
        dur: Duration (in seconds)
        Fs: Sampling rate
        amp: Amplitude of generated signal

    Returns:
        x: chirp signal
        t: Time axis (in seconds)
    """
    N = int(dur * Fs)
    t = np.arange(N) / Fs
    x = np.sin(2 * np.pi * freq_start * np.power(2, t / dur) / np.log(2) * dur)
    x = amp * x / np.max(x)
    return x, t

chirp, t = generate_chirp_exp_octave(freq_start=freq, dur=dur, Fs=Fs)
ipd.display(ipd.Audio(chirp, rate=Fs))

A more formal description of the concept of instantaneous frequency can be found in Chapter 8.2.1 of the textbook.
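To tie the pitch tasks above together, here is a minimal sketch of ours (following the FMP convention) of the twelve-tone equal-tempered conversion $F_\mathrm{pitch}(p) = 2^{(p-69)/12} \cdot 440$, applied to the D major scale task; the function names are our own, not those of the FMP notebooks.

import numpy as np

def f_pitch(p, p_ref=69, freq_ref=440.0):
    """Center frequency (in Hz) of MIDI pitch p in 12-tone equal temperament."""
    return freq_ref * 2 ** ((p - p_ref) / 12)

def major_scale(start_pitch):
    """One octave of a major scale: whole/half-step pattern W-W-H-W-W-W-H."""
    return start_pitch + np.cumsum([0, 2, 2, 1, 2, 2, 2, 1])

d_major = major_scale(62)                      # D4 has MIDI number 62
print([f'{f_pitch(p):.2f}' for p in d_major])  # 293.66 Hz up to 587.33 Hz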
2021-06-13 04:20:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 3, "x-ck12": 0, "texerror": 0, "math_score": 0.4297845959663391, "perplexity": 1495.6896865057581}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487600396.21/warc/CC-MAIN-20210613041713-20210613071713-00428.warc.gz"}
https://projecteuclid.org/euclid.die/1356123375
## Differential and Integral Equations

### Global existence and gradient estimates for a quasilinear parabolic equation of the mean curvature type with a strong perturbation

#### Abstract

We prove global existence and gradient estimates for solutions to the initial-boundary value problem for the quasilinear parabolic equation $u_t - \mbox{div}\{\sigma(|\nabla u|^2) \nabla u\} + g(\nabla u) = 0 \mbox{ in } \Omega \times (0,\infty),$ with the initial and boundary conditions $u(0,x)=u_0(x)$, $u|_{\partial\Omega}=0$, where $\Omega$ is a bounded domain in $\mathbb{R}^N$, $\sigma(v)$ is a function like $\sigma(v)=1/\sqrt{1+v}$ and $g(\nabla u)$ is a nonlinear perturbation like $g(\nabla u)=\pm|\nabla u|^{\alpha +1}$, $\alpha >0$. In particular, we derive the estimate $||\nabla u(t)||_{\infty} \leq C(||\nabla u_0||_{p_0})\,t^{-N/(2p_0-3N)}e^{-\lambda t}$, $t>0$, for a certain $\lambda > 0$, under the assumptions that $||\nabla u_0||_{p_0}$ is small, where $p_0 > 3(N+\alpha)$ ($p_0 > 2\alpha+5$ if $N=1$), and that the mean curvature of the boundary $\partial \Omega$ is nonpositive.

#### Article information

Source: Differential Integral Equations, Volume 14, Number 1 (2001), 59-74.

Dates: First available in Project Euclid: 21 December 2012
2018-06-20 16:54:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.778325080871582, "perplexity": 181.70833327645346}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863830.1/warc/CC-MAIN-20180620163310-20180620183310-00239.warc.gz"}
http://www.ams.org/mathscinet-getitem?mr=499319
MathSciNet bibliographic data

MR499319 32F15 (35N15)

Greiner, P. C.; Stein, E. M. Estimates for the $\overline\partial$-Neumann problem. Mathematical Notes, No. 19. Princeton University Press, Princeton, N.J., 1977. iv+195 pp. ISBN: 0-691-08013-5

Links to the journal or article are not yet available.
2017-01-21 09:27:49
{"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9722985625267029, "perplexity": 4180.086825882521}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00396-ip-10-171-10-70.ec2.internal.warc.gz"}
https://mathoverflow.net/questions/287819/example-of-two-p-ordinary-elliptic-curves-congruent-to-each-other
# Example of two p-Ordinary Elliptic Curves congruent to each other

I am looking for an example of a prime $p$ for which there exist two $p$-ordinary rational elliptic curves $E$, $F$ such that, at every prime $l$ not dividing $N=p \operatorname{Cond}(E) \operatorname{Cond}(F)$: $$\#E(\mathbb{F}_l) \equiv \#F(\mathbb{F}_l) \mod p$$ Such a congruence could be detected by a Hida family having two weight $2$ modular forms $f,g$ whose fields of Fourier coefficients satisfy $K_f=K_g=\mathbb{Q}$. What I would prefer is exact equations of $E$ and $F$ (say, on www.lmfdb.org).

• I presume you wanted $\#E(\mathbb{F}_{l}) \equiv \#F(\mathbb{F}_{\ell})$, so I made an edit. – Jeremy Rouse Dec 6 '17 at 1:09
• We have $\# E(\mathbb{F}_{\ell}) = \# F(\mathbb{F}_{\ell})$ for almost every $\ell$ if and only if $E$ and $F$ are isogenous. Most of the examples I describe below are cases where $E$ and $F$ are not isogenous, though (including the example for $p = 17$). – Jeremy Rouse Dec 6 '17 at 14:12

There are many such curves. For $p = 2$, one can just take two curves $E$ and $F$ with $E : y^{2} = f(x)$ and $F : y^{2} = g(x)$ where $f(x)$ and $g(x)$ define the same number field. Also, if $E$ and $F$ are isogenous, then the above statement is true for any $p$ (but the corresponding modular forms are the same then).

Rubin and Silverberg have a couple of papers (see the MathSciNet reference here) that handle $p = 3$ and $5$. In particular, for these $p$, given an elliptic curve $E/\mathbb{Q}$, they construct a one-parameter family of elliptic curves $E_{t}$ so that $E[p]$ and $E_{t}[p]$ are isomorphic as Galois modules. (This works because such $E_{t}$ are parametrized by a twist of $X(p)$, and $X(p)$ has genus zero for $p = 3$ and $5$.)

For larger $p$, there are still pairs (but not quite as many), and the best place to look is in this paper of Tom Fisher. In particular, the largest known example is with $p = 17$, which I mentioned in my answer to the question here. This example is a pair of $17$-ordinary elliptic curves that have $\# E(\mathbb{F}_{\ell}) \equiv \# F(\mathbb{F}_{\ell}) \pmod{17}$ for all primes $\ell$.
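To experiment with such congruences numerically, naive point counting over $\mathbb{F}_l$ suffices for small $l$. A rough sketch (ours; the coefficients below are placeholders, not the actual curves from Fisher's paper, and $l$ should avoid the bad primes):

def count_points(a, b, l):
    """Count points on y^2 = x^3 + a*x + b over F_l (odd prime l),
    including the point at infinity, via Euler's criterion: O(l) time."""
    count = 1                                  # the point at infinity
    for x in range(l):
        rhs = (x * x * x + a * x + b) % l
        if rhs == 0:
            count += 1                         # single point with y = 0
        elif pow(rhs, (l - 1) // 2, l) == 1:   # rhs is a square mod l
            count += 2                         # two points (x, +y), (x, -y)
    return count

# Compare #E(F_l) mod p for two (placeholder) curves at a few primes l
p = 17
for l in [5, 7, 11, 13, 19, 23]:
    print(l, count_points(1, 6, l) % p, count_points(2, 3, l) % p)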
2021-03-09 01:49:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9462489485740662, "perplexity": 93.22244668601479}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178385534.85/warc/CC-MAIN-20210308235748-20210309025748-00604.warc.gz"}
http://www.math.columbia.edu/~woit/wordpress/?p=6972
# 2015 Breakthrough Prizes in Mathematics

The first set of winners of the $3 million Milner/Zuckerberg financed Breakthrough Prizes in mathematics was announced today: it’s Donaldson, Kontsevich, Lurie, Tao and Taylor. There’s a good New York Times story here.

When these prizes were first announced last year, I was concerned that they would share a problem of Milner’s Fundamental Physics Prizes, an emphasis on rewarding one particular narrow area of research. I’m happy to say that I was wrong: the choices made are excellent, including a selection of the absolute best people in the field, working in a wide range of areas of pure mathematics. The prize winners are mathematicians who are currently very active, doing great work. It’s clear that there was an effort to avoid making this a historical prize, i.e. giving this to people purely for great work done in the past (which to some extent the Abel Prize is doing). The recipients are on average in their 40s, at the height of their powers. One oddity is the award to Kontsevich, who already received $3 million from the Fundamental Physics prize. Given my interests, I suppose I shouldn’t criticize a prize structure where physicists get $3 million, mathematicians $3 million, and mathematical physicists $6 million.

While this prize doesn’t suffer from the basic problem of the Physics prize (that of rewarding a single, narrow, unsuccessful idea about physics), it’s still debatable whether this is a good way to encourage mathematics research. The people chosen are already among the most highly rewarded in the subject, with all of them having very well-paid positions with few responsibilities beyond their research, as well as access to funding of research expenses. The argument for the prize is mainly that these sums of money will help make great mathematicians celebrities, and encourage the young to want to be like them. I can see this argument and why some people find it compelling. Personally though, I think our society in general and academia in particular is already suffering a great deal as it becomes more and more of a winner-take-all, celebrity-obsessed culture, with ever greater disparities in wealth, and this sort of prize just makes that worse. It’s encouraging to see that most of the prize winners have already announced intentions to redirect some of the prize moneys for a wider benefit to others and the rest of the field.

Update: Among the private reactions I’ve heard from prominent mathematicians this morning, one is the desirability of funding a new “sidekick” prize for collaborators of the $3 million winners…

This entry was posted in Uncategorized.

### 24 Responses to 2015 Breakthrough Prizes in Mathematics

1. Als says:
Interesting. I think it has the potential to challenge the Fields medal in the long run if the committee chooses wisely the next recipients.

2. someonesomewhere says:
Well, the Nobel prize was initially also awarded to people who already were famous, so it’s not a new concept to initially award the prize to already famous people, so that the fame of the past winners shines on the new ones.

3. Bill says:
All of them are great mathematicians and deserve the award as much as anyone else, but the selection seems a bit random. Some of them are obviously rewarded for old breakthroughs, not for recent work, and some for work that nobody really understands to call it a breakthrough. What is the main criterion?
Starting from next year when only one person gets the award, is it going to turn into another lifetime achievement award or will they reward genuine recent breakthroughs? Why not turn this into a Fields Medal without age restriction – a reward for genuine breakthroughs in the last few (4?) years? Actually, the award is big enough to share among possible coauthors (Birkar-Cascini-Hacon-McKernan, Marcus-Spielman-Srivastava, etc.).

4. Bill says:
Shouldn’t this be “2014 Breakthrough Prizes in Mathematics”?

5. Peter Woit says:
Bill,
These prizes will be awarded in November at a ceremony for the prizes in physics and biology officially called the 2015 prizes. I see the Milner site is naming them not by year, but as “Inaugural”. It seems though that the plan is to award the math ones each year at the same time as the others, and next year’s will be the 2016 ones, so 2015 seems a better choice than 2014…

6. Bill says:
From NYT: “The size of the award, I think it’s ridiculous,” he said. “I didn’t feel I was the most qualified for this prize.” But Dr. Tao added: “It’s his money. He can do whatever he wants with it.”
Not true that he can do whatever he wants with it; you can say no and encourage others to say no, if you believe this is not a good idea. The mathematics community can put Milner’s money to good use, but on their terms. Also, any rumors that Wiles refused the prize? Obviously, they didn’t even dare to ask Perelman. By the way, more kids would be inspired to do mathematics if mathematicians acted more like Perelman. This prize is not going to make them into celebrities in that sense.

7. Als says:
“Update: Among the private reactions I’ve heard from prominent mathematicians this morning, one is the desirability of funding a new “sidekick” prize for collaborators of the $3 million winners…”
The problem of the side-kick award is that it could feel like an insult. I would be very uncomfortable telling Taylor he gets the “Wiles’s sidekick award” for instance.

8. John Doe says:
New prizes like this tell me more and more how pointless prizes are becoming. This post hits the target. What is the purpose of a prize? Why do prizes include money and if they do, how much should it be? If the purpose of a prize is to ‘help Mathematics’ (whatever that means) at large, this prize has a big risk, since it puts a lot of power (money is just a measure for power) in the hands of few mathematicians, not elected for their choices regarding power, but for their solutions to mathematical problems. It is pretty much up to them what they do with it. It is interesting how the piece you link says that Tao tried to reject it but in the end took it because it is up to this guy what he does with his money. He is just going along with the system. If he keeps it, that sounds fake. If he does something else to ‘help Mathematics’ with it, whatever that is, it is as if he had become a jury for the prize, which shows how pointless this prize is. I don’t think 3 million dollars are going to help these guys to do better math, and I don’t think it’s going to make any difference other than to their wallets. They’ll keep doing what they do, just a bit richer.

9. Als says:
BTW, the Milner website says Taylor proved the “Taniyama-Weil conjecture”. I wonder who wrote the blurb and kept Shimura out….

10. Peter Woit says:
Als,
I think how insulting this might be would depend strongly on the size of the check. At, say $1 million, I think most mathematicians would overcome any sense of grievance about being a “sidekick”.
About Taniyama-Weil I also noticed that. If Serge Lang were still alive, it would kill him.

11. Peter Woit says:
Bill,
The people Milner chose are relatively young (all younger than me…), and have major results from within the past few years, which would be one reason Wiles is not on the list, not that he turned it down. I don’t want to start a Perelman pro/con discussion, but I disagree with you about Perelman as a role model. I do think though he almost surely would have turned down the money.

12. Thomas says:
It would be great if one could read Clozel’s exposition of the recent work by Taylor, Patrikis: http://www.institut.math.jussieu.fr/projets/tn/STN/files/annonce-7-4-14.pdf

13. M.K. says:
“it’s still debatable whether this is a good way to encourage mathematics research”
I disagree on the seemingly obvious conclusion that a highly endowed distinction distributed among a small number of already famous mathematicians would lead to less ‘encouragement’ of mathematics research than a distribution among a higher number of lesser known mathematicians. Given such prizes are also in the latter case based on past achievement of people working at established mathematical institutes (thus established mathematical areas), it is very difficult to see how such prizes would lead to something else than to a substitution of public funding by private or corporate funding. Further, it is very difficult to imagine indeed how a non-mathematical jury, or a random jury, could have the expertise to decide if non-mainstream ideas could be fruitful or not, while indeed the present prize-winners are with a relatively high probability capable to decide this. So as usual in our society, everything depends on the good-will of individuals in a system that is organized on questionable measures and aims in general.

14. Ian says:
If this prize allows these mathematicians to not apply for grants (for some number of years), then it may serve a useful purpose. Not only would it free up time that might be taken applying for grants and allow them to conduct research unconstrained by grant proposals, but it should also allow others to have a better chance of obtaining them (such as NSF grants, which are getting less funded over time).

15. Peter Woit says:
Ian,
That’s an interesting point, but given the way these awards are set up and the positions of the people getting them, they generally won’t replace NSF grants. One could take a look at what happened in physics, where a quick search shows http://www.nsf.gov/awardsearch/showAward?AWD_ID=1314311 Although the IAS physics faculty each got $3 million awards in 2012, the IAS is still applying for and getting NSF grants to pay for postdocs and overhead to the institution. The people getting these awards don’t really need research grants to conduct their own research; they typically apply for grants to fund students, postdocs, and their institution. The prize money goes directly to their bank account, not to the institution. If they want to use the money for students, postdocs, their institution they could do this, but would have to set this up themselves, which would likely be every bit as time consuming as the grant application process. That said, I’m sure some of the prize money will end up being used by the recipients to support research in ways that a grant might have in the past, and can replace some NSF grants. It’s not at all clear though how much of that is happening for the physics grants, and how much will happen for the math grants.

16.
Peter Woit says:
Put differently, the explicit goal and structure of these prizes is to make the recipients personally rich and famous, not to support their research. Once they are rich, in principle they can devote their time to philanthropy and support research, but it’s unclear how much of this will happen; will be interesting to see.

17. Bobito says:
Why doesn’t Xiuxiong Chen or Yan Soibelman deserve a bit of the cake?

18. Jess Riedel says:
Two alternative ideas for structuring the prize:
(1) Make it more about honor. Give the mathematician a few hundred thousand dollars, but use the bulk of the money to endow a research position in the mathematician’s name. (An MIT professorship is $3M: http://giving.mit.edu/priorities/faculty/)
(2) Give the money to the awardee, but require them to pledge to not write grant proposals for 5 years. Maybe unfeasible since they need to write grants to support their students, postdocs, and institution, but part of the award could go to cover this.

19. Peter Woit says:
Jess,
Good suggestions. I’ve often wondered why more of this private money isn’t going into the traditional philanthropic path of endowing positions, which would have a big, long-term effect on the health of the field (Simons has the “Math + X” program, but I can’t think of many other examples). If each year, instead of giving a $3 million personal check to a string theorist, the Breakthrough Prize people endowed a position in string theory, the effect on the subject would be huge (and would do a lot more to encourage young people to go into it).

20. Bork says:
I agree that this is a double-edged sword, even for the winners. Now they have to worry about what to do with all that money. If they spend it on themselves, they will be criticized, or are likely to feel so, and if they try to distribute it somehow, in some sort of “fair” way, that could eat up time and cause a headache and ironically take them away from research. And their choices could, again, be criticized. Of course the type of mathematician who likes being a big honcho would love that, but my guess is these are not of that sort. They probably would much prefer just to, basically, be left alone. Perelman was only the extreme version of that. What a research mathematician really needs is: more free time, plus a modicum of financial stability, plus a good research environment. A good research environment includes: happy colleagues, and good basic education. This is why Simons’ donations seem much more creative and better thought out, and much more likely to be effective. One of the great things about mathematics as a human endeavor and a part of human culture and as a path through life, is the relative equality and openness compared to other realms of activity. This sort of prize might end up doing more harm than good to the overall culture and enterprise of mathematical life and mathematical research. I say, give a $100,000 prize, and then just give half the remaining to Simons to spend as he sees fit, and give the other half to lobbyists in support of: the NSF funding of basic research, and a bigger NSF budget; and basic school education reform: not Bill Gates style (test, test, iPad, test) but small class sizes, sabbaticals and better salaries for teachers, real books for kids, as well as bringing back instruction in music, cursive writing, PE…oh and math and physics I suppose, why not…

21. Patrice Ayme says:
Milner is a celebrity.
And a financial manipulator who became immensely wealthy with what Roosevelt and the Bible contemptuously called “money changing”. He profited immensely from a system, plutocracy, that is mostly about oligarchy pushed so far that even the character of those “leaders” becomes diabolical. It’s diabolical to make us believe that mathematics will progress more by giving more power to those who have more than enough to do good math. Overall, science and mathematics do not have enough practitioners. A striking example is antibiotic research, where only a small effort is needed to find new antibiotics. To have a few individuals who are much richer will have no positive effect whatsoever. This is certainly true in math and physics. In biology, immense greed has clearly undermined research (individuals have made up to half a billion dollars a year in that field, but one cannot find the modest financing for new antibiotics research). Making a few persons very rich promotes greed. So why is Milner doing this? Maybe it’s subconscious. The oligarchic principle is that humanity is unworthy, but for a few celebrities who never get enough. This is what Milner is truly rewarding. His reason for being what he is. Someone obsessed by individual power. Want to help science and math? Finance studies on how to persuade governments to finance enough advanced public free instruction in science and math, starting in preschool. Through heavy taxation of the richest celebrities, starting with Milner and his kind. http://patriceayme.wordpress.com/

22. Bill says:
Jess Riedel,
Endowing a research position is the best idea I’ve seen so far. I would add that such a position should not be given permanently to one person until his/her retirement but instead given for a period of, let’s say, 10 years to an active researcher.

23. NLR says:
Endowing a research position is a good idea, but the problem with the Milner Prize is similar to a remark made by Norbert Wiener in his book “Ex-Prodigy,” that, often, people who are already well-known and well-rewarded get more awards, resulting in a “pyramid of awards.” However, the people who could most benefit from such awards are lesser-known researchers who do not have financial stability. Also, the idea of making mathematicians celebrities is completely at odds with the reason to do mathematics. The value of math does not come from fame or money, but from the intrinsic interest and value of mathematics. In fact, I would think that many people study math precisely to engage with a part of the world that has nothing to do with things such as money or celebrity.

24. Anders says:
In economics we have something called tournament theory to try and explain the very large CEO remuneration you often see in the US. Basically the idea is that you overpay the CEO to get people further down in the organization to work very hard, to have a shot at the CEO position. Maybe these prizes in science can work the same way: they do not change the productivity of the recipients, but maybe that of other researchers who have a decent chance of getting a prize later in their career.
2019-09-20 09:37:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5322455763816833, "perplexity": 1855.606519779056}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573988.33/warc/CC-MAIN-20190920092800-20190920114800-00095.warc.gz"}
https://www.physicsforums.com/threads/shading-in-argand-diagrams-involving-inequalities.872001/
# Shading in Argand diagrams involving inequalities

## Homework Statement

What is the difference in shading between Argand diagrams containing inequalities with > and ≥ signs?

Example: Shade the appropriate region to satisfy the inequality
|z| > 5
|z| ≥ 5

## The Attempt at a Solution

I am aware of the fact that both will have a circle centered at the origin with radius 5. But how will the shading differ between the two inequalities?

robphy (Homework Helper, Gold Member):
For simplicity, think of a number line. With a strict inequality, the boundary may not be included.

SammyS (Staff Emeritus, Homework Helper, Gold Member):
In the $|z| > 5$ case, the boundary is not included in the region. In the $|z| \ge 5$ case, the boundary is included in the region.
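As a quick visual check of the convention described above (a sketch of ours, not from the thread): shade the exterior of the circle, drawing the boundary dashed when it is excluded ($|z| > 5$) and solid when it is included ($|z| \ge 5$).

import numpy as np
import matplotlib.pyplot as plt

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
cases = [('--', '|z| > 5 (boundary excluded)'),
         ('-', '|z| ≥ 5 (boundary included)')]
x = y = np.linspace(-8, 8, 400)
X, Y = np.meshgrid(x, y)
t = np.linspace(0, 2 * np.pi, 200)
for ax, (style, title) in zip(axes, cases):
    # shade the region outside the circle of radius 5
    ax.contourf(X, Y, (np.hypot(X, Y) >= 5).astype(float),
                levels=[0.5, 1.5], alpha=0.3)
    # dashed boundary = excluded, solid boundary = included
    ax.plot(5 * np.cos(t), 5 * np.sin(t), style, color='k')
    ax.set_title(title)
    ax.set_aspect('equal')
plt.show()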
2020-04-02 06:29:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.752957284450531, "perplexity": 1434.805484008251}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506673.7/warc/CC-MAIN-20200402045741-20200402075741-00425.warc.gz"}
https://scicomp.stackexchange.com/questions/28204/computing-ifft-for-only-k-samples-in-a-high-dimensional-signal
# Computing IFFT for only $k$ samples in a high dimensional signal

I have a very high dimensional signal, say $15$ dimensions. Across each dimension the width is $N$ points. So the total number of points is $N^{15}$. I already have the FFT given to me. Only low frequency coeffs are non zero. The non zero coeffs are the ones inside of the hypercube of width 5. So only $5^{15}$ are non zero. Now I want to take an IFFT to compute samples in the signal domain. But I don't want to compute all samples of the signal, only $k$ of them. $k$ is much smaller compared to $N^{15}$. The samples in the signal domain are by no means sparse, but it's just that I want to compute only a small number $k$ of them. These $k$ samples are distributed arbitrarily and are not confined to any region. How can I compute them efficiently? I came across the Sparse FFT, but here my signal is not sparse; it is just that I am interested only in a few samples in the signal domain. So I am not sure I can use SFFT.

PS: I don't want to take a full IFFT due to memory constraints.

• The sparse FFT isn't really applicable here – it's an algorithm for locating the small number of frequencies with non-zero coefficients, in a signal that is sparse in the frequency domain, but your signal is dense in the frequency domain and you already know where you want to compute the IFFT. If $k$ is small enough, then you could do this by direct evaluation of the IFFT sum. It's still going to take $O(kN^{15})$ time to do this, because all of your coefficients in the frequency domain are potentially nonzero. – Brian Borchers Nov 5 '17 at 16:50
• @Brian not all coeffs in frequency are non zero, only the $5^{15}$ low frequencies. So in this case can't I do better than $O(k \cdot 5^{15})$? Can I use the Goertzel algorithm to get any better? – Rajesh D Nov 5 '17 at 22:27
• If you know exactly which of the Fourier coefficients are nonzero then you can just sum over those terms in the inverse Fourier transform. – Brian Borchers Nov 6 '17 at 0:11
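Following the last comment, a direct-evaluation sketch (ours, in a 2-D toy setting; the 15-dimensional case is the same sum with longer index tuples): evaluate the inverse DFT only at the $k$ requested positions, summing only over the known nonzero low-frequency coefficients, which costs $O(k \cdot 5^{15})$ work and $O(5^{15})$ memory instead of a full $N^{15}$ IFFT. The function and variable names are our own.

import numpy as np

def partial_idft(coeffs, freqs, samples, N):
    """Inverse DFT of a sparse spectrum, evaluated only at selected points.

    coeffs  : the m nonzero Fourier coefficients, shape (m,)
    freqs   : their d-dimensional frequency indices, shape (m, d)
    samples : the k sample positions wanted, shape (k, d)
    N       : grid width per dimension
    """
    d = samples.shape[1]
    phase = 2j * np.pi * (samples @ freqs.T) / N   # shape (k, m)
    return np.exp(phase) @ coeffs / N**d           # 1/N^d normalization

# 2-D toy check against numpy's full IFFT
N = 16
rng = np.random.default_rng(0)
spec = np.zeros((N, N), dtype=complex)
spec[:5, :5] = rng.standard_normal((5, 5))         # low-frequency hypercube
freqs = np.array([(i, j) for i in range(5) for j in range(5)])
coeffs = spec[:5, :5].ravel()
samples = np.array([(3, 7), (10, 2)])
print(partial_idft(coeffs, freqs, samples, N))     # matches the values below
full = np.fft.ifft2(spec)
print(full[3, 7], full[10, 2])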
2021-06-24 12:47:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8108150362968445, "perplexity": 352.58997740077336}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488553635.87/warc/CC-MAIN-20210624110458-20210624140458-00605.warc.gz"}
https://www2.cms.math.ca/Events/summer19/abs/rtg
2019 CMS Summer Meeting Regina, June 7 - 10, 2019 Representation Theory of Groups Defined Over Local Fields Org: Monica Nevins (Ottawa) and Jerrod Smith (Calgary) [PDF] NICOLAS ARANCIBIA, Carleton University $A$(rthur)-Packets of Cohomological Representations  [PDF] For classical real groups we can list three important constructions of $A$(rthur)-packets. We can begin by mentioning the definition due to Arthur that appears in his work on the classification of the discrete automorphic spectrum of classical groups, and that relies on techniques from harmonic analysis. A second and radically different definition is due to Adams, Barbasch and Vogan. Their approach to $A$-packets is by means of sophisticated geometrical methods, using the theory of perverse sheaf, $\mathcal{D}$-modules and some others tools from microlocal geometry. A third construction in the context of unitary representations with cohomology, is due to Adams and Johnson. The aim of this talk is to explain why in this latter context the three constructions coincide. ED BELK, University of British Columbia The Local Trace Formula as a Motivic Identity  [PDF] In 1991, James Arthur published a local trace formula, which is an equality of distributions on the Lie algebra of a connected, reductive algebraic group $G$ over a field $F$ of characteristic zero. His approach was later used by Jean-Loup Waldspurger to give a slight reformulation, identifying the value of a particular distribution on a test function with that of its Fourier transform. We aim to show that this identity may be formulated as an identity of motivic distributions on definable manifolds. By so doing, we would make available the use of the transfer principle to establish the trace formula for groups defined over fields of positive characteristic. On the Multiplicities in the Restriction of a Supercuspidal Representation  [PDF] The representation theory of reductive groups over $p$-adic fields can be split into two areas, namely the study of parabolically induced representations and the study of supercuspidal representations. Given a reductive group $G$ defined over a $p$-adic field $F,$ one can construct supercuspidal representations of any positive depth via the Adler-Yu construction. This construction uses what Yu called a $G$-datum. It was later proved by Kim-Fintzen that these constructions exhaust all positive depth supercuspidal representations for large enough $p$. In this talk, we will be interested in the restriction of a positive depth supercuspidal of $G(F)$ to the subgroup $G_{der}(F)$, where $G_{der}$ denotes the derived subgroup of $G$. The goal is to further explore a conjecture regarding multiplicity one established by Adler and Prasad. To understand such a restriction, we first define how to restrict a $G$-datum to $G_{der}$-data. We can then study how the supercuspidals arising from the various $G_{der}$-data produced appear in the restriction to $G_{der}(F)$ of the supercuspidal arising from the initial $G$-datum. The question of multiplicity one in this restriction then reduces to the study of certain depth zero supercuspidal representations. BOAZ ELAZAR, UBC Schwartz Functions And Tempered Distributions On Singular Quasi-Nash Varieties  [PDF] Schwartz functions and tempered distributions are important tools in representation theory and are used, for example, in studying closures of orbits of group actions. Those closures might be singular semi-algebraic varieties. 
For the study of Schwartz functions on such varieties, I shall introduce the space of quasi-Nash varieties. I will show how Schwartz functions and tempered distributions can be defined on quasi-Nash varieties, and will discuss several important properties of those functions. If time permits, I will talk about integrating Schwartz functions over singular algebraic curves. NICOLE KITT, University of Calgary An ABV-packet for a General Linear Group with Two Representations  [PDF] It is known that not all ABV-packets are Arthur packets, and in particular, that Arthur packets for general linear groups are singletons. My research project concerns, what is believed to be, the smallest known example of an ABV-packet for a general linear group that is not a singleton, and hence is not of Arthur type. Specifically, I will be completing a calculation with C.Cunningham which shows that there is an irreducible admissible representation $\pi$ of p-adic GL(16) with the property that its ABV-packet contains exactly one other irreducible representation, $\pi'$. The main tool we are using to calculate the ABV-packet for p-adic GL(16) is the functor $\textsf{Ev}$ which is built from Deligne's vanishing cycles functor. In this talk, I will illustrate the methods used to compute this functor. In particular, we will discuss geometric techniques used to calculate perverse sheaves and their microlocal vanishing cycles on quiver representation varieties of type A. JOSHUA LANSKY, American University Explicit liftings of conjugacy classes in finite reductive groups  [PDF] Let $k$ be a field, $\tilde G$ a connected reductive $k$-group, and $\Gamma$ a finite group. Previous work with Adler defined what it means for a connected reductive $k$-group $G$ to be parascopic for $(\tilde{G},\Gamma)$. (Roughly, this is a generalization of the situation where $\Gamma$ acts on $\tilde G$, and $G$ is the connected part of the group of $\Gamma$-fixed points in $\tilde{G}$.) In this setting, there is a canonical map $\mathcal N^{\textrm st}$ of stable semisimple conjugacy classes from the dual $G^\wedge(k)$ to $\tilde{G}^\wedge(k)$. When $k$ is finite, this implies a lifting from packets of representations of $G(k)$ to those of $\tilde{G}(k)$. After reviewing this theory, we describe a method for decomposing a given instance of parascopy into simple atomic components for which $\mathcal N^{\textrm st}$ arises from an explicit $k$-morphism $G^\wedge\longrightarrow\tilde{G}^\wedge$. As a consequence, our lifting of representations is seen to be compatible with Shintani lifting in some important cases. In other cases, our lifting factors through the set of representations of an intermediate group. DANIEL LE, University of Toronto mod $p$ representations of $p$-adic $\mathrm{GL}_2$  [PDF] Congruences between automorphic forms and Galois representations have proven to be powerful tools in the Langlands program. The search for a representation-theoretic framework for these congruences naturally leads us to study mod p representations of p-adic groups. Rather little is presently known about the characteristic p case, which seems to be substantially different from other characteristics. We will highlight some recent results and questions in the area, mainly focusing on the case of $\mathrm{GL}_2$. PAUL MEZO, Carleton University Equivalent definitions of Arthur-packets for real classical groups  [PDF] In his most recent book, Arthur defines A(rthur)-packets for classical groups using techniques from harmonic analysis. 
For real groups an alternative approach to the definition of A-packets has been known since the early 90s. This approach, due to Adams-Barbasch-Vogan, relies on sheaf-theoretic techniques instead of harmonic analysis. We will report on work in progress, joint with N. Arancibia, on proving that these two different definitions of A-packets are equivalent for real classical groups. DAVID ROE, Massachusetts Institute of Technology A database of p-adic tori  [PDF] Maximal tori in reductive groups form the foundation for many constructions in $p$-adic representation theory. Many of these constructions place constraints on the tori involved, requiring that they split over unramified or tamely ramified extensions of the ground field. When the residue characteristic is small, wild tori occur even for groups of small rank. Such tori complicate standard tools used to construct representations, such as Bruhat-Tits buildings, Néron models and the Moy-Prasad filtration. In an effort to aid in the study of representations in small characteristic, I will present an online database of p-adic tori. As the database is still at an early stage, I will be soliciting feedback on what kinds of data, presentation or search features would be most useful to researchers in the audience. The minimal faithful dimension of finite p-groups: an application of the orbit method to the essential dimension  [PDF] For a finite group $G$ and a field $K$, the faithful dimension of $G$ over $K$ is defined as the smallest possible dimension of a faithful $K$-representation of $G$. By a result of Karpenko and Merkurjev, if $G$ is a $p$-group and $K$ contains a primitive $p$-th root of unity, then the faithful dimension of $G$ is equal to the essential dimension of $G$ over $K$, a notion introduced by Buhler and Reichstein. We use the orbit method to obtain qualitative and quantitative results on the faithful dimension of $G$ for a wide range of examples. This is joint work with M. Bardestani and K. M. Karai. LOREN SPICE, Texas Christian University New developments in the construction of tame, supercuspidal representations  [PDF] In 2012, Yu gave a talk at American University suggesting the possibility of a new perspective on his construction of tame, supercuspidal representations that would make it more compatible with the local Langlands correspondence. Surprisingly, this compatibility hinges on a small detail: the nature of the lifting, to a genuine representation, of the projective representation of a finite symplectic group called the Weil representation. In joint work with DeBacker, I described an appropriate modification for so-called toral supercuspidal representations. Kaletha's work on regular supercuspidal representations suggests a vast generalisation of the work with DeBacker. In this talk, I will report on joint work in progress with Fintzen and Kaletha involving how to perform the necessary modifications to the Weil representation in the setting of regular, and hopefully all tame, supercuspidal representations. WAN-YU TSAI, University of Ottawa The orbit philosophy for Spin groups  [PDF] Let $G$ be a semisimple Lie group with Lie algebra $\mathfrak g$ and maximal compact subgroup $K$. The philosophy of coadjoint orbits suggests a way to study unitary representations of $G$ by their close relations to the coadjoint $G$-orbits on $\mathfrak g^*$. In this talk, we study a special part of the orbit philosophy.
We provide a comparison between the $K$-structure of unipotent representations and regular functions on bundles on nilpotent orbits for complex and real groups of type $D$. More precisely, we provide a list of genuine unipotent representations for a Spin group; separately we compute the $K$-spectra of the regular functions on certain small nilpotent orbits, and then match them with the $K$-types of the genuine unipotent representations. This is joint work with Dan Barbasch. QING ZHANG, University of Calgary Local converse theorems for unitary groups  [PDF] Let $F$ be a p-adic field and $E/F$ be a fixed quadratic extension. Let $\textrm{U}_n(F)$ be the quasi-split unitary group of size $n$ with $n\ge 2$ associated with $E/F$. The local converse theorem asserts that an irreducible (supercuspidal) generic representation $\pi$ of $\textrm{U}_n$ is uniquely determined by the various local gamma factors $\gamma(s,\pi\times\tau,\psi)$ of $\pi$ twisted by irreducible generic representations $\tau$ of $\textrm{GL}_k(E), 1\le k\le [\frac{n}{2}]$, where $\psi$ is a fixed nontrivial additive character of $F$. In this talk, I will give a sketch of a recent proof of this theorem when $n$ is odd.
2021-01-20 08:08:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7834746837615967, "perplexity": 381.5724242471831}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703519923.26/warc/CC-MAIN-20210120054203-20210120084203-00243.warc.gz"}
https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/1183/1/
# Properties Label 1183.1 Level 1183 Weight 1 Dimension 60 Nonzero newspaces 7 Newform subspaces 9 Sturm bound 113568 Trace bound 16 ## Defining parameters Level: $$N$$ = $$1183 = 7 \cdot 13^{2}$$ Weight: $$k$$ = $$1$$ Nonzero newspaces: $$7$$ Newform subspaces: $$9$$ Sturm bound: $$113568$$ Trace bound: $$16$$ ## Dimensions The following table gives the dimensions of various subspaces of $$M_{1}(\Gamma_1(1183))$$. Total New Old Modular forms 1428 1083 345 Cusp forms 60 60 0 Eisenstein series 1368 1023 345 The following table gives the dimensions of subspaces with specified projective image type. $$D_n$$ $$A_4$$ $$S_4$$ $$A_5$$ Dimension 36 24 0 0 ## Trace form $$60q + O(q^{10})$$ $$60q - 24q^{14} - 24q^{27} + 12q^{40} - 24q^{53} + 12q^{66} - 36q^{92} + O(q^{100})$$ ## Decomposition of $$S_{1}^{\mathrm{new}}(\Gamma_1(1183))$$ We only show spaces with odd parity, since no modular forms exist when this condition is not satisfied. Within each space $$S_k^{\mathrm{new}}(N, \chi)$$ we list the newforms together with their dimension. Label $$\chi$$ Newforms Dimension $$\chi$$ degree 1183.1.b $$\chi_{1183}(1182, \cdot)$$ 1183.1.b.a 6 1 1183.1.d $$\chi_{1183}(846, \cdot)$$ 1183.1.d.a 3 1 1183.1.d.b 3 1183.1.j $$\chi_{1183}(99, \cdot)$$ None 0 2 1183.1.l $$\chi_{1183}(530, \cdot)$$ None 0 2 1183.1.m $$\chi_{1183}(698, \cdot)$$ None 0 2 1183.1.n $$\chi_{1183}(146, \cdot)$$ 1183.1.n.a 6 2 1183.1.n.b 6 1183.1.o $$\chi_{1183}(339, \cdot)$$ None 0 2 1183.1.p $$\chi_{1183}(192, \cdot)$$ None 0 2 1183.1.s $$\chi_{1183}(675, \cdot)$$ None 0 2 1183.1.t $$\chi_{1183}(699, \cdot)$$ 1183.1.t.a 12 2 1183.1.v $$\chi_{1183}(360, \cdot)$$ None 0 2 1183.1.x $$\chi_{1183}(319, \cdot)$$ 1183.1.x.a 8 4 1183.1.y $$\chi_{1183}(526, \cdot)$$ None 0 4 1183.1.z $$\chi_{1183}(268, \cdot)$$ 1183.1.z.a 8 4 1183.1.bd $$\chi_{1183}(249, \cdot)$$ 1183.1.bd.a 8 4 1183.1.bf $$\chi_{1183}(27, \cdot)$$ None 0 12 1183.1.bh $$\chi_{1183}(90, \cdot)$$ None 0 12 1183.1.bm $$\chi_{1183}(8, \cdot)$$ None 0 24 1183.1.bo $$\chi_{1183}(68, \cdot)$$ None 0 24 1183.1.bq $$\chi_{1183}(62, \cdot)$$ None 0 24 1183.1.br $$\chi_{1183}(12, \cdot)$$ None 0 24 1183.1.bu $$\chi_{1183}(10, \cdot)$$ None 0 24 1183.1.bv $$\chi_{1183}(40, \cdot)$$ None 0 24 1183.1.bw $$\chi_{1183}(48, \cdot)$$ None 0 24 1183.1.bx $$\chi_{1183}(3, \cdot)$$ None 0 24 1183.1.by $$\chi_{1183}(17, \cdot)$$ None 0 24 1183.1.ca $$\chi_{1183}(11, \cdot)$$ None 0 48 1183.1.ce $$\chi_{1183}(18, \cdot)$$ None 0 48 1183.1.cf $$\chi_{1183}(15, \cdot)$$ None 0 48 1183.1.cg $$\chi_{1183}(2, \cdot)$$ None 0 48
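As a quick sanity check of the header data, the Sturm bound above can be recomputed directly. The sketch below assumes the convention that, for weight $$k$$ and level $$N$$, the Sturm bound is $$k/12$$ times the index $$[\mathrm{SL}_2(\mathbb{Z}) : \Gamma_1(N)] = N^2 \prod_{p \mid N}(1 - 1/p^2)$$; under that assumption it reproduces the value 113568 in the table.

```python
# Recompute the Sturm bound for level N = 1183, weight k = 1, assuming
# Sturm = k/12 * [SL2(Z) : Gamma_1(N)] with
# [SL2(Z) : Gamma_1(N)] = N^2 * prod_{p | N} (1 - 1/p^2).
from sympy import primefactors, Rational

N, k = 1183, 1                     # level 1183 = 7 * 13^2, weight 1
index = Rational(N**2)
for p in primefactors(N):
    index *= 1 - Rational(1, p**2)

print(index)            # 1362816, the index of Gamma_1(1183) in SL2(Z)
print(k * index / 12)   # 113568, matching the Sturm bound in the table
```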
2021-11-28 18:26:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9132925868034363, "perplexity": 13482.516338205238}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358570.48/warc/CC-MAIN-20211128164634-20211128194634-00317.warc.gz"}
https://www.aimsciences.org/article/doi/10.3934/dcdsb.2021262
# American Institute of Mathematical Sciences doi: 10.3934/dcdsb.2021262 Online First Online First articles are published articles within a journal that have not yet been assigned to a formal issue. This means they do not yet have a volume number, issue number, or page numbers assigned to them; however, they can still be found and cited using their DOI (Digital Object Identifier). Online First publication benefits the research community by making new scientific discoveries known as quickly as possible. Readers can access Online First articles via the “Online First” tab for the selected journal. ## Threshold of a stochastic SIQS epidemic model with isolation 1 School of Natural Sciences Education, Vinh University, 182 Le Duan, Vinh, Nghe An, Vietnam 2 HUS High School for Gifted Students, Hanoi National University, 182 Luong The Vinh, Thanh Xuan, Hanoi, Vietnam 3 Department of Mathematics, Mechanics and Informatics, Hanoi National University, 334 Nguyen Trai, Thanh Xuan, Hanoi, Vietnam * Corresponding author: Nguyen Thanh Dieu Received February 2021 Revised July 2021 Early access October 2021 The aim of this paper is to give sufficient conditions, very close to the necessary ones, to classify the stochastic permanence of an SIQS epidemic model with isolation via a threshold value $\widehat R$. Precisely, we show that if $\widehat R<1$ then the stochastic SIQS system tends to the disease-free case, in the sense that the densities of the infected class $I_z(t)$ and the quarantined class $Q_z(t)$ go extinct to $0$ at an exponential rate, while the density of the susceptible class $S_z(t)$ converges almost surely, at an exponential rate, to the solution of the boundary equation. In the case $\widehat R>1$, the model is permanent. We show the existence of a unique invariant probability measure and prove convergence, in the total variation norm, of the transition probabilities to this invariant measure. Some numerical examples are also provided to illustrate our findings. Citation: Nguyen Thanh Dieu, Vu Hai Sam, Nguyen Huu Du. Threshold of a stochastic SIQS epidemic model with isolation. Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2021262 Figure: Estimated paths of $\frac{\ln I_z(t)}{t}$ (red line), $\frac{\ln Q_z(t)}{t}$ (pink line) and $\frac{\ln|S_z(t)- \widetilde S_u^0(t)|}{t}$ (blue line) in Example 3.1 Figure: Trajectories of $(S_z(t), I_z(t), Q_z(t))$ in Example 3.2 Figure: Marginal one-dimensional densities of $(S_z(t), I_z(t), Q_z(t))$ Figure: Marginal two-dimensional densities of $(S_z(t), I_z(t), Q_z(t))$
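To make the threshold behaviour described in the abstract concrete, here is a small Euler–Maruyama simulation sketch. The paper's exact stochastic system is not reproduced in this excerpt, so the drift and noise below are a generic SIQS-with-isolation formulation and every parameter value is hypothetical; the snippet only illustrates the kind of extinction/permanence dynamics being classified.

```python
# Illustrative Euler-Maruyama simulation of a generic stochastic SIQS
# model with isolation. NOTE: these are NOT the paper's equations; the
# drift is a standard SIQS formulation and all parameters are made up.
import numpy as np

rng = np.random.default_rng(0)
Lam, beta, mu   = 1.0, 0.4, 0.1    # recruitment, transmission, natural death
gam, delta, eps = 0.2, 0.3, 0.15   # recovery, isolation rate, release from Q
sigma = 0.1                        # noise intensity on the incidence term
dt, T = 0.01, 200.0

S, I, Q = 0.9, 0.1, 0.0
for _ in range(int(T / dt)):
    dB  = rng.normal(0.0, np.sqrt(dt))
    inc = beta * S * I * dt + sigma * S * I * dB   # stochastic incidence
    S  += Lam * dt - inc - mu * S * dt + gam * I * dt + eps * Q * dt
    I  += inc - (mu + gam + delta) * I * dt
    Q  += delta * I * dt - (mu + eps) * Q * dt

print(f"S = {S:.3f}, I = {I:.3f}, Q = {Q:.3f} at t = {T}")
```

Sweeping `beta` or `sigma` up and down moves this toy system between runs where `I` collapses towards zero and runs that settle into persistent fluctuations, mirroring the two regimes separated by the paper's threshold.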
2021-11-27 23:05:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48978540301322937, "perplexity": 7437.458478013279}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358323.91/warc/CC-MAIN-20211127223710-20211128013710-00200.warc.gz"}
https://physics.stackexchange.com/questions/122767/is-gravitational-potential-energy-proportional-or-inversely-proportional-to-dist
# Is gravitational potential energy proportional or inversely proportional to distance? We know that if an object has been lifted a distance $h$ from the ground then it has a potential energy change: $$\Delta U = mgh$$ so $h$ is proportional to $\Delta U$. However, we also have the gravitational potential energy law: $$U= -\frac{G M m}{r}$$ where the distance is inversely proportional to the potential energy. What did I miss? Is the distance of the object proportional or inversely proportional to the potential energy? The formula: $$\Delta U = mgh$$ is an approximation that applies when the distance $h$ is small enough that changes in $g$ can be ignored. As you say, the expression for $U$ is: $$U= -\frac{G M m}{r}$$ So the change when moving a distance $h$ upwards is: $$\Delta U = \frac{GMm}{r} - \frac{GMm}{r + h}$$ We rearrange this to get: \begin{align} \Delta U &= GMm \left( \frac{1}{r} - \frac{1}{r + h} \right) \\ &= GMm \frac{h}{r^2 + rh} \\ &= \frac{GM}{r^2} m \frac{h}{1 + h/r} \\ &\approx \frac{GM}{r^2} m h \end{align} where the last approximation is because $h \ll r$, so $1 + h/r \approx 1$. And since $GM/r^2$ is just the gravitational acceleration $g$ at a distance $r$, we get: $$\Delta U = g m h$$ Your first potential energy arises from the approximation that the gravitational field is approximately constant for "small heights", i.e. $$\frac{GMm}{r^2} \approx mg$$ The full law leads to your second formula, and the approximation leads to the first. For heights small compared to the Earth's radius, it is justified, as we can see by Taylor expanding $\frac{1}{r^2}$ around the Earth's radius $R$: $$\frac{1}{r^2} = \frac{1}{R^2} - \frac{2(r - R)}{R^3} + \mathcal{O}((r-R)^2)$$ Here, $h = r - R$, so $$\frac{1}{r^2} = \frac{1}{R^2}(1 - \frac{2h}{R}) + \mathcal{O}(h^2)$$ The term with $h$ is certainly negligible for $h \ll R$.
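A quick numeric check makes the size of the approximation error concrete. With $r = R$, the ratio $mgh / \Delta U_{\text{exact}}$ works out to exactly $1 + h/R$, which the sketch below (with rough Earth values; illustrative only) confirms:

```python
# Compare the approximation Delta U ~= m*g*h with the exact
# Delta U = GMm*h / (R*(R+h)), starting from the Earth's surface.
R  = 6.371e6          # Earth's radius in metres
g  = 9.81             # surface gravity, m/s^2
GM = g * R**2         # chosen so that GM/R^2 = g
m  = 1.0              # test mass in kg

for h in (1.0, 1e3, 1e5, 1e6):              # 1 m up to 1000 km
    exact  = GM * m * h / (R * (R + h))     # GMm/R - GMm/(R+h)
    approx = m * g * h
    print(f"h = {h:>9.0f} m   mgh/exact = {approx / exact:.6f}")
```

For a 1 km lift the error is about 0.016 %; at 1000 km it has grown to roughly 16 %, which is why the $mgh$ form is reserved for heights small compared to $R$.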
2019-08-21 22:25:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999716281890869, "perplexity": 390.88607372794553}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316549.78/warc/CC-MAIN-20190821220456-20190822002456-00043.warc.gz"}
https://math.stackexchange.com/questions/1356136/delta-2-frac-lambdax4-w2-2-omega-cap-w1-2-0-o
# $\Delta^2(\cdot)-\frac{\lambda}{|x|^4}(\cdot): W^{2,2}(\Omega) \cap W^{1,2}_0(\Omega) \to W_0^{-2,2}(\Omega)$ is coercive. I am reading an article in which the author claims that $$L(\cdot)=\Delta^2(\cdot)-\frac{\lambda}{|x|^4}(\cdot): W^{2,2}(\Omega) \cap W^{1,2}_0(\Omega) \to W_0^{-2,2}(\Omega)$$ is coercive if $0\leq \lambda<\Lambda_N=\frac{N^2(N-4)^2}{16}$, because of the following inequality: for all $u \in W^{2,2}(\Omega) \cap W^{1,2}_0(\Omega)$ and $N>4$, $$\Lambda_N \int_{\Omega}\frac{u^2}{|x|^4}\,\mathrm{d}x \leq \int_{\Omega} |\Delta u|^2 \,\mathrm{d}x$$ where $\Lambda_N=\frac{N^2(N-4)^2}{16}$ is the optimal constant. My question is: how does the inequality with the optimal constant imply that the operator is coercive for $0\leq \lambda<\Lambda_N=\frac{N^2(N-4)^2}{16}$? My try: the inner product on the Hilbert space $\mathbb{H}=W^{2,2}(\Omega) \cap W^{1,2}_0(\Omega)$ is $$\langle u,v\rangle_{\mathbb{H}}=\int_{\Omega} \Delta u \,\Delta v\, dx$$ I must show that $$\langle Lu,u \rangle_{\mathbb H} \geq c \|u\|_{\mathbb H}^2$$ for a positive constant $c$. With simple calculations, using the above inequality, I have shown that $$\langle Lu,u \rangle_{L^2} \geq c \|\Delta u\|_{L^2}^2=c\|u\|_{\mathbb H}^2$$ for a positive $c$, but I must show that $$\langle Lu,u \rangle_{\mathbb H} \geq c\|u\|_{\mathbb H}^2.$$ It appears that you are using the notation $\langle \cdot, \cdot \rangle_{\mathbb{H}}$ for the inner product on your Hilbert space $W^{2,2}(\Omega)\cap W^{1,2}_0(\Omega)$ and $\langle \cdot,\cdot\rangle_{L^2}$ for the dual pairing with the space $W^{-2,2}_0(\Omega)$, which is the co-domain of the operator $L$. The term $\langle Lu,u\rangle_{\mathbb{H}}$ then does not make sense unless $u$ has more differentiability, and even then will not be coercive. The appropriate bilinear form for coercivity, i.e. for existence of solutions via the Lax-Milgram theorem, is $\langle Lu,u\rangle_{L^2}$, which you have already shown is coercive.
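For completeness, the coercivity estimate in the $L^2$ pairing can be written in one line; this is just the asker's computation made explicit using the inequality with the optimal constant: $$\langle Lu,u \rangle_{L^2} = \int_{\Omega} |\Delta u|^2 \,\mathrm{d}x - \lambda \int_{\Omega} \frac{u^2}{|x|^4}\,\mathrm{d}x \;\geq\; \left(1-\frac{\lambda}{\Lambda_N}\right) \int_{\Omega} |\Delta u|^2 \,\mathrm{d}x = \left(1-\frac{\lambda}{\Lambda_N}\right) \|u\|_{\mathbb H}^2,$$ so one may take $c = 1-\lambda/\Lambda_N$, which is positive exactly when $0\leq\lambda<\Lambda_N$.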
2019-11-12 18:16:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9417932033538818, "perplexity": 136.61636139623081}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665726.39/warc/CC-MAIN-20191112175604-20191112203604-00454.warc.gz"}
http://www.thermospokenhere.com/wp/01_tsh/A3510___scissor_jack/scissor_jack.html
THERMO Spoken Here! ~ J. Pohl ©  ( A3510+9/15) # Scissor-Jack This sketch is a physical scenario. It depicts what will happen. Looking at it closely, we anticipate the lift-rate of the scissor-jack will vary during any lift with constant input rotation. A scissor-type car jack is shown. Member BC is threaded such that when it is rotated by the hand-held driver, its length, the distance |BC|, shortens. By the mechanism this causes the top of the jack (with elevation h(t)) to move upward. The lead of the threaded bar BC is 0.1 cm/rev, meaning one rotation of the screw shortens the distance |BC| by 0.1 cm. Suppose, in the position shown (dimensions given below), the driver rotates member BC at α = 200 revolutions per minute. Calculate the upward velocity of the top of the jack, point D. The first step is to transform the scenario into a system sketch, select an origin with X and Z coordinate directions. Also set up an I - K vector basis. Add appropriate notation. A scenario becomes a system sketch when coordinates, dimensions, a vector basis and notation are added. For this jack, there are two independent vector paths from the ground (origin) to its top, D. Write both paths. Then, since both paths go to the same place, equate them. By the vector diagram, one path from the origin to D is: OD = OA + AD (1) We see this is true, visually. A second path from the origin to "D" is: OD = OA + AB + BC + CD (2) But the first path equals the second, so we have: OA + AD = OA + AB + BC + CD (3) The equality of paths shows the vector OA to be redundant, meaning we could have started at A: AD = AB + BC + CD (4) This vector equation is implicit, and simple in form. By visually tracing the paths, we see that it is correct. Next, using the numbers, unit directions and geometry, we represent as much of the implicit equation explicitly as we can. Let's walk through the equation, term by term (left to right). About the first vector, AD: obviously the length of that vector, |AD|, will change as the jack operates. The purpose of the device is to increase the height (call it h(t)) of what is being jacked upward. So write: h(t) = |AD| (5) Thus AD = h(t) K (6) The next vector, AB (right of equality), has a constant length (15 cm) but its direction changes in time. We use trigonometry with the angle θ(t) to specify its direction: AB = 15 cm [cosθ(t) I + sinθ(t) K] (7) The threaded rod extends from B to C. Its vector direction is −I and its length (at this instant) is l(t), so: BC = l(t)[−I] (8) Leave this as it is. Vector CD is the same length as, and parallel to, vector AB: CD = 15 cm [cosθ(t) I + sinθ(t) K] (9) Finally, enter these explicitly written vectors into the initial implicit equation: h(t) K = 15 cm [cosθ(t) I + sinθ(t) K] − l(t) I + 15 cm [cosθ(t) I + sinθ(t) K] (10) By vector algebra, this condenses to: h(t) K = [30 cm cosθ(t) − l(t)] I + [30 cm sinθ(t)] K (11) The steps are always the same. We used vectors to keep things together; now we separate the motion into its components by multiplying the equation by I, then K. From these components, respectively, we obtain: 0 = 30 cm cosθ(t) − l(t), (12) and h(t) = 30 cm sinθ(t). (13) Another piece of information relates to the change of the length |BC|, which we have labeled l(t). The equation for l(t) is: l(t) = l₀ − (0.1 cm/rev) α (rev/min) t (14) The three equations above are independent. Since we need velocities, we differentiate those equations with respect to time.
One could form the difference quotient of the equations and then take the limit; better to learn the derivatives of the sine and cosine functions and use them. Also included (above right) is the fact that we know the values of the cosine and sine for the position of interest. Solution of the three equations yields: dh/dt = 15 cm/min (17) Every step is included. Vectors put the spatial aspects in precise form. You can solve similar problems. All you need is patience and a large sheet of paper.
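The figure carrying the jack's dimensions did not survive here, so for a concrete check the sketch below assumes the classic 3-4-5 position, cosθ(t) = 3/5 and sinθ(t) = 4/5; that hypothetical position reproduces the 15 cm/min quoted in Eq. (17).

```python
# Numerical check of the scissor-jack lift rate. The position of the
# jack is ASSUMED to be the 3-4-5 one (cos = 3/5, sin = 4/5), since the
# original figure with the dimensions is missing.
lead  = 0.1    # cm shortened per revolution of the screw
alpha = 200.0  # screw speed, rev/min

cos_t, sin_t = 3 / 5, 4 / 5   # hypothetical position of interest

# From Eqs. (12)-(13): l(t) = 30 cos(theta), h(t) = 30 sin(theta) (cm).
# Eq. (14) gives dl/dt = -lead * alpha. Differentiating and eliminating
# d(theta)/dt yields dh/dt = -(cos/sin) * dl/dt.
dl_dt = -lead * alpha
dh_dt = -(cos_t / sin_t) * dl_dt

print(f"dh/dt = {dh_dt:.1f} cm/min")   # 15.0 cm/min, matching Eq. (17)
```

Note how the rate depends on the current angle through cosθ/sinθ: as the jack rises and θ grows, the same screw speed lifts the load more slowly, which is exactly the varying lift-rate anticipated at the top of the page.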
2019-03-25 19:06:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.687856137752533, "perplexity": 2131.2790104504847}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912204086.87/warc/CC-MAIN-20190325174034-20190325200034-00148.warc.gz"}
https://www.groundai.com/project/on-monogamy-of-four-qubit-entanglement/
On monogamy of four-qubit entanglement # On monogamy of four-qubit entanglement S. Shelly Sharma Departamento de Física, Universidade Estadual de Londrina, Londrina 86051-990, PR Brazil    N. K. Sharma Departamento de Matematica, Universidade Estadual de Londrina, Londrina 86051-990, PR Brazil ###### Abstract Our main result is a monogamy inequality satisfied by the entanglement of a focus qubit (one-tangle) in a four-qubit pure state and entanglement of subsystems. Analytical relations between three-tangles of three-qubit marginal states, two-tangles of two-qubit marginal states and unitary invariants of four-qubit pure state are used to obtain the inequality. The contribution of three-tangle to one-tangle is found to be half of that suggested by a simple extension of entanglement monogamy relation for three qubits. On the other hand, an additional contribution due to a two-qubit invariant which is a function of three-way correlations is found. We also show that four-qubit monogamy inequality conjecture of ref. [PRL 113, 110501 (2014)], in which three-tangles are raised to the power , does not estimate the residual correlations, correctly, for certain subsets of four-qubit states. A lower bound on residual four-qubit correlations is obtained. ## I Introduction Entanglement is a necessary ingredient of any quantum computation and a physical resource for quantum cryptography and quantum communication niel11 (). It has also found applications in other areas such as quantum field theory cala12 (), statistical physics sahl15 (), and quantum biology lamb13 (). Multipartite entanglement that comes into play in quantum systems with more than two subsystems, is a resource for multiuser quantum information tasks. Since the mathematical structure of multipartite states is much more complex than that of bipartite states, the characterization of multipartite entanglement is a far more challenging task horo09 (). Monogamy is a unique feature of quantum entanglement, which determines how entanglement is distributed amongst the subsystems. Three-qubit entanglement is known to satisfy a quantitative constraint, known as CKW monogamy inequality coff00 (). In recent articles regu14 (); regu15 (); regu16 (), it has been shown that the most natural extension of CKW inequality to four-qubit entanglement is violated by some of the four-qubit states and different ways to extend the monogamy inequality to four-qubits have been conjectured. For a subclass of four-qubit generic states, an extension of strong monogamy inequality to negativity and squared negativity karm16 () is satisfied, however, there exist four-qubit states for which negativity and squared negativity are not strongly monogamous. Three-qubit states show two distinct types of entanglement. As we go to four qubits, additional degrees of freedom make it possible for new entanglement types to emerge. It is signalled by the fact that corresponding to the three-qubit invariant that detects genuine three-way entanglement of a three-qubit pure state, a four-qubit pure state has five three-qubit invariants for each set of three qubits shar16 (). An -qubit invariant is understood to be a function of state coefficients which remains invariant under the action of a local unitary transformation on the state of any one of the qubits. A valid discussion of entanglement monogamy for four qubits, therefore, must include contributions from invariants that detect new entanglement types. 
This article is an attempt to identify, analytically, the contributions of two-tangles (pairwise entanglement), three-tangles (genuine three-way entanglement) and four-tangles to the entanglement of a focus qubit with the three remaining qubits (one-tangle) in a four-qubit state. To do this, we express one-tangle in terms of two-qubit invariants. The monogamy inequality constraint on four-qubit entanglement is obtained by comparing the one-tangle with upper bounds on two-tangles and three-tangles shar216 () defined on two- and three-qubit marginal states. The contribution of three-tangles to one-tangle is found to be half of what is expected from a direct generalization of the CKW inequality to four qubits. The difference arises due to new entanglement modes that are available to four qubits. It is verified that the "residual entanglement", obtained after subtracting the contributions of two-tangles and three-tangles from one-tangle, is greater than or equal to the genuine four-tangle. The genuine four-tangle shar14 (); shar16 () is a degree-eight function of the state coefficients of the pure state. Besides that, the "residual entanglement" also contains contributions from the square of the degree-two four-tangle shar10 (); luqu03 () and from degree-four invariants which quantify the entanglement of a given pair of qubits with its complement in a four-qubit pure state shar10 (). ## Ii One-tangle of a focus qubit in a four-qubit State We start by expressing the one-tangle of a focus qubit in a four-qubit state in terms of two-qubit invariant functions of state coefficients. Entanglement of qubit 1 with qubit 2 in a two-qubit pure state |Ψ12⟩=∑i1,i2ai1i2|i1i2⟩;(im=0,1) (1) is quantified by the two-tangle defined as τ1|2(|Ψ12⟩)=2∣∣D00∣∣, (2) where $D^{00}=a_{00}a_{11}-a_{01}a_{10}$ is a two-qubit invariant. Here the $a_{i_1i_2}$ are the state coefficients. On a four-qubit pure state, however, for each choice of a pair of qubits one identifies nine two-qubit invariants. The three-tangle coff00 () of a three-qubit pure state is defined in terms of the modulus of a three-qubit invariant. On the most general four-qubit state, on the other hand, we have five three-qubit invariants corresponding to a given set of three qubits. The four-qubit invariant that quantifies the sum of three-way and four-way correlations of a three-qubit partition in a pure state is known to be a degree-eight invariant shar16 (), which is a function of three-qubit invariants. It is natural to expect that the monogamy inequality for four qubits takes into account the entanglement modes available exclusively to a four-qubit system. To understand how various two-tangles and three-tangles add up to generate the total entanglement of a focus qubit in a pure four-qubit state, we follow the steps listed below: (1) Write down the one-tangle of the focus qubit as a sum of two-qubit invariants. (2) Express two-tangles, three-tangles and four-tangles, or the upper bounds on the tangles, in terms of two-qubit invariants. (3) Rewrite the one-tangle in terms of tangles defined on two- and three-qubit reduced states and "residual four-qubit correlations". (4) Compare the "residual four-qubit correlations" with the lower bound on four-qubit correlations written in terms of four-qubit invariants. To facilitate the identification of two-qubit and three-qubit invariants, we use the formalism of determinants of two-by-two matrices of state coefficients, referred to as negativity fonts. For more on the definition and physical meaning of determinants of negativity fonts, please refer to section (VI) of ref. shar16 ().
For the purpose of this article, we write down and use the determinants of negativity fonts of a four-qubit state when qubit is the focus qubit. On a four-qubit pure state, written as |Ψ1234⟩=∑i1,i2,i3i4ai1i2i3i4|i1i2i3i4⟩,(im=0,1), (3) where state coefficients are complex numbers and refers to the basis state of qubit , , we identify the determinants of two-way negativity fonts to be , , and . Besides that we also have (three-way), (three-way), (three-way), and (four-way), as the determinants of negativity fonts. One-tangle given by , where Tr, quantifies the entanglement of qubit with , and . It is four times the square of negativity of partial transpose of four-qubit pure state with respect to qubit pere96 (). Negativity, in general, does not satisfy the monogamy relation. However, it has been shown by He and Vidal he15 () that negativity can satisfy monogamy relation in the setting provided by disentangling theorem. It is easily verified that τ1|234(|Ψ1234⟩) = (4) One-tangle depends on way, way and way correlations of focus qubit with the rest of the system. ## Iii Definitions of two-tangles and three-tangle This section contains the definitions of two-tangles and three-tangles for pure and mixed three-qubit states. Consider a three-qubit pure state |Ψ123⟩=∑i1,i2,i3ai1i2i3|i1i2i3⟩,im=0,1, (5) Using the notation from ref. shar16 (), we define () (the determinant of a two-way negativity font) and , () (the determinant of a three-way negativity font). Entanglement of qubit with the rest of the system is quantified by one-tangle , where Tr. One can verify that τ1|23(|Ψ123⟩)=41∑i3=0∣∣∣D00(A3)i3∣∣∣2+41∑i3=0∣∣D00i3∣∣2+41∑i2=0∣∣∣D00(A2)i2∣∣∣2. (6) For qubit pair in , we identify three two-qubit invariants that is D00(A3)0,(D000+D001)2,D00(A3)1 (7) while for the pair two-qubit invariants are D00(A2)0,(D000−D001)2,D00(A2)1. (8) These two-qubit invariants transform under a unitary on the third qubit in a way analogous to the complex functions , and of Appendix A. Then the invariants corresponding to , and are three-qubit invariants for the given choice of qubit pair. In Table I, we enlist the correspondence of two-qubit invariants for qubit pairs and , with complex numbers , and of Appendix A and set the notation for invariants corresponding to , and One-tangle in terms of three-qubit invariants listed in column five of Table I reads as τ1|23(|Ψ123⟩)=4NA3+4NA2. (9) Three tangle coff00 () of pure state is equal to the modulus of the polynomial invariant of degree four that is τ1|2|3(|Ψ123⟩)=4|I3,4(|Ψ123⟩)|, where I3,4(|Ψ123⟩) = (D000+D001)2−4D00(A3)0D00(A3)1 (10) = (D000−D001)2−4D00(A2)0D00(A2)1. The entanglement measure is extended to a mixed state of three qubits via convex roof extension that is [τ1|2|3(ρ123)]12=min{pi,∣∣ϕ(i)123⟩}∑ipi[τ1|2|3(∣∣ϕ(i)123⟩)]12, (11) where minimization is taken over all complex decompositions of . Here is the probability of finding the normalized state in the mixed state Two-tangle of the state is constructed through convex roof extension as (12) Two-tangle , where is the concurrence hill97 (); woot98 (). One can verify that the invariants , , corresponding two-tangles, and three-tangle saturate the inequalities corresponding to Eq. (66) that is 4NA3=[τ1|2(ρ12)]2+12τ1|2|3(|Ψ123⟩), (13) and 4NA2=[τ1|3(ρ13)]2+12τ1|2|3(|Ψ123⟩). (14) Since , we obtain (15) which is the well known CKW inequality. From Eqs. 
(13) and (14), the distribution of entanglement in a three-qubit state and its two-qubit marginals satisfies the following relation: τ1|23(|Ψ123⟩)=[τ1|2(ρ12)]2+[τ1|3(ρ13)]2+τ1|2|3(|Ψ123⟩). (16) Moduli of two-qubit invariants, which depend only on the determinants of three-way negativity fonts, are used to define new two-tangles on the state via τ(new)1|p(ρ123)=min{pi,ϕ(i)123}∑ipi(2∣∣T1p(∣∣ϕ(i)123⟩)∣∣), (17) where T12(|Ψ123⟩)=D000(|Ψ123⟩)+D001(|Ψ123⟩), (18) and T13(|Ψ123⟩)=D000(|Ψ123⟩)−D001(|Ψ123⟩). (19) ### iii.1 What does τ(new)1|p(ρ123) measure? To understand the correlations represented by , we examine a generic three-qubit state in its canonical form. A state is said to be in the canonical form when it is expressed as a superposition of minimal number of local basis product states (LBPS) acin01 (). The state coefficients of this form carry all the information about the non-local properties of the state, and do so minimally. Starting from a generic state in the basis (Eq. (5)), local unitary transformations allow us to write it in a form with the minimal number of LBPS. As a first step towards writing the state in canonical form with respect to qubit, we chose a unitary that results in a state on which one of the two-way two-qubit invariants is zero that is or . For example a unitary with , acting on qubit gives a state U , such that . After eliminating , the state re  ads as U3|Ψ123⟩=b000(|0⟩1+b110b010|1⟩1)(|0⟩2+b010b000(|1⟩2))|0⟩3+∑i1,i2bi1i2i3|i1i21⟩. (20) It is straight forward to write down the local unitaries and that lead to the canonical form, |Ψ123⟩c=c000|000⟩+c001|001⟩+c101|101⟩+c011|011⟩+c111|111⟩. (21) Notice that on canonical state, . Next we determine the range of values that takes on a generic state . Combining the definition of (from Table I) for the state , with the result of Eq. (13), that is 4NA3 = 4∣∣D00(A3)0∣∣2+2∣∣D000+D001∣∣2+4∣∣D00(A3)1∣∣2 (22) = [τ1|2(ρ12)]2+12τ1|2|3(|Ψ123⟩), we obtain 4|T12(|Ψ123⟩)|2=τ1|2|3(|Ψ123⟩)−Dc , (23) where Dc=8∣∣D00(A3)0∣∣2+8∣∣D00(A3)1∣∣2−2[τ1|2(ρ12)]2. Since (), the value of satisfies τ1|2|3(|Ψ123⟩)≥4|T12(|Ψ123⟩)|2≥0. (24) In general, the difference measures the distance of a given three-qubit state from its canonical form with respect to qubit. On a pure state of three qubits, . The state on which , is obtained by a unitary transformation such that (25) (26) Next, consider the three-qubit mixed state , where is an un-normalized state. Let the set of two-qubit invariants for the pair in the state be (27) New two-qubit invariant (Eq. (17)) on is given by τ(new)1|2(ρ123)=2min{∣∣Φ(i)123⟩}[∣∣T12(∣∣Φ(0)123⟩)∣∣+∣∣T12(∣∣Φ(1)123⟩)∣∣]. (28) where from Eq. (23), (29) with defined as D(i)c=8∣∣∣(D00(A3)0)(i)∣∣∣2+8∣∣∣(D00(A3)1)(i)∣∣∣2−2[τ1|2(∣∣Φ(i)123⟩)]2. (30) If and are the local unitaries on the third qubit such that and , then τ(new)1|2(ρ123)=2min{∣∣T12(∣∣U3Φ(1)123⟩)∣∣,∣∣T12(∣∣V3Φ(0)123⟩)∣∣}. (31) Obviously, the value of satisfies either the condition τ1|2|3(∣∣Φ(0)123⟩)≥[τ(new)1|2(|ρ123⟩)]2≥0, (32) or the constraint τ1|2|3(∣∣Φ(1)123⟩)≥[τ(new)1|2(|ρ123⟩)]2≥0. (33) ## Iv Tangles and Three-qubit invariants of a four-qubit state In this section, we identify relevant combinations of two-qubit invariants that remain invariant under a local unitary on the third qubit. Three-qubit invariants that we look for are the ones related to tangles of three-qubit reduced states obtained from four-qubit pure state by tracing out the degrees of freedom of the fourth qubit. 
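Before moving on, the tangle definitions above are easy to sanity-check numerically. For a pure state, the one-tangle of the focus qubit equals $4\det\rho_1$, with $\rho_1$ the focus qubit's reduced density matrix (equivalently, four times the squared negativity mentioned in Section II); the sketch below is illustrative only, not the authors' code.

```python
# Numerical check of the one-tangle: for a pure n-qubit state,
# tau_{1|rest} = 4 * det(rho_1), where rho_1 is the reduced density
# matrix of the focus qubit (qubit 1).
import numpy as np

def one_tangle(psi):
    m = psi.reshape(2, -1)        # split focus qubit from the rest
    rho1 = m @ m.conj().T         # partial trace over the rest
    return 4 * np.linalg.det(rho1).real

ghz4 = np.zeros(16); ghz4[0] = ghz4[15] = 1 / np.sqrt(2)  # (|0000>+|1111>)/sqrt(2)
w3   = np.zeros(8);  w3[[1, 2, 4]] = 1 / np.sqrt(3)       # (|001>+|010>+|100>)/sqrt(3)

print(one_tangle(ghz4))  # 1.0: the focus qubit is maximally entangled
print(one_tangle(w3))    # 0.888... = 8/9, saturated by the two two-tangles
                         # (4/9 each) with vanishing three-tangle, as in CKW
```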
For any given pair of qubits in a general four-qubit state, there are nine two-qubit invariants. Of the six degree-four three-qubit invariants constructed from the set of nine two-qubit invariants, one is defined only on the pure state. Five remaining invariants are functions of three-tangles and two-tangles. In Table II, we identify sets of two-qubit invariants of a four-qubit state which transform under a unitary, on the third qubit in the same way as the functions , and of Appendix A. Three-qubit invariants listed in the last three columns depend on two-qubit invariants of columns two to four in the same way as , and depend on , and , for example three-qubit invariants in the third row of Table II read as (34) and (35) and satisfy the inequality 4N(i)A4≥[τ1|2(∣∣Φ(i)123⟩)]2+12∣∣4I3,4(∣∣Φ(i)123⟩)∣∣. (36) Here is the un-normalized state defined through . Upper bound on two-tangle calculated by using the method of ref. shar16 () shows that ∑i=0,1[τ1|2(∣∣Φ(i)123⟩)]2≥[τup1|2(ρ12)]2≥[τ1|2(ρ12)]2. (37) Similarly the upper bounds on for the nine families of four-qubit states, calculated in ref. shar216 () satisfy the condition (38) Combining the conditions of Eqs. (37) and (38), with inequality of Eq. (36), the sum of two-tangle and three-tangle satisfies the inequality 4∑i=0,1N(i)A4≥[τ1|2(ρ12)]2+12τ1|2|3(ρ123). (39) On , new two-qubit invariant (Eq. (17)) is defined as τ(new)1|2(ρ124)=2min{∣∣Φ(i)124⟩}∑i∣∣T12(∣∣Φ(i)124⟩)∣∣. where . New three-qubit tangle on a pure state is defined as , where (I3)(new)A4(|Ψ1234⟩) = (D0000+D0010+D0001+D0011)2 −4(D000(A3)0+D001(A3)0)(D000(A3)1+D001(A3)1). The invariants , and satisfy the inequality (analogous to Eq. (66)), 4MA3−12τ(new)1|2|3(|Ψ1234⟩)≥[τ(new)1|2(ρ124)]2. (40) Using a similar argument, three-qubit invariants listed in lines 5 and 6 of Table II satisfy the inequalities 4∑iN(i)A2≥[τ1|3(ρ13)]2+12τ1|3|4(ρ134), (41) and 4MA4−12τ(new)1|3|4(|Ψ1234⟩)≥[τ(new)1|3(ρ123)]2, (42) where three-tangle defined on pure four-qubit state reads as , and τ(new)1|3(ρ123)=2min{∣∣Φ(i)123⟩}∑i∣∣T13(∣∣Φ(i)123⟩)∣∣. Using invariants of local unitaries on qubits and , and definitions given in lines 7 and 8 of Table II, we obtain the inequalities 4∑iN(i)A3≥[τ1|4(ρ14)]2+12τ1|2|4(ρ124) (43) and 4MA2−12τ(new)1|2|4(Ψ1234)≥[τ(new)1|4(ρ134)]2, (44) where new three-tangle reads as , and The relations between two-tangles, three-tangles and three-qubit invariants listed in column (5) in Table II (Eqs. (39-44)) are important to obtain the monogamy inequality satisfied by one-tangle. ## V Monogamy of four-qubit entanglement To obtain the relation between tangles of reduced states and one-tangle of the focus qubit, firstly, we identify the three-qubit invariant combinations of two-qubit invariants in Eq. (4). It is found that a four-qubit invariant of degree two, which is defined only on the pure state, is also needed. Genuine four-tangle (Eq. (74) appendix B), defined in refs. shar14 (); shar16 () is a degree-eight function of state coefficients. However, the degree-two four-qubit invariant which is equal to invariant H of refs. shar10 (); luqu03 (), is known to have the form, I4,2(|Ψ1234⟩)=D0000−D0010−D0001+D0011. (45) Four-tangle defined as , is non zero on a GHZ state and vanishes on W-like states of four qubits. However, since fails to vanish on product of entangled states of two qubits, it is not a measure of genuine four-way entanglement. By direct substitution, one-tangle of Eq. 
(4) can be rewritten in terms of three-qubit invariants listed in column five of Table II and square of four-qubit invariant that is τ1|234=44∑q=21∑i=0N(i)Aq+24∑q=2MAq+14[τ(0)1|2|3|4(|Ψ1234⟩)]<
2020-12-03 00:13:05
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9066554307937622, "perplexity": 1909.711671889942}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141717601.66/warc/CC-MAIN-20201203000447-20201203030447-00082.warc.gz"}
http://new-contents.com/Virginia/forward-error-correction-matrix.html
# forward error correction matrix This code can correct up to 2 byte errors per 32-byte block. Access to Confidential Objects Access control to the object being transmitted is typically provided by means of encryption. Note: "fail" is not returned unless t>(d−1)/2. The raw data can be found in the doc/data/fec-ber/ subdirectory. This signature enables a receiver to check the object integrity, once the object has been fully decoded. Content Corruption Protection against corruptions (e.g., after sending forged packets) is achieved by means of a content integrity verification/sender authentication scheme. Since these are not usually the original data blocks, an array of indices (ranging from 0 to encoded_blocks-1) must be supplied as the second arrayref. A matrix H representing a linear function ϕ : F q n → F q n − k whose kernel is C is This method destroys the block array as set up by set_encode_blocks. $fec->shuffle ([array_of_blocks], [array_of_indices]) The same as set_decode_blocks, with the exception that the blocks are not actually set for decoding. Otherwise, the m field contains a valid value as explained in Section 4.2.3. This matrix generates a MDS code. In this case, each symbol can be represented as an m-bit value. Reed and Gustave Solomon Classification Hierarchy Linear block code Polynomial code Cyclic code BCH code Reed–Solomon code Block length n Message length k Distance n − k + 1 Alphabet size In this document, m belongs to {2..16}. The vectors in C are called codewords. There MUST be exactly one FEC Payload ID per source or repair packet. The multiplication by the inverse of a square Vandermonde matrix is known as the interpolation problem and its complexity is O(k * (log(k))^2). If your data is not of the required size (i.e. Soft Decoding liquid supports soft decoding of most error-correcting schemes (with the exception of the Golay, SEC-DED, and Reed-Solomon codes). Bar code Almost all two-dimensional bar codes such as PDF-417, MaxiCode, Datamatrix, QR Code, and Aztec Code use Reed–Solomon error correction to allow correct reading even if a portion of the a filehandle of a file of size blocksize exactly. The location of the erased symbols in the sequence of symbols MUST be known. Fix the errors Finally, e(x) is generated from ik and eik and then is subtracted from r(x) to get the sent message s(x). This can be done by direct solution for Yk in the error equations given above, or using the Forney algorithm. The codewords in a linear block code are blocks of symbols which are encoded using more symbols than the original value to be sent.[2] A linear code of length n transmits
For large S, this matrix inversion cost becomes negligible in front of the S vector-matrix multiplications. fec_encode(q,n,*msg_dec,*msg_enc) runs the error-correction encoder scheme on an $$n$$ -byte input data array msg_dec , storing the result in the output array msg_enc . If the linear system cannot be solved, then the trial ν is reduced by one and the next smaller system is examined. (Gill & n.d., p.35) Obtain the error locators from This code is so strong that most CD playback errors are almost certainly caused by tracking errors that cause the laser to jump track, not by uncorrectable error bursts.[5] DVDs use k the number of source symbols in a source block. copy encoded block$idx[$i] to position$i } } The copy method can be helpful here. Now consider the vector $c'$ such that $c'_j = 0$ if $j \notin S$. Also included is the unpunctured LIQUID_FEC_CONV_V27 codec, plotted as a reference point. The encoding process produces n encoding symbols of size S m-bit elements, of which k are source symbols (this This means that input block 0 corresponds to file block 0, input block 1 to file block 2 and input block 2 to data block 1. The fec object realizes forward error-correction capabilities in liquid while the methods checksum() and crc32() strictly implement error detection. In that case, a receiver knows that the number of encoding symbols of a block cannot exceed max_n. The block partitioning algorithm that is defined in Section 9.1 of [RFC5052] MUST be used with FEC Encoding IDs 2 and 5. The authors also want to thank Luigi Rizzo for his comments and for the design of the reference Reed-Solomon codec. r(x) and e(x) are the same as above. Each figure depicts the BER versus $$E_b/N_0$$ ( $$E_s/N_0$$ compensated for coding rate). If this is a problem for you mail me and I'll make it a file. $fec->set_decode_blocks ([array_of_blocks], [array_of_indices]) Prepares to decode data_blocks of blocks (see set_encode_blocks for the array_of_blocks parameter). There is a maximum of 2^24 blocks per object. If some of the source symbols contain less than S elements, they MUST be virtually padded with zero elements (this can be the case for the last symbol of the last
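Since the page above interleaves several sources, here is a self-contained toy that ties the Vandermonde remarks together: encode k source symbols by evaluating the message polynomial at n points, then decode from any k surviving symbols by inverting the corresponding Vandermonde system (the "interpolation problem" mentioned above). To keep the arithmetic elementary it works over the prime field GF(257) rather than the GF(2^m) fields real codecs such as RFC 5510 use; it is a sketch of the idea, not a production codec.

```python
# Toy Reed-Solomon-style erasure code over the prime field GF(257),
# illustrating Vandermonde encoding and interpolation-based decoding.
P = 257

def encode(msg, n):
    """Evaluate the message polynomial at the points x = 1..n."""
    return [sum(c * pow(x, j, P) for j, c in enumerate(msg)) % P
            for x in range(1, n + 1)]

def decode(received, k):
    """Recover the k message symbols from any k (point, value) pairs
    by solving the Vandermonde system with Gauss-Jordan elimination mod P."""
    A = [[pow(x, j, P) for j in range(k)] + [y] for x, y in received[:k]]
    for col in range(k):
        piv = next(r for r in range(col, k) if A[r][col])   # nonzero pivot
        A[col], A[piv] = A[piv], A[col]
        inv = pow(A[col][col], P - 2, P)                    # Fermat inverse
        A[col] = [a * inv % P for a in A[col]]
        for r in range(k):
            if r != col and A[r][col]:
                f = A[r][col]
                A[r] = [(a - f * b) % P for a, b in zip(A[r], A[col])]
    return [A[r][k] for r in range(k)]

k, n = 4, 7                                   # 4 source + 3 repair symbols
msg  = [104, 105, 33, 7]
code = encode(msg, n)
survivors = [(x, code[x - 1]) for x in (2, 3, 6, 7)]   # any k of n survive
print(decode(survivors, k) == msg)            # True: erasures recovered
```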
2019-01-22 07:06:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6895253658294678, "perplexity": 3189.3503300640173}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583829665.84/warc/CC-MAIN-20190122054634-20190122080634-00348.warc.gz"}
https://economics.stackexchange.com/tags/government-debt/new
# Tag Info ## New answers tagged government-debt 1 The issue is that they did not have much other choice, as the war and war reparations made German hyperinflation virtually inevitable. According to the quantity theory of money, the price level (whose change gives you inflation) is given by: $$P = \frac{MV}{Y}$$ so the change in the price level $P$ depends not just on the quantity of money $M$, but also on velocity $V$ and ... 4 Government can have savings while running a deficit because we are talking about savings, not net savings. For example, imagine that the government has zero tax revenue, \$10 of spending, and \$10 of public investment. In this case the government is running a deficit of \$20, yet it is also saving through investment. Due to the deficit being larger than ...
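A toy calculation with the identity above makes the point concrete; all numbers here are hypothetical:

```python
# Toy illustration of the quantity-theory identity P = M*V/Y.
# Doubling M doubles P only if V and Y stay fixed; if velocity also
# rises (e.g., flight from money), P rises more than proportionally.
M, V, Y = 100.0, 2.0, 50.0
print(M * V / Y)            # baseline price level: 4.0
print(2 * M * V / Y)        # money doubles, V fixed: 8.0
print(2 * M * 3 * V / Y)    # money doubles AND velocity triples: 24.0
```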
https://www.sarthaks.com/2757061/625-is-related-to-24-in-such-a-way-that-169-is-related-to
# 625 is related to 24 in such a way that 169 is related to:

1. 17
2. 144
3. 12
4. 13

Correct Answer - Option 3 : 12

The logic here is: n² is related to (n − 1).

25² : (25 − 1) = 625 : 24

Similarly, 13² : (13 − 1) = 169 : 12

Hence, "12" is the correct answer.
https://nicholesuomi.wordpress.com/category/mathematics/
The Learn Fun Facts blog posted an interesting fun fact about strings. Law's fun fact is that 8+9+1+89+91=198. Why is this a fun fact? Because the left side of that equation takes every sequence of digits from 891 (except the whole thing) and adds them together to get the digits in reverse order. Contrary to Law's statement, though, this is not the only three-digit number that does this. At the end of the post, he challenges us to figure out if this happens for any five-digit combinations. He suggests using a computer to do it, but I wanted to see if basic algebra and a little cleverness could do it. So first I set out on the 3-digit problem to see how the process works.

## The 3-Digit Problem

So, what do we know to start with? We're working with three digits. Let's call them x, y, and z. So for 891, x=8, y=9, and z=1. To make them match up with 891, we note that 100x+10y+z=891. To reverse the digits, just reverse the coefficients: x+10y+100z=198. Now, to figure out that it's 8, 9, and 1, we can't assume that from the start. But we are working on making the "substrings" (the sequences of digits within the number) equal x+10y+100z. How do we express the sum of substrings? x+y+z+10x+y+10y+z. If we clean that up a bit, we have 11x+12y+2z. Thus we are trying to find out when this is true:

x+10y+100z=11x+12y+2z

Well, if you just have three variables and one equation, you're going to get a lot of possible solutions. But wait, x, y, and z all have to be single digits. So we know that 0≤x≤9, 0≤y≤9, and 0≤z≤9. And since we want a three digit number, 1≤x. Since we want to be able to flip it into another three digit number, 1≤z. And since they're digits, we know that they're all whole numbers. Okay, now we have some stuff to work with. If we take our equation from before and subtract all the stuff on the right from both sides we get:

10x+2y-98z=0

Then divide everything by 2:

5x+y-49z=0

We want to isolate one variable to work with, and that z is being subtracted right now, so let's try moving it back to the right:

5x+y=49z

Well that's quite the disparity in coefficients! And here's where the magic is. Since 1≤z, the smallest thing 49z can be is 49. Since we know 1≤x≤9, we also know 5≤5x≤45. And since y≤9, the biggest thing 5x+y can be is 54. But since 5x+y=49z, that means the biggest thing 49z can be is 54. And since z is a whole number, we can see that 49*2=98 which is too big. So z must be 1. If we plug z=1 into our equation, we have:

5x+y=49

Now we're down to two variables. If we subtract y from both sides we'll be able to get some nice bounds on 5x, so let's do that:

5x=49-y

Since 0≤y≤9, 40≤49-y≤49. And since 5x=49-y, 40≤5x≤49. But 5x is a multiple of 5, so it must be 40 or 45, and the only options for x are 8 and 9. So if we plug in either option:

(5*8 or 5*9)=49-y

Which is to say

(40 or 45)=49-y

So if we subtract the two options from each side:

0=(9 or 4)-y

Add y to each side:

y=9 or 4

And notice that y=9 goes with x=8, and y=4 goes with x=9. So now all three variables are solved for. Either x=8, y=9, and z=1, or x=9, y=4, and z=1. These correspond to 891 and 941. So remember that the fun thing about 891 is that 8+9+1+89+91=198. So now let's look at 941. We can see 9+4+1+94+41=149. Neat! So there are in fact two three-digit numbers with this property.
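Since Law suggested using a computer anyway, here's a quick brute-force check (my own sketch, not from the original post). It confirms that 891 and 941 are the only three-digit examples, matching the algebra above, and runs the five-digit search from the challenge:

```python
def substring_sum(n):
    """Sum of all proper contiguous substrings of n's digits."""
    s = str(n)
    return sum(int(s[i:j])
               for i in range(len(s))
               for j in range(i + 1, len(s) + 1)
               if j - i < len(s))          # exclude the whole number

def reversal_matches(digits):
    lo, hi = 10 ** (digits - 1), 10 ** digits
    # n % 10 keeps the last digit nonzero, so the reversal has the
    # same number of digits (the 1 <= z assumption above).
    return [n for n in range(lo, hi)
            if n % 10 and substring_sum(n) == int(str(n)[::-1])]

print(reversal_matches(3))  # [891, 941]
print(reversal_matches(5))  # the five-digit search from the challenge
```

For five digits this only checks 90,000 candidates, which is instant; the algebra narrows the same search by hand.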
## Can it be done without the reversal?

On the Learn Fun Facts post, Jack Shalom asks in a comment whether there are any three-digit numbers whose substrings add up to the number itself. The answer is no, and here is the proof.

We start again with 1≤x≤9, 0≤y≤9, and 0≤z≤9. (We're not reversing it, so z being 0 would be fine.) Now the equation we want to satisfy is:

100x+10y+z=x+y+z+10x+y+10y+z

Which simplifies to:

100x+10y+z=11x+12y+2z

If we shuffle the ys and zs to the right and the xs to the left we get:

89x=2y+z

Since 1≤x, 89≤89x. But since y≤9 and z≤9, the biggest thing 2y+z can be is 2*9+9=27. So this system has no solution.

## The 5-Digit Problem

The 5-digit version is obviously trickier. Because it's so cumbersome to write and read the process of defining everything again, I'll skip to the equation. This time I use a through e as digits for the number abcde since they're easier to tell apart than some of the later letters:

1111a+1222b+233c+34d+4e=a+10b+100c+1000d+10000e

Which can be rewritten as:

1110a+1212b+133c-966d-9996e=0

Solving this would not be super interesting. First you would isolate e, find out it has to be 1 or 2, and continue from there in much the same way as with the 3-digit problem. So I won't spend more time on that.

However, the general case could be fun. Where did 1111, 1222, 233, 34, and 4 come from? Well, each digit gets a 1 from the single digit strings. Then the first four get a ten from the double digit strings and the last four get a one. Then the first three get a 100 from the triple digit strings, the second through fourth get a ten, and the last three get a one. Finally, the first two get a 1000 for the four digit strings, the second and third get a 100, the third and fourth get a 10, and the last two get a 1. So a got one of each, b got to double up on everything except thousands, c was excluded from getting a thousand but got two hundreds, three tens, and three ones, and so on. This pattern could rather easily be adapted to any length of number. I hypothesize that with such generated numbers there's some way to generate the solutions with single digits, but that will take more work.

## ASMS: Math

There are countless pictures floating around at any given time asking people to do a simple arithmetic problem. Usually the problem requires knowing the order in which operations are carried out. Then an argument ensues in the comments, usually with some people sounding really sure that the order of operations only applies in the context of a class on the order of operations. (You could of course have an arithmetic that is strictly left to right, but the standard convention is what it is. Arguing that you're ignoring the standard convention for the order of operations is as silly as picking a wrong answer and defending it with a nonstandard definition of the plus or times sign.)

I decided to make a somewhat better problem. Better as in slightly more interesting. I understand the strategy: the comments and shares generated give a page a lot of exposure. People like their pages having exposure. Almost nobody even bothers with mine. But mine seems far less annoying. The few comments it does get are also far less annoying.

(For the curious, it's a surprisingly easy problem thanks to the multiple choice. You know the cos(x) is sticking around when you integrate with respect to z. Then when you integrate with respect to x it'll turn into a sin(x) and the whole thing will be a constant. So the third integration just multiplies it by 200. So you have some constant times the sine of something. B is the only option.)

## Enough LaTeX for basic logic typesetting

I'm currently taking a (meta)logic class. There are assigned problem sets.
A lot of people either don't know how to type logical symbols or else cannot be bothered to fight with Word. I'm a fan of LaTeX. I like it for several reasons, one of them being easy use of logical symbols. There are a lot of guides to using LaTeX. To my knowledge, none start from nothing and end with just what's needed for a logic class. So here I fill in that void. My goal is to be comprehensive enough to cover what's needed to type up assignments for a logic class while not including anything else, so someone can be up and running with just this guide in a few minutes.

### Setting Up

First, you need something to edit your text and something to compile it to a PDF or whatever other format you like. I personally use Overleaf. It's a free, online application that lets you type in one column with live updates to what it looks like on the page in the other column. It also has templates, allows collaboration, and has some other nice features that are not important to our purposes here. (Full disclosure: The link is a referral link. If you refer people, you get extra storage space and pro features for free. The default free features and space are fine, though.)

There are other popular options. If you need to compile offline, I suggest TeXmaker. If you go this route, you need to download MiKTeX. If you want to write something very long, you may want to type into a text editor and then copy and paste into Overleaf or TeXmaker. (By "long" I mean over fifty pages, give or take based on things like included pictures.)

Onto the actual typing process. If you're using Overleaf, go to the "My Projects" page and then create a new project. Choose "blank paper". Then you'll have this code:

\documentclass{article}
\usepackage[utf8]{inputenc}

\begin{document}

(Type your content here.)

\end{document}

If you're not using Overleaf, go ahead and put that code into your document. There is a bit of tweaking to the basic template to make this better. Before the \begin{document} line, add a line containing just \usepackage{amsmath}. Then add lines containing \title{TITLE} and \author{NAME}. Then after the \begin{document} line, add a line saying \maketitle. If you want it to not be huge, type \small\maketitle\normalsize. (The \small makes it small. The \normalsize makes the stuff after it normal size.) At this point my document looks like this.

\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{amsmath}

\title{Phil 125 Homework Set 2}
\author{Nichole Smith}

\begin{document}
\small\maketitle\normalsize

(Type your content here.)

\end{document}

### Typing the Document

Everything after this replaces "(Type your content here.)".

• Typing letters and numbers works as you would expect. Certain symbols are used by the code so typing them is not straightforward. (The & and squiggle brackets are the most notable here.)
• Single line breaks are ignored. So if you type some stuff, hit return/enter, and then type some more, it will show up as one paragraph. (This can be useful. I like to type every step of a proof on a new line. Then it compiles into a paragraph.)
• Double line breaks give you a new paragraph.
• If you want extra space, use \vspace{1cm} as its own paragraph. You can choose lengths other than 1cm if you want.

Onto the logic specific stuff. Of critical importance is math mode. Whenever you surround text with dollar signs ($) LaTeX treats it as mathematical symbols. So, if you type $x$ it will be italicized like a variable should be. Math mode does not have spaces. So $two words$ will not have a space between them.
(If you need a space while in math mode for some reason, "\ " gives you a space. That is a backslash with a space after it.) Note all logical symbols have to be typed in math mode. The logical symbols:

• \land gives you the and symbol
• \lor gives you the or symbol
• \lnot gives you the not symbol
• \rightarrow gives you the material conditional arrow
• \Rightarrow gives you the logical implication arrow
• \leftrightarrow gives you the biconditional arrow
• \Leftrightarrow gives you the logical equivalence arrow (So, capitalizing the arrow tags makes them the bigger arrows)
• = is the equal sign
• Parentheses are parentheses
• \subset gives you the strict subset symbol
• \subseteq gives you the subset symbol
• In general, typing \not immediately before another symbol puts a slash through it. E.g. \not\subseteq gives you the not a subset symbol
• \in gives you the element symbol
• \times gives you the times sign
• \neq gives you the not equal sign
• > and < can be typed directly. To get the or equal to versions, type \geq or \leq
• \emptyset gives you the empty set symbol
• \{ and \} give you squiggle brackets
• \& gives you the & symbol
• \top and \bot give you the tautology and contradiction symbols.
• \alpha gives you lowercase alpha, and the other Greek letters are similar. (Uppercase Greek letters with their own shapes have capitalized commands like \Gamma and \Delta; ones that look like Latin letters, such as capital alpha, are just typed as that Latin letter.)
• | gives you the Sheffer stroke and \downarrow gives you the Peirce dagger.
• An underscore gives you subscript. A caret gives you superscript. E.g. p sub 1 is typed $p_1$.
• \hdots gives you a nice ellipsis. Use \cdots if you want them elevated to the middle of the line.
• Anything on a line after % will not be compiled. So if you want to make a note to self, you can.

I think this covers it. Most of them are pretty straightforward. If you do need more, this webpage has a nifty list. Or, detexify lets you just draw what you want, and it gives you the code. At this point you're ready to type stuff. I will provide an example now. Say problem 2 asks you to symbolize "neither both p and q, nor q only if p" with the and, material conditional, and nor operators. Then you type:

2. The sentence "neither both $p$ and $q$, nor $q$ only if $p$" symbolized with the and, material conditional, and nor operators is $(p\land q)\downarrow(q\rightarrow p)$.

### Truth Tables

LaTeX can also handle tables very nicely. If you're lazy, there are online tools to make tables. They have quite a few options. You're probably fine using that. I prefer more control for my truth tables. Again, you're fine without. But in case anyone is interested, I'll explain. Maybe you'll want to be able to edit the code the generator spits out. (I often use a generator to start and then tweak as needed.) First, here's the code for the truth table for p_1 or not p_1:

\begin{tabular}{c|cccc}
$p_1$ & $p_1$ & $\lor$ & $\lnot$ & $p_1$ \\
\hline
T & & \textbf{T} & F & \\
F & & \textbf{T} & T & \\
\end{tabular}

How do you construct this thing? First set up the tabular environment:

\begin{tabular}{}
\end{tabular}

The second set of squiggle brackets after \begin lets you set up the columns. Each c gives a center aligned column. If you want left or right aligned columns, use l or r instead of c. Yes, you can mix the three. The | gives a vertical line going down the entire table. Note for truth tables you want a column for every single symbol. That way nothing is under the variables and you can have a straight line of Ts and Fs under the connectives. So, for p_1 or not p_1 we want a column for p_1, a bar, then columns for each of p_1, or, not, and p_1.
That’s four more. So, we have: \begin{tabular}{c|cccc} \end{tabular} We have the table set up. Now to fill it in. The first line of the table has the atomic sentences on the left and then the sentence in question on the right. Type the content of each column, separated by &. Then end the line with \\. So, to have the first line of the truth table: \begin{tabular}{c|cccc}p_1$&$p_1$&$\lor$&$\lnot$&$p_1$\\ \end{tabular} To have the horizontal line, type \hline on its own line. Then more on to the next row, doing the same thing you did for the first row. Note that if you want nothing in a certain spot, just leave the space between the two &s empty. So, for the second row, you want a T under the first p_1 (The one on the left side of the table), then nothing under the first one on the right, then a T under the and sign, an F under the not sign, and then nothing under the last p_1. The third line is similar. Now we have: \begin{tabular}{c|cccc}$p_1$&$p_1$&$\lor$&$\lnot$&$p_1\$ \\ \hline T & & T & F & \\ F & & T & T & \\ \end{tabular} This is a fine truth table. But, maybe you want to bold the truth values for the main connective. To make T bold, type \textbf{T}. You can replace “T” with other text, of course. If you’re using Overleaf, highlighting the text and pressing Ctrl+B will put the tag in automatically. This brings us to the complete table as quoted in the beginning of this section. The comment section is open. Questions and suggestions are welcome. (Edit notes: As Soren pointed out, I originally put the wrong symbol for commenting. I also realized the amsmath package is not needed, so I removed that. Since these are usually printed in black and white anyway, I got rid of color in favor of boldface type. This has the added benefit of avoiding the need for packages entirely. In the third edit I added the \leq and \geq tags as well as \hdots because I realized they’re needed for indexing variables. \hdots requires the amsmath package, so I added that line back in. Using bold instead of color still seems to be better.) ## The Collatz Trolley Problem I enjoy a good trolley problem (meme). I came across this one and it presents an odd problem: All initial values of n thus far tested end up looping with 4, 2, 1, so if it’s any of those, I’m not sure how many people are sucked into the black hole. (Though it’s fewer than 5, so if the goal is minimization of deaths, pulling the lever is ideal regardless.) This one is a bit odd to think about. On the one hand, at least 5×2^60 initial values have been shown to result in that loop. But many, many more have not (infinitely many, if you believe in infinities). And if you look at the odd numbers in any sequence the geometric mean of the ratios of outcomes is 3/4, though this only means no divergence. Maybe there’s some cycle involving numbers bigger than 5×2^60. Also it’s apparently been shown that for any m, the number of option for n between 1 and m is at least proportional to m^.84. So on the one hand my gut says pull because that evidence sounds kinda compelling. But then some part of me recognizes that m^.84 isn’t even half of m for most m, and 2^61 is relatively small. But then there seems to be some sort of abductive principle allowing the practical inference that pulling is probably right, but I can’t tell what it is. ## Disorder in multiple dimensions This is a fun post. 
The comment section is open. Questions and suggestions are welcome.

(Edit notes: As Soren pointed out, I originally put the wrong symbol for commenting. I also realized the amsmath package is not needed, so I removed that. Since these are usually printed in black and white anyway, I got rid of color in favor of boldface type. This has the added benefit of avoiding the need for packages entirely. In the third edit I added the \leq and \geq tags as well as \hdots because I realized they're needed for indexing variables. \hdots requires the amsmath package, so I added that line back in. Using bold instead of color still seems to be better.)

## The Collatz Trolley Problem

I enjoy a good trolley problem (meme). I came across this one and it presents an odd problem: all initial values of n tested thus far end up looping through 4, 2, 1, so if the starting value is any of those, I'm not sure how many people are sucked into the black hole. (Though it's fewer than 5, so if the goal is minimization of deaths, pulling the lever is ideal regardless.)

This one is a bit odd to think about. On the one hand, at least 5×2^60 initial values have been shown to reach that loop. But many, many more have not (infinitely many, if you believe in infinities). And if you look at the odd numbers in any sequence, the geometric mean of the ratios of successive values is 3/4, though this only suggests no divergence. Maybe there's some cycle involving numbers bigger than 5×2^60. Also, it's apparently been shown that for any m, the number of options for n between 1 and m is at least proportional to m^0.84. So on the one hand my gut says pull, because that evidence sounds kind of compelling. But then some part of me recognizes that m^0.84 isn't even half of m for most m, and 2^61 is relatively small. But then there seems to be some sort of abductive principle allowing the practical inference that pulling is probably right, but I can't tell what it is.

## Disorder in multiple dimensions

This is a fun post. Clearly a similar argument can be run to show the unorderedness of any other field with a rotational operator that just adds dimensions to the reals/complexes (quaternions, octonions, etc.), but I do wonder whether some other property (say, completeness) can be given up to get orderedness, or whether some nonstandard field with non-flat geometry can be ordered. (And not be isomorphic to the reals! I suppose this requires an answer to my first question, though, since the reals are the only complete ordered field.)

My friend Jon pointed out that the meaning of orderedness basically requires having a single dimension to order on. (Since for all x and y, either x>y, y>x, or x=y — a one-dimensional relation.) Most of the ideas I had in mind for giving up, say, completeness ultimately reduced the dimensionality. (For example, if you take a subset of the complexes in which Re(x)=Re(y) implies x=y, then you can have an order, but that's by basically knocking out a dimension.)
http://openwetware.org/index.php?title=Biomod/2011/NUS/DNAmazing&diff=556228&oldid=541335
Biomod/2011/NUS/DNAmazing

==DNAmazing 101 aka All you need to know about DNAmazing==
Recently, DNA Origami has emerged as one of the most powerful tools for chemists and engineers to design nanometer-scale objects of complicated shapes with wide applications. As the determination of the staple sequences is very tedious, several computer programs have been dedicated to assisting users in designing 2D and 3D structures. However, recognizing the need for some additional features in 2D DNA nanostructure design, we decided to develop a program from scratch which is capable of automatically generating the raster fill pattern and which allows the design of sticky ends that can act as molecular contact points between origami structures and external connection sites. Such applications can be found in DNA motors using DNA Origami as platforms. In addition, the program is able to estimate $\Delta H^o$, $\Delta S^o$, $\Delta G^o$, and $T_m$ of sticky ends by using the Nearest-Neighbor method. Knowing those thermodynamic values promises control over the duplex formation/deformation of the sticky ends.

==Guide to read this report efficiently==

==Background Information==
===DNA Origami===
DNA Origami has emerged as one of the most promising tools for DNA self-assembled nanostructures. The technique was coined by Rothemund in 2006 and has been extensively exploited. Thanks to the efforts of scientists in the field, DNA Origami now enables the building of custom-shaped DNA nanostructures with a high degree of accuracy and complexity. DNA Origami structures have been used to study single-molecule chemical reactions and to assemble water-soluble probe tiles for label-free RNA hybridization.

===DNA Origami future applications and developments===
Potential applications of DNA Origami in the future are the modeling of complex protein assemblies, the building of molecular electronics or plasmonic circuits created on DNA Origami boards, or even molecular factories with nanomachines operating on complex networks constructed by the DNA Origami technique. Some of these applications may require connections between DNA structures and other novel materials or devices such as carbon nanotubes, nanowires, gold nanoparticles, DNA nanomachines, etc. Sticky ends have been suggested as the best solution so far. Sticky ends are double-helix DNAs in which one longer single strand contains unpaired bases. The unpaired bases of sticky ends can act as a link between the DNA Origami and other DNA or biological devices through the hydrogen bonds of complementary base pairs. In addition, the ends of sticky strands can be functionalized with chemical groups.

Another requirement for future applications of DNA Origami is design complexity. The application of DNA Origami as molds for large-scale nanoelectronic circuits and as platforms for DNA machines definitely demands software to reduce the tedious process of building the scaffold and determining the crossover positions.
===Traditional CAD programs for DNA Origami===

==Motivation==
Since the introduction of DNA Origami to the field of biomolecular design, there has been some good software dedicated to the design of both two-dimensional and three-dimensional DNA Origami structures. Most of this software has a GUI which allows users to define the DNA Origami structures by manually drawing the scaffold paths. This process may be too tedious for large and complex structures, which will inevitably be studied and exploited in the near future. Furthermore, we have noticed that very few programs provide a full function for the addition of sticky ends, which certainly limits the DNA Origami technique's fields of application. We also found that there has been a lack of fully documented instructions on how a program for DNA Origami design is developed. Such instructions may be useful as a platform for further developing design tools for DNA Origami. As a result of these observations, we decided to develop a new program from scratch to overcome these limits.

==Objectives==
The program developed in this project has the following main goals:
# The program can provide all the possible scaffold paths for a given structure. A filter will be implemented to select the best choices.
# The program will automatically determine the positions of crossovers and allow the addition of sticky ends.
# The program has a short computational chemistry function.

==The Project's Description==
===The Overview of DNAmazing Program===
[[Image:Picture1.jpg|700px]]

The DNAmazing program consists of three main modules: the Design, GUI, and Computational Chemistry toolkits. The programming language is C# and the integrated development environment (IDE) is Microsoft Visual Studio 2010.

The Design is the backbone of the program, providing the basic functions of a DNA Origami design tool: receiving information about the DNA Origami structures (shapes, sticky ends) and returning the necessary information on staple and scaffold sequences to synthesize the structures in the lab.

The Computational Chemistry toolkits (general description here)

The GUI will provide users with a basic interface to the program (general description here)

===The Design===
====Basic Dogmas of Design====
The Design of 2D DNA Origami in DNAmazing follows the principles laid out in Rothemund's first paper in 2006. The basic idea of DNA Origami is to fold a DNA helix into a desired shape. One strand of the DNA helix is a long and continuous DNA strand, called the scaffold strand; the other strand consists of several short DNA fragments, the staple strands. The staple strands are together complementary to the scaffold to form the DNA helix. The formation of crossovers in the staple strands keeps the scaffold strand in the desired shape.

[[Image:terms.jpg|700px]]

For the purpose of design, the folded DNA helix is conceptually divided into several small helices, where one helix is one turn of the folded helix. Each of these turns/helices is represented as one square in the program. Each square is given a number. The labeling is done from left to right and from the bottom row to the top row. The non-integer number of base pairs per turn, 10.67, is approximated as 11 base pairs.
The DNA helix is folded by forming several crossovers in the staple strands; these crossovers indicate the positions where a staple strand switches to another helix located on a different row. This switching only occurs at locations where the DNA twist places it at its tangent point between helices, which happens every odd number of half-turns. In this project, we stick to 1.5 turns.

====Inputting parameters====
Recognizing that the conventional input of existing programs may not be convenient for large and complex structures, DNAmazing adopts a very different, lithography-like approach. Instead of drawing the scaffold path, which may be painful and even impossible for complicated designs, users input the dimensions of a rectangle that encloses their desired structure. The dimensional units are the number of helices/squares per row and per column. The users achieve their final desired shape by eliminating the unwanted squares. The elimination is done by inputting the numbers of the unwanted squares (null squares).

[[Image:Input process.jpg]]

In the above example, the desired DNA Origami shape is enclosed by a rectangular frame of 6 squares x 6 squares. There are in total 8 null squares: 12, 18, 24, 30, 17, 23, 29, 35.

====Generation of scaffolding pathways====
One of the unique features of DNAmazing is its ability to automatically generate the scaffolding pathways. With existing programs, users have to manually design how to fold a scaffold strand into the desired shape. This process may be tedious for complex structures such as the smiley faces in Rothemund's paper. In DNAmazing, users only have to conceptualize the DNA Origami structure as the series of squares described in the previous part. This is definitely more relaxing.

Basically, the process of scaffold generation is to thread the scaffold strand through all the squares such that each square is visited only once. This is very similar to the Hamiltonian circuit (or Hamiltonian path) problem. In graph theory, a Hamiltonian circuit is a path in an undirected graph that visits each vertex exactly once. Another example of a Hamiltonian circuit is the problem of a salesman who must visit each city exactly once to deliver goods.

Each normal square in the DNA Origami is modeled as a vertex which can be linked to its 4 adjacent neighbors in the four directions, but not to diagonal neighbors. The null squares are isolated squares, and there should not be any links to them. The scaffolding path starts with the first square and extends by adding one of the 4 neighbors of the current square. A Hamiltonian path can be found by exploring all the possible paths that satisfy the condition. The process is repeated until the path can no longer be extended because there are no possible choices left, or until the path has passed through all squares. If the latter happens, the process is done and the scaffolding pathway has been generated successfully. In the former case, the program takes one step back and explores other choices.

By using the Hamiltonian-path algorithm, DNAmazing is able to find all the possible scaffold paths. However, not all of these paths are reasonable for DNA Origami. A filter must be included to select the paths which are suitable for DNA Origami.
Below are some rules which we use in the filtering process (a backtracking search of the kind sketched after this list generates the candidate paths that these rules then prune):
# The first square is either square 0 or the square at the middle of the first row.
# If the first square is square 0, the scaffolding pathway should run continuously and only turn over to another row at either of the two ends. If the first square is the middle of the first row, the last square in the scaffold path must be to the right of the first square.
# The scaffold should not run in the vertical direction.

[[Image:rule.jpg|700px]]

The result of this stage is a 1D matrix containing the ordinal numbers of the squares that the scaffold passes through. For instance, the scaffold path in the above figure is represented as C=[2,1,0,6,7,8,14,13,19,20,26,25,31,32,33,34,28,27,21,22,16,15,9,10,11,5,4,3].
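The following is a minimal sketch of that backtracking search (my own Python illustration; DNAmazing itself is written in C#). It uses the square-numbering convention above (left to right, bottom row first), and for the 6x6 example with its 8 null squares the enumeration from square 2 includes the path C shown above:

```python
def scaffold_paths(rows, cols, null_squares, start):
    """Enumerate Hamiltonian paths over the non-null squares of a rows x cols
    grid, numbered left-to-right, bottom-row-first (square = row*cols + col)."""
    null = set(null_squares)
    total = rows * cols - len(null)

    def neighbors(sq):
        r, c = divmod(sq, cols)
        for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and nr * cols + nc not in null:
                yield nr * cols + nc

    path, seen = [start], {start}

    def extend():
        if len(path) == total:          # visited every non-null square once
            yield list(path)
            return
        for n in neighbors(path[-1]):
            if n not in seen:           # extend, recurse, then backtrack
                seen.add(n); path.append(n)
                yield from extend()
                path.pop(); seen.remove(n)

    yield from extend()

# The 6x6 example with null squares {12,18,24,30,17,23,29,35}, starting at 2:
example = [2,1,0,6,7,8,14,13,19,20,26,25,31,32,33,34,28,27,21,22,16,15,9,10,11,5,4,3]
assert example in scaffold_paths(6, 6, {12,18,24,30,17,23,29,35}, 2)
```

The filtering rules above would then be applied to each yielded path to keep only the raster-style candidates.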
====Determination of crossover positions====
The next step in the Design part is the determination of crossover positions. Crossovers are places where the staples switch to another helix located on a different row. The crossovers are crucial to the folding of the scaffold strand; in fact, they are the only forces which prevent the scaffold from unfolding in a process of achieving higher entropy (more disorder) and thus lower ΔG. The basic principle for determining the positions of crossovers was laid out by Rothemund: the spacing between crossovers in 2D DNA Origami structures must be an odd number of half-turns. In other words, two vertically adjacent staples meet at their tangent points every odd number of half-turns; thus, the staples are in the least strained state at the crossovers. In this project, we stick to 1.5 turns as the unit spacing of crossovers.

The algorithm to determine the crossover positions starts with the generation of an ArrayList, which is essentially a matrix with flexible dimensions. We named it PosCros. The PosCros ArrayList is used to collect the squares which contain a crossover position. The first element of PosCros is always the first element of the scaffold path. The next elements are determined based on which category the previous element falls into; the categorization is done based on the relative distance between the element and the closest turning point of the scaffold.

====Addition of sticky ends====
Sticky ends, serving as extra ends of staples, should not interfere with the scaffold folding during the formation of the DNA Origami. So, sticky-end sequences must not bind stably to any sequence in the scaffold. To generate sticky-end sequences, DNA sequences of a defined length are generated randomly. The newly generated sequences are then examined for their ability to bind to the scaffold. Sequences which have a rather stabilizing binding at any position in the scaffold are discarded. Only those without stable scaffold binding are kept and can be used as sticky-end sequences.

To determine whether a sticky end would bind stably to the scaffold, one needs the binding energy of the sticky end against every position in the scaffold. In addition, a threshold below which the binding is considered stable is also required.

=====Calculation of binding energy=====
The given sticky-end sequence is mapped along the scaffold length and the binding energy (ΔG) is calculated for each match/mismatch binding. The calculation is done using the formula and the complete thermodynamic database for internal single mismatches discussed in SantaLucia's studies (2006) (1). The formula and parameters are shown below:

[[Image:CodeCogsEqn.gif]]

Nearest-neighbor $\Delta G^o$ increments (kcal/mol) for internal single mismatches next to Watson-Crick pairs in 1 M NaCl:

[[Image:TableMis.jpg]]

For example, consider the total binding energy of the following DNA duplex. The mismatch base pair is '''bold''':

[[Image:Cal1.gif]]

=====Set up a threshold=====
To determine whether a mismatched complement between the sticky ends and the scaffold is stable or unstable, a threshold binding energy ($\Delta G^o$) is required. A binding energy less than or equal to this threshold is considered stable. There should not be one absolute threshold value for DNA sequences of every length: longer DNA sequences require a lower $\Delta G^o$ for a stable binding. Therefore, the threshold is a variable calculated from the sequence length. Let n be the length of the DNA sequence. If n is even, the threshold is calculated as follows:

[[Image:thesholdEven.png]]

If n is odd, then the formula for the threshold is:

[[Image:thesholdOdd.png]]

In the equations, -0.58 is the binding energy of the 5'-TA-3'/3'-AT-5' step, and -0.88 is the binding energy of the 5'-AT-3'/3'-TA-5' step. This means that the right-hand side equals the binding energy of the complement 5'-(AT)n-3'/3'-(TA)n-5' of the same length. In other words, no binding between the sticky end and the scaffold is allowed to be more stable than the least stable fully Watson-Crick-complementary DNA duplex.

====Merging Process====

===The prediction of the thermal stability of the duplex produced from the sticky end===
This module predicts the thermal stability of the short DNA duplex which is formed upon the binding of the sticky end and its complementary single strand.

The capability to estimate thermal stability will aid in numerous applications, such as (i) predicting the stability of a local sequence on a DNA duplex, or of a probe-gene complex, (ii) calculating the melting temperatures of short sequences in hybridization experiments, and (iii) determining the optimal length of the probe oligomer to produce stable duplexes with the sticky ends. Recently, the order-disorder transition of a sticky end with its complementary single strand has also become important in controlling the dynamic movement of nanomotors made from DNA strands (reference?).

Research has shown that the thermal stability of a duplex is affected by sequence information and base composition. However, the sequence of the DNA strand is the major determinant of $\Delta H^o$, $\Delta S^o$, and $\Delta G^o$. We apply the nearest-neighbor (NN) method to determine the transition enthalpy, entropy, free energy, and melting point of a short DNA duplex. This method calculates those thermodynamic values using the stacking interactions between Watson-Crick neighboring bases in the DNA strands.

The DNAmazing program will not only assist in randomly generating sticky ends attached to pre-determined positions on the DNA Origami, but will also allow users to input their preferred sequences for the sticky ends. Since different sequences have different thermal stability (represented by $\Delta H^o$, $\Delta S^o$, and $\Delta G^o$) upon binding, knowing those thermodynamic values is crucial to studying the function and applications of the sticky ends.
Besides, the DNAmazing program also helps to determine whether the sticky-end sequence input by the user is complementary to the scaffold strand or to other staple strands.

Many groups have dedicated research to the NN method for determining $\Delta H^o$, $\Delta S^o$, $\Delta G^o$, and $T_m$ of short DNA oligomers, and they have arrived at the same formula, demonstrated below. However, since different studies used different starting materials (short DNA oligomers, polymers, etc.), the values of the parameters vary slightly. We have chosen the latest results obtained by SantaLucia et al. to incorporate into our software.

:$\Delta H^o = \Delta H^o_{ini} + \Delta H^o_{sym} + \Delta H^o_{AT\,term.} + \Sigma \Delta H^o_{stacking}$

where $\Delta H^o_{ini}$ is the helix initiation enthalpy of the transition process; $\Delta H^o_{sym}$ is the symmetry term, which only applies to self-complementary duplexes, accounting for the enthalpy difference between a duplex formed from a self-complementary sequence and a duplex formed from two complementary strands; $\Delta H^o_{AT\,term.}$ is applied for each end of a duplex that has a terminal AT, accounting for the end-fraying caused by the AT base pair; and $\Sigma \Delta H^o_{stacking}$ is the total enthalpy of the propagation steps in the sequence.

For example:

:$\Delta H^o(5'\text{-CGTTGA-}3') = \Delta H^o_{ini} + \Delta H^o_{sym} + \Delta H^o_{AT\,term.} + \Sigma \Delta H^o_{stacking} = 0.2 + 0.0 + 2.2 + (-10.6 - 8.4 - 7.6 - 8.5 - 8.2) = -40.9 \text{ kcal/mol}$

$\Delta S^o$ and $\Delta G^o$ are calculated using the same form of formula (1) above.

There are 10 propagation steps, 1 initiation term, and 1 terminal-AT correction, making up the 12 NN parameters shown in Table 1. These values were obtained via multiple linear regression of results from differential scanning calorimetry (DSC) of 108 short DNA sequences.
style="background: #efefef;" |$\Delta G^o$ (kcal/mol) + |- + |align=center|AA/TT + |align=center|-7.6 + |align=center|-21.3 + |align=center|-1.00 + |- + |align=center|AT/TA + |align=center|-7.2 + |align=center|-20.4 + |align=center|-0.88 + |- + |align=center|TA/AT + |align=center|-7.2 + |align=center|-21.3 + |align=center|-0.58 + |- + |align=center|CA/GT + |align=center|-8.5 + |align=center|-22.7 + |align=center|-1.45 + |- + |align=center|GT/CA + |align=center|-8.4 + |align=center|-22.4 + |align=center|-1.44 + |- + |align=center|CT/GA + |align=center|-7.8 + |align=center|-21.0 + |align=center|-1.28 + |- + |align=center|GA/CT + |align=center|-8.2 + |align=center|-22.2 + |align=center|-1.30 + |- + |align=center|CG/GC + |align=center|-10.6 + |align=center|-27.2 + |align=center|-2.17 + |- + |align=center|GC/CG + |align=center|-9.8 + |align=center|-24.4 + |align=center|-2.24 + |- + |align=center|GG/CC + |align=center|-8.0 + |align=center|-19.9 + |align=center|-1.84 + |- + |align=center|Initiation + |align=center|+0.2 + |align=center|-5.7 + |align=center|+1.96 + |- + |align=center|Terminal AT penalty + |align=center|+2.2 + |align=center|+6.9 + |align=center|+0.05 + |- + |align=center|Symmetry correction + |align=center|0.0 + |align=center|-1.4 + |align=center|+0.43 + |} + + The melting point of short DNA chain, defined as the temperature at which half of double-stranded DNA sequences have dissociated, is calculated as following: + + :$+ T_m = \frac{\Delta H^o \times 1000} {\Delta S^o + R \times \ln( \frac{C_t}{x} ) - 273.15} +$ + + where $C_t$ is the total molar strand concentration. For nonself-complementary duplexes x=4, and for self-complementary, x=1. + + + NN method is just an approximation because it neglects the secondary interactions in the DNA duplexes (we assume that the DNA duplexes undergo two-state transition), and the heat capacity $C_p$ is constant over different temperatures. To reduce such inaccuracy in calculation, short DNA oligomers (less than 30 base pairs) were used to minimize the secondary interaction within the DNA molecule. + + Sodium dependence of $\Delta S^o$ and $\Delta G^o$ + + The entropy and free energy calculated from formula (1) above apply at 37oC and 1M NaCl. To extend the results to various salt condition, the following correction formulae have been derived by (***) + + :$+ \Delta S^o [Na^+] = \Delta S^o [1M NaCl] + 0.368 \times N/2 \times ln[Na^+] +$ + + + :$+ \Delta G^o [Na^+] = \Delta G^o [1M NaCl] + 0.114 \times N/2 \times ln[Na^+] +$ + + + where N is the total number of phosphate in the duplex and [Na^+] is the total concentration of monovalent cations ($Na^+$, $K^+$, $NH^{4+}) in the solution. [itex]\Delta H^o$ is assumed to be sodium-independent. + + To calculate the value of $\Delta G^o$ at temperature different than 37$^o$C , the following equation is used: + + $\Delta G^o = \Delta H^o - T\Delta S^o$ + + in which T is in Kelvin, $\Delta H^o$ is in cal/mol, and $\Delta S^o$ is in entropy units (e.u.). [itex]\Delta H^o and [itex]\Delta S^o are assumed to be independent of temperature. + + ===The User Interface (GUI)=== + GUI or graphic user interface is constructed to create a friendly environment for users to construct their DNA origami. Our GUI is generated using Window form application in Visual studio 2010. Our software has three main components to support the DNA Origami design with sticky end addition and the themaldynamic analysis of sticky ends. The code sources are provided in the attachments. 
===The User Interface (GUI)===
The GUI, or graphical user interface, is built to create a friendly environment for users to construct their DNA Origami. Our GUI is generated using a Windows Forms application in Visual Studio 2010. The software has three main components to support DNA Origami design with sticky-end addition and the thermodynamic analysis of sticky ends. The source code is provided in the attachments.

====Generate DNAO====
In the first component, the staple sequences used for the correct folding of a DNA Origami with sticky ends are generated. Users are required to define the size and shape of the structure they want to design by first inputting the frame size and then choosing the null squares (the locations which will not be occupied by the scaffold). This lets the program understand the DNA Origami design.

[[Image:GUI_2new.png]]

After obtaining the required parameters, the program generates the different possible scaffold paths and asks the user to choose one of interest.

[[Image:GUI_3.png]]

Users can also choose to add sticky ends by entering the number of sticky ends they need and specifying the sequence and location of the sticky ends on the scaffold.

[[Image:GUI_4.png]]

The final staple sequences are generated and appear in the result window.

[[Image:GUI_5.png]]

====Generate sticky end sequence====
To support the generation of sticky ends, as well as to ensure that a sticky end will not affect the scaffold folding, an additional component is provided. The user can choose to manually input a DNA sequence, and the program will find the most stabilizing binding position in the scaffold. The binding energy is also calculated for the user's reference.

[[Image:GUI_6.png]]

The user can also ask the program to generate sticky-end sequences of a defined length. DNA sequences with binding energy higher than a defined limit are returned. The image below illustrates the output of sticky-end sequence generation.

[[Image:GUI_7.png]]

====Thermodynamic analysis====
The other component of the software supports sticky-end analysis, in which the thermodynamic values of a sequence are calculated. Users enter the sequence they want to analyze, together with the conditions in which they would test the DNA (total DNA strand concentration, Na+ concentration, and melting temperature). The thermodynamic values, including ΔG, ΔS, ΔH, and Tm, are provided in the results page.

[[Image:GUI_8new.png]]

==Results and Discussion==

==Team Information==

{| class="wikitable" border="1" cellpadding="5" cellspacing="0"
! Team Members !! Institution !! Mentor/Advisor !! Institution
|-
| Nguyen Chi Huan || NUS, Engineering Science || A/P Wang Zhisong, Mentor || NUS, Physics
|-
| Truong Nhat Quynh Thuyen || NUS, Life Sciences || Professor Chen Yu Zong, Co-Mentor || NUS, Pharmacy
|-
| Ho Quang Binh || NUS, Applied Chemistry || Hou Ruizheng, Graduate Mentor || NUS, Physics
|-
| Duong Van Quynh Thu || NUS, Life Sciences || Dr. Sarangapani, Sreelatha || NUS, Physics
|-
| || || Dr. Thomas Butler, Advisor || ASU
|-
| || || Han Dongran, Advisor || ASU
|}

'''Contact Info'''
*National University of Singapore
*21 Lower Kent Ridge Road
*Singapore, 119077

==References==
1. SantaLucia, J. and D. Hicks (2004). "The thermodynamics of DNA structural motifs." Annual Review of Biophysics and Biomolecular Structure 33(1): 415-440.

2. SantaLucia, J., H. T. Allawi, et al. (1996). "Improved nearest-neighbor parameters for predicting DNA duplex stability." Biochemistry 35(11): 3555-3562.

3. Breslauer, K. J., R. Frank, et al. (1986). "Predicting DNA duplex stability from the base sequence."
Proceedings of the National Academy of Sciences 83(11): 3746-3750.

4. Marky, L. A. and K. J. Breslauer (1982). "Calorimetric determination of base-stacking enthalpies in double-helical DNA molecules." Biopolymers 21(11): 2185-2194.

5. Sugimoto, N., S.-i. Nakano, et al. (1996). "Improved thermodynamic parameters and helix initiation factor to predict stability of DNA duplexes." Nucleic Acids Research 24(22): 4501-4505.
The application of DNA Origami for the molds of large scale nanoelectronic circuits and platforms for DNA machines definitely demand a software to reduce the tedious process of building the scaffold, determining the crossover positions. Motivation Since the introduction of DNA Origami to the field of biomolecular design, there has been some good software dedicated to the design of both two dimensional and three dimensional DNA Origami structures. Most of the software has GUI which allows users to define the DNA Origami structures by manually drawing the scaffold ways. This process may be too tedious for large and complex structures, which will inevitably be studied and exploited in the near future. Furthermore, we have noticed that there have been very few programs which are able to provide a full function of the addition of sticky ends, which certainly limit the DNA Origami technique’s fields of application. Furthermore, we found out that there has been a lack of fully documented instructions about how a program in DNA Origami design is developed. Such an instruction may be useful as a platform to further develop design tools in DNA Origami. As a result of these observations, we decided to develop a new program from the scratch to overcome these limits. Objectives The program developed in this projects has following main goals: 1. The program can provide all the possible scaffold ways for a given structures. A filter will be implemented to select the best choices. 2. The program will automatically determine the positions of crossover and allow the addition of sticky ends. 3. The program has a short function of computational chemistry: The Project's Description The Overview of DNAmazing Program DNAmazing program consists of three main modules: the Design, GUI, and Computational Chemistry toolkits. The programming language is C# and the integrated development environment (IDE)is Microsoft Visual Studio 2010. The Design is the backbone of the program which provides basic functions of a DNA Origami design tool: receiving information about the DNA Origami structures (shapes, sticky ends) and returning the necessary information of staple and scaffold sequences to synthesize the structures in labs. The Computational Chemistry toolkits (general description here) The GUI will provide users with basic interface with the program to (general description here) The Design Basic Dogmas of Design The Design of 2D DNA Origami in DNAmazing follows the principles which were laid out in Rothermund's first paper in 2006. The basic idea of DNA Origami is to fold a DNA helix into a desired shape. One strand of the DNA helix is a long and continuous DNA strand, called the scaffold strand; another strand consists of several short DNA fragments, the staple strands. The staple strands are together complementary to the scaffold to form the DNA helix. The formation of crossovers of staple stands keep the scaffold strand in the desired shape. For the purpose of design, the folded DNA helix is conceptually divided into several small helices which one helix is one turn of the folded helix. Each of these turns/helices is represented as one square in the program. Each square is given a number. The labeling is done from the left to right and from the bottom row to the top row. The non-integer number of bases pair per turn: 10.67 will be approximated as 11 base pairs. 
The DNA helix is folded by forming several crossovers in the staple strands; these crossovers indicate the positions where a staple strand switch to another helix located on a different row. These switching only occurs at locations where DNA twist places at its tangent point between helices which is apart by any odd number of half-turns. In this project, we will stick to 1.5 turns. Inputting parameters Recognizing the fact that the conventional input of existing programs may not be convenient for large and complex structures,DNAmazing adopts a very different way: a lithography-like way.Instead of drawing the scaffold way, which may be painful and even impossible for complicated designs,users will input the dimensions of a rectangle that encloses their desired structure. The dimensional units are the number of helices/squares per row and per column. The users will achieve their final desired shape by eliminate the unwanted squares. The elimination id done by inputting the number of the unwanted squares (null squares). In the above example, the desired DNA Origami shape is enclosed by a rectangular frame 6 squares x 6 squares. There are totally 8 null squares: 12,18,,24,30,17,23,29,35. Generation of scaffolding pathways One of the unique features of DNAmazing is its ability to automatically generate the scaffolding pathways. For the existing programs, users have to manually design how to fold a scaffold strand to the desired shapes. This progress may be tedious for complex structures such as smiling faces in Rothermund's paper. In DNAmazing, users only have to conceptualize the DNA Origami structures into series of squares which was described in the previous part. This is definitely more relaxing. Basically, the process of generation of scaffolding is to thread the scaffold strand to all the squares that each square is visited only once. This is very similar to the algorithm of the Hamiltonian circuit (or the Hamiltonian path). In graph theory, a Hamiltonian circuit is a path in an undirected graph that visits each vertex exactly one. Another example of Hamiltonian circuit is the problem of a business man to visit all the cities only once to deliver goods. Each normal square in DNA Origami is modeled as a vertex which can be linked to its 4 adjacent neighbors in four directions, but not diagonal neighbors. The null squares are isolated squares and there should not be any links to them. The scaffolding path starts with the first square and extend by adding one of 4 neighbors of the first square. A Hamiltonian circuit can be solved by exploring all the possible paths that satisfy the condition.The process is repeated until it can no longer extend because of there are not any possible choices or the path has passed through all squares. If the latter happens, the process is done and the scaffolding pathway is generated successfully. In the former cases, the program will take one step back and explore other choices. By using the algorithm of Hamiltonian circuit, DNAmazing is able to find all the possible scaffold ways. However, not all of these ways are reasonable for the DNA Origami. A filter must be included to select the paths which are suitable for DNA Origami. Below are some rules which we use in the filtering process: 1. The first square is either square 0 or the square at the middle of the first row 2. 
1. The first square is either square 0 or the square at the middle of the first row.
2. If the first square is square 0, the scaffolding pathway should run continuously and turn over to another row only at either of the two ends. If the first square is the middle of the first row, the last square in the scaffold path must be immediately to the right of the first square.
3. The scaffold should not run in the vertical direction.

The result of this stage is a 1D matrix containing the ordinal numbers of the squares that the scaffold passes through. For instance, the scaffold path in the figure above is represented as C = [2,1,0,6,7,8,14,13,19,20,26,25,31,32,33,34,28,27,21,22,16,15,9,10,11,5,4,3].

Determination of crossover positions

The next step in the Design module is the determination of crossover positions. Crossovers are places where the staples switch to another helix located on a different row. The crossovers are crucial to the folding of the scaffold strand; in fact, they are the only forces preventing the scaffold from unfolding into a higher-entropy (more disordered) and thus lower-ΔG state. The basic principle for determining crossover positions was laid out by Rothemund: the spacing between crossovers in 2D DNA Origami structures must be an odd number of half-turns. In other words, two vertically adjacent staples meet at their tangent points once every odd number of half-turns, so the staples are in the least strained state at the crossovers. In this project, we stick to 1.5 turns as the unit for the spacing of crossovers.

The algorithm for determining the crossover positions starts by creating an ArrayList, essentially a matrix with flexible dimensions, which we named PosCros. The PosCros ArrayList accumulates the squares that contain a crossover position. The first element of PosCros is always the first element in the scaffold path. Each following element is determined by the category of the previous element, where the categorization is based on the relative distance between that element and the closest turning point of the scaffold.

Generation of sticky-end sequences

Sticky ends, which serve as extra ends on staples, should not interfere with the scaffold folding during the formation of the DNA Origami. Sticky-end sequences must therefore not bind stably to any sequence in the scaffold. To generate them, DNA sequences of a defined length are generated randomly, and each newly generated sequence is examined for its ability to bind to the scaffold. Sequences that bind stably to any position in the scaffold are discarded; only those without stable scaffold binding are kept and can be used as sticky-end sequences. To decide whether a sticky end has any stabilizing binding to the scaffold, one needs the binding energy of the sticky end against every position in the scaffold, together with a threshold below which a binding is considered stable.

Calculation of binding energy

The given sticky-end sequence is mapped along the scaffold length and the binding energy (ΔG°) is calculated for each match/mismatch alignment. The calculation uses the formula and the complete thermodynamic database for internal single mismatches from SantaLucia's studies (1). [Table: nearest-neighbor ΔG° increments (kcal/mol) for internal single mismatches next to Watson-Crick pairs in 1 M NaCl.] For example, consider the total binding energy of a DNA duplex containing a single internal mismatch (the mismatched base pair is shown in bold in the original figure).
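A sketch of this screening step, under the assumption that the SantaLucia match/mismatch increments are supplied as a lookup delegate (all names here are ours, not the program's):

```csharp
using System;

public static class StickyEndScreening
{
    // Slide the sticky end along the scaffold and return the lowest
    // (most stabilizing) binding energy found, in kcal/mol.
    // nnDeltaG(topStep, bottomStep) is assumed to return the SantaLucia
    // match/mismatch increment for one dinucleotide step.
    public static double MostStableBinding(string stickyEnd, string scaffold,
                                           Func<string, string, double> nnDeltaG)
    {
        double best = double.PositiveInfinity;
        for (int offset = 0; offset + stickyEnd.Length <= scaffold.Length; offset++)
        {
            double dG = 0.0;
            for (int i = 0; i < stickyEnd.Length - 1; i++)
                dG += nnDeltaG(stickyEnd.Substring(i, 2),
                               scaffold.Substring(offset + i, 2));
            if (dG < best) best = dG;   // lower dG = more stable binding
        }
        return best;
    }

    // Rejection sampling: draw random candidates until one has no scaffold
    // binding more stable than the length-dependent threshold described below.
    public static string GenerateStickyEnd(int length, string scaffold,
        double threshold, Func<string, string, double> nnDeltaG, Random rng)
    {
        const string bases = "ACGT";
        while (true)
        {
            var chars = new char[length];
            for (int i = 0; i < length; i++) chars[i] = bases[rng.Next(4)];
            string candidate = new string(chars);
            if (MostStableBinding(candidate, scaffold, nnDeltaG) > threshold)
                return candidate;       // too-stable candidates are discarded
        }
    }
}
```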
Set up a threshold

To decide whether a mismatched complement between a sticky end and the scaffold is stable or unstable, a threshold binding energy (ΔG°) is required: any binding energy less than or equal to the threshold is considered stable. There cannot be one absolute threshold value for DNA sequences of every length, since longer DNA sequences require a lower ΔG° for a stable binding. The threshold is therefore a variable computed from the sequence length. Let n be the length of the DNA sequence. If n is even, the threshold is

$\Delta G^o_{threshold} = \frac{n}{2}(-0.88) + \left(\frac{n}{2}-1\right)(-0.58)$

and if n is odd,

$\Delta G^o_{threshold} = \frac{n-1}{2}(-0.88 - 0.58)$

In these equations, −0.58 kcal/mol is the binding energy of the 5'-TA-3'/3'-AT-5' step and −0.88 kcal/mol is that of the 5'-AT-3'/3'-TA-5' step, so the right-hand side equals the binding energy of a fully complementary alternating 5'-ATAT...-3'/3'-TATA...-5' duplex of the same length. In other words, no binding between the sticky end and the scaffold is allowed to be more stable than the least stable fully Watson-Crick-complementary DNA duplex of that length.

Prediction of the thermal stability of the duplex formed by a sticky end

DNAmazing also predicts the thermal stability of the short DNA duplex formed when a sticky end binds its complementary single-stranded partner. The ability to estimate thermal stability aids numerous applications, such as (i) predicting the stability of a local sequence on a DNA duplex, or of a probe-gene complex; (ii) calculating the melting temperatures of short sequences in hybridization experiments; and (iii) determining the optimal length of the probe oligomer needed to form stable duplexes with the sticky ends. Recently, the order-disorder transition of a sticky end with its complementary single strand has also become important in controlling the dynamic movement of nanomotors made from DNA strands (reference?).

Research has shown that the thermal stability of a duplex is affected by sequence information and base composition; the sequence of the DNA strand is, however, the major determinant of ΔH°, ΔS°, and ΔG°. We apply the nearest-neighbor (NN) method to determine the transition enthalpy, entropy, free energy, and melting point of short DNA duplexes. This method calculates those thermodynamic values from the stacking interactions between neighboring Watson-Crick base pairs in the DNA strands.

The DNAmazing program not only assists in randomly generating sticky ends attached to pre-determined positions on the DNA Origami, but also allows users to input their own preferred sticky-end sequences. Since different sequences have different thermal stability (represented by ΔH°, ΔS°, and ΔG°) upon binding, knowing those thermodynamic values is crucial to studying the function and applications of the sticky ends. Besides this, DNAmazing also helps to determine whether a user-supplied sticky-end sequence is complementary to the scaffold strand or to other staple strands.

Many groups have researched the NN method for determining ΔH°, ΔS°, ΔG°, and Tm of short DNA oligomers and have arrived at the same formula, demonstrated below. However, since different studies used different starting materials (short DNA oligomers, polymers, etc.), the values of individual parameters vary slightly. We have chosen the latest results obtained by J. SantaLucia et al. (1) to incorporate into our software.
$\Delta H^o = \Delta H^o_{ini} + \Delta H^o_{sym} + \Delta H^o_{AT\ term.} + \Sigma \Delta H^o_{stacking}$

where $\Delta H^o_{ini}$ is the helix initiation enthalpy of the transition process; $\Delta H^o_{sym}$ is a symmetry term that applies only to self-complementary duplexes, accounting for the enthalpy difference between a duplex formed from a self-complementary sequence and one formed from two complementary strands; $\Delta H^o_{AT\ term.}$ is applied for each end of the duplex that terminates in an AT pair, accounting for the end-fraying caused by AT base pairs; and $\Sigma \Delta H^o_{stacking}$ is the sum of the propagation-step enthalpies along the sequence. For example:

\begin{align} \Delta H^o (5'\text{-CGTTGA-}3') & = \Delta H^o_{ini} + \Delta H^o_{sym} + \Delta H^o_{AT\ term.} + \Sigma \Delta H^o_{stacking} \\ & = 0.2 + 0.0 + 2.2 + ( - 10.6 - 8.4 - 7.6 - 8.5 - 8.2) \\ & = -40.9 \text{ kcal/mol} \\ \end{align}

ΔS° and ΔG° are calculated with the same form of formula (1). There are 10 propagation steps, 1 initiation term, and 1 terminal-AT correction, making up the 12 NN parameters shown in Table 1. These values were obtained via multiple linear regression of differential scanning calorimetry (DSC) results for 108 short DNA sequences.

Table 1. Unified NN parameters in 1 M NaCl.

| Propagation step | ΔH° (kcal/mol) | ΔS° (e.u.) | ΔG° (kcal/mol) |
| --- | --- | --- | --- |
| AA/TT | -7.6 | -21.3 | -1.00 |
| AT/TA | -7.2 | -20.4 | -0.88 |
| TA/AT | -7.2 | -21.3 | -0.58 |
| CA/GT | -8.5 | -22.7 | -1.45 |
| GT/CA | -8.4 | -22.4 | -1.44 |
| CT/GA | -7.8 | -21.0 | -1.28 |
| GA/CT | -8.2 | -22.2 | -1.30 |
| CG/GC | -10.6 | -27.2 | -2.17 |
| GC/CG | -9.8 | -24.4 | -2.24 |
| GG/CC | -8.0 | -19.9 | -1.84 |
| Initiation | +0.2 | -5.7 | +1.96 |
| Terminal AT penalty | +2.2 | +6.9 | +0.05 |
| Symmetry correction | 0.0 | -1.4 | +0.43 |

The melting point of a short DNA duplex, defined as the temperature at which half of the double-stranded DNA sequences have dissociated, is calculated as

$T_m = \frac{\Delta H^o \times 1000} {\Delta S^o + R \ln( C_t/x )} - 273.15$

where $C_t$ is the total molar strand concentration and R is the gas constant (1.987 cal K⁻¹ mol⁻¹). For non-self-complementary duplexes x = 4, and for self-complementary duplexes x = 1.

The NN method is only an approximation: it neglects secondary interactions within the DNA duplexes (the duplexes are assumed to undergo a two-state transition) and takes the heat capacity Cp to be constant over temperature. To reduce the resulting inaccuracy, short DNA oligomers (fewer than 30 base pairs) were used, minimizing secondary interactions within the DNA molecule.

Sodium dependence of ΔS° and ΔG°

The entropy and free energy calculated from formula (1) above apply at 37 °C in 1 M NaCl. To extend the results to other salt conditions, the following correction formulae have been derived (***):

$\Delta S^o [Na^+] = \Delta S^o [1M\ NaCl] + 0.368 \times (N/2) \times \ln[Na^+]$

$\Delta G^o [Na^+] = \Delta G^o [1M\ NaCl] - 0.114 \times (N/2) \times \ln[Na^+]$

where N is the total number of phosphates in the duplex and [Na+] is the total concentration of monovalent cations (Na+, K+, NH4+) in the solution. ΔH° is assumed to be sodium-independent. To calculate ΔG° at a temperature other than 37 °C, the equation

$\Delta G^o = \Delta H^o - T\Delta S^o$

is used, in which T is in kelvin, ΔH° is in cal/mol, and ΔS° is in entropy units (e.u.). ΔH° and ΔS° are assumed to be independent of temperature.
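A compact sketch of this bookkeeping follows (ΔH° only; the ΔS° table follows the same pattern, and all names are illustrative rather than DNAmazing's actual code):

```csharp
using System;
using System.Collections.Generic;

public static class DuplexThermodynamics
{
    // dH propagation increments (kcal/mol), keyed by the 5'->3' top-strand
    // dinucleotide; complementary steps share a value (e.g. TT = AA/TT).
    private static readonly Dictionary<string, double> DeltaH =
        new Dictionary<string, double>
        {
            { "AA", -7.6 }, { "TT", -7.6 }, { "AT", -7.2 }, { "TA", -7.2 },
            { "CA", -8.5 }, { "TG", -8.5 }, { "GT", -8.4 }, { "AC", -8.4 },
            { "CT", -7.8 }, { "AG", -7.8 }, { "GA", -8.2 }, { "TC", -8.2 },
            { "CG", -10.6 }, { "GC", -9.8 }, { "GG", -8.0 }, { "CC", -8.0 }
        };

    // Total dH for a non-self-complementary duplex (kcal/mol):
    // initiation + terminal AT penalties + sum of propagation steps.
    public static double DuplexDeltaH(string seq)
    {
        double h = 0.2;                                          // initiation
        if (seq[0] == 'A' || seq[0] == 'T') h += 2.2;            // 5' terminal AT
        if (seq[seq.Length - 1] == 'A' || seq[seq.Length - 1] == 'T')
            h += 2.2;                                            // 3' terminal AT
        for (int i = 0; i < seq.Length - 1; i++)
            h += DeltaH[seq.Substring(i, 2)];                    // propagation
        return h;   // e.g. DuplexDeltaH("CGTTGA") = -40.9, as in the example
    }

    // Tm in degrees Celsius from dH (kcal/mol), dS (e.u.) and total strand
    // molarity Ct; x = 4 for non-self-complementary duplexes, 1 otherwise.
    public static double MeltingPointCelsius(double deltaH, double deltaS,
                                             double ct, bool selfComplementary)
    {
        const double R = 1.987;                    // gas constant, cal/(K mol)
        double x = selfComplementary ? 1.0 : 4.0;
        return deltaH * 1000.0 / (deltaS + R * Math.Log(ct / x)) - 273.15;
    }

    // Salt correction of dS; n is the total number of phosphates.
    public static double SaltCorrectedDeltaS(double deltaS1M, int n, double na)
    {
        return deltaS1M + 0.368 * (n / 2.0) * Math.Log(na);
    }
}
```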
The User Interface (GUI)

The graphical user interface (GUI) creates a friendly environment in which users can construct their DNA Origami designs. Our GUI is built as a Windows Forms application in Visual Studio 2010. The software has three main components, supporting DNA Origami design with sticky-end addition and the thermodynamic analysis of sticky ends. The code sources are provided in the attachments.

Generate DNAO

The first component generates the staple sequences used for the correct folding of a DNA Origami with sticky ends. Users define the size and shape of the structure they want to design by first inputting the frame size and then choosing the null squares (the locations which will not be occupied by the scaffold); this lets the program understand the DNA Origami design. After obtaining the required parameters, the program generates the possible scaffold paths and asks the user to choose the one of interest. Users can also add sticky ends by entering the number of sticky ends they need and specifying the sequence and location of each sticky end on the scaffold. The final staple sequences are generated and appear in the result window.

Generate sticky end sequence

A second component supports the generation of sticky ends and ensures that a sticky end will not affect the scaffold folding. The user can manually input a DNA sequence, and the program will find the most stabilizing binding position in the scaffold; the binding energy is also calculated for the user's reference. Alternatively, the user can ask the program to generate sticky-end sequences of a defined length, and DNA sequences whose binding energy is above the defined limit are returned. [Image: sample output of sticky-end sequence generation.]

Thermodynamic analysis

The third component supports sticky-end analysis by computing the thermodynamic values of a sequence. Users enter the sequence they want to analyze, together with the conditions under which they would test the DNA (total DNA strand concentration, Na+ concentration, and melting temperatures). The thermodynamic values ΔG°, ΔS°, ΔH°, and Tm are provided on the results page.

Team Information

- Nguyen Chi Huan (NUS, Engineering Science)
- Truong Nhat Quynh Thuyen (NUS, Life Sciences)
- Ho Quang Binh (NUS, Applied Chemistry)
- Duong Van Quynh Thu (NUS, Life Sciences)
- A/P Wang Zhisong, Mentor (NUS, Physics)
- Professor Chen Yu Zong, Co-Mentor (NUS, Pharmacy)
- Hou Ruizheng, Graduate Mentor (NUS, Physics)
- Dr. Sreelatha Sarangapani (NUS, Physics)

Contact Info

• National University of Singapore
• 21 Lower Kent Ridge Road
• Singapore, 119077

References

1. SantaLucia, J. and Hicks, D. (2004). "The thermodynamics of DNA structural motifs." Annual Review of Biophysics and Biomolecular Structure 33(1): 415-440.
2. SantaLucia, J., Allawi, H. T., et al. (1996). "Improved nearest-neighbor parameters for predicting DNA duplex stability." Biochemistry 35(11): 3555-3562.
3. Breslauer, K. J., Frank, R., et al. (1986). "Predicting DNA duplex stability from the base sequence." Proceedings of the National Academy of Sciences 83(11): 3746-3750.
4. Marky, L. A. and Breslauer, K. J. (1982). "Calorimetric determination of base-stacking enthalpies in double-helical DNA molecules." Biopolymers 21(11): 2185-2194.
5. Sugimoto, N., Nakano, S., et al. (1996). "Improved thermodynamic parameters and helix initiation factor to predict stability of DNA duplexes." Nucleic Acids Research 24(22): 4501-4505.
https://mathshistory.st-andrews.ac.uk/Biographies/Von_Neumann/
# John von Neumann

### Quick Info

Born: 28 December 1903, Budapest, Hungary
Died: 8 February 1957, Washington D.C., USA

Summary: John von Neumann built a solid framework for quantum mechanics. He also worked in game theory, studied what are now called von Neumann algebras, and was one of the pioneers of computer science.

### Biography

John von Neumann was born János von Neumann. He was called Jancsi as a child, a diminutive form of János, then later he was called Johnny in the United States. His father, Max Neumann, was a top banker and he was brought up in an extended family, living in Budapest where as a child he learnt languages from the German and French governesses that were employed. Although the family were Jewish, Max Neumann did not observe the strict practices of that religion and the household seemed to mix Jewish and Christian traditions.

It is also worth explaining how Max Neumann's son acquired the "von" to become János von Neumann. Max Neumann was eligible to apply for a hereditary title because of his contribution to the then successful Hungarian economy and in 1913 he paid a fee to acquire a title, but he did not change his name. His son, however, used the German form von Neumann where the "von" indicated the title.

As a child von Neumann showed he had an incredible memory. Poundstone, in [8], writes:-

At the age of six, he was able to exchange jokes with his father in classical Greek. The Neumann family sometimes entertained guests with demonstrations of Johnny's ability to memorise phone books. A guest would select a page and column of the phone book at random. Young Johnny read the column over a few times, then handed the book back to the guest. He could answer any question put to him (who has number such and such?) or recite names, addresses, and numbers in order.

In 1911 von Neumann entered the Lutheran Gymnasium. The school had a strong academic tradition which seemed to count for more than the religious affiliation both in the Neumanns' eyes and in those of the school. His mathematics teacher quickly recognised von Neumann's genius and special tuition was put on for him. The school had another outstanding mathematician one year ahead of von Neumann, namely Eugene Wigner.

World War I had relatively little effect on von Neumann's education but, after the war ended, Béla Kun controlled Hungary for five months in 1919 with a Communist government. The Neumann family fled to Austria as the affluent came under attack. However, after a month, they returned to face the problems of Budapest. When Kun's government failed, the fact that it had been largely composed of Jews meant that Jewish people were blamed. Such situations are devoid of logic and the fact that the Neumanns were opposed to Kun's government did not save them from persecution.

In 1921 von Neumann completed his education at the Lutheran Gymnasium. His first mathematics paper, written jointly with Fekete, the assistant at the University of Budapest who had been tutoring him, was published in 1922. However Max Neumann did not want his son to take up a subject that would not bring him wealth. Max Neumann asked Theodore von Kármán to speak to his son and persuade him to follow a career in business. Perhaps von Kármán was the wrong person to ask to undertake such a task but in the end all agreed on the compromise subject of chemistry for von Neumann's university studies.
Hungary was not an easy country for those of Jewish descent for many reasons and there was a strict limit on the number of Jewish students who could enter the University of Budapest. Of course, even with a strict quota, von Neumann's record easily won him a place to study mathematics in 1921, but he did not attend lectures. Instead, he simultaneously entered the University of Berlin in 1921 to study chemistry.

Von Neumann studied chemistry at the University of Berlin until 1923 when he went to Zürich. He achieved outstanding results in the mathematics examinations at the University of Budapest despite not attending any courses. Von Neumann received his diploma in chemical engineering from the Technische Hochschule in Zürich in 1926. While in Zürich he continued his interest in mathematics, despite studying chemistry, and interacted with Weyl and Pólya who were both at Zürich. He even took over one of Weyl's courses when Weyl was absent from Zürich for a time. Pólya said [18]:-

Johnny was the only student I was ever afraid of. If in the course of a lecture I stated an unsolved problem, the chances were he'd come to me as soon as the lecture was over, with the complete solution in a few scribbles on a slip of paper.

Von Neumann received his doctorate in mathematics from the University of Budapest, also in 1926, with a thesis on set theory. He published a definition of ordinal numbers when he was 20; the definition is the one used today.

Von Neumann lectured at Berlin from 1926 to 1929 and at Hamburg from 1929 to 1930. However he also held a Rockefeller Fellowship to enable him to undertake postdoctoral studies at the University of Göttingen. He studied under Hilbert at Göttingen during 1926-27. By this time von Neumann had achieved celebrity status [8]:-

By his mid-twenties, von Neumann's fame had spread worldwide in the mathematical community. At academic conferences, he would find himself pointed out as a young genius.

Veblen invited von Neumann to Princeton to lecture on quantum theory in 1929. Replying to Veblen that he would come after attending to some personal matters, von Neumann went to Budapest where he married his fiancée Marietta Kovesi before setting out for the United States. In 1930 von Neumann became a visiting lecturer at Princeton University, being appointed professor there in 1931. Between 1930 and 1933 von Neumann taught at Princeton but this was not one of his strong points [8]:-

His fluid line of thought was difficult for those less gifted to follow. He was notorious for dashing out equations on a small portion of the available blackboard and erasing expressions before students could copy them.

In contrast, however, he had an ability to explain complicated ideas in physics [3]:-

For a man to whom complicated mathematics presented no difficulty, he could explain his conclusions to the uninitiated with amazing lucidity. After a talk with him one always came away with a feeling that the problem was really simple and transparent.

He became one of the original six mathematics professors (J W Alexander, A Einstein, M Morse, O Veblen, J von Neumann and H Weyl) in 1933 at the newly founded Institute for Advanced Study in Princeton, a position he kept for the remainder of his life. During the first years that he was in the United States, von Neumann continued to return to Europe during the summers. Until 1933 he still held academic posts in Germany but resigned these when the Nazis came to power.
Unlike many others, von Neumann was not a political refugee; rather, he went to the United States mainly because he thought that the prospect of academic positions there was better than in Germany. In 1933 von Neumann became co-editor of the Annals of Mathematics and, two years later, he became co-editor of Compositio Mathematica. He held both these editorships until his death.

Von Neumann and Marietta had a daughter Marina in 1935 but their marriage ended in divorce in 1937. The following year he married Klára Dán, also from Budapest, whom he met on one of his European visits. After marrying, they sailed to the United States and made their home in Princeton. There von Neumann lived a rather unusual lifestyle for a top mathematician. He had always enjoyed parties [8]:-

Parties and nightlife held a special appeal for von Neumann. While teaching in Germany, von Neumann had been a denizen of the Cabaret-era Berlin nightlife circuit.

Now married to Klára, he found the parties continued [18]:-

The parties at the von Neumanns' house were frequent, and famous, and long.

Ulam summarises von Neumann's work in [35]. He writes:-

In his youthful work, he was concerned not only with mathematical logic and the axiomatics of set theory, but, simultaneously, with the substance of set theory itself, obtaining interesting results in measure theory and the theory of real variables. It was in this period also that he began his classical work on quantum theory, the mathematical foundation of the theory of measurement in quantum theory and the new statistical mechanics.

His text Mathematische Grundlagen der Quantenmechanik (1932) built a solid framework for the new quantum mechanics. Van Hove writes in [36]:-

Quantum mechanics was very fortunate indeed to attract, in the very first years after its discovery in 1925, the interest of a mathematical genius of von Neumann's stature. As a result, the mathematical framework of the theory was developed and the formal aspects of its entirely novel rules of interpretation were analysed by one single man in two years (1927-1929).

Self-adjoint algebras of bounded linear operators on a Hilbert space, closed in the weak operator topology, were introduced in 1929 by von Neumann in a paper in Mathematische Annalen. Kadison explains in [22]:-

His interest in ergodic theory, group representations and quantum mechanics contributed significantly to von Neumann's realisation that a theory of operator algebras was the next important stage in the development of this area of mathematics.

Such operator algebras were called "rings of operators" by von Neumann and later they were called $W^{*}$-algebras by some other mathematicians. J Dixmier, in 1957, called them "von Neumann algebras" in his monograph Algebras of operators in Hilbert space (von Neumann algebras). In the second half of the 1930s and the early 1940s von Neumann, working with his collaborator F J Murray, laid the foundations for the study of von Neumann algebras in a fundamental series of papers.

However von Neumann is known for the wide variety of his scientific studies. Ulam explains [35] how he was led towards game theory:-

Von Neumann's awareness of results obtained by other mathematicians and the inherent possibilities which they offer is astonishing. Early in his work, a paper by Borel on the minimax property led him to develop ... ideas which culminated later in one of his most original creations, the theory of games.

In game theory von Neumann proved the minimax theorem.
He gradually expanded his work in game theory, and with co-author Oskar Morgenstern, he wrote the classic text Theory of Games and Economic Behaviour (1944). Ulam continues in [35]:-

An idea of Koopman on the possibilities of treating problems of classical mechanics by means of operators on a function space stimulated him to give the first mathematically rigorous proof of an ergodic theorem. Haar's construction of measure in groups provided the inspiration for his wonderful partial solution of Hilbert's fifth problem, in which he proved the possibility of introducing analytical parameters in compact groups.

In 1938 the American Mathematical Society awarded the Bôcher Prize to John von Neumann for his memoir Almost periodic functions and groups. This was published in two parts in the Transactions of the American Mathematical Society, the first part in 1934 and the second part in the following year. Around this time von Neumann turned to applied mathematics [35]:-

In the middle 30's, Johnny was fascinated by the problem of hydrodynamical turbulence. It was then that he became aware of the mysteries underlying the subject of non-linear partial differential equations. His work, from the beginnings of the Second World War, concerns a study of the equations of hydrodynamics and the theory of shocks. The phenomena described by these non-linear equations are baffling analytically and defy even qualitative insight by present methods. Numerical work seemed to him the most promising way to obtain a feeling for the behaviour of such systems. This impelled him to study new possibilities of computation on electronic machines ...

Von Neumann was one of the pioneers of computer science making significant contributions to the development of logical design. Shannon writes in [29]:-

Von Neumann spent a considerable part of the last few years of his life working in [automata theory]. It represented for him a synthesis of his early interest in logic and proof theory and his later work, during World War II and after, on large scale electronic computers. Involving a mixture of pure and applied mathematics as well as other sciences, automata theory was an ideal field for von Neumann's wide-ranging intellect. He brought to it many new insights and opened up at least two new directions of research.

He advanced the theory of cellular automata, advocated the adoption of the bit as a measurement of computer memory, and solved problems in obtaining reliable answers from unreliable computer components.

During and after World War II, von Neumann served as a consultant to the armed forces. His valuable contributions included a proposal of the implosion method for bringing nuclear fuel to explosion and his participation in the development of the hydrogen bomb. From 1940 he was a member of the Scientific Advisory Committee at the Ballistic Research Laboratories at the Aberdeen Proving Ground in Maryland. He was a member of the Navy Bureau of Ordnance from 1941 to 1955, and a consultant to the Los Alamos Scientific Laboratory from 1943 to 1955. From 1950 to 1955 he was a member of the Armed Forces Special Weapons Project in Washington, D.C. In 1955 President Eisenhower appointed him to the Atomic Energy Commission, and in 1956 he received its Enrico Fermi Award, knowing that he was incurably ill with cancer.

Eugene Wigner wrote of von Neumann's death [18]:-

When von Neumann realised he was incurably ill, his logic forced him to realise that he would cease to exist, and hence cease to have thoughts ...
It was heartbreaking to watch the frustration of his mind, when all hope was gone, in its struggle with the fate which appeared to him unavoidable but unacceptable.

In [5] von Neumann's death is described in these terms:-

... his mind, the amulet on which he had always been able to rely, was becoming less dependable. Then came complete psychological breakdown; panic, screams of uncontrollable terror every night. His friend Edward Teller said, "I think that von Neumann suffered more when his mind would no longer function, than I have ever seen any human being suffer." Von Neumann's sense of invulnerability, or simply the desire to live, was struggling with unalterable facts. He seemed to have a great fear of death until the last... No achievements and no amount of influence could save him now, as they always had in the past. Johnny von Neumann, who knew how to live so fully, did not know how to die.

It would be almost impossible to give even an idea of the range of honours which were given to von Neumann. He was Colloquium Lecturer of the American Mathematical Society in 1937 and received its Bôcher Prize as mentioned above. He held the Gibbs Lectureship of the American Mathematical Society in 1947 and was President of the Society in 1951-53. He was elected to many academies including the Academia Nacional de Ciencias Exactas (Lima, Peru), Academia Nazionale dei Lincei (Rome, Italy), American Academy of Arts and Sciences (USA), American Philosophical Society (USA), Instituto Lombardo di Scienze e Lettere (Milan, Italy), National Academy of Sciences (USA) and Royal Netherlands Academy of Sciences and Letters (Amsterdam, The Netherlands). Von Neumann received two Presidential Awards, the Medal for Merit in 1947 and the Medal for Freedom in 1956. Also in 1956 he received the Albert Einstein Commemorative Award and the Enrico Fermi Award mentioned above. Peierls writes [3]:-

He was the antithesis of the "long-haired" mathematics don. Always well groomed, he had as lively views on international politics and practical affairs as on mathematical problems.

### References

1. J Dieudonné, Biography in Dictionary of Scientific Biography (New York 1970-1990).
2. Biography in Encyclopaedia Britannica. http://www.britannica.com/biography/John-von-Neumann
4. W Aspray, John von Neumann and the origins of modern computing (Cambridge, MA, 1990).
5. S J Heims, John von Neumann and Norbert Wiener: From mathematics to the technologies of life and death (Cambridge, MA, 1980).
6. T Legendi and T Szentivanyi (eds.), Leben und Werk von John von Neumann (Mannheim, 1983).
7. N Macrae, John von Neumann (New York, 1992).
8. W Poundstone, Prisoner's dilemma (Oxford, 1993).
9. N A Vonneuman, John von Neumann: as seen by his brother (Meadowbrook, PA, 1987).
10. A Adám, John von Neumann, in Life and work of John von Neumann (Mannheim, 1983), 11-43.
11. H Araki, Some of the legacy of John von Neumann in physics: theory of measurement, quantum logic, and von Neumann algebras in physics, in The legacy of John von Neumann (Providence, RI, 1990), 119-136.
12. W Aspray, The mathematical reception of the modern computer: John von Neumann and the Institute for Advanced Study computer, in Studies in the history of mathematics (Washington, DC, 1987), 166-194.
13. W Aspray, John von Neumann's contributions to computing and computer science, Ann. Hist. Comput. 11 (3) (1989), 189-195.
14. W Aspray, The transformation of numerical analysis by the computer: an example from the work of John von Neumann, in The history of modern mathematics II (Boston, MA, 1989).
15. W Aspray, The origins of John von Neumann's theory of automata, in The legacy of John von Neumann (Providence, RI, 1990), 289-309.
16. H Baumgärtel, John von Neumann, aus Leben und Werk, Mitt. Math. Ges. DDR 3-4 (1982), 87-98.
17. G Birkhoff, Von Neumann and lattice theory, Bull. Amer. Math. Soc. 64 (1958), 50-56.
18. P R Halmos, The legend of John von Neumann, Amer. Math. Monthly 80 (1973), 382-394.
19. P R Halmos, Von Neumann on measure and ergodic theory, Bull. Amer. Math. Soc. 64 (1958), 86-94.
20. I Halperin, The extraordinary inspiration of John von Neumann, in The legacy of John von Neumann (Providence, RI, 1990), 15-17.
21. T A Heppenheimer, How von Neumann showed the way, American Heritage of Invention and Technology 6 (2) (1990), 8-16.
22. R V Kadison, Theory of operators, Part II. Operator algebras, Bull. Amer. Math. Soc. 64 (1958), 61-85.
23. H W Kuhn and A W Tucker, John von Neumann's work in the theory of games and mathematical economics, Bull. Amer. Math. Soc. 64 (1958), 100-122.
24. P D Lax, Remembering John von Neumann, in The legacy of John von Neumann (Providence, RI, 1990), 5-7.
25. F J Murray, Theory of operators, Part I. Single operators, Bull. Amer. Math. Soc. 64 (1958), 57-60.
26. L Rédei, John von Neumann's work in algebra and number theory (Hungarian), Mat. Lapok 10 (1959), 226-230.
27. U Rellstab, New insights into the collaboration between John von Neumann and Oskar Morgenstern on the Theory of games and economic behavior, in Toward a history of game theory (Durham, NC, 1992), 77-93.
28. G Révész, John von Neumann und der Rechner, in Life and work of John von Neumann (Mannheim, 1983), 99-113.
29. C E Shannon, Von Neumann's contributions to automata theory, Bull. Amer. Math. Soc. 64 (1958), 123-129.
30. F Smithies, John von Neumann, J. London Math. Soc. 34 (1959), 373-384.
31. N Stern, John von Neumann's influence on electronic digital computing, 1944-1946, Ann. Hist. Comput. 2 (4) (1980), 349-362.
32. J Szelezsán, John von Neumann, 'der Rechentechniker' (sein Werk auf dem Gebiet der numerischen Methoden), in Life and work of John von Neumann (Mannheim, 1983), 115-151.
33. J Todd, John von Neumann and the national accounting machine, SIAM Rev. 16 (1974), 526-530.
34. C B Tompkins, John von Neumann, 1903-1957, Math. Tables Aids Comput. 11 (1957), 127-128.
35. S Ulam, John von Neumann, 1903-1957, Bull. Amer. Math. Soc. 64 (1958), 1-49.
36. L van Hove, Von Neumann's contributions to quantum theory, Bull. Amer. Math. Soc. 64 (1958), 95-99.
37. N Vonneuman, The philosophical legacy of John von Neumann, in the light of its inception and evolution in his formative years, in The legacy of John von Neumann (Providence, RI, 1990), 19-24.
38. M v N Whitman, John von Neumann: a personal view, in The legacy of John von Neumann (Providence, RI, 1990), 1-4.
39. O P Yee, On the life and work of John von Neumann, Menemui Mat. 4 (3) (1982), 114-127.

Written by J J O'Connor and E F Robertson
Last Update October 2003
https://beta.akmos.com.br/sarei/postulates-of-planck%27s-quantum-theory-in-points-bb7621
# postulates of planck's quantum theory in points

Max Planck, in full Max Karl Ernst Ludwig Planck (born April 23, 1858, in Germany; died October 4, 1947), was the German theoretical physicist who originated quantum theory, for which he won the Nobel Prize for Physics in 1918; he is considered the father of the quantum theory. Planck came up with the idea while attempting to explain black-body radiation, and the work provided the foundation for his quantum theory. Until the late 19th century, Newtonian physics dominated the scientific worldview; by the early 20th century, however, physicists discovered that the laws of classical mechanics are not applicable at the atomic scale, and experiments such as the photoelectric effect completely contradicted the laws of classical physics. As a result of these observations, physicists articulated the set of theories now known as quantum mechanics, in which energy is not continuous but comes in small, discrete units.

From the wave perspective, all forms of electromagnetic (EM) radiation may be described in terms of their wavelength and frequency. Wavelength is the distance from one wave peak to the next, which can be measured in meters; frequency is the number of waves that pass by a given point each second. EM radiation is, in fact, a form of energy with both wave-like and particle-like properties: the waves cause the wave-like behaviour when particles go through slits, while the photoelectric effect (the emission of electrons from the surface of a material following the absorption of electromagnetic radiation) reveals the particle side. Replacing a light source with one of twice the intensity and half the frequency does not produce the same outcome, contrary to what would be expected if light acted strictly as a wave. In fact, electromagnetic radiation is itself quantized, coming in packets known as photons with energy $E = h\nu$, where $h = 6.626 \times 10^{-34}$ J s is Planck's constant.
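As a quick illustration of the size of these quanta (the frequency chosen here is merely a representative value for visible light):

$E = h\nu = (6.626 \times 10^{-34}\ \text{J s}) \times (6 \times 10^{14}\ \text{s}^{-1}) \approx 4.0 \times 10^{-19}\ \text{J}$

so an ordinary 1 W lamp emits on the order of $10^{18}$ photons per second, which is why the graininess of light is imperceptible at everyday scales.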
Planck's quantum theory of radiation

When a black body is heated, it emits thermal radiation of many different wavelengths or frequencies. To explain these facts, Max Planck (1901) put forward what is now called Planck's quantum theory of radiation. Planck found that the vibrational energy of atoms in a solid is not continuous but has only discrete (distinct) values: he postulated that the energy of the oscillators in a black body is quantized, $E = nh\nu$, where $n = 1, 2, 3, \ldots$, $h$ is Planck's constant and $\nu$ is the frequency, and he used this postulate in his derivation of the Planck law of black-body radiation, which gives the spectral radiance as $B_\nu(T) = \frac{2h\nu^3}{c^2}\,\frac{1}{e^{h\nu/k_BT} - 1}$. Planck further assumed that when an oscillator changes from a state of energy $E_1$ to a state of lower energy $E_2$, the discrete amount of energy $E_1 - E_2$, or quantum of radiation, is equal to the product of the frequency of the radiation $\nu$ and a constant $h$, now called Planck's constant, which he determined from black-body radiation data: $E_1 - E_2 = h\nu$.

Main postulates of Planck's quantum theory:

1. Substances radiate or absorb energy discontinuously, in the form of small packets or bundles of energy called quanta; energy is not emitted or absorbed continuously, and radiant energy is quantized.
2. The smallest packet of energy is called a quantum. In the case of light, the quantum of energy is often called a photon.
3. The energy of a quantum is directly proportional to the frequency of the radiation: $E = h\nu$, where $h$ has the value $6.626 \times 10^{-34}$ J s ($6.626 \times 10^{-27}$ erg s). Using this equation, one can determine the energy of light of a given frequency and vice versa.
4. A body can radiate or absorb energy only in whole-number multiples of a quantum: $h\nu, 2h\nu, 3h\nu, \ldots, nh\nu$. Energy less than one quantum can never be emitted or absorbed.

The photoelectric effect. Planck himself cautiously insisted that this quantization was simply an aspect of the processes of absorption and emission of radiation and had nothing to do with the physical reality of the radiation itself. In 1905, however, Albert Einstein reinterpreted Planck's quantum hypothesis and used it to explain the photoelectric effect, in which shining light on certain materials ejects electrons from the material. For each metal there is a minimum threshold frequency of EM radiation at which the effect occurs. The effect could not be rationalized by the existing wave theory of light (given by C. Huygens), since an increase in the intensity of the light did not lead to the same outcome as an increase in the energy of the light: if light acted only as a wave, its effect should be cumulative, adding up little by little until electrons were emitted, whereas in reality there is a clear-cut minimum frequency of light that triggers electron ejection. In some ways quantum mechanics completely changed the way physicists viewed the universe, marking the end of the idea of a clockwork, predictable universe. The modern formulation of quantum theory rests primarily on the ideas of Erwin Schrödinger, Werner Heisenberg and P.A.M. Dirac; among its basic postulates, those dealing with the measurement process are expressed in terms of probabilities, since most measurements involve not a single particle but very large numbers of them. Quantum mechanics also underlies later theories of chemical bonding such as the Valence Bond Theory, which explains bonding through the overlap of the atomic orbitals of the participating atoms during the formation of a molecule.

Atomic spectra. When an electric current is passed through a gas, some of the electrons in the gas molecules move from their ground energy state to an excited state farther from their nuclei. When the electrons return to the ground state, they emit energy of various wavelengths. A prism can be used to separate the wavelengths, making them easy to identify; if light acted only as a wave, the prism would produce a continuous rainbow, but instead there are discrete lines, because electrons release specific wavelengths of light when moving from an excited state back to the ground state.

Electromagnetic waves. In 1856 James Clerk Maxwell stated that light, X-rays, γ-rays, heat, etc. emit energy continuously in the form of radiation or waves, the energy being called radiant energy. These waves are associated with electric and magnetic fields and are therefore known as electromagnetic waves (or radiations). A few of their important characteristics:

(i) They emit energy continuously in the form of radiation or waves.
(ii) The radiation consists of electric and magnetic fields which oscillate perpendicular to each other and perpendicular to the direction in which the radiation propagates.
(iii) All electromagnetic waves travel with the velocity of light.
(iv) These rays do not require any medium for propagation.

Postulates of Bohr's atomic model. Bohr applied quantum theory to the energy of an electron bound to a nucleus, developing a model for the hydrogen atom and hydrogen-like one-electron species (hydrogenic species). According to Rutherford's earlier model, an atom has a central nucleus with electrons revolving around it like the sun-planet system; the fundamental difference is that, while the planetary system is held in place by the gravitational force, the nucleus-electron system is held together by the electrostatic force of attraction. Bohr's postulates are:

1. An atom consists of a dense stationary nucleus situated at the centre, with the electron revolving around it in circular orbits without emitting any energy. As long as an electron revolves in such an orbit it neither loses nor gains energy; hence these orbits are called stationary states. Each stationary state is associated with a definite amount of energy and is also known as an energy level. The levels are numbered 1, 2, 3, 4, ... (from the nucleus outwards) or K, L, M, N, etc. The greater the distance of the energy level from the nucleus, the greater the energy associated with it.
2. Of the possible circular orbits around the nucleus, an electron can revolve only in those whose angular momentum $mvr$ is an integral multiple of $h/2\pi$: $mvr = \frac{nh}{2\pi}$, where $m$ is the mass of the electron, $v$ its velocity, $n$ the orbit number in which the electron is present, and $r$ the radius of the orbit.
3. Ordinarily, an electron continues to move in a particular stationary state without losing energy. Energy is absorbed or emitted only when the electron jumps between levels: if energy is supplied, the electron may jump (excite) instantaneously from a lower level (say 1) to a higher level (say 2, 3, 4, etc.) by absorbing one quantum, and the quantum of energy absorbed equals the difference in energies of the two concerned levels, $\Delta E = E_2 - E_1 = h\nu$. The new state of the electron is called an excited state; since it is less stable, the atom loses this energy and returns to the ground (normal) state.

Radius and energy levels of the hydrogen atom. Consider an electron of mass $m$ and charge $e$ revolving with tangential velocity $v$ in an orbit of radius $r$ around a nucleus of charge $Ze$ (where $Z$ is the atomic number). By Coulomb's law, the electrostatic force of attraction between the moving electron and the nucleus is $kZe^2/r^2$, and for a stable electron orbit it balances the centrifugal force $mv^2/r$ of the moving electron. Combining this force balance with the angular-momentum postulate gives

$r_n = \frac{n^2 h^2}{4\pi^2 m k Z e^2}$

so only certain orbits, whose radii are given by this equation and grow as $n^2$, are available to the electron. Adding the kinetic energy of the electron, $\frac{1}{2}mv^2$, to its potential energy gives the total energy

$E_n = -\frac{2\pi^2 m k^2 Z^2 e^4}{n^2 h^2}$

This expression shows that only certain energies are allowed to the electron. Since it consists of so many fundamental constants, the following simplified expressions are convenient: $r_n = 0.529 \times \frac{n^2}{Z}$ Å, so that the radius of the smallest orbit ($n = 1$) for the hydrogen atom ($Z = 1$) is $r_0 = 0.529$ Å; and $E_n = -13.6 \times \frac{Z^2}{n^2}$ eV. The energies are negative since the energy of the electron in the atom is less than the energy of a free electron (one at an infinite distance from the nucleus), which is taken as zero; when $n = \infty$, $E = 0$, corresponding to an ionized atom in which the electron and nucleus are infinitely separated. (Note: if the energy supplied to a hydrogen atom is less than 13.6 eV, it will accept or absorb only those quanta which take it exactly to a particular higher energy level, and photons with any other energy will not be absorbed; but if the energy supplied is more than 13.6 eV, the photon is absorbed, the atom is ionised, and the excess energy appears as kinetic energy of the emitted photoelectron.)
There should be a continuous rainbow created by different wavelengths the more is the distance the... Not continuous but has only discrete ( distinct ) values continues to move in a particular state! Do not require any medium for propagation of postulate 6, which can be as. Shows that only certain orbits whose radii are given by max Planck in 1901 studied the of... Atomic scale for explaining the continuous spectrum of light radiation from heated.... Or lost by an atom has a central nucleus and an electron is called.... That only certain orbits whose radii are given by the above equation are available for electron... S constant Joseph R. Groele 2 model for the stable electron orbit there should a! When attempting to explain blackbody radiation, work that provided the foundation his. On the basis of these studies, Planck postulated a theory known as a photon as.. Scientific worldview = 1, and if in motion, there is a clear-cut minimum frequency of,... In an orbit it neither loses nor gains energy a definite amount of energy nucleus and electron. A solid is not emitting continuously but it takes place continuously but discontinuously in scientific!, E = h ν, moving through space smallest orbit ( n = 1 ) for the electron thermal! ( 1901 ) gave a theory known as Planck ’ s constant Joseph Groele. When moving from an excited state to the frequency of the moving electron not but! Few important characteristics of these studies, Planck postulated a theory known as quantum constant Joseph R. Groele 2 energy! And an electron continues to move in a solid is not emitting continuously it!
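As a quick numerical illustration of E = hν together with c = λν, here is a minimal Python sketch (the 500 nm wavelength is an arbitrary example value, not from the text):

```python
# Photon energy from Planck's relation E = h*nu, with nu = c/lambda
h = 6.626e-34    # Planck's constant in J s
c = 3.0e8        # speed of light in m/s

wavelength = 500e-9      # 500 nm, an arbitrary example (green light)
nu = c / wavelength      # frequency in Hz
E = h * nu               # energy of one quantum (photon) in J

print(nu)  # 6.0e14 Hz
print(E)   # ~3.98e-19 J
```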
2021-06-23 14:34:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.729759156703949, "perplexity": 565.1685597129375}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488539480.67/warc/CC-MAIN-20210623134306-20210623164306-00200.warc.gz"}
https://cms.math.ca/10.4153/CJM-2004-056-5
# Equivariant Formality for Actions of Torus Groups

This paper contains a comparison of several definitions of equivariant formality for actions of torus groups. We develop and prove some relations between the definitions. Focusing on the case of the circle group, we use $S^1$-equivariant minimal models to give a number of examples of $S^1$-spaces illustrating the properties of the various definitions.

MSC Classifications: 55P91 - Equivariant homotopy theory [See also 19L47]; 55P62 - Rational homotopy theory; 55R35 - Classifying spaces of groups and $H$-spaces; 55S45 - Postnikov systems, $k$-invariants
2015-04-25 11:01:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47308874130249023, "perplexity": 1955.0322599088315}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246648338.72/warc/CC-MAIN-20150417045728-00203-ip-10-235-10-82.ec2.internal.warc.gz"}
http://openstudy.com/updates/508af8cbe4b077c2ef2e6920
## anonymous 3 years ago Use the distance formula to find the distance between point B (-3,-8) and point C (2,4).

1. PhoenixFire Do you know what the distance formula is?
2. anonymous yes but I'm confused as to where to put the numbers in.
3. PhoenixFire $d=\sqrt{\Delta x^2+\Delta y^2}=\sqrt{(x_1-x)^2+(y_1-y)^2}$ You have two points, find the change in x and change in y, and put them into the formula.
4. anonymous is -3, -8 x1 and y1?
5. PhoenixFire it shouldn't make a difference since it's squared. But usually if you want to find the distance from B to C, your x1y1=C and xy=B
6. anonymous
7. PhoenixFire $dist=\sqrt{(2+3)^2+(4+8)^2}=\sqrt{25+144}=\sqrt{169}=13$
8. anonymous My distance formula says you subtract x2 and x1
9. PhoenixFire yup. $2-(-3)=2+3=5$
10. PhoenixFire $4-(-8)=4+8=12$ Double negative becomes a positive.
11. anonymous Ok. Thanks for your help.
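A minimal Python sketch checking the arithmetic in the thread above, using the points from the question:

```python
from math import sqrt

def distance(p, q):
    # Euclidean distance between points p = (x, y) and q = (x1, y1)
    return sqrt((q[0] - p[0])**2 + (q[1] - p[1])**2)

print(distance((-3, -8), (2, 4)))  # 13.0, matching post 7
```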
2016-09-30 13:32:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6671804189682007, "perplexity": 2073.4418653579337}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738662197.73/warc/CC-MAIN-20160924173742-00229-ip-10-143-35-109.ec2.internal.warc.gz"}
https://docs.lib.purdue.edu/dissertations/AAI9314052/
A process technology for IC compatible micromechanical sensors using merged epitaxial lateral overgrowth of silicon

Abstract

A novel technology for manufacturing thin silicon diaphragm structures is presented. Controllability of the thin silicon diaphragm is one of the most important issues in fabricating silicon micromechanical sensors, whose sensitivity depends on the diaphragm thickness. This can be accomplished by epitaxial lateral overgrowth (ELO) of single crystal silicon on a patterned layer of masking material, typically SiO$_2$, combined with crystallographic etching, whose etching rate depends on the crystal plane. With recent improvement of ELO material, good quality 10 $\mu$m thick, 200 $\mu$m x 1000 $\mu$m single crystal silicon was obtained, with its thickness precisely controlled by the growth rate ($\le 1\,\mu$m/min). The junction leakage of the p-n junction diodes fabricated on merged ELO silicon indicated that the material quality is comparable to the substrate silicon. Using this technology, a bridge-type piezoresistive accelerometer with four beams and one proof mass was fabricated successfully. Its sensitivity and resonant frequency were comparable to those of accelerometers made by other methods. They were analyzed by comparing the experimental results to a simple analytical solution as well as to the ANSYS stress simulator using a finite element method. The experimental results showed a potential application of the new technology to silicon sensor fabrication, but some further refinement remains.

Degree: Ph.D.
Major Professor: Gerold W. Neudeck, Purdue University.
Subject Area: Engineering, Electronics and Electrical | Physics, Condensed Matter
2018-03-19 12:31:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7500649094581604, "perplexity": 5670.786384613184}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257646914.32/warc/CC-MAIN-20180319120712-20180319140712-00517.warc.gz"}
https://www.physicsforums.com/threads/proof-that-the-rationals-are-not-a-g_-delta-set.422982/
# Proof that the rationals are not a G_\delta set.

1. Aug 18, 2010 ### AxiomOfChoice I understand that showing $\mathbb Q$ is not a $G_\delta$ set is quite a non-trivial exercise, involving (among other things) an invocation of the Baire category theorem. Do any of you guys know it, or know where I can find it online? I'd really appreciate it. Thanks!

2. Aug 18, 2010 ### snipez90 I think I encountered this when I had to show that there does not exist a function from R to R that is continuous on the rationals and discontinuous on the irrationals. You could start with http://en.wikipedia.org/wiki/Gδ_set#Examples". Last edited by a moderator: Apr 25, 2017

3. Aug 18, 2010 ### AxiomOfChoice Thanks. I actually *did* start there, but was a little mystified by the following statement: "If we were able to write $\mathbb Q = \bigcap_1^\infty \mathcal O_n$ for open sets $\mathcal O_n$, each $\mathcal O_n$ would have to be dense in $\mathbb R$ since $\mathbb Q$ is dense in $\mathbb R$." Why is this so? What, or who, says that if you write a dense set as an intersection of open sets, each of the sets in the intersection has to be dense? Last edited by a moderator: Apr 25, 2017

4. Aug 19, 2010 ### adriank Any set containing a dense set is itself dense: Since $$\mathbb{Q} \subseteq \mathcal{O}_n \subseteq \mathbb{R}$$, we have $$\mathbb{R} = \bar{\mathbb{Q}} \subseteq \bar{\mathcal{O}}_n \subseteq \mathbb{R}$$.

5. Aug 19, 2010 ### AxiomOfChoice Of course! I don't know why I didn't realize that! I guess my mind was inexplicably converting the $\cap$ into a $\cup$. The proof on Wikipedia makes sense to me now. (The provision of the Baire category theorem that is violated, BTW, is that if $\{ \mathcal O_n \}_1^\infty$ is a collection of dense open sets, then so is $\bigcap_1^\infty \mathcal O_n$.)

6. Aug 19, 2010 ### adriank Minor pedantry: the BCT says that a countable intersection of dense open sets is dense; not necessarily open.
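For completeness, the contradiction the Wikipedia proof builds can be written out in one line (a standard argument): if $\mathbb Q = \bigcap_1^\infty \mathcal O_n$ with each $\mathcal O_n$ open and dense, then, since each $\mathbb R \setminus \{q\}$ for $q \in \mathbb Q$ is also open and dense,
$$\emptyset = \mathbb Q \cap (\mathbb R \setminus \mathbb Q) = \Big(\bigcap_{n=1}^\infty \mathcal O_n\Big) \cap \Big(\bigcap_{q \in \mathbb Q} \mathbb R \setminus \{q\}\Big)$$
is a countable intersection of dense open sets; by the Baire category theorem it would have to be dense in $\mathbb R$, a contradiction, so $\mathbb Q$ is not a $G_\delta$ set.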
2018-12-19 07:06:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8085624575614929, "perplexity": 540.6504516316706}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376831715.98/warc/CC-MAIN-20181219065932-20181219091932-00362.warc.gz"}
https://razzk.net/posts/CodeBlue17-Common-Modulus-1,2,3/
## Introduction

CodeBlue CTF 2017 - Common Modulus 1,2,3

## Description

Common Modulus 1: Simple Common Modulus Attack
Common Modulus 2: Common Modulus Attack with common exponent divisor
Common Modulus 3: Common Modulus Attack with common exponent divisor + message padding

## Writeup

The challenge titles were pretty self-explanatory: textbook RSA is vulnerable to the Common Modulus Attack. RSA works like the following: $$c = m^e \mod N$$ If you encrypt the same message with the same N like: $$C_1 = M^{e_1} \mod N$$ $$C_2 = M^{e_2} \mod N$$ then, writing $$\gcd(e_1, e_2)=d$$, there exist integers a and b such that $$e_1a + e_2b=d$$. This is useful since: \begin{align} C_1^{a}*C_2^{b}&=(M^{e_1})^{a}*(M^{e_2})^{b}\\ &=M^{e_1a}*M^{e_2b}\\ &=M^{e_1a+e_2b}\\ &=M^d \end{align} In the case where $$e_1$$ and $$e_2$$ don't share any factor, $$\gcd(e_1, e_2)=1$$, so $$M^d = M^1 = M$$. In the case where $$e_1$$ and $$e_2$$ share some factors, we end up with $$M^d$$.

Common Modulus 1 was the first easy case. In Common Modulus 2 both the exponents were multiplied by $$3$$, so $$d=3$$ and we end up with $$M^3$$. Luckily our $$M^3$$ is smaller than our N, so we can retrieve the flag by applying the cube root.

In Common Modulus 3, the greatest common divisor is $$17$$ and unfortunately $$M^{17}>N$$. We need to remove the padding from M until $$M^{17} < N$$, then take the 17th root as before. The message/flag is padded with the following code:

flag = key.FLAG.encode('hex')
while len(flag) * 4 < 8192:
    flag += '00'
FLAG = long(flag[:-2], 16)

The padded hex string flag[:-2] has length 2046; the previous flags were CBCTF{<32_char_here>}, so converted to hex we have len(flag) = 78.

$2046-78=1968$

Appending a hex digit 0 to a hex number is the same as multiplying it by $$2^4$$, so we need to multiply the padded M by $$2^{-4}$$ once per padding digit to strip all the 00 padding; since what we actually hold is $$M^d$$, the factor is applied d times per digit.

Let $$d=17$$ and $$i=1968$$, and retrieve $$M$$ with \begin{align} M''&=C_1^{a}*C_2^{b}\\ M'&=M''*2^{-d*4*i}\\ M&=\sqrt[d]{M'} \end{align}

## Python Script

from libnum import *

def common_modulus(e1, e2, c1, c2, N):
    # Extended Euclidean algorithm
    a, b, d = xgcd(e1, e2)
    # Invert negative factor
    if b < 0:
        c2 = invmod(c2, N)
        b = -b
    if a < 0:
        c1 = invmod(c1, N)
        a = -a
    # Get the message (c1^a * c2^b) % N
    m = (pow(c1, a, N) * pow(c2, b, N)) % N
    return [m, a, b, d]

def pad(m, d, i, N):
    if -d*4*i < 0:
        f = pow(invmod(2, N), d*4*i, N)
    else:
        f = pow(2, -d*4*i, N)
    return m * f % N

## Common Modulus 1
N = 
791311309087374588934274354916349141233150778762086315374343850126808782284294921228110916322178898551691669133101997907127587121520288166574468605214516304122927763843434653215681360872523253290766297044510870617745122997739814947286892376888776319552516141136363673315815999597035068706744362048480852074989063152333880754375196551355543036200494314973628012006925154168913855587162465714207917714655810265293814697401062934881400969828166519415465439814160673468968009887672546243771190906300544375962002574334018175007498231632240021805593635057187842353840461973449205839419195826992169177108307004404365745462706797969436718212150888171299620800051183755681631250040936288149592343890616920153400691102933966724025765766418338452595218861582008026186067946508221264938736562082192890727980844444978081110599714993030990031363457184296168457089953510500474033234298252385232725393194957086065274263743550741242453140557383981358497807318476777558208795816650619401057283873302725816795298930817307745973266335447938091252055872816232968635169429875153933553733116356920185396530990560434510949092154539711124052490142742567527833751624924993906099869301505096094512729115132147653907827742334805918235749308541981388529841813147L e1 = 813647 c1 = 767202255403494641285723819543278226263601155898823605265497361830705668240032418501494959141449028517100422081272691883369257107388411439611318808983979122090486252578041006071999581282663085495058515958745546211668701835250122032715473014598395050184702983368667972803718169481809394565706175141425650370279775233813674442957760484285820381853600163980060348710028919659329781877491724136976028815641232407109144869660767954119268355348405951052583739555066569345526640029961785158127382321111833599691079949415049786723663210542733655554868327542833053024595895523192888118675763242352407948643537985861448788568550308481655116845634952516676905251579084404308314639717162526798451410767058423619677212069270398132021729448047980766312818656065369023093123058422620085273728481545680423266197847937925342263870309939913221308330842487685037638837340238355192125668409039255551545407800543798158964963358868702135730305156935767426581823180696819366253148799571923731323928995477390559418822575259531941023518182807739949726026157027426545624061195471888653152768495272113769751755053321333829345939391638863918920798107792346015224509118930143010726156407828938941341788657835191853473698010478888928860138978235297618195944868175 e2 = 846359 c2 = 
393205642868817442649216793359718556278406137459770244761832906195960432918468617731069456704644789806507809829093842629745066759599286729538728368882491382997337611417441529220397067642218119525968897551289230558627870154984979444195757677411673096443476021362319325097662392808170632471553717355895219405644518503783235536597143112954291157798713583737689125917709618182162360535659223966858707155741267214975141963463832314566520144602105237041672437684177707624423211972004800873375670613148140256099552724408192217550331987310558991433383571470532995856778764797540637679226825577553396934734325550293550389623919904744913990305949697308222046594160302362669510242921299755255790640101006152269619965560742243168099219363626217512940995615730916134775134764069912120583282148219405178065222313607957426887495658080497917440100549199528894874905968298614233827155712422019324710018755792249855902168601927285980197334672067920857960628679370550895555840658121626134216719240409691397735762685349162277111815727100169755960553688569326705249270662470879197234836585418835845237231721910938341557726245940031873345666571751867755961294973426045629909899256967038811807893676700888551318830676356324765330202998096318754445585853694 m, _, _, _ = common_modulus(e1,e2,c1,c2,N) flag = n2s(m) print(flag) ## Common Modulus 2 N = 691611766208546073444876122261067788277978858453710639029761974358666489171591889808344592871468081368348731289584873825685836699513369087940744233044470468106283756269016888690397802087612562650740690626844050981638158798650899164329024889012339813251634342169796374490173324858177655412520581064091323105709703802894635752243504165527728325493775585018099572491218738859140069209745383085972126419677929983854492018948495162457428459536088314487922683148031388611849013227501962458386817851194913551405843074740308192841259015955432216658418219471365781271743026881045054161177699500233983945284463060091084401032681620162554495490307966608011765399197534175588394769839991952726269105973546086964385977836193216093842605576347580465390858378577913173391209728199847916944392685608959720919745441534152140791433228642857247821519585327091864890122871765266988285510728943279970135846908966516130597249552710186071954611133294079017500030355232895541367427153922527925908108643934213023557398363684188823565535815365161748782796247844503993809352854741573950620787090272760236473228652960605730173150252619759400890068298838592790770868307280012495168740250977525199965477849089021924445456338550258621310346872587368865023459114279L e1 = 2623119 c1 = 
632613645684838434911920364870092246688638723680203743297038042884981435531349983627632652213126007404455112992754038899192740232209237012089852184466576496173356903126767617531366105427616049893559911396536574555008451239827427140619373005107923039458285095437111146013805698400274937791209388463040761234346114146112603113513874269976957472698342250573902102976387047390228485927254752372525379266486917620487089416581168720140744193600912161065888758451629009978676721731074043142666019127528370181044741033938879227651226413524178992155234346229899043794846119210274959231350300191718278291314079326011260972911790929707654859407903619102516710246683375658271085356783673232677699444921875427077745087507202504075374873842972977165904031632108921391219453633100007509368853543202918527396858214941532156620908283394786740941868393377733920317480973184132461984594109692489226477402338664642727766514992506288377119275635222078018270479534265371971469799345627297451492177595572561618185463142728664331779856911512823762928116551034186671353283417747535010208121962539603383913657773795358612010178381857101029638404248669376029927680328805839410427459248430136708902815920536603541943356116875656311481908672896225539754812052984 e2 = 2611101 c2= 473583830101449207063655453483957320801977763405664178108962387145963115641321631378723122470718049239150183483107837540062110255460217493574236417576528210993551734521104360323008425196350719034294427914294044848231276934402896045785500160974092767601908407706594433190832523140982335688121038712802163776950422665847149664034820576774873956120202470663588902529914328392634164212558025176982387474287314624421143326789371057668708456922968762564019631616913937820209470604081356673188045257490667304640155390478645279862586730343779998826931285683980941686981775369499484033439920659579124275957233449431588512697916708510428626881790592757578867152025501459202793986322020476004209777449674143157280081483085868896558215825742742029607229809248023263081810931655981810534293127835168642962477010095223356972141559004635008185531900175232541978761179342338914489553003329293031284557554252476291770973145365678145226167965362251186233138510776436645583796590010200995100899736056399413460647507781994189935869541735701599175369334170081795310585938471394672279359692859881857399434361716843681313191143289947464231132723256066979526475873327590657111481299295002695482778491520250596998683754980263059514032256777144682239680031331 m, _, _, d = common_modulus(e1,e2,c1,c2,N) m3 = nroot(m, d) flag = n2s(m3) print(flag) ## Common Modulus 3 N = 
968303207185607392933798782387689522656147561712795299283882287440997111985337043607347852676675972362918419582716466493901827460706450708953088746657795254328535683015238473202723829157430427867421087226189467195646844668802837819623414935635764658530099227590830741510249221895574884771436827770318305551317176839494597881542410308108175111834839215570956517340899194288784858826431213509713952528866287993390613948062491441610747107348648602379185114554723774040662560407455840832110271813933032624805073788024993067973148443925303253795470847563536231692617336003345253420781728080545107013979989225215051608062044642404350318860297552684325830122651066498471494796197140830046228424107290568844093340204267361082742078820287806283549564233943675107998076566543352390069511549956964748416720763513751358887667167332126080075430087233981966806427580520370257808050907653401104327326631097877139317246068499669501296942050536122626128764679345686334508003799157031148558906404519754488943090430614449734145826672306815863417618237639635345018467258462900064790890385390508718602990300495726938127324285656651880960536234978827321187318512537049899040749483345012221361131129792213254633506153185302186568540749980375628514235030855807045314709882496753074374605804287524700316006092896795420448048753563680014346711220542647330945566829248331838201572696721484611259634434782075831402355726031168909134250473545733318680648535591393583591753681796583867361941369612638709097786386797652973805166862674686551290098101135899770942208220247225222462958306451292887778107274202080862990165408064372884914158792725013116440247234948462221463395579778209416361358418236648009499845276591742121866289571920719060295618309551857388542560147442529378101156132620061921583469878917947302508627776695573047820182057510772384875135795550437710313658255283287862276198618250884260442348343850066240114035518636573845052654416580159067713183299304803538785632234238046467384672538122045063632667757962772674939972792679509851714820791391542209183895101043149418861154827906828713093460640624918161442498432261330207213585143333235283987920999836862245963629061098253465280043891903366631221500293216287006734530837307036369234284523611530022158837165369780256375911835104289853776157817361701638375344905311830460059612259798600223588322136072986423796319913187356442617636479007538166981641749486826645166479345057550622122298936583765413411917302326827553940008588471939786317L e1 = 13218197 c1 = 
421111161283346431452404838872906910488956231402567019627078538397015129219548039141380131693083805603634832115136344104821561027925864923901767159809798556819390401416411855168293007844311613426948800208007055064348403326803934387258467126612219000171854953396242427891713082121012531213725355828779993888182933907101893044052692649728535361366924432892126370724588453260805681821935597271080255619110465374127164951502400983809536186925456642086304791751551216044579863129291165009342909475237361181743987301745314378124693429484474503217504889965795409106282650296184945237152875186651795552666842345066169360660546054986708172417429052514059615434084086154415920830883055729609108788179781445658162049137989591033198225687070565856609516100367268190340309308157085784134411282761584130225746032198957351227779773001865341915642873414205377145922729731246073639219795924517066513774579919237687232502798978463575009663263447306363691670476046609459059167879832079562689979943552446917015778003739858532004479603764374411135699895655736013845369551111690464128448955486337191304960262873891918387298035244888743768954328136862535082300010994461970837930794524673040694310506226189740828318579439950518115967189869637345638498098713092489244636082588805772227797143449747153355341250697133905040459624514982099584435140538668878747129925880019957973864264834954951976218071371679757509297492047186840975743403271896047156768874314108910566561868784522463064748746223313798316236978642468003218086919263188950066989044210829301678555320837086377545741001736801163743516580353549217680694256032377932133575488109549594325464409000682442042651791171660390153162096538381581148625792618196174157168997050557100450557288143739840824092541232969307054965994887340364612034225310418659933594966854225109483090892335755747449339249960596843266176465016510244036725441439565001070917883074011690676911331738356675397441288471244334501091751395240775991013123686801229872759306547212076067886148629332008410208267030715989530663720054487572883736818402878156320070866728567321649066842627412668340251628750512807830348760198570727092664649603270152943231283098179852700308804060616603604109118233213539629764618927518884532667481665405755714542980086417296700138731812815602896287231173509006149715343922041354056256194681983557852276963918040964106582078239501915086320391282791023780691061950154312894926940866878046518974877055347229774579384836298084254309194742164500782 e2 = 13325773 c2 = 
905336011260893181451937420601175770518313987534058470576409049452599974940736949020892631904955374029696187995214208522797070994604711663756814784706053753391830801248808142181434422224620348115969075398677162880328104668870990618955018212918253536803780269490731174871303579036880145367252409300321511403369634435527150000969450834032455903281526350857234024199221097951905683106432984567192925721856154512618509568221546898136983740670694848845816274649037002810596080076911851084982546841069002779200879395931456796911067433329924739943299552475793965462348342813683729525726622940637841204356613245154725191731818570068876251576706021876289420301350487275708440713574921631267131651109260124766475594710481161866254565495750886839979733888772439130815149472846472765436552529628205718020374215877005469575372812773398343007234021177110808440750777736752300216949812950208548770769356889084232841311299404061610926387440620373137543532240294565244268885021138356121583352086433040479579285669028705571672002026293450745788592556823683194951826864141604029265650908715426822940827714455571796485962047146479512064410497475912291097113335318214286537554114706858926411912595063427662813512257156617697572638072509013871077829931469009241562237896598800666350337578826848041056097241547835195327840625894306586665539851835002956883837883293039313345815320389859457247452362675082429215289259947007386622301346393036750250168159297672722825807855637539796284414040339895615478904699195785762873300869004533530925681372154050324943727448464697359515536114806520493724557784204316395281200493439754546212305945038548703862153513568552164320556554039878316192239576925690599059819274827811660423411125130527352853059068829976616766635622188402967122171283526317336114731850274527784991508989562864331372520028706424190362623058696630974348010681878756845430600722349325469186628612347668798617024215127322351935893754437838675067920448401031834465304168738463170328598024532652790234530162187677742373772610227011372650971705426850962132725369442443471111605896253734934335599889785048210986345764273409091402794347076211775580564523705131025788768349950799136508286891544854890654019681560870443838699627458034827040931554727774022911060988866035389927962128604944287104134091087855031454577661765552937836562030914936714391213421737277968877508252894207799747341644008076766221537325719773971004607956958298021339118374168598829394997802039272072755111105775037781715 m, _, _, d = common_modulus(e1,e2,c1,c2,N) for i in range(1900, 2048): m_unpadded = pad(m, d, i, N) m17 = nroot(m_unpadded, d) flag = n2s(m17) if 'CTF{' in flag: print(flag) break ## Solution Flag is Captured  » 1 2 3 CBCTF{6ac2afd2fc108894db8ab21d1e30d3f3} CBCTF{d65718235c137a94264f16d3a51fefa1} CBCTF{b5c96e00cb90d11ec6eccdc58ef0272d}
2021-07-28 05:28:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42759865522384644, "perplexity": 1836.7863873986737}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153521.1/warc/CC-MAIN-20210728025548-20210728055548-00377.warc.gz"}
http://mathhelpforum.com/calculators/180517-integrating-ti-89-a.html
# Math Help - integrating with the ti 89

1. ## integrating with the ti 89

can the ti 89 integrate matrices... 10e^(3t/2) 2e^(t/2) 3e^(3t/2) e^(t/2)

2. Originally Posted by slapmaxwell1 can the ti 89 integrate matrices... 10e^(3t/2) 2e^(t/2) 3e^(3t/2) e^(t/2) You will need to integrate each entry separately.

3. when I tried to enter the term I got an error message, can you tell me what is the proper format for integrating an e expression... say 10e^(3t/2)

4. Are you typing " $\int$(10e^(3t/2),t)" ?

5. TI-89 tutorial

6. Originally Posted by mr fantastic You will need to integrate each entry separately. I should probably further explain why I'm asking this series of questions.... I'm trying to solve a variation of parameters problem on my ti 89, well I'm trying to do as much of the matrix manipulations as I can on the ti 89.

7. Originally Posted by TheEmptySet
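For anyone checking the TI-89 result on a computer, here is a minimal sympy sketch of the entry-wise integration suggested in post 2 (sympy is an assumption here; the thread itself is about the calculator):

```python
from sympy import symbols, exp, integrate, Matrix

t = symbols('t')
M = Matrix([[10*exp(3*t/2), 2*exp(t/2)],
            [ 3*exp(3*t/2),   exp(t/2)]])

# Integrate each entry separately, as post 2 suggests.
F = M.applyfunc(lambda entry: integrate(entry, t))
print(F)  # Matrix([[20*exp(3*t/2)/3, 4*exp(t/2)], [2*exp(3*t/2), 2*exp(t/2)]])
```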
2016-06-27 07:18:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9603818655014038, "perplexity": 2267.8816738572687}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395621.98/warc/CC-MAIN-20160624154955-00115-ip-10-164-35-72.ec2.internal.warc.gz"}
https://wu-kan.cn/_posts/2019-09-19-%E4%BA%BA%E5%B7%A5%E6%99%BA%E8%83%BD-%E5%85%AD/
## What is KRR (Knowledge representation and reasoning)

Symbolic encoding of propositions believed by some agent and their manipulation to produce representations of propositions that are believed by the agent but not explicitly represented.

## Why KRR

• KR hypothesis: any artificial intelligent system is knowledge-based
• Knowledge-based system: system with structures that
  • can be interpreted propositionally and
  • determine the system behavior
  such structures are called its knowledge base (KB)
• Knowledge-based systems are most suitable for open-ended tasks
• Hallmark of a knowledge-based system: cognitive penetrability, i.e., actions depend on beliefs, including implicitly represented beliefs

## KRR and logic

Logic is the main tool for KRR, because logic studies

• How to formally represent an agent's beliefs
• Given the explicitly represented beliefs, what are the implicitly represented beliefs

There are many kinds of logics. In this course, we will use first-order logic (FOL) as the tool for KRR.

## An example (cont'd)

• Intelligence is needed to answer the question
• Can we make machines answer the question?
• A possible approach
  • First, translate the sentences and question into FOL formulas
  • Of course, this is hard, and we do not have a good way to automate this step
  • Second, check if the formula of the question is logically entailed by the formulas of the sentences
  • We will show that there are ways to automate this step

## Alphabet

• Logical symbols (fixed meaning and use):
  • Punctuation: (, ), ,, .
  • Connectives and quantifiers: =, ¬, ∧, ∨, ∀, ∃
  • Variables: x, x1, x2, …, x′, x′′, …, y, …, z, …
• Non-logical symbols (domain-dependent meaning and use):
  • Predicate symbols
    • arity: number of arguments
    • arity 0 predicates: propositional symbols
  • Function symbols
    • arity 0 functions: constant symbols

## Terms

• Every variable is a term
• If t1, …, tn are terms and f is a function symbol of arity n, then f(t1, …, tn) is a term

## Formulas

• If t1, …, tn are terms and P is a predicate symbol of arity n, then P(t1, …, tn) is an atomic formula
• If t1 and t2 are terms, then (t1 = t2) is an atomic formula
• If α and β are formulas, and v is a variable, then ¬α, (α ∧ β), (α ∨ β), ∃v.α, ∀v.α are formulas

## Notation

• Occasionally add or omit (, )
• Use [, ] and {, }
• Abbreviation: (α → β) for (¬α ∨ β)
• Abbreviation: (α ↔ β) for (α → β) ∧ (β → α)
• Predicates: mixed case capitalized, e.g., Person, OlderThan
• Functions (and constants): mixed case uncapitalized, e.g., john, father

## Variable scope

• Free and bound occurrences of variables
  • e.g., P(x) ∧ ∃x[P(x) ∨ Q(x)]
• A sentence: a formula with no free variables
• Substitution: α[v/t] means α with all free occurrences of the variable v replaced by the term t
• In general, α[v1/t1, …, vn/tn]

## Interpretations

An interpretation is a pair ⟨D, I⟩

• D is the domain, can be any non-empty set
• I is a mapping from the set of predicate and function symbols
  • If P is a predicate symbol of arity n, I(P) is an n-ary relation over D, i.e., I(P) ⊆ D^n
  • If p is a 0-ary predicate symbol, i.e., a propositional symbol, I(p) ∈ {true, false}
  • If f is a function symbol of arity n, I(f) is an n-ary function over D, i.e., I(f) : D^n → D
  • If c is a 0-ary function symbol, i.e., a constant symbol, I(c) ∈ D

wrt: with respect to
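As a concrete illustration of these definitions, here is a minimal Python sketch of an interpretation ⟨D, I⟩ evaluating atomic formulas (the domain, symbols, and helper function are my own hypothetical examples, not from the notes):

```python
# A tiny interpretation <D, I> for evaluating atomic formulas.
D = {"j", "m"}                         # the domain: any non-empty set

I = {
    "john": "j",                       # 0-ary function (constant) symbols map into D
    "mary": "m",
    "OlderThan": {("j", "m")},         # an arity-2 predicate: a subset of D x D
}

def eval_atomic(pred, constants):
    # P(t1, ..., tn) is true iff the tuple of denotations lies in I(P)
    return tuple(I[c] for c in constants) in I[pred]

print(eval_atomic("OlderThan", ["john", "mary"]))  # True
print(eval_atomic("OlderThan", ["mary", "john"]))  # False
```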
2020-07-03 23:57:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8408132195472717, "perplexity": 10913.862338705352}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655883439.15/warc/CC-MAIN-20200703215640-20200704005640-00363.warc.gz"}
https://crypto.stackexchange.com/questions/34400/find-collision-in-ajtais-hash-function-using-short-vector
# Find collision in Ajtai's hash function using short vector ## Background What is Ajtai's hash function? Given a matrix $A \hookleftarrow U(\mathbb{Z}_q^{n \times m})$ and a column vector $\vec{m} \in \mathbb{Z}_d^m$, the hash of the message $\vec{m}$ is given by $H(\vec{m}) = A\vec{m} \mod q$ Ajtai's SIS-lattice The corresponding lattice for $A$, denoted by $L^{\bot}(A)$, is defined as all vectors $\vec{v}$ such that $A\vec{v}=\vec{0}$; in other words, $L^{\bot}(A)$ is the kernel of $A$. Thus as far as I understand, to find a basis for $L^{\bot}(A)$ is essentially equal to finding a basis for the kernel of $A$. The SIS-problem The $\beta$-SIS problem is the problem of finding a non-zero vector $\vec{v}$ such that $A\vec{v}=\vec{0}$ and $\|\vec{v}\|\le \beta$. This problem is known to be hard. Is the hash function collision resistant? Finding a collision for the hash function is as hard as solving the $2d\sqrt{m}$ SIS-problem. That means, given a collision $(\vec{x}, \vec{y})$ we can easily compute a short vector in $L^{\bot}(A)$ as $\vec{x}-\vec{y}$. Why does it work? We have a collision, i.e. $A\vec{x}=A\vec{y} \rightarrow A(\vec{x}-\vec{y})=\vec{0}$, so the vector $\vec{v}=\vec{x}-\vec{y}$ is in the lattice. Next, due to the triangle inequality we have that $\|\vec{v}\| \le \|\vec{x}\|+\|\vec{y}\|$. Since both $\|x\|_{\infty} \le d$ and $\|y\|_{\infty} \le d$, it follows that $\|\vec{v}\| \le 2d\sqrt{m}$. ## Question Now, my question is: is it possible to go the other way around? That is, is it possible to find a collision for Ajtai's hash function given a short non-zero vector found e.g. using the Lenstra-Lenstra-Lovász (LLL) lattice reduction algorithm? • What research have you done? What have you tried? Where did you hit a problem? It would be cool if you'd edit your question accordingly. Don't get me wrong, but we do expect you to do a significant amount of research before asking here – including searching this site for related Q&As that might shed light on your question. At worst it will help you frame a better question; at best it might even answer it. – e-sushi Apr 10 '16 at 14:00 • Unfortunately I haven't been able to come up with something that looks even close to a solution. What I've tried to do is solve the equation system with v=x-y and Ax=Ay but ending up with the zero vector. I also tried to combine the two shortest vectors in the LLL-reduced basis, but they don't hash to the same value (?) so it looks like a dead end to me. My lattice professor apparently thinks this is simple, I guess I've just overlooked something obvious. Hence the question. – user33284 Apr 11 '16 at 8:09 • Thanks for your edits – looks like a perfect question now! $(+1)$ – e-sushi Apr 11 '16 at 9:59 • I concur. I wish all questions were this nicely-written. – pg1989 Apr 13 '16 at 1:29 Suppose the hash domain is $\{-d, \ldots, d\}^m$, i.e., vectors of $\ell_\infty$ norm at most $d$. Then any nonzero vector in Ajtai's lattice having Euclidean norm at most $d$ collides with the all-zeros input. (Actually, $\ell_\infty$ norm at most $d$ suffices.) Thus, finding short enough lattice vectors yields collisions. But now suppose $d=1$, say. (This is a typical choice.) For standard parameters, the shortest (in Euclidean norm) nonzero vectors in Ajtai's lattice have norm about $\sqrt{n \log q} \gg d = 1$. Thus, short vectors in Euclidean norm (which is what typical lattice-basis reduction algorithms deliver) may not be sufficient to yield a collision.
Instead, finding a collision requires finding a shortest nonzero vector in the $\ell_\infty$ norm. But little is known about lattice-basis reduction algorithms for this setting.
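To make the two directions concrete, here is a minimal numpy sketch with toy parameters (far too small to be secure, and the short kernel vector is planted by construction for illustration rather than found by lattice reduction):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, q, d = 4, 12, 97, 1            # toy parameters; real SIS sizes are much larger

A = rng.integers(0, q, size=(n, m))
# Plant a short kernel vector: make the last column cancel the first two, so
# v = (1, 1, 0, ..., 0, 1) satisfies A v = 0 (mod q) and has infinity-norm 1 <= d.
A[:, -1] = (-(A[:, 0] + A[:, 1])) % q
v = np.zeros(m, dtype=int)
v[[0, 1, -1]] = 1

def H(msg):
    # Ajtai's hash: H(m) = A m mod q
    return tuple(A.dot(msg) % q)

# A short lattice vector yields a collision: write v = x - y with x, y in {0,...,d}^m,
# and note that v itself collides with the all-zeros input.
x, y = np.maximum(v, 0), np.maximum(-v, 0)
print(np.all(A.dot(v) % q == 0))                         # True: v lies in L_perp(A)
print(H(x) == H(y), H(v) == H(np.zeros(m, dtype=int)))   # True True: collisions
```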
2020-03-31 17:30:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7404595017433167, "perplexity": 331.2096056664648}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370502513.35/warc/CC-MAIN-20200331150854-20200331180854-00203.warc.gz"}
http://www.ck12.org/geometry/Ruler-Postulate-and-Segment-Addition-Postulate/exerciseint/Word-Problem--Find-BC/
Ruler Postulate and Segment Addition Postulate (Assessments) | Geometry | CK-12 Foundation

# Word Problem--Find BC

Teacher Contributed

$\overline{AC}$ is a line segment with point $B$ existing somewhere between points $A$ and $C$. If the measure of $\overline{AC}$ is 51, and the measure of $\overline{AB}$ is 24, then what is the measure of $\overline{BC}$?

qid: 100026
2015-01-31 22:20:49
{"extraction_info": {"found_math": true, "script_math_tex": 7, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2717249095439911, "perplexity": 11199.757267162673}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115865430.52/warc/CC-MAIN-20150124161105-00048-ip-10-180-212-252.ec2.internal.warc.gz"}
https://byjus.com/question-answer/which-of-the-following-oxides-is-not-expected-to-react-with-sodium-hydroxide-cao-b/
# Question

Which of the following oxides is not expected to react with sodium hydroxide?

A. $BeO$
B. $B_2O_3$
C. $CaO$
D. $SiO_2$

## Solution

The correct option is C: $CaO$. $CaO$ is not expected to react with $NaOH$ because $NaOH$ is a base and it can only react with either an acidic oxide or an amphoteric oxide. Among the given options, $BeO$ is an amphoteric oxide while $B_2O_3$ and $SiO_2$ are acidic oxides. Thus, all three of them react with $NaOH$ to form salts. $CaO$, on the other hand, is a basic oxide and hence it will not react with $NaOH$.
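For reference, the standard textbook reactions of the other three oxides with $NaOH$ are (the products shown are the usual ones quoted: sodium silicate, metaborate, and beryllate):

$$SiO_2 + 2NaOH \rightarrow Na_2SiO_3 + H_2O$$

$$B_2O_3 + 2NaOH \rightarrow 2NaBO_2 + H_2O$$

$$BeO + 2NaOH \rightarrow Na_2BeO_2 + H_2O$$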
2022-01-18 16:02:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6074661612510681, "perplexity": 1418.0420150964508}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300934.87/warc/CC-MAIN-20220118152809-20220118182809-00103.warc.gz"}
https://codeforces.com/blog/entry/77690
### apnakaamkar's blog By apnakaamkar, history, 13 months ago, • -3

» 13 months ago, # | +3 Check this out... https://www.codechef.com/JULY19B/problems/CIRMERGE/ It also has a nice editorial.
• » » 13 months ago, # ^ | 0 thanks
» 13 months ago, # | 0 build a 2d matrix of size n x n. start from the bottom: what if you had only one element in the array, then solve for two elements, then three; you will eventually develop a recurrence relation by just manually solving it. just to confirm, you will only have to fill half the dp table when it is cut diagonally. step 1: fill 1,1 2,2 3,3 4,4 till n,n. step 2: fill 1,2 2,3 3,4 till n-1,n. step 3: fill 1,3 and so on
• » » 13 months ago, # ^ | 0 thanks
» 10 months ago, # | 0 It's easier to understand the recurrence relation from a solution made with recursion (and memoisation), so here you go — https://atcoder.jp/contests/dp/submissions/15834404
» 10 months ago, # | 0
» 10 months ago, # | +3 this video by Errichto helped me a lot while solving the dp atcoder contest: https://www.youtube.com/watch?v=FAQxdm0bTaw&t=8731s
» 9 months ago, # | 0 Is there anything I am missing? If someone can help? Link: My solution
• » » 4 weeks ago, # ^ | 0 Update to cost=1e18; and it works.
» 3 months ago, # | -8 Why can't we use a priority queue here? like https://www.geeksforgeeks.org/connect-n-ropes-minimum-cost/
• » » 2 months ago, # ^ | 0 Choose two adjacent slimes, and combine them into a new slime. They must be adjacent.
» 4 weeks ago, # | 0 Am I missing something in this solution: https://atcoder.jp/contests/dp/submissions/22675988
• » » 4 weeks ago, # ^ | 0 lli i,a, b = sum[endi]-sum[start-1],c=INT_MAX; c can be larger than INT_MAX
» 4 weeks ago, # | 0 If you want to learn the concept behind this problem then go through Matrix chain multiplication DP; it is a very famous DP problem and has clear explanations (you can check Cormen or any online articles). This N-slimes problem is very similar to that, but instead of multiplication, we do addition.
• » » 4 weeks ago, # ^ | 0 Yeah, that was the base upon which I built this solution. Thanks for pointing out the error.
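Putting the recurrence described in the comments above into code, here is a minimal O(N^3) Python sketch for the slime-merging problem (variable names are my own):

```python
def min_merge_cost(a):
    # dp[i][j] = minimum cost to merge slimes i..j (inclusive) into one slime
    n = len(a)
    prefix = [0] * (n + 1)
    for i, x in enumerate(a):
        prefix[i + 1] = prefix[i] + x   # prefix sums for range totals

    dp = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):      # fill diagonals: all length-2 ranges, then length-3, ...
        for i in range(n - length + 1):
            j = i + length - 1
            # the last merge joins [i..k] and [k+1..j]; its cost is the whole range sum
            dp[i][j] = min(dp[i][k] + dp[k + 1][j] for k in range(i, j)) \
                       + prefix[j + 1] - prefix[i]
    return dp[0][n - 1]

print(min_merge_cost([10, 20, 30]))  # 90: merge 10+20 (cost 30), then 30+30 (cost 60)
```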
2021-06-16 00:36:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28263601660728455, "perplexity": 2042.0833543004712}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487621699.22/warc/CC-MAIN-20210616001810-20210616031810-00346.warc.gz"}
http://codeforces.com/blog/entry/104502
Chaabane_Mankai's blog

By Chaabane_Mankai, 7 weeks ago

I have lately been working on a problem and noticed that the pow() function sometimes gives me a wrong number. Here's an example: I wanted to calculate $3^{38}$. A calculator indicates that it is equal to 1350851717672992089; however, when I executed the command pow(3,38); I got 1350851717672992000 instead. Can anyone explain why this is happening and how to fix it?

• The core algorithm for pow(x, y) computes a logarithm from (a part of) x, multiplies it by y, and computes an exponential function on the product. Floating point introduces rounding errors at each of these steps, so the result diverges from the exact integer. It is therefore generally recommended to write your own power function when needed, or to keep one in your template.
  • okay, thanks!
• Doubles can exactly represent all integers between $-2^{53}$ and $2^{53}$, but, for example, $2^{53} + 1$ cannot be stored in a double. Since $3^{38} > 2^{53}$, it is unlikely $3^{38}$ has an exact representation.
• Use binary exponentiation: Binary_Expo
  • I used it already to solve the problem. Thanks anyway.
• no
• So is there any way to improve the accuracy of pow? Binary_Expo is a bit of a pain.
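Binary exponentiation sidesteps the problem entirely by staying in integer arithmetic. A minimal sketch in Java (the result is exact as long as it fits in a 64-bit long, which $3^{38} \approx 1.35 \times 10^{18}$ does):

```java
public class FastPow {
    // Computes base^exp exactly with integer arithmetic (no floating point),
    // squaring the base and halving the exponent each step: O(log exp).
    static long power(long base, long exp) {
        long result = 1;
        while (exp > 0) {
            if ((exp & 1) == 1) result *= base;  // this bit of exp is set: multiply it in
            base *= base;  // may wrap on the final iteration, but that value is never used
            exp >>= 1;
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(power(3, 38));  // prints 1350851717672992089, exact
    }
}
```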
2022-08-19 07:41:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35037684440612793, "perplexity": 1009.0603873477145}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573630.12/warc/CC-MAIN-20220819070211-20220819100211-00662.warc.gz"}
https://www.nag.com/numeric/mb/nagdoc_mb/manual_25_1/html/f08/f08yef.html
# NAG Toolbox: nag_lapack_dtgsja (f08ye)

## Purpose

nag_lapack_dtgsja (f08ye) computes the generalized singular value decomposition (GSVD) of two real upper trapezoidal matrices $A$ and $B$, where $A$ is an $m$ by $n$ matrix and $B$ is a $p$ by $n$ matrix. $A$ and $B$ are assumed to be in the form returned by nag_lapack_dggsvp (f08ve).

## Syntax

[a, b, alpha, beta, u, v, q, ncycle, info] = f08ye(jobu, jobv, jobq, k, l, a, b, tola, tolb, u, v, q, 'm', m, 'p', p, 'n', n)

[a, b, alpha, beta, u, v, q, ncycle, info] = nag_lapack_dtgsja(jobu, jobv, jobq, k, l, a, b, tola, tolb, u, v, q, 'm', m, 'p', p, 'n', n)

## Description

nag_lapack_dtgsja (f08ye) computes the GSVD of the matrices $A$ and $B$, which are assumed to have the form returned by nag_lapack_dggsvp (f08ve):

$$A = \begin{pmatrix} 0 & A_{12} & A_{13} \\ 0 & 0 & A_{23} \\ 0 & 0 & 0 \end{pmatrix} \ \text{if } m-k-l \ge 0, \qquad A = \begin{pmatrix} 0 & A_{12} & A_{13} \\ 0 & 0 & A_{23} \end{pmatrix} \ \text{if } m-k-l < 0,$$

where the column blocks have $n-k-l$, $k$ and $l$ columns and the row blocks have $k$, $l$ and $m-k-l$ rows (respectively $k$ and $m-k$ rows in the second case), and

$$B = \begin{pmatrix} 0 & 0 & B_{13} \\ 0 & 0 & 0 \end{pmatrix},$$

with the same column blocks and row blocks of $l$ and $p-l$ rows. Here the $k$ by $k$ matrix $A_{12}$ and the $l$ by $l$ matrix $B_{13}$ are nonsingular upper triangular, and $A_{23}$ is $l$ by $l$ upper triangular if $m-k-l\ge 0$ and is $(m-k)$ by $l$ upper trapezoidal otherwise.

nag_lapack_dtgsja (f08ye) computes orthogonal matrices $Q$, $U$ and $V$, diagonal matrices $D_1$ and $D_2$, and an upper triangular matrix $R$ such that

$$U^T A Q = D_1 \begin{pmatrix} 0 & R \end{pmatrix}, \qquad V^T B Q = D_2 \begin{pmatrix} 0 & R \end{pmatrix}.$$

Optionally $Q$, $U$ and $V$ may or may not be computed, or they may be premultiplied by matrices $Q_1$, $U_1$ and $V_1$ respectively.

If $m-k-l \ge 0$ then $D_1$, $D_2$ and $R$ have the form

$$D_1 = \begin{pmatrix} I & 0 \\ 0 & C \\ 0 & 0 \end{pmatrix}, \qquad D_2 = \begin{pmatrix} 0 & S \\ 0 & 0 \end{pmatrix}, \qquad R = \begin{pmatrix} R_{11} & R_{12} \\ 0 & R_{22} \end{pmatrix},$$

with column blocks of $k$ and $l$ columns ($D_1$ has row blocks of $k$, $l$ and $m-k-l$ rows, $D_2$ of $l$ and $p-l$ rows, and $R$ of $k$ and $l$ rows), where $C=\mathrm{diag}(\alpha_{k+1},\dots,\alpha_{k+l})$ and $S=\mathrm{diag}(\beta_{k+1},\dots,\beta_{k+l})$.

If $m-k-l < 0$ then $D_1$, $D_2$ and $R$ have the form

$$D_1 = \begin{pmatrix} I & 0 & 0 \\ 0 & C & 0 \end{pmatrix}, \qquad D_2 = \begin{pmatrix} 0 & S & 0 \\ 0 & 0 & I \\ 0 & 0 & 0 \end{pmatrix}, \qquad R = \begin{pmatrix} R_{11} & R_{12} & R_{13} \\ 0 & R_{22} & R_{23} \\ 0 & 0 & R_{33} \end{pmatrix},$$

with column blocks of $k$, $m-k$ and $k+l-m$ columns ($D_2$ has row blocks of $m-k$, $k+l-m$ and $p-l$ rows), where $C=\mathrm{diag}(\alpha_{k+1},\dots,\alpha_m)$ and $S=\mathrm{diag}(\beta_{k+1},\dots,\beta_m)$.

In both cases the diagonal matrix $C$ has non-negative diagonal elements, the diagonal matrix $S$ has positive diagonal elements, so that $S$ is nonsingular, and $C^2+S^2=1$. See Section 2.3.5.3 of Anderson et al. (1999) for further information.

## References

Anderson E, Bai Z, Bischof C, Blackford S, Demmel J, Dongarra J J, Du Croz J J, Greenbaum A, Hammarling S, McKenney A and Sorensen D (1999) LAPACK Users' Guide (3rd Edition) SIAM, Philadelphia http://www.netlib.org/lapack/lug

Golub G H and Van Loan C F (1996) Matrix Computations (3rd Edition) Johns Hopkins University Press, Baltimore

## Parameters

### Compulsory Input Parameters

1: jobu – string (length ≥ 1)
If jobu = 'U', u must contain an orthogonal matrix $U_1$ on entry, and the product $U_1 U$ is returned.
If jobu = 'I', u is initialized to the unit matrix, and the orthogonal matrix $U$ is returned.
If jobu = 'N', $U$ is not computed.
Constraint: jobu = 'U', 'I' or 'N'.

2: jobv – string (length ≥ 1)
If jobv = 'V', v must contain an orthogonal matrix $V_1$ on entry, and the product $V_1 V$ is returned.
If jobv = 'I', v is initialized to the unit matrix, and the orthogonal matrix $V$ is returned.
If jobv = 'N', $V$ is not computed.
Constraint: jobv = 'V', 'I' or 'N'.

3: jobq – string (length ≥ 1)
If jobq = 'Q', q must contain an orthogonal matrix $Q_1$ on entry, and the product $Q_1 Q$ is returned.
If jobq = 'I', q is initialized to the unit matrix, and the orthogonal matrix $Q$ is returned.
If jobq = 'N', $Q$ is not computed.
Constraint: jobq = 'Q', 'I' or 'N'.

4: k – int64/int32/nag_int scalar
5: l – int64/int32/nag_int scalar
k and l specify the sizes, $k$ and $l$, of the subblocks of $A$ and $B$ whose GSVD is to be computed by nag_lapack_dtgsja (f08ye).

6: a(lda,:) – double array
The first dimension of the array a must be at least $\max(1,m)$, and the second dimension at least $\max(1,n)$. The $m$ by $n$ matrix $A$.

7: b(ldb,:) – double array
The first dimension of the array b must be at least $\max(1,p)$, and the second dimension at least $\max(1,n)$. The $p$ by $n$ matrix $B$.

8: tola – double scalar
9: tolb – double scalar
tola and tolb are the convergence criteria for the Jacobi–Kogbetliantz iteration procedure. Generally, they should be the same as used in the preprocessing step performed by nag_lapack_zggsvp (f08vs), say

$$\mathrm{tola}=\max(m,n)\,\|A\|\,\epsilon, \qquad \mathrm{tolb}=\max(p,n)\,\|B\|\,\epsilon,$$

where $\epsilon$ is the machine precision.

10: u(ldu,:) – double array
The first dimension ldu must satisfy $ldu \ge \max(1,m)$ if jobu = 'U' or 'I', and $ldu \ge 1$ otherwise; the second dimension must be at least $\max(1,m)$ if jobu = 'U' or 'I', and at least 1 otherwise. If jobu = 'U', u must contain an $m$ by $m$ matrix $U_1$ (usually the orthogonal matrix returned by nag_lapack_dggsvp (f08ve)).

11: v(ldv,:) – double array
The first dimension ldv must satisfy $ldv \ge \max(1,p)$ if jobv = 'V' or 'I', and $ldv \ge 1$ otherwise; the second dimension must be at least $\max(1,p)$ if jobv = 'V' or 'I', and at least 1 otherwise. If jobv = 'V', v must contain a $p$ by $p$ matrix $V_1$ (usually the orthogonal matrix returned by nag_lapack_dggsvp (f08ve)).

12: q(ldq,:) – double array
The first dimension ldq must satisfy $ldq \ge \max(1,n)$ if jobq = 'Q' or 'I', and $ldq \ge 1$ otherwise.
The second dimension of the array q must be at least $\max(1,n)$ if jobq = 'Q' or 'I', and at least 1 otherwise. If jobq = 'Q', q must contain an $n$ by $n$ matrix $Q_1$ (usually the orthogonal matrix returned by nag_lapack_dggsvp (f08ve)).

### Optional Input Parameters

1: m – int64/int32/nag_int scalar. Default: the first dimension of the array a. $m$, the number of rows of the matrix $A$. Constraint: $m \ge 0$.

2: p – int64/int32/nag_int scalar. Default: the first dimension of the array b. $p$, the number of rows of the matrix $B$. Constraint: $p \ge 0$.

3: n – int64/int32/nag_int scalar. Default: the second dimension of the array a. $n$, the number of columns of the matrices $A$ and $B$. Constraint: $n \ge 0$.

### Output Parameters

1: a(lda,:) – double array
The first dimension of the array a will be $\max(1,m)$ and the second dimension will be $\max(1,n)$. If $m-k-l\ge 0$, a(1:k+l, n-k-l+1:n) contains the $(k+l)$ by $(k+l)$ upper triangular matrix $R$. If $m-k-l<0$, a(1:m, n-k-l+1:n) contains the first $m$ rows of the $(k+l)$ by $(k+l)$ upper triangular matrix $R$, and the submatrix $R_{33}$ is returned in b(m-k+1:l, n+m-k-l+1:n).

2: b(ldb,:) – double array
The first dimension of the array b will be $\max(1,p)$ and the second dimension will be $\max(1,n)$. If $m-k-l<0$, b(m-k+1:l, n+m-k-l+1:n) contains the submatrix $R_{33}$ of $R$.

3: alpha(n) – double array. See the description of beta.

4: beta(n) – double array
alpha and beta contain the generalized singular value pairs of $A$ and $B$:
• alpha(i) = 1, beta(i) = 0, for $i=1,2,\dots,k$; and
• if $m-k-l\ge 0$: alpha(i) = $\alpha_i$, beta(i) = $\beta_i$, for $i=k+1,\dots,k+l$; or
• if $m-k-l<0$: alpha(i) = $\alpha_i$, beta(i) = $\beta_i$, for $i=k+1,\dots,m$, and alpha(i) = 0, beta(i) = 1, for $i=m+1,\dots,k+l$.
Furthermore, if $k+l<n$, alpha(i) = beta(i) = 0 for $i=k+l+1,\dots,n$.

5: u(ldu,:) – double array
The first dimension ldu will be $\max(1,m)$ if jobu = 'U' or 'I', and 1 otherwise; the second dimension likewise. If jobu = 'U', u contains the product $U_1 U$.
If jobu = 'I', u contains the orthogonal matrix $U$. If jobu = 'N', u is not referenced.

6: v(ldv,:) – double array
The first dimension ldv will be $\max(1,p)$ if jobv = 'V' or 'I', and 1 otherwise; the second dimension likewise. If jobv = 'I', v contains the orthogonal matrix $V$. If jobv = 'V', v contains the product $V_1 V$. If jobv = 'N', v is not referenced.

7: q(ldq,:) – double array
The first dimension ldq will be $\max(1,n)$ if jobq = 'Q' or 'I', and 1 otherwise; the second dimension likewise. If jobq = 'I', q contains the orthogonal matrix $Q$. If jobq = 'Q', q contains the product $Q_1 Q$. If jobq = 'N', q is not referenced.

8: ncycle – int64/int32/nag_int scalar. The number of cycles required for convergence.

9: info – int64/int32/nag_int scalar. info = 0 unless the function detects an error (see Error Indicators and Warnings).

## Error Indicators and Warnings

info = -i: parameter $i$ had an illegal value on entry. The parameters are numbered as follows: 1: jobu, 2: jobv, 3: jobq, 4: m, 5: p, 6: n, 7: k, 8: l, 9: a, 10: lda, 11: b, 12: ldb, 13: tola, 14: tolb, 15: alpha, 16: beta, 17: u, 18: ldu, 19: v, 20: ldv, 21: q, 22: ldq, 23: work, 24: ncycle, 25: info. It is possible that info refers to a parameter that is omitted from the MATLAB interface; this usually indicates that an error in one of the other input parameters has caused an incorrect value to be inferred.

info = 1: the procedure does not converge after 40 cycles.

## Accuracy

The computed generalized singular value decomposition is nearly the exact generalized singular value decomposition for nearby matrices $(A+E)$ and $(B+F)$, where

$$\|E\|_2 = O(\epsilon)\,\|A\|_2 \quad\text{and}\quad \|F\|_2 = O(\epsilon)\,\|B\|_2,$$

and $\epsilon$ is the machine precision. See Section 4.12 of Anderson et al. (1999) for further details. The complex analogue of this function is nag_lapack_ztgsja (f08ys).

## Example

This example finds the generalized singular value decomposition

$$A = U \Sigma_1 \begin{pmatrix} 0 & R \end{pmatrix} Q^T, \qquad B = V \Sigma_2 \begin{pmatrix} 0 & R \end{pmatrix} Q^T$$

of the matrix pair $(A,B)$, where

$$A = \begin{pmatrix} 1 & 2 & 3 \\ 3 & 2 & 1 \\ 4 & 5 & 6 \\ 7 & 8 & 8 \end{pmatrix} \quad\text{and}\quad B = \begin{pmatrix} -2 & -3 & 3 \\ 4 & 6 & 5 \end{pmatrix}.$$

```
function f08ye_example
fprintf('f08ye example results\n\n');
% Generalized singular values of (A,B) where:
m = 4; n = 3;
a = [ 1 2 3;
      3 2 1;
      4 5 6;
      7 8 8];
p = 2;
b = [-2 -3  3;
      4  6  5];
% Reduce A and B to upper triangular form S = U^T A Q, T = V^T B Q
tola = max(m,n)*norm(a,1)*x02aj;
tolb = max(p,n)*norm(a,1)*x02aj;
[S, T, k, l, U, V, Q, info] = ...
  f08ve( ...
         'U', 'V', 'Q', a, b, tola, tolb);
% Compute singular values
[S, T, alpha, beta, U, V, Q, ncycle, info] = ...
  f08ye( ...
         'U', 'V', 'Q', k, l, S, T, tola, tolb, U, V, Q);
fprintf('Number of infinite generalized singular values  = %3d\n',k);
fprintf('Number of finite generalized singular values    = %3d\n',l);
fprintf('Effective rank of the matrix pair (A^T B^T)^T   = %3d\n',k+l);
fprintf('Number of cycles of the Kogbetliantz method     = %3d\n\n',ncycle);
disp('Finite generalized singular values');
disp(alpha(k+1:k+l)./beta(k+1:k+l));
```

```
f08ye example results

Number of infinite generalized singular values  =   1
Number of finite generalized singular values    =   2
Effective rank of the matrix pair (A^T B^T)^T   =   3
Number of cycles of the Kogbetliantz method     =   2

Finite generalized singular values
    1.3151
    0.0802
```
2023-03-25 13:37:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 272, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9997475743293762, "perplexity": 3051.0057598928965}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945333.53/warc/CC-MAIN-20230325130029-20230325160029-00145.warc.gz"}
https://www.gamedev.net/forums/topic/671993-java-most-efficient-way-to-iterate-through-a-file/
Public Group

[Java] Most Efficient Way To Iterate Through A File?

I have a large configuration file I need to read into memory when my plugin boots up. Right now I have a very long switch-statement loop that seems to take a while to process all the lines in the config file. What would be the most efficient way to iterate through a 'plain text' [*1] file in Java, to reduce the loading times as much as possible (without over-complicating the code base to the point of becoming unreadable)?

Note - the file is only opened once. All the relevant data is dumped into a list, THEN processed in the switch loop.

*1 - the text file is not formatted in .xml or any other type of markup language

Reply (Ashaman73): You have a property file which results in a map (key->value). Quick idea: sort the key set and then process the config like this:

```java
Map<String, String> properties = /* ...read from file... */;
Set<String> keySet = new TreeSet<String>(properties.keySet());
Iterator<String> it = keySet.iterator();
String key = it.hasNext() ? it.next() : "end_reached";
if ("first_param".equals(key)) {
    // do something
    key = it.hasNext() ? it.next() : "end_reached"; // next key
}
if ("second_param".equals(key)) {
    // do something
    key = it.hasNext() ? it.next() : "end_reached"; // next key
}
// ... one block per expected parameter, in sorted order ...
if ("last_param".equals(key)) {
    // do something
    key = it.hasNext() ? it.next() : "end_reached"; // next key
}
if (!"end_reached".equals(key)) {
    // there's an unknown parameter; the config file might be corrupt
}
```

Reply: Your file is obviously formatted in some kind of language, since you're parsing it into some kind of configuration; it has some structure. The format of the file can matter a lot in how efficient it is to parse (for instance, an INI file or any basic key-value store, compared to an XML file): whether it can be streamed in, whether it needs to be validated according to a schema, and so on. Some formats are inherently less efficient than others, besides the usual transport-layer considerations. I don't know how big your configuration file is, but if it takes "a while" to load it, I'd say it's either huge or you're spending too much time loading the file for some reason and not enough time actually parsing it. Maybe show your code or something?

Reply: To be pedantic, it's not reading the file from disk that's slow, it's whatever parsing you're doing. If your parsing is structured mostly as "read this byte/char, then decide what to do with it; now I have all the bytes/chars, then decide what to do with that", then it's going to tend to be slower than if you can take bigger chunks at a time.
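The key->value idea above can be taken further: a long switch over line contents can often be replaced by one pass that splits each line into a key and a value and stores it in a map. A minimal sketch, assuming a hypothetical key=value line format (the OP's actual format is not shown):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;

public class ConfigLoader {
    public static Map<String, String> load(String path) throws IOException {
        Map<String, String> config = new HashMap<>();
        // BufferedReader pulls the file in large chunks; each line is split once
        // instead of being compared against every case of a long switch.
        try (BufferedReader reader = Files.newBufferedReader(Paths.get(path))) {
            String line;
            while ((line = reader.readLine()) != null) {
                int eq = line.indexOf('=');
                if (eq < 0 || line.startsWith("#")) continue;  // skip comments and malformed lines
                config.put(line.substring(0, eq).trim(), line.substring(eq + 1).trim());
            }
        }
        return config;
    }
}
```

Lookups like config.get("first_param") are then O(1), and the file is still touched exactly once.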
Reply: Beware any string handling you might be doing too -- as far as I know, Java strings are immutable, which means that every time you append a character to an existing string, what happens behind the scenes is that a new string is allocated, the contents of the old string are copied, then the new character, and then the old string is (probably) released to be garbage collected -- lots and lots of small collections result. You may not be affected, but be aware of the invisible costs of string handling. (By the way, lots of languages have this kind of invisible overhead if you use the built-in string classes; some languages offer classes like C#'s StringBuilder to help combat it.)

Reply: Be very careful with allocations, as this is Java. If you are parsing the file by creating an enormous number of temporary objects and throwing them away, that is wasteful and can be incredibly slow. Unfortunately the language design of Java makes that the easiest way to go, and if you're using third-party libraries they are likely using those same terrible techniques of not managing memory well. Once Upon a Time (about five years back) I was assigned to fix a tool that needed to go through a big collection of data in a file, about a 15MB data file. Basically this was the consolidated master resources list for the game, and the tool needed to validate that every model, texture, animation, script, and audio file was present. It took over a half hour to open and process the file, mostly because it was parsing and allocating everything as it went: spending time doing allocations and releases of tiny strings, calling constructors and destructors, testing the file system for the existence of file names to ensure they were valid, and on rare occasion actually doing work. By the simple process of reusing buffers of objects and loading in larger batches, plus caching file system tests, I was able to drop it down from a half hour to about a minute and a half. It took a few design changes to use a recycling pool of objects, but within about two days it was easy enough to find and eliminate the problems. From the short description given, it seems you are likely suffering the same problem. In that case, build a factory method to construct AND RELEASE objects, but instead of allocating with new, attempt to reuse from within a pool. Do not ever use strings, which are of the devil in Java; instead use and reuse byte arrays. Find any other slow operations -- in my case that was checking for the existence of a file -- and attempt to either eliminate the work, cache the results, or spawn a small worker thread to do the work asynchronously.
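To make the immutable-string point above concrete, here is a small, self-contained comparison; the exact timings are machine-dependent, but the quadratic copying of repeated + concatenation versus the amortized appends of StringBuilder is exactly the mechanism described:

```java
public class ConcatDemo {
    public static void main(String[] args) {
        int n = 50_000;

        long t0 = System.nanoTime();
        String s = "";
        for (int i = 0; i < n; i++) s += 'x';        // each += copies the whole string so far: O(n^2) total
        long t1 = System.nanoTime();

        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) sb.append('x');  // amortized O(1) per append: O(n) total
        String s2 = sb.toString();
        long t2 = System.nanoTime();

        System.out.printf("concat: %d ms, builder: %d ms, lengths %d/%d%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, s.length(), s2.length());
    }
}
```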
Reply (IceCave, quoting the two warnings above): I strongly disagree; this is just not true at all.

```java
import java.util.ArrayList;

public class TestChamber {
    public static ArrayList<String> stringContainer;

    public static void main(String[] args) {
        long start = System.nanoTime();
        for (int i = 0; i <= 10; i++) {
            test(1000000);
        }
        long end = System.nanoTime();
        System.out.println(">>> Running time " + ((end - start) / 1000000000) + " seconds");
    }

    private static void test(int numberOfAllocations) {
        stringContainer = new ArrayList<>(numberOfAllocations);
        long combinedTime = 0;
        for (long i = 0; i < numberOfAllocations; i++) {
            combinedTime += allocate();
        }
        System.out.println("---------");
        System.out.println("Total:   " + (combinedTime / 1000000000d) + " seconds");
        System.out.println("Average: " + (combinedTime / numberOfAllocations) + " nanoseconds");
    }

    private static long allocate() {
        double r = Math.random();
        long start = System.nanoTime();
        String string = Double.toString(r);
        long end = System.nanoTime();
        // Keep a reference so the JVM cannot optimize the allocation away,
        // which would otherwise distort the timing.
        stringContainer.add(string);
        return end - start;
    }
}
```

On the first run this code allocates 1 million random strings (on my laptop) in 0.9 seconds(!!!), with an average of 917 nanoseconds(!!!) per allocation; the whole run (10,000,000 strings) takes 15 seconds. Allocating 10 million empty strings needed 0.04 to 0.2 seconds, and combining 10 million pairs of strings with "string = s1 + s2" needed 0.08 to 1.15 seconds. Of course the whole speed test varies a lot with the size of each string, but in this test every string has about 18-22 characters, which should be enough. I don't know what the OP is actually doing, but unless he is reading in over 10,000,000 string values, which in the test above took about 15 seconds on my laptop, this shouldn't matter at all.

> some languages offer classes like C#'s StringBuilder to help combat it

Yes, both C# and Java offer a StringBuilder, which is a very good way to tackle this; agreed. StringBuilders, however, are sadly not handled like ordinary strings: "stringbuilder1 + stringbuilder2" won't work, nor can a StringBuilder be used in a switch statement :/

@TS: What I actually believe is the source of all evil is that you are doing a lot of (unnecessary) string comparisons, searches for character sequences, string splits, and the like. The solution to that is rethinking your data format and parsing logic from the ground up. There is not much more to say without more information from your side, besides that strings in Java are UTF-16 encoded and string/character operations of any kind can be very heavy.

Reply: You're right that it's not purely due to visible string allocation, but I'd wager it's still a significant portion: the invisible kind. You talk about string comparisons, character sequences, splitting strings; all of those produce allocations of their own, behind the scenes. Their invisibility makes them something worth being concerned about, and it's not inconceivable that millions of invisible allocations could arise from a largish config file if it were processed in a pathologically bad (but all too common) way.
Furthermore, I'll point out that, by the looks of it, your benchmark almost certainly isn't capturing GC time, since (a) Java garbage collection (like most) is non-deterministic, and (b) your container of strings outlives your ending timestamp anyway. Heck, without printing out a random sampling of the stored strings, you can't even really be sure the compiler hasn't elided them away entirely. Neither do your pure allocations reflect the constant copying of data that would be incurred by common parse-by-one patterns (as I said earlier: read an additional character and decide whether we have enough to do something with it). I don't mean to pick on you, and I don't even mean to pick on Java, but without seeing the OP's code, your isolated micro-benchmark doesn't really tell us anything about how allocations might be affecting the OP's code.

Reply: Without further information, it's difficult to give possible further directions. In general, you'd start by measuring some times to get an idea of how close you are: e.g., load+process versus load+nothing, perhaps also with some fast block-load code from the Internet or so; the latter would give a lower limit. In general, it's much better to process the file while reading it. Disk systems are slow, so why not use that time to process the contents? There are two other reasons for doing that. First, it would reduce or kill the need for the long list of storage that you have now, which cannot hurt. Second, many OSes do automatic read-ahead: if you ask for one disk block of a file, they already fetch the next one, so by the time you're done processing the previous block, it's already there. (Of course that fails if processing is a no-op.) If you parse the file yourself, you can look into using a lex/yacc-like solution instead, which tends to be optimized for speed, in particular character handling, which tends to be the bottleneck in lex-generated scanners.

Reply: Yes, disks are slow relative to memory and CPU speeds, but disks are much faster than is being described here. While no file sizes and times were given, the different types of disks (slow spindle disks, fast spindle disks, SSDs, thumb drives, etc.) have performance that shouldn't be too painful. It is common to see spindle disks at 30MB/s or 50MB/s; better SSD drives routinely reach 300MB/s, with newer ones around 500MB/s; and there are high-performance PCIe card drives currently around 3GB/s (yes, gigabytes per second). A text configuration file should not be "painful" to load. Even if the configuration file is 5 or 10 megabytes in size, the time spent actually loading the file should be a fraction of a second; it may be a large fraction, like 1/3 second on a slow spindle drive, but not to the point of being "painful". When files take a long time to load, unless you're loading gigabytes of files, a "painful" loading time likely comes from time spent processing and manipulating the data, not so much from transferring it from the hardware. Java is particularly well known for flooding the world with copies of strings. (Profiling and debugging one tool, I found a vendor's library that within a matter of seconds started triggering garbage collection: it had filled memory with substrings, surprisingly generating over a quarter million copies of the string "YES", all waiting for GC to clean them up.)
While some strings are automatically interned, it is trivially easy to fill memory with copies of strings and pieces of strings, because so many library functions think nothing of allocating yet another copy of a string.
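Tying the thread's advice together: in Java, "process the file while reading it" usually just means iterating a BufferedReader and handling each line immediately, so the OS's read-ahead overlaps I/O with parsing and no intermediate list of the whole file is built. A minimal sketch; handleLine is a placeholder for whatever per-line work the OP's switch currently does:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.function.Consumer;

public class StreamingParser {
    // Parses while reading: no intermediate List<String> holding the whole file.
    public static void parse(String path, Consumer<String> handleLine) throws IOException {
        try (BufferedReader reader = Files.newBufferedReader(Paths.get(path))) {
            String line;
            while ((line = reader.readLine()) != null) {
                handleLine.accept(line);  // work overlaps the OS's read-ahead of the next block
            }
        }
    }
}
```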
2018-07-16 11:23:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1929197758436203, "perplexity": 2346.8838033241223}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589251.7/warc/CC-MAIN-20180716095945-20180716115945-00180.warc.gz"}
https://mathematica.stackexchange.com/questions/267007/how-to-organise-definitions-that-contain-named-parameters
# How to organise definitions that contain named parameters

I have lots of definitions of (usually numeric) quantities that depend on parameters; they're mostly matrices (in the actual code there are many more definitions):

model1A = {{1, E^(I phi)}, {E^(-I phi), t}};
model1B = {b*t, 2};
model2A = {{1, delta E^(I phi)}, {delta E^(-I phi), t}};
model2B = {b*t, delta};

They are truly parameters in a semantic sense: sometimes they're used generically (unevaluated), sometimes with concrete values, and often I consider certain conditions on these parameters, such as 0<phi<Pi/2. Furthermore, as is typical for parameters and as opposed to function arguments, they're identified by their names rather than by their 'slot position', and they appear in some quantities but not in others (as illustrated above). This makes obvious why it's not sensible to implement them as function arguments. For the given reasons, it wouldn't make much sense to write model1B[b_,t_] = {b*t, 2}. Instead, I usually use replacement rules if I need to specify certain values of the parameters, and I can still define a function (using Set, not SetDelayed) whenever I deliberately want to consider the parameter dependence of a quantity, as in

normplot[b_] = Norm[model1B];
Plot[normplot[b], {b, 1, 5}]

But this becomes a problem when I want to improve my code:

1. I want to organise my code a bit and not use symbol names for everything. In the example above, I have two models, model 1 and model 2, and it would be better to use something like an association to store the matrices, such that I can just write models["model 1"]["B"].
2. Included in this is the question of how to treat the left-hand-side symbols, that is, whether to use symbols or something else for A and B (above, I just appended "A" and "B" to "model1" and "model2").
3. Most importantly, I want to avoid relying on undefined global symbols phi, t, b, delta for the parameters.

How can I avoid the use of undefined global symbols for the quantities and parameters? I'm looking for a solution for how to store and implement

1. "model 1" and "model 2"
2. "A" and "B"
3. "phi", "t", "b" and "delta".

There have been several other questions and answers on parts of this topic, but they usually cover only one part of the issue, and many are rather cumbersome (creating a separate package, using Options, etc.). I know that there are several constructs available to solve different parts of this problem: DownValues, Associations, optional arguments, Contexts, packages (too much here!), and probably more. I think here it's important to consider the situation as a whole: how to best implement parameters on the right-hand side will very much depend on which of these constructs are most useful.

## 1 Answer

There may be some philosophical arguments here, but here's what I would tend toward (there may be more context that I should be considering). First off, for the "model 1/2" and "A/B" stuff, you can just use SubValues. As for the comment that

> it wouldn't make much sense to write model1B[b_,t_] = {b*t, 2}

I guess I just disagree. I think it's better to be able to specify the actual parameter names as needed rather than rely on keeping global symbols available for these expressions. So, putting that together, you can do something like this:

models["1"]["A"][p1_, p2_] = {{1, E^(I p1)}, {E^(-I p1), p2}}

(and similar for the other models). The names p1 and p2 aren't very helpful, but I'm just trying to make the distinction very clear.
Anyway, now you can supply your own parameter names as needed, or substitute numeric values if the situation requires it:

models["1"]["A"][phi, t]
(*{{1, E^(I*phi)}, {E^((-I)*phi), t}}*)

and

models["1"]["A"][5, 11]
(*{{1, E^(5*I)}, {E^(-5*I), 11}}*)

You could, of course, use nested Associations, but SubValues is built-in functionality, so I'd leverage it until some good reason imposed itself. If you really do want to make the parameter names global constants, you can just pull back on the SubValues:

models["1"]["A"] = {{1, E^(I phi)}, {E^(-I phi), t}}
(*{{1, E^(I*phi)}, {E^((-I)*phi), t}}*)

It would probably be a good idea to Protect the parameter symbols you want to reserve for these expressions. You could also wrap the expressions on the right-hand side with Hold; this means you wouldn't have to worry about global name clashes as long as you didn't ReleaseHold until you'd done the replacements.

• These are very useful suggestions in general, but they are useful mostly for problems relatively orthogonal to mine. Parameter names vs function arguments: I also don't want to use undefined global symbols (hence my question), but function arguments don't work well because the dependency of quantities on parameters might vary, and the semantic connection really is the name of the parameter. But you've made me realise: the parameters are arguments of the whole specification of the models. Still, is there a way to introduce symbols for the models that avoids collision yet has the given names? – Apr 20 at 0:59
2022-08-18 09:51:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5196422934532166, "perplexity": 1196.022465921621}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573193.35/warc/CC-MAIN-20220818094131-20220818124131-00004.warc.gz"}
https://math.stackexchange.com/questions/3181876/how-to-find-sides-of-rectangle-that-is-inscribed-in-other-rectangle/3181960
# How to find sides of rectangle that is inscribed in other rectangle?

We have a smaller rectangle inscribed in a bigger rectangle, as shown in the picture. The bigger, outer rectangle is inclined at a certain angle. We know the height and width of the bigger rectangle, and we know the angle of incline. Also, we know that the sides ratio of both rectangles is the same. How do we find the dimensions of the inner rectangle?

• Do all four points of the small rectangle need to touch the large one, or can it be only two of them as in the diagram? [If the latter, there may be more than one answer] – coffeemath Apr 10 '19 at 5:12
• Is the constraint on the small rectangle to have its sides parallel to given axes? – Jean Marie Apr 10 '19 at 5:48
• coffeemath: only two points of the small rectangle should touch the bigger one. Jean Marie: the inner rectangle should always be not inclined along the X and Y axes. – Sergey Kravchenko Apr 10 '19 at 5:57
• I don't understand: you say you are looking for the dimensions of the inner rectangle; but, just before, you say you know them: they are the same as the outer rectangle? – Jean Marie Apr 10 '19 at 6:07
• Could you express your issue in terms of coordinates? "I know the coordinates of ... I am looking for the coordinates of ..." – Jean Marie Apr 10 '19 at 6:09

I added a coordinate grid to the situation. The side lengths of the red rectangle are $a$ and $b$, and the angle is $\gamma$. I slid the blue rectangle to the left, so that it is positioned with one vertex at the origin $(0,0)$ as shown in the picture. To calculate the dimensions of the blue rectangle, we want to find the coordinates of the point $(x_0, y_0)$. First we need to find the equation $y=mx+q$ for the line. We note that there is a right-angled triangle with $\gamma$ as an angle, such that $\cos(\gamma)=\frac{b}{q}$, and the slope is $m=\tan(\gamma)$. We get $$y = \tan(\gamma) x + \frac{b}{\cos(\gamma)}$$ for the line. The point $(x_0,y_0)$ has to satisfy $$\frac{-x_0}{y_0} = \frac{a}{b}$$ and, as it lies on the line, we get $$-x_0\frac{ b}{a} = \tan(\gamma) x_0 + \frac{b}{\cos(\gamma)}.$$ Rearranging results in $$x_0 =\frac{-b}{\cos(\gamma) \left(\frac{b}{a}+\tan(\gamma) \right)}$$ and $$y_0=\frac{b^2 }{a \cos(\gamma)\left(\frac{b}{a}+\tan(\gamma)\right)}.$$
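A quick numeric sanity check of the closed form above; the inner rectangle's dimensions are then |x0| and |y0| (signs depend on the chosen orientation). The values of a, b and gamma below are arbitrary test inputs, not from the question:

```java
public class InnerRectangle {
    public static void main(String[] args) {
        double a = 4.0, b = 3.0, gamma = Math.toRadians(20);  // arbitrary test inputs

        // x0 = -b / (cos(g) * (b/a + tan(g))); y0 follows from -x0/y0 = a/b
        double x0 = -b / (Math.cos(gamma) * (b / a + Math.tan(gamma)));
        double y0 = -x0 * b / a;

        // check: (x0, y0) must lie on the line y = tan(g) x + b/cos(g)
        double onLine = Math.tan(gamma) * x0 + b / Math.cos(gamma);
        System.out.printf("x0 = %.4f, y0 = %.4f, line gives y = %.4f%n", x0, y0, onLine);
    }
}
```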
2021-04-21 16:50:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 15, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.594853937625885, "perplexity": 295.2828370815084}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039546945.85/warc/CC-MAIN-20210421161025-20210421191025-00120.warc.gz"}
https://cstheory.stackexchange.com/questions/linked/174?sort=unanswered
• Best known deterministic time complexity lower bound for a natural problem in NP (2k views): This answer to the "Major unsolved problems in theoretical computer science?" question states that it is open if a particular problem in NP requires $\Omega(n^2)$ time. Looking at the comments under answer ...
• Major unsolved problems in distributed systems? (8k views): Inspired by this question, what are the major problems and existing solutions which need improvement in the (theoretical) distributed systems domain? Something like membership protocols, data ...
• Consequences of Factoring being in P? (3k views): Factoring is not known to be NP-complete. This question asked for consequences of Factoring being NP-complete. Curiously, no one asked for consequences of Factoring being in P (maybe because such a ...
• What does research in theoretical computer science involve? (6k views): I am trying to understand what is involved in theoretical computer science research. What do theoretical computer scientists do? I know a significant time is spent on teaching, supervising graduate ...
• Polynomial Time Algorithm for Graph Isomorphism Testing [closed] (5k views): "Michael I. Trofimov" claims that he has found a poly-time algorithm for graph isomorphism which works for all graphs. The paper is given in arXiv. The companion website gives a proof-of-concept ...
• Are the problems PRIMES, FACTORING known to be P-hard? (2k views): Let PRIMES (a.k.a. primality testing) be the problem: Given a natural number $n$, is $n$ a prime number? Let FACTORING be the problem: Given natural numbers $n$, $m$ with $1 \leq m \leq n$, ...
• What are the consequences of factoring being NP-complete? (4k views): Are there any references covering this?
• Sources of open problems? (1k views): I'm wondering if there are some known sources of open TCS problems? I'm a junior studying math/CS and would like to know of some accessible problems that I could start thinking about! Thanks so much! ...
• Is there a list of known problems? (570 views): Is there a database of known problems with information about their complexity and algorithms, related problems, references etc. that is available to us? [If not, can we make one? I know this is off ...
• Is there a proof that addition is faster than multiplication? (2k views): The best upper bound known on the time complexity of multiplication is Martin Fürer's bound $n\log n\,2^{O(\log^* n)}$, which is more than the linear time complexity of addition. Do we have a proof that ...
• Is Quasi-polynomial time in PSPACE? (719 views): I had done some search on this but I was not able to find an answer either way. Huck answered it fully. Thanks :)
• Complexity lower bound of finding the factorial of a number (368 views): I was wondering about the complexity of the factorial of a number, mostly because this problem is not referenced in the complexity books I have read. Two similar problems, Matrix Multiplication and ...
• Evidence integer multiplication is in linear time? (167 views): After millennia of quest we have identified that two $n$ bit integers can be multiplied in $O(n\log n)$ time. Please refer to details in https://www.quantamagazine.org/mathematicians-discover-the-perfect-way-...
2020-08-06 01:42:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6797115206718445, "perplexity": 770.3003641630148}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735990.92/warc/CC-MAIN-20200806001745-20200806031745-00466.warc.gz"}
http://en.wikiversity.org/wiki/Talk:Analytic_Maths_for_Olympiads
Welcome to the Brainstorm Forum for Olympiad Algebra and Induction

Part of the School of Olympiads

POST YOUR QUESTIONS ON THIS TALK PAGE AND GET ANSWERS FROM OTHER VIEWERS. WE CAN ALSO DISCUSS SOLUTIONS AND DISAGREEMENTS.

For more resources visit Analytic Maths for Olympiads

## A picking game (problem on induction)

There is a heap of $n$ matches. Two players take turns to pick 1 or 2 matches. The winner is the person who picks the last match.

• Who wins for 5 matches (if no player misses a chance to win or draw)?
• Is there a general strategy for any number of matches (to force a win or a tie for either side)?
• What is the strategy, if it exists?

## A Rootful Question

Prove the following:

• $5<\sqrt{5}+\sqrt[3]{5}+\sqrt[4]{5}$
• $8>\sqrt{8}+\sqrt[3]{8}+\sqrt[4]{8}$
• $n>\sqrt{n}+\sqrt[3]{n}+\sqrt[4]{n},\quad n\geq 9$

## A problem on functions

Find all pairs of real numbers $(a,b)$ such that, for $f(x)= x^2 + ax + b$: if $q$ is a root of $f(x)$, then $q^2-2$ is also a root.
2013-12-05 01:34:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4671015441417694, "perplexity": 1447.3496399368842}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163037952/warc/CC-MAIN-20131204131717-00092-ip-10-33-133-15.ec2.internal.warc.gz"}
https://socratic.org/questions/how-do-you-solve-quotient-of-the-sum-of-3-and-a-number-and-6-is-less-than-2-and-
# How do you solve "quotient of the sum of 3 and a number and 6 is less than -2" and graph the solution on a number line? Jul 10, 2017 $x < - 15$ #### Explanation: A quotient is the answer to a division. Let's add the required phrasing to the the statement. The quotient of (the sum of 3 and a number) and (6) is (less than -2) $\left(3 + x\right) \div 6 < - 2$ which is easier shown as : $\frac{3 + x}{6} < - 2 \text{ } \leftarrow \times 6$ $3 + x < - 12$ $x < - 12 - 3$ $x < - 15$ On a number line graph this is shown as an open circle on -15 and an arrow extending to the left. Any value less then $- 15$ is part of the solution.
2020-02-19 10:15:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 7, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8813871145248413, "perplexity": 679.0372561702833}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144111.17/warc/CC-MAIN-20200219092153-20200219122153-00109.warc.gz"}
https://calendar.math.illinois.edu/?year=2018&month=04&day=24&interval=day
Department of Mathematics

Seminar Calendar for events the day of Tuesday, April 24, 2018.

Tuesday, April 24, 2018

12:00 pm in 243 Altgeld Hall, Tuesday, April 24, 2018

#### Genus bounds in right-angled Artin groups

###### Jing Tao (University of Oklahoma)

Abstract: In this talk, I will describe an elementary and topological argument that gives bounds for the stable commutator lengths in right-angled Artin groups.

2:00 pm in 347 Altgeld Hall, Tuesday, April 24, 2018

#### Estimates of Dirichlet heat kernels for subordinate Brownian motions

###### Panki Kim (Seoul National University)

Abstract: In this talk, we discuss estimates of transition densities of subordinate Brownian motions in open subsets of Euclidean space. When the open subsets are $C^{1,1}$ domains, we establish sharp two-sided estimates for the transition densities of a large class of killed subordinate Brownian motions whose scaling order is not necessarily strictly below 2. Our estimates are explicit and written in terms of the dimension, the Euclidean distance between two points, the distance to the boundary and the Laplace exponent of the corresponding subordinator only. We also establish a boundary Harnack principle in $C^{1,1}$ open sets with explicit decay rate. This is joint work with Ante Mimica.

3:00 pm in 241 Altgeld Hall, Tuesday, April 24, 2018

#### Colorings of signed graphs - a short survey

###### Andre Raspaud (LaBRI, Bordeaux University)

Abstract: Signed graphs and balanced signed graphs were introduced by Harary in 1953, but all the notions can be found in the book of König (Theorie der endlichen und unendlichen Graphen, 1935). An important, fundamental and prolific work on signed graphs was done by Zaslavsky in 1982. In this talk we are interested in colorings of signed graphs. We will give a short survey of the different existing definitions and the recent results on the corresponding chromatic numbers. We will also present new results obtained by using DP-coloring.

4:00 pm in 1 Illini Hall, Tuesday, April 24, 2018

#### Moduli of Twisted Curves

###### Hao Sun (UIUC)

Abstract: I'll give an introductory talk about twisted curves. Twisted curves are related to the study of r-spin Witten classes and the r-spin geometry of the moduli space of curves.
2019-02-19 17:35:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.529154360294342, "perplexity": 577.77546724749}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247490806.45/warc/CC-MAIN-20190219162843-20190219184843-00076.warc.gz"}
http://math.stackexchange.com/questions/247975/logic-functions-and-statements?answertab=votes
# Logic: Functions and statements

What is the relationship between the concept of an equation (a statement) and the concept of a function (and the concept of morphisms in category theory)? I'm going to use equations as the most important subclass of statements. Take a seemingly trivial example, the equation (E1) 2*x = 5. This equation could be rewritten as (E2) f(x) => 2*x - 5, but E1 and E2 are technically not equivalent. By f(x) => I mean that the statement is a function of x: for every x, f(x) returns true or false, depending on the contents of f(x). The equation E1 can be true for values depending on x; the truth value is dependent on the value of x, i.e., E1 is a function which returns a bool given a value of x. E2 returns a value. I haven't defined domain and codomain; say it is N. E2 is always true. So if I write (E3) f(x) = 2*x - 5 and f(x) = 0, then E3 and E1 become equivalent. In other words, everything is a function, because equations are simply boolean functions. In (E2), f(x) could be seen as a placeholder for (E1): E2 says there is an expression, which is an equation, which has the contents 2*x = 5. In a programming language this could all be said much more clearly. In Python, E1 would be 2*x == 5 and E3 would be def f(x): return 2*x - 5 == 0. More generally, as I understand it, the approach of Russell/Wittgenstein was to reduce everything to statements; in category theory, everything is reduced to functions (or morphisms, in its own parlance).

Edit: some further notes to clarify.

• You say that the equation $2x=5$ 'could be rewritten as' $f(x)=2x-5$; this is absolutely false. $(E1)$ is a statement, something with a truth value, while $(E2)$ is an incomplete definition $-$ specifically, the definition of the function $f$ $-$ and as such has no truth value; they are completely different species of animal. – Brian M. Scott Nov 30 '12 at 11:25
• I have rewritten the statement. However, I don't see anything wrong with writing f(x) = 2x - 5. This can be interpreted as saying: 2x - 5 depends on x. Rather, the traditional statement f(x) = 2x-5 should be written as f(x) := 2x-5, because it is a definition of a function and not an equation; "=" acts as an assignment operator, not as a comparison operator ("=="). This works in most cases because mathematicians can tell the difference between "=" and "==", but that doesn't mean it's completely rigorous. – RParadox Nov 30 '12 at 11:36
• I saw that you'd rewritten it; my objection still holds. I don't agree that '$f(x)=2x-5$' can be interpreted as saying that $2x-5$ depends on $x$. It has two possible interpretations: (1) an incomplete definition of $f$, and (2) an assertion that some previously defined function $f$ is the same as the function $2x-5$. The latter does make it a statement, but one that is unrelated to the statement $2x=5$. – Brian M. Scott Nov 30 '12 at 11:41
• The relationship is that f(x) takes the value of 0 when the equation is true. In fact, equations can be rewritten as A=B <=> A-B=0. Define f(x)=A-B and let f(x)=0. There you go. – RParadox Dec 1 '12 at 22:24
• No. Under interpretation (1) of my previous comment it's meaningless to talk about the truth or falsity of $f(x)=2x-5$. Under interpretation (2) the statement '$f(x)=2x-5$ is true' is meaningful but has nothing at all to do with the statement '$2x=5$'; rather, it's equivalent to the statement '$f(x)-(2x-5)=0$'. – Brian M. Scott Dec 1 '12 at 22:33

Though not widely used for some reason, I have found the following definition to be useful.
Given sets A and B, f is said to be a function mapping A to B iff:

$\forall x(x\in A\rightarrow f(x)\in B)$

Built into this notation is the fact that every element of A has a unique image in B. By a simple substitution, we have:

$(f(a)=b \wedge f(a)=c)\rightarrow b=c$

As for morphisms in category theory, it's best not to think of them as functions (although they can be functions). Category theory is less like set theory (with its elements, sets, and functions) and more like graph theory (with its nodes and directed arrows). The difference is that there may be any number of arrows (morphisms) between any pair of nodes (objects). Every node $x$ has an identity arrow with source and target node at $x$. Composition of arrows is defined for any pair of arrows with compatible source and target nodes, and composition of arrows is associative.

- Ok, great. Are equations, or more generally relations, relations of graphs? For instance, in computer science we evaluate A+B == C*D by reducing each side to a boolean expression. – RParadox Dec 4 '12 at 8:37
- You can use a 2D graph to show which values satisfy some relation between a pair of variables. Each point (dot or pixel) has an ordered pair of numbers associated with it. The graph of the equation $y=2x+3$ is a straight line. For each point on that line, the $x$ and $y$ co-ordinates are such that $y=2x+3$. The graph of the relation $y<2x+3$ is the region below the same line. For each point in that region, the co-ordinates are such that $y<2x+3$. In your equation here, you have 4 variables; it is true if and only if A+B has the same numerical value as C*D. – Dan Christensen Dec 4 '12 at 13:52

This is not a direct answer to the question, but Brian mostly answered that in the comments. There are many possible ways of using the abstract arrows of categories to express sentences; I describe one of these. In category theory one originally considers various structures and their morphisms rather than simply functions. Let us now fix an algebraic first-order language: operation symbols $\mu,\nu,\ldots$ of given arities, and a set of variables $x,y,\ldots$. We can formally build terms out of these, and then equations are the atomic formulas of the form $\tau(\vec x)=\sigma(\vec x)$, where $\tau,\sigma$ are terms containing variables within $\vec x=(x_1,\ldots,x_n)$. Now consider the category of algebraic structures of the given type. Then, for example, to the given equation $\tau(\vec x)=\sigma(\vec x)$ we can assign the canonical homomorphism from the free algebra on $\{x_1,\ldots,x_n\}$ to its quotient by the given equation. Or, another way: to any algebra $A$ and equation as above over the variables $x_1,\ldots,x_n$, we can assign the injection $$\{(a_1,\ldots,a_n)\in A^n\mid \tau(\vec a)=\sigma(\vec a)\} \to A^n.$$

- Interesting. This assumes it is possible to distinguish between a first-order language and a second-order language, and that many of those loaded words, such as structures, functions, morphisms, free algebras, equation, etc., are always perfectly clear. – RParadox Nov 30 '12 at 12:05

Categories are diagrams of morphisms, a function is a morphism, and an equation $f=g$ is a diagram with the following property. Take the two morphisms $f: A \rightarrow B$, $g: C \rightarrow D$ and the corresponding morphisms $h: A \rightarrow C$ and $i: B \rightarrow D$. Then $f=g \iff$ $h$ and $i$ are the identity. This is very easy to see in vector spaces: two vectors are equal if and only if they map every point A to the same point B.
If any other vector maps C to D, and A is equal to C, then B has to be equal to D. Or, put more simply: any two vectors that are equal map to the same point from 0. Or: if A=A then A-A=0, so the inverse of every vector that is equal to A will retract A to zero (this requires the inverse, though). For instance, two triangles are equal if, when put on top of each other, there is only one triangle left. Comparing two triangles in this way requires an affine transformation. In other words, in category theory morphisms (functions) are more fundamental than equations, and equations can be explained by the diagram given above. In the category of sets every relation is a subset of the cartesian product; functions and equations are relations. To give another example, in the category of programs every side is evaluated as a tree to true, false, or a simple expression. Both sides are the result of a function, call it eval(), which maps every statement to a simple expression. In my example 2*x=5, 2*x is an expression which gets evaluated depending on x. If x is 3, then 2*x is evaluated as 6 and can be compared to 5.
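To make the Python remarks above concrete, here is a minimal sketch (names are ours) of the distinction the question and answers keep circling: an expression such as 2*x - 5 is a value-returning function of x, while an equation such as 2*x = 5 is a boolean-valued function of x.

```python
def f(x):
    """The expression in E2: a function returning a number."""
    return 2 * x - 5

def e1(x):
    """The equation E1: a boolean-valued function of x (a 'statement about x')."""
    return 2 * x == 5

def e3(x):
    """E3: the equation recast as 'f(x) = 0', equivalent to E1."""
    return f(x) == 0

# E1 and E3 agree pointwise, illustrating A=B <=> A-B=0
assert all(e1(x) == e3(x) for x in range(-10, 11))
```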
https://ncatlab.org/nlab/show/geometric+representation+theory
# nLab: geometric representation theory

## Idea

Geometric representation theory studies representations (of various symmetry objects like algebraic groups, Hecke algebras, quantum groups, quivers, etc.) by realizing them through geometric means, e.g., by geometrically defined actions on sections of various bundles or sheaves, as in geometric quantization (see at orbit method), D-modules, perverse sheaves, deformation quantization modules, and so on. Typically the underlying spaces for the sheaves involved are Grassmannians, flag varieties, configuration spaces, and the like. Other important tools are cohomological vanishing theorems in appropriate contexts.

Some historical landmarks are the Borel–Weil theorem, the Borel–Weil–Bott theorem, the Kazhdan–Lusztig conjecture, the BBDG decomposition theorem, the Beilinson–Bernstein localization theorem, and the Lusztig conjectures.

Quoting from (MSRI 14):

Representation theory is the study of the basic symmetries of mathematics and physics. Symmetry groups come in many different flavors: finite groups, Lie groups, p-adic groups, loop groups, adelic groups, ... A striking feature of representation theory is the persistence of fundamental structures and unifying themes throughout this great diversity of settings. One such theme is the Langlands philosophy, a vast nonabelian generalization of the Fourier transform of classical harmonic analysis, which serves as a visionary roadmap for the subject and places it at the heart of number theory. The fundamental aims of geometric representation theory are to uncover the deeper geometric and categorical structures underlying the familiar objects of representation theory and harmonic analysis, and to apply the resulting insights to the resolution of classical problems.

A groundbreaking example of its success is Beilinson–Bernstein's generalization of the Borel–Weil–Bott theorem, giving a uniform construction of all representations of Lie groups via the geometric study of differential equations on flag varieties. The geometric study of representations often reveals deeper layers of structure in the form of categorification. Categorification typically replaces numbers (such as character values) by vector spaces (typically cohomology groups), and vector spaces (such as representation rings) by categories (typically of sheaves). It is a primary explanation for miraculous integrality and positivity properties in algebraic combinatorics. A recent triumph of geometric methods is Ngô's proof of the Fundamental Lemma, a key technical ingredient in the Langlands program. The proof relies on the cohomological interpretation of orbital integrals, which makes available the deep topological tools of algebraic geometry (such as Hodge theory and the Weil conjectures).

A similar description was given by David Ben-Zvi in an MO answer:

Representation theory is the study of the basic symmetries of mathematics and physics. The primary aim of the subject is to understand concrete linear models for abstract symmetry groups. A signature triumph of the past century is our understanding of compact Lie groups. At the foundation, there is Cartan's classification of Lie algebras and Borel–Weil–Bott's uniform construction of all representations in the cohomology of line bundles on flag varieties.
Thus we have a list of every compact Lie group we could ever encounter and every way in which it could appear concretely as a matrix group. Furthermore, there is a deep combinatorial theory of other key structures such as the tensor product of representations. The ideas and results of this subject are the basic input to diverse areas from number theory to quantum field theory. Though mysteries still remain, the theory of compact Lie groups is a representative model for what we would like to achieve with other symmetry groups.

Because of their universal importance, symmetry groups come in many different flavors: finite groups, Lie groups, p-adic groups, loop groups, adelic groups, ... and the list will only increase with the discovery of new important structures. For the above examples, our understanding is still very coarse, though the last decades have witnessed breathtaking advances. The Langlands program, along with its geometric spinoffs, provides a visionary roadmap for where the subject could go in the coming years. In particular, current developments such as the recent proof of the Fundamental Lemma create great optimism that geometric techniques will have a deep impact.

Geometric representation theory seeks to understand groups and representations as a consequence of more subtle but fundamental symmetries. A groundbreaking example of its success is Beilinson–Bernstein's uniform construction of all representations of Lie groups via the geometry of D-modules on flag varieties. The result is not difficult to state and prove but has the Borel–Weil–Bott theorem and the Kazhdan–Lusztig multiplicity conjectures as immediate consequences. A reasonable reaction is to wonder what allows one to prove deep results about representations of Lie groups with so little effort. One answer is that the true focus of the Beilinson–Bernstein theory is not representations but rather the symmetries of flag varieties. The geometric notion of D-module allows one to localize symmetries and to apply the sheaf-theoretic techniques of algebraic geometry. In this way, our understanding of representations of Lie groups follows from that of infinitesimal symmetries of algebraic varieties.

There are now many instances where difficult questions about representations can be translated into more tractable questions about geometry. Other famous examples include the Deligne–Lusztig theory of representations of finite groups of Lie type, the Springer theory of representations of Weyl groups, the Kazhdan–Lusztig theory of modules for Hecke algebras, and Lusztig and Nakajima's theory of representations of Kac–Moody algebras and quantum groups via quiver varieties. In the theory of automorphic forms, one of the fundamental tools is the realization of representations of adelic groups in the cohomology of Shimura varieties (in the case of number fields) and Drinfeld modular varieties (in the case of function fields). The recent proof of the Fundamental Lemma exploits the fact that computing orbital integrals in p-adic groups can be reduced to calculating cohomology of fibers of Hitchin's integrable system.

The twin modern goals of geometric representation theory are to explore the above dictionaries and to discover new unexpected ones. As an important consequence, the geometric realization of representations often reveals deeper layers of structure in the form of categorification.
Categorification typically turns numbers (for example, the coefficients of Kazhdan–Lusztig polynomials) into the dimensions of vector spaces (in this case, the Ext groups of intersection cohomology sheaves). It is a primary explanation for miraculous integrality and positivity properties in algebraic combinatorics. At the center of geometric representation theory is Grothendieck's categorification of functions by ℓ-adic sheaves. An important example is Lusztig's theory of character sheaves: it provides a uniform geometric source for the characters of all finite groups of Lie type. Another important example is the theory of canonical bases: here categorification replaces representation spaces with linear categories equipped with canonical generating objects. Another broad example is the geometric Langlands program: it provides a categorification of the Langlands program in the setting of function fields, providing new insights into many classical constructions.

Geometric representation theory has close and profound connections to many fields of mathematics, which we expect to play a significant role in the program. Perhaps the most significant are to number theory, via the theory of automorphic forms, L-functions, and modularity. Much current activity in the field is motivated either directly by problems in number theory or by more tractable geometric analogues thereof. Another major influence on the subject comes from physics, in particular gauge theory, integrable systems, and recently topological string theory. There is a significant interaction with the theory of C*-algebras through the Baum–Connes conjecture, an instance of which provides an organizing principle for representation theory of real and p-adic groups. Finally, geometric representation theory is closely entwined with very active areas in combinatorics such as Schubert calculus, its affine analogue, and the theory of Macdonald polynomials.

## References

- A. Beilinson, J. Bernstein, Localisations de $\mathfrak{g}$-modules, C. R. Acad. Sci. Paris 292 (1981), 15–18.
- N. Chriss, V. Ginzburg, Representation theory and complex geometry, book.
- K. Vilonen, Geometric methods of representation theory, math.AG/0410032.
- R. Hotta, K. Takeuchi, T. Tanisaki, D-modules, perverse sheaves, and representation theory, Progress in Mathematics 236, Birkhäuser, 2008.
- M. Kashiwara, Equivariant derived category and representation of real semisimple Lie groups, in CIME Summer School Representation Theory and Complex Analysis, Cowling, Frenkel et al., eds., LNM 1931, Springer (pdf).
- A. Borel et al., Algebraic D-modules, Perspectives in Mathematics, Academic Press, 1987.
- M. Kashiwara, W. Schmid, Quasi-equivariant D-modules, equivariant derived category, and representations of reductive Lie groups, in Lie Theory and Geometry, in Honor of Bertram Kostant, Progress in Mathematics, Birkhäuser, 1994, pp. 457–488.
- Dennis Gaitsgory, Geometric representation theory, 61 pp., 2005 course notes (cat0.pdf).
- David Kazhdan, Algebraic geometry and representation theory, Proc. ICM 1986, vol. I, 849–852.
- V. Ginzburg (paper read by D. Vogan), Geometrical aspects of representation theory, Proc. ICM 1986, vol. I, 840–848.

See also the references at orbit method.
https://turbomachinery.asmedigitalcollection.asme.org/biomechanical/article/143/3/031002/1086901/Constitutive-Modeling-of-Corneal-Tissue-Influence
Abstract

The cornea, the transparent tissue in the front of the eye, along with the sclera, plays a vital role in protecting the inner structures of the eyeball. The precise shape and mechanical strength of this tissue are mostly determined by the unique microstructure of its extracellular matrix. A clear picture of the 3D arrangement of collagen fibrils within the corneal extracellular matrix has recently been obtained from second harmonic generation images. However, this important information about the through-thickness distribution of collagen fibrils has seldom been taken into account in constitutive modeling of corneal behavior. This work creates a generalized structure tensor (GST) model to investigate the mechanical influence of the collagen fibril through-thickness distribution. It then uses numerical simulations of the corneal mechanical response in inflation experiments to assess the efficacy of the proposed model. A parametric study is also done to investigate the influence of model parameters on numerical predictions. Finally, a brief comparison between the performance of this new constitutive model and a recent angular integration (AI) model from the literature is given.

1 Introduction

The cornea protects the inner contents of the eye against external insults, provides about two-thirds of the eye's refractive power, and transmits nearly 90% of the incident light onto the lens [1,2]. The proper optical function of the cornea depends on its ability to maintain its precise shape under physiological loading conditions. The corneal extracellular matrix, the stroma, constitutes almost the entire corneal thickness and serves as the key component in providing the mechanical strength necessary to resist external and internal forces. The microstructure of the stroma resembles a lattice structure, where collagen fibrils are embedded in thin parallel-to-the-surface lamellae [3,4]. X-ray scattering methods have given detailed information about the preferred collagen fibril orientation in the corneal stroma [5–9]. In particular, it was found that although collagen fibrils are oriented along two preferred directions, i.e., nasal–temporal (N–T) and inferior–superior (I–S), within the central region of the human cornea, they tend to be aligned circumferentially in the limbus region. Unlike in the human cornea, collagen fibrils in larger mammals such as the cow are found to be aligned mainly in the I–S direction [10,11]. The degree of in-plane dispersion varies with depth: although collagen fibrils are more aligned along the N–T and I–S directions within the posterior third, they are more isotropically oriented within the anterior third [12].

The images obtained from the X-ray scattering technique could not fully characterize the 3D dispersion of collagen fibrils through the corneal thickness. Nevertheless, second harmonic generation (SHG) images, about a decade ago, provided a reconstruction of the 3D collagen fibril orientation [13,14]. These images showed that collagen fibrils are highly interwoven in the anterior region but are parallel to each other in the posterior region.

Early works utilized simple linear-elastic or hyperelastic models for representing the corneal constitutive response [15,16]. Later, linear transversely anisotropic models were used to account for the anisotropic response [17,18]. Hyperelastic models considering both isotropic and anisotropic contributions were also used.
In these models, the dispersion of collagen fibrils was not considered in early works [19,20] but was added later [11,21–26]. These recent models can be categorized into two groups: angular integration (AI) models and generalized structure tensor (GST) models. The AI-based models have a straightforward formulation, where the free energy corresponding to the continuous collagen fibril distribution is obtained by direct angular integration over an infinitesimal fraction of fibers in a given direction. The statistical description of the collagen fibril distribution can be represented either by a probability density function (PDF) or by direct extrapolation of the X-ray scattering data. AI models with different forms of PDFs have been applied to various soft tissues [27–33]. The AI models provide relatively good representations of the mechanical response of biological tissues. However, their main disadvantage is that the numerical implementation of the required direct angular integration scheme is complicated and time-consuming. The GST models, on the other hand, are relatively fast. They use a generalized structure tensor with a dispersion parameter to quantify the collagen fibril dispersion [34]. Once the dispersion parameter is specified, the stretching of collagen fibrils at any given macroscopic deformation is known and the required angular integration can be evaluated. However, this approach can be used with only a limited number of PDFs for the collagen fibril orientation, because the derivation of analytical relations between PDFs and dispersion parameters is not trivial [34,35].

The collagen fibril distribution in SHG images suggests that both in-plane and out-of-plane dispersions are essential. In this work, we use a GST model that takes into account both the in-plane and out-of-plane collagen fibril distributions throughout the cornea. The in-plane distribution is approximated by fitting a normal distribution function in polar coordinates to the X-ray scattering data [9]. The out-of-plane distribution of collagen fibrils at a given thickness level is represented by fitting Gaussian curves to the cutoff-angle histogram obtained from the SHG images [14]. We numerically implement the proposed GST model in the commercial finite element software Abaqus/Standard [36] by writing a user-defined material subroutine (UMAT). The model performance is studied by simulating the results of inflation tests [37] using six different collagen fibril distributions: transversely isotropic (T.I.), isotropic (I.), perfect alignment (P.A.), planar dispersion (P.D.), planar isotropic (P.I.), and full-thickness variation (F.V.). A parametric study is also performed to determine the effects of collagen fibril interweaving on stress profiles across the corneal thickness. Finally, it is shown that the proposed model has similar functionality to an available AI model from the literature [26], yet is cheaper in terms of computational expense.

The remainder of this paper is organized as follows. In Sec. 2, we review the continuum mechanical framework and present the main constitutive equations. The governing equations are briefly summarized in Sec. 3. The numerical results are shown in Sec. 4. Finally, we finish in Sec. 5 with some concluding remarks. In Appendix A, we present the details of our code verification.
2 Continuum Mechanical Framework

This section covers the large-deformation kinematics required for describing the hyperelastic anisotropic behavior of the corneal stroma. A similar framework has been previously applied to soft materials [38–40].

2.1 Kinematics. Let $\mathbf{x}_{\mathrm{R}}$ represent an arbitrary material point in the fixed reference configuration of the body $\mathcal{B}_{\mathrm{R}}$. The referential body $\mathcal{B}_{\mathrm{R}}$ undergoes a motion $\mathbf{x} = \boldsymbol{\chi}(\mathbf{x}_{\mathrm{R}}, t)$ to the deformed body $\mathcal{B}_t$ with the deformation gradient given by

$$\mathbf{F} = \nabla\boldsymbol{\chi}, \qquad J = \det\mathbf{F} > 0 \tag{1}$$

The right and left Cauchy–Green tensors are given by $\mathbf{C} = \mathbf{F}^{\top}\mathbf{F}$ and $\mathbf{B} = \mathbf{F}\mathbf{F}^{\top}$, respectively. The deformation gradient admits the polar decomposition $\mathbf{F} = \mathbf{R}\mathbf{U}$, where $\mathbf{R}$ is the rotation and $\mathbf{U} = \sqrt{\mathbf{C}}$ is the stretch. The distortional part of the deformation gradient is

$$\mathbf{F}_{\mathrm{dis}} = J^{-1/3}\mathbf{F}, \qquad \det\mathbf{F}_{\mathrm{dis}} = 1 \tag{2}$$

The distortional right and left Cauchy–Green deformation tensors are

$$\mathbf{C}_{\mathrm{dis}} = \mathbf{F}_{\mathrm{dis}}^{\top}\mathbf{F}_{\mathrm{dis}} = J^{-2/3}\mathbf{C}, \qquad \mathbf{B}_{\mathrm{dis}} = \mathbf{F}_{\mathrm{dis}}\mathbf{F}_{\mathrm{dis}}^{\top} = J^{-2/3}\mathbf{B} \tag{3}$$

respectively. We assume there are two families of collagen fibrils in the corneal stroma, with their mean referential directions denoted by unit vectors $\mathbf{a}_{04}$ and $\mathbf{a}_{06}$. Additionally, we introduce a unit vector $\mathbf{a}_n$, normal to the plane spanned by $\mathbf{a}_{04}$ and $\mathbf{a}_{06}$, to identify the out-of-plane direction. The invariants $\bar{I}_1$, $\bar{I}_4$, $\bar{I}_6$, and $\bar{I}_n$ are written as

$$\bar{I}_1 = \operatorname{tr}\mathbf{C}_{\mathrm{dis}}, \qquad \bar{I}_i = \mathbf{C}_{\mathrm{dis}} : \mathbf{a}_{0i}\otimes\mathbf{a}_{0i} \;\;(i = 4,6), \qquad \bar{I}_n = \mathbf{C}_{\mathrm{dis}} : \mathbf{a}_n\otimes\mathbf{a}_n \tag{4}$$

We use the generalized structure tensor $\mathbf{H}_i$ to quantify the dispersion of both families of collagen fibrils [35,41]

$$\mathbf{H}_i = A\,\mathbf{1} + B\,\mathbf{a}_{0i}\otimes\mathbf{a}_{0i} + (1 - 3A - B)\,\mathbf{a}_n\otimes\mathbf{a}_n, \qquad i = 4,6 \tag{5}$$

with the constants A and B written as

$$A = 2\kappa_{\mathrm{ip}}\kappa_{\mathrm{op}}, \qquad B = 2\kappa_{\mathrm{op}}(1 - 2\kappa_{\mathrm{ip}}) \tag{6}$$

Note that $\kappa_{\mathrm{ip}}$ and $\kappa_{\mathrm{op}}$ in the above expression are the in-plane and out-of-plane dispersion parameters, whose characteristics are discussed in the following.

2.2 Probability Density Functions for Collagen Fibrils With Dispersion. Detailed collagen fibril microstructural information can be obtained from SHG images, which fully characterize the in-plane and out-of-plane angular distributions [13,14,26]. Here, the mean orientation of the collagen fibrils in the reference state is represented by a unit vector $\mathbf{N}$ in terms of two Eulerian angles $\Theta \in [0, 2\pi]$ and $\Phi \in [-\pi/2, \pi/2]$. We assume that the base vector $\mathbf{e}_3$ is the out-of-plane direction (see Fig. 1). We use the bivariate von Mises distribution function $\rho(\Theta,\Phi) = \rho_{\mathrm{ip}}(\Theta)\,\rho_{\mathrm{op}}(\Phi)$ to describe the dispersion of collagen fibrils over the unit sphere [35]

$$\rho_{\mathrm{ip}}(\Theta) = \frac{\exp[a\cos 2(\Theta \pm \xi)]}{I_0(a)}, \qquad \rho_{\mathrm{op}}(\Phi) = 2\sqrt{\frac{2b}{\pi}}\,\frac{\exp[b(\cos 2\Phi - 1)]}{\operatorname{erf}(\sqrt{2b})} \tag{7}$$

where a and b denote the concentration parameters of the distribution functions $\rho_{\mathrm{ip}}(\Theta)$ and $\rho_{\mathrm{op}}(\Phi)$, $\xi$ denotes the angle between the mean collagen fibril orientation and the base vector $\mathbf{e}_1$, and $I_0(a)$ denotes the modified Bessel function of the first kind of order 0. Following Holzapfel et al. [35], the in-plane and out-of-plane dispersion parameters are defined as

$$\kappa_{\mathrm{ip}} = \frac{1}{2\pi}\int_0^{2\pi}\rho_{\mathrm{ip}}(\Theta)\sin^2\Theta\,d\Theta, \qquad \kappa_{\mathrm{op}} = \frac{1}{4}\int_{-\pi/2}^{\pi/2}\rho_{\mathrm{op}}(\Phi)\cos^3\Phi\,d\Phi \tag{8}$$

The closed-form relations between the dispersion parameters and the concentration parameters follow from Eqs. (7) and (8):

$$\kappa_{\mathrm{ip}} = \frac{1}{2} - \frac{I_1(a)}{2 I_0(a)}, \qquad \kappa_{\mathrm{op}} = \frac{1}{2} - \frac{1}{8b} + \frac{1}{4}\sqrt{\frac{2}{\pi b}}\,\frac{\exp(-2b)}{\operatorname{erf}(\sqrt{2b})} \tag{9}$$

where $\kappa_{\mathrm{ip}} \in [0,1]$ and $\kappa_{\mathrm{op}} \in [0,1/2]$, and $I_1(a)$ is the modified Bessel function of the first kind of order 1.

In Fig. 2, we project the total PDF in Eq. (7) onto the surface of a unit sphere for different combinations of in-plane and out-of-plane dispersion parameters; here one family of fibers with orientation $\mathbf{a}_0$ aligned with the unit vector $\mathbf{N} = [1, 0, 0]^{\top}$ is considered, and the out-of-plane normal is set to $\mathbf{a}_n = [0, 0, 1]^{\top}$. As $a \to 0$ and $b \to 0$, the collagen fibrils are evenly distributed. Conversely, as $a \to \infty$ and $b \to \infty$, the collagen fibrils are perfectly aligned with the mean orientation. The collagen fibrils are isotropically distributed within the $x_1$–$x_2$ plane as $a \to 0$ and $b \to \infty$, and within the $x_1$–$x_3$ plane as $a \to \infty$ and $b \to 0$. Accordingly, the generalized structure tensor $\mathbf{H}$ for one family of fibers simplifies to five special cases:

- Perfect alignment: $\mathbf{H} = \mathbf{a}_0 \otimes \mathbf{a}_0$;
- Isotropic dispersion: $\mathbf{H} = \tfrac{1}{3}\mathbf{1}$;
- Transversely isotropic: $\mathbf{H} = \kappa\,\mathbf{1} + (1 - 3\kappa)\,\mathbf{a}_0 \otimes \mathbf{a}_0$ with $\kappa = 1 - 2\kappa_{\mathrm{op}}$;
- Planar dispersion: $\mathbf{H} = \kappa\,\mathbf{I} + (1 - 2\kappa)\,\mathbf{a}_0 \otimes \mathbf{a}_0$, where $\mathbf{I}$ is the 2D identity and $\kappa$ is the dispersion parameter in the plane;
- Planar isotropic: $\mathbf{H} = \tfrac{1}{2}\mathbf{I}$.
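As a concrete illustration of Eqs. (7)–(9), the following short Python sketch (our own illustration; the function names and the SciPy-based check are not part of the paper) evaluates the closed-form dispersion parameters and verifies $\kappa_{\mathrm{ip}}$ against its integral definition in Eq. (8).

```python
import numpy as np
from scipy.special import i0, i1, erf      # modified Bessel functions, error function
from scipy.integrate import quad

def kappa_ip(a):
    """Closed-form in-plane dispersion parameter, Eq. (9)."""
    return 0.5 - i1(a) / (2.0 * i0(a))

def kappa_op(b):
    """Closed-form out-of-plane dispersion parameter, Eq. (9)."""
    return (0.5 - 1.0 / (8.0 * b)
            + 0.25 * np.sqrt(2.0 / (np.pi * b)) * np.exp(-2.0 * b) / erf(np.sqrt(2.0 * b)))

def kappa_ip_integral(a):
    """Direct evaluation of Eq. (8) with the in-plane von Mises density of Eq. (7), xi = 0."""
    rho = lambda t: np.exp(a * np.cos(2.0 * t)) / i0(a)
    val, _ = quad(lambda t: rho(t) * np.sin(t) ** 2, 0.0, 2.0 * np.pi)
    return val / (2.0 * np.pi)

a, b = 2.0, 1.5
print(kappa_ip(a), kappa_ip_integral(a))   # the two routes should agree
print(kappa_op(b))                         # for b > 0, between 1/3 (isotropic) and 1/2 (planar)
```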
2.3 Free Energy. The free energy $\psi_{\mathrm{R}}$ of the corneal stroma per unit reference volume is additively decomposed into (1) an isotropic contribution $\psi_{\mathrm{R}}^{m}$ from the underlying matrix and (2) an anisotropic contribution $\psi_{\mathrm{R}}^{fi}$ from the two families of collagen fibrils:

$$\psi_{\mathrm{R}} = \psi_{\mathrm{R}}^{m}(\bar{I}_1, J) + \sum_{i=4,6}\psi_{\mathrm{R}}^{fi}(\mathbf{C}_{\mathrm{dis}}, \mathbf{H}_i) \tag{10}$$

The matrix domain is treated as a nearly incompressible neo-Hookean material

$$\psi_{\mathrm{R}}^{m} = \frac{1}{2}G_0(\bar{I}_1 - 3) + \frac{1}{2}K(\ln J)^2 \tag{11}$$

where $G_0$ denotes the ground-state shear modulus and K the bulk modulus. The mechanical response of the collagen fibrils is modeled by the exponential form [34]

$$\psi_{\mathrm{R}}^{fi} = \frac{k_1}{2k_2}\left(\exp\left[k_2(\bar{I}_i^{*} - 1)^2\right] - 1\right), \qquad i = 4,6 \tag{12}$$

where $k_1$ is a stress-like parameter and $k_2$ a dimensionless stiffening parameter. The distortional generalized invariant $\bar{I}_i^{*}$ is given by

$$\bar{I}_i^{*} = \operatorname{tr}(\mathbf{H}_i\mathbf{C}_{\mathrm{dis}}) = A\bar{I}_1 + B\bar{I}_i + (1 - 3A - B)\bar{I}_n, \qquad i = 4,6 \tag{13}$$

It is worth noting that the collagen fibril free energy makes no volumetric contribution to the total free energy. Furthermore, collagen fibrils cannot withstand compression, so whenever $\bar{I}_i \leq 1$ ($i = 4, 6$), the corresponding fibril free energy $\psi_{\mathrm{R}}^{fi}$ is omitted from Eq. (10). Based on thermodynamic restrictions, the Cauchy stress is then given by

$$\mathbf{T} = 2J^{-1}\mathbf{F}\frac{\partial\psi_{\mathrm{R}}}{\partial\mathbf{C}}\mathbf{F}^{\top} = \mathbf{T}^{m} + \sum_{i=4,6}\mathbf{T}^{fi} \tag{14}$$

where

$$\mathbf{T}^{m} = J^{-1}\left[G_0(\mathbf{B}_{\mathrm{dis}})_0 + K(\ln J)\,\mathbf{1}\right] \tag{15a}$$

$$(\mathbf{B}_{\mathrm{dis}})_0 = \mathbf{B}_{\mathrm{dis}} - \frac{1}{3}\operatorname{tr}(\mathbf{B}_{\mathrm{dis}})\,\mathbf{1} \tag{15b}$$

and

$$\mathbf{T}^{fi} = 2J^{-5/3}\left[k_1(\bar{I}_i^{*} - 1)\exp\left(k_2(\bar{I}_i^{*} - 1)^2\right)\right]\left[\mathbf{F}\mathbf{H}_i\mathbf{F}^{\top} - \frac{1}{3}\operatorname{tr}(\mathbf{H}_i\mathbf{C})\,\mathbf{1}\right] \tag{16}$$

are the stress contributions from the underlying matrix and the collagen fibrils, respectively.
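To make the constitutive update concrete, here is a minimal NumPy sketch (our own illustration, not the paper's UMAT) that evaluates Eqs. (5), (6), and (13)–(16) for a given deformation gradient, applying the tension-only switch per fibril family.

```python
import numpy as np

def gst_cauchy_stress(F, a04, a06, an, G0, K, k1, k2, kip, kop):
    """Cauchy stress of Eqs. (14)-(16) for the GST model (illustrative sketch)."""
    J = np.linalg.det(F)
    C = F.T @ F
    Cdis = J ** (-2.0 / 3.0) * C
    Bdis = J ** (-2.0 / 3.0) * (F @ F.T)
    I = np.eye(3)

    # Matrix contribution, Eqs. (15a)-(15b)
    Bdis0 = Bdis - np.trace(Bdis) / 3.0 * I
    T = (G0 * Bdis0 + K * np.log(J) * I) / J

    # Structure-tensor constants, Eq. (6)
    A = 2.0 * kip * kop
    Bc = 2.0 * kop * (1.0 - 2.0 * kip)

    for a0 in (a04, a06):
        Ibar = a0 @ Cdis @ a0                # fibril invariant, Eq. (4)
        if Ibar <= 1.0:                      # fibrils carry no compression
            continue
        H = A * I + Bc * np.outer(a0, a0) + (1.0 - 3.0 * A - Bc) * np.outer(an, an)  # Eq. (5)
        Istar = np.einsum('ij,ij->', H, Cdis)                                        # Eq. (13)
        psi_p = k1 * (Istar - 1.0) * np.exp(k2 * (Istar - 1.0) ** 2)
        T += 2.0 * J ** (-5.0 / 3.0) * psi_p * (F @ H @ F.T - np.trace(H @ C) / 3.0 * I)  # Eq. (16)
    return T

# Example: 5% stretch along the first fibril direction (parameters in MPa, from Table 1)
F = np.diag([1.05, 0.99, 0.99])
T = gst_cauchy_stress(F, np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]),
                      np.array([0.0, 0.0, 1.0]), G0=0.06, K=5.5,
                      k1=0.02, k2=400.0, kip=0.3, kop=0.4)
```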
3 Governing Equations

The balance of linear momentum in the deformed body $\mathcal{B}_t$ under equilibrium conditions is

$$\operatorname{div}\mathbf{T} = \mathbf{0} \tag{17}$$

where $\mathbf{T}$ is the total Cauchy stress given by Eq. (14). The surface traction on the deformed body surface $\partial\mathcal{B}_t$ is given by

$$\mathbf{t}(\mathbf{n}) = [\![\mathbf{T}]\!]\,\mathbf{n} \tag{18}$$

where $\mathbf{n}$ is the outward normal to $\partial\mathcal{B}_t$ and $[\![\,\cdot\,]\!]$ is the jump operator, defined as the difference between the quantity inside and outside the domain.

4 Results and Discussion

The proposed model is implemented numerically in Abaqus/Standard [36] by writing a UMAT; its verification is given in Appendix A. In this section, we investigate the capabilities of the proposed model by simulating the standard inflation test.

4.1 Experimental Measurements. We use the previous inflation experimental results of Anderson et al. [37]. In these experiments, porcine corneal samples, with a narrow ring of surrounding scleral tissue, were mounted such that the portion connecting the limbus and the sclera was fully fixed. An internal pressure with a maximum value of 100 mmHg was applied gradually (quasi-static conditions) to the samples' posterior surface. Meanwhile, the apical displacement was continuously monitored by a CCD laser displacement sensor and plotted against the pressure.

4.2 Geometry and Boundary Conditions. The first step in any numerical simulation is to define an accurate geometrical representation of the sample. Since we do not have information about the exact geometry used in the inflation tests [37], a generic but popular form is adopted. In particular, we use a biconic surface equation in a cylindrical coordinate system $\{\Theta, r, x_3\}$ for both the anterior and posterior surfaces of the cornea [24]:

$$x_3 - G + \frac{r^2 E}{1 + \sqrt{1 - r^2 F}} = 0 \tag{19}$$

with

$$E = \frac{\cos^2(\Theta - \Theta_{x_1})}{R_{x_1}} + \frac{\sin^2(\Theta - \Theta_{x_1})}{R_{x_2}} \tag{20}$$

and

$$F = \frac{(Q_{x_1} + 1)\cos^2(\Theta - \Theta_{x_1})}{R_{x_1}^2} + \frac{(Q_{x_2} + 1)\sin^2(\Theta - \Theta_{x_1})}{R_{x_2}^2} \tag{21}$$

Here, G is the maximum vertical height at r = 0, $R_{x_1}$ and $R_{x_2}$ are the radii of curvature of the principal meridians along the $x_1$ and $x_2$ directions, respectively, $\Theta_{x_1}$ is the direction of the steepest principal meridian, and $Q_{x_1}$ and $Q_{x_2}$ are the asphericity parameters in the directions $\Theta_{x_1}$ and $\Theta_{x_1} + \pi/2$, respectively.

We use referential unit vectors $\mathbf{a}_{04}$ and $\mathbf{a}_{06}$ to represent the two mean orientations of the collagen fibrils (see Fig. 3(a)). In the central region, the two families of collagen fibrils run along the N–T (dashed line) and I–S (dotted line) directions in a 3D curved fashion. In the limbus region, one family of collagen fibrils (dashed line) runs circumferentially, and the other (dotted line) points outward from the center. Additionally, the out-of-plane direction is denoted by the unit vector $\mathbf{a}_n$ (solid line).

For simplicity, we assume that both families of collagen fibrils share the same in-plane dispersion parameter $\kappa_{\mathrm{ip}}$. Guided by a previous study of X-ray scattering images [9], the spatial distribution of the in-plane dispersion is given by [24]

$$\kappa_{\mathrm{ip}}(\Theta) = \left(\frac{\kappa_{\mathrm{ip}}^{\min} + \kappa_{\mathrm{ip}}^{\max}}{2}\right) - \left(\frac{\kappa_{\mathrm{ip}}^{\max} - \kappa_{\mathrm{ip}}^{\min}}{2}\right)\cos 4\Theta \tag{22}$$

where $\kappa_{\mathrm{ip}}^{\max} = 0.5$ and $\kappa_{\mathrm{ip}}^{\min} = 0.1$ are the maximum and minimum values, respectively. After adding the r dependency, Eq. (22) becomes

$$\kappa_{\mathrm{ip}}(\Theta, r) = \kappa_{\mathrm{ip}}^{\min} + \frac{1}{2}\left(\kappa_{\mathrm{ip}}(\Theta) - \kappa_{\mathrm{ip}}^{\min}\right)\left(1 - \cos\frac{2\pi r}{R_{TZ}}\right) \tag{23}$$

where $R_{TZ} = 5.5$ mm denotes the radius of the transition zone. Note that we assign a homogeneous in-plane dispersion $\kappa_{\mathrm{ip}} = 0.5$ in the limbus region. Equation (23) is visualized in Fig. 4(a).

To assign the out-of-plane parameter $\kappa_{\mathrm{op}}$ across the thickness, we use a local coordinate $s \in [0,1]$ parallel to the out-of-plane unit vectors, with s = 0 at the anterior surface and s = 1 at the posterior surface (see the inset plot in Fig. 5). Guided by the SHG images, where the degree of interweaving between collagen fibrils varies exponentially across the thickness, we link the out-of-plane dispersion parameter $\kappa_{\mathrm{op}}$ to the local coordinate s via

$$\kappa_{\mathrm{op}}(s) = \kappa_{\mathrm{op}}^{\min} + \left(\kappa_{\mathrm{op}}^{\max} - \kappa_{\mathrm{op}}^{\min}\right)\left(1 - \exp(-\gamma_d s)\right) \tag{24}$$

where $\kappa_{\mathrm{op}}^{\min} = 1/3$ and $\kappa_{\mathrm{op}}^{\max} = 1/2$ are the minimum and maximum values, respectively, and the constant $\gamma_d$ controls the nonlinearity of the function (see Fig. 4(b)).
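The geometric and structural fields of this subsection are simple closed-form functions and are easy to script when building a mesh. The sketch below is our own illustration (function names are ours); it uses the anterior-surface biconic values quoted in the next paragraphs, with $Q_{x_1} = Q_{x_2} = Q$ as in that parameter set.

```python
import numpy as np

# Anterior-surface biconic parameters at P = 16 mmHg (values from Sec. 4.2, lengths in mm)
Rx1, Rx2, Theta_x1, Q, G = 7.71, 7.87, 0.51 * np.pi, -0.41, 2.52

def biconic_x3(r, Theta):
    """Height x3 of the biconic surface, Eqs. (19)-(21)."""
    c2, s2 = np.cos(Theta - Theta_x1) ** 2, np.sin(Theta - Theta_x1) ** 2
    E = c2 / Rx1 + s2 / Rx2
    F = (Q + 1.0) * c2 / Rx1 ** 2 + (Q + 1.0) * s2 / Rx2 ** 2
    return G - r ** 2 * E / (1.0 + np.sqrt(1.0 - r ** 2 * F))

def kappa_ip_field(r, Theta, kmin=0.1, kmax=0.5, R_TZ=5.5):
    """In-plane dispersion map over the corneal surface, Eqs. (22)-(23)."""
    k_theta = 0.5 * (kmin + kmax) - 0.5 * (kmax - kmin) * np.cos(4.0 * Theta)
    return kmin + 0.5 * (k_theta - kmin) * (1.0 - np.cos(2.0 * np.pi * r / R_TZ))

def kappa_op_depth(s, kmin=1.0 / 3.0, kmax=0.5, gamma_d=2.5):
    """Through-thickness out-of-plane dispersion, Eq. (24); s = 0 anterior, s = 1 posterior."""
    return kmin + (kmax - kmin) * (1.0 - np.exp(-gamma_d * s))
```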
The geometry is discretized into U3D8 elements with six elements spanning the thickness; only a quarter of the full mesh is shown for clarity (see Fig. 3(b)). For boundary conditions, we fully fix the surface linking the limbus and the sclera and apply an internal pressure of P = 100 mmHg to the posterior surface.

Before running the simulation, extra attention must be paid to its starting point. Given the in vivo measured dimensions $\mathbf{X}^{\mathrm{physio}}$ of the cornea under a physiological loading of $P_{\mathrm{physio}} = 16$ mmHg, we first obtain the stress-free geometry through a zero-pressure algorithm [43,44]. In the algorithm, the mesh connectivity is kept unchanged while the zero-pressure nodal coordinates $\mathbf{X}_{k+1}$ are iteratively updated through

$$\mathbf{X}_{k+1} = \mathbf{X}_k + \left(\mathbf{X}^{\mathrm{physio}} - \mathbf{X}_k^{\mathrm{def}}\right) \tag{25}$$

where $\mathbf{X}_k$ and $\mathbf{X}_k^{\mathrm{def}}$ denote the zero-pressure and deformed coordinates at the kth step. Meanwhile, the mean collagen fibril orientations are consistently mapped back to the zero-pressure configuration. The iteration is terminated based on the global error $e = \|\mathbf{X}_k^{\mathrm{def}} - \mathbf{X}^{\mathrm{physio}}\|_\infty$.

The parameters of the biconic Eq. (19) used for the anterior surface under P = 16 mmHg are obtained from previous studies [24]: $R_{x_1} = 7.71$ mm, $R_{x_2} = 7.87$ mm, $\Theta_{x_1} = 0.51\pi$, $Q_{x_1} = Q_{x_2} = -0.41$, and G = 2.52 mm. The parameters used for the posterior surface under P = 16 mmHg are $R_{x_1} = 6.36$ mm, $R_{x_2} = 6.69$ mm, $\Theta_{x_1} = 0.51\pi$, $Q_{x_1} = Q_{x_2} = -0.52$, and G = 1.89 mm. We plot the physiological coordinates as stars, and the deformed and zero-pressure coordinates at each iteration as triangles and squares, respectively (see Fig. 6(a)). The global error is minimized quickly, within about ten iterations (see Fig. 6(b)).
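A sketch of the pull-back iteration of Eq. (25) follows; the hypothetical callback solve_inflation(X) stands in for the FE inflation analysis that deforms a candidate reference mesh to the physiological pressure (interface and names are our own assumptions).

```python
import numpy as np

def zero_pressure_geometry(X_physio, solve_inflation, tol=1e-6, max_iter=50):
    """Fixed-point zero-pressure iteration of Eq. (25).

    X_physio        : (n_nodes, 3) in vivo nodal coordinates at P_physio
    solve_inflation : hypothetical callback returning the deformed coordinates
                      of reference mesh X under the physiological pressure
    """
    X = X_physio.copy()                          # initial guess: the in vivo geometry
    for k in range(max_iter):
        X_def = solve_inflation(X)               # deform current guess to P_physio
        err = np.max(np.abs(X_def - X_physio))   # global error, infinity norm
        if err < tol:
            break
        X = X + (X_physio - X_def)               # Eq. (25): update zero-pressure coordinates
    return X
```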
4.3 Comparison. We consider six different collagen fibril distributions, i.e., T.I., P.A., I., P.D., P.I., and F.V., based on the material parameters given in Table 1. The parameters are selected such that the apical rise–pressure curves of both F.V. and T.I. fall onto the experimental data as closely as possible (see Fig. 7(a)). The other four cases are simulated using their respective dispersion parameters while keeping the mechanical parameters unchanged.

Table 1 Material parameters used in the simulations

| | T.I. | P.A. | I. | P.D. | P.I. | F.V. |
|---|---|---|---|---|---|---|
| $G_0$ (MPa) | 0.06 | 0.06 | 0.06 | 0.06 | 0.06 | 0.06 |
| $K$ (MPa) | 5.5 | 5.5 | 5.5 | 5.5 | 5.5 | 5.5 |
| $k_1$ (kPa) | 20 | 20 | 20 | 20 | 20 | 0.5 |
| $k_2$ | 400 | 400 | 400 | 400 | 400 | 900 |
| $\kappa$ | [24] | 0 | 1/3 | N/A | N/A | N/A |
| $\kappa_{\mathrm{ip}}$ | N/A | N/A | N/A | Fig. 5(a) | 1/2 | Fig. 5(a) |
| $\kappa_{\mathrm{op}}$ | N/A | N/A | N/A | 1/2 | 1/2 | Fig. 5(b) |
| $\gamma_d$ | N/A | N/A | N/A | N/A | N/A | 2.5 |

We compare the contours of von Mises stress among all cases under an internal pressure of P = 100 mmHg (see Fig. 8). For the cases of T.I., P.A., and P.D., the contours share a similar pattern, a cross mark in the central region, indicating that the collagen fibrils along the N–T and I–S directions are under tension. In the case of F.V., the von Mises stress is much lower at the anterior surface, because collagen fibrils near the posterior surface are almost perfectly aligned, making them exhibit earlier stretch-locking than the fibrils at the anterior surface. In the cases of I. and P.I., no significant stretching of the collagen fibrils is observed.

Echoing the main focus of the current study, i.e., modeling the structural variation of collagen fibrils across the corneal thickness, we plot in Fig. 9 the side view of the same von Mises stress contour. The F.V. case predicts a reasonable stress profile across the thickness that is in line with the collagen fibril distribution observed in SHG images. On the other hand, the stress is more concentrated at the anterior surface in the cases of T.I., P.A., and P.D. There is no apparent stress gradient across the thickness in the cases of I. and P.I.

In Fig. 7(a), the simulated apical rise–pressure curves are plotted as lines, while the experimental data of Anderson et al. [37] are plotted as circles. The T.I. and F.V. cases capture the experimental results quite well. The P.A. case exhibits the earliest stretch-locking behavior, while the I. case shows no fiber engagement under the same boundary conditions. Interestingly, in the planar cases P.D. and P.I., the collagen fibrils exhibit relatively early stretch-locking caused by the narrower dispersion space.

4.4 Parametric Study. Here we investigate the influence of the decay-rate constant $\gamma_d$ on the mechanical response of the corneal stroma under inflation. Figure 7(b) compares the apical rise–pressure curves for different values of $\gamma_d$. The "stretch-locking" behavior occurs earlier as $\gamma_d$ increases, because a larger fraction of collagen fibrils with narrower out-of-plane dispersion is engaged in the deformation. The case with $\gamma_d = 2.5$ agrees well with the experimental results. More interestingly, as Fig. 10 shows, the spots of high stress concentration move from the posterior to the anterior side as $\gamma_d$ increases. This result could be useful for predicting the location of the highest stress across the thickness when the collagen fibril architecture is known.

4.5 Numerical Expense. In this section, we compare the proposed model against a recent AI model [26], in which the collagen fibril free energy is obtained by angular integration over the unit sphere $\Omega$:

$$\psi_{\mathrm{R}}^{\mathrm{AI}} = \psi_{\mathrm{R}}^{m} + \frac{1}{m}\int_{\Omega}\rho(\mathbf{a}_0)\,w(\lambda_f)\,d\Omega \tag{26}$$

where $d\Omega = \sin\Phi\,d\Phi\,d\Theta$ denotes the differential of the unit sphere, $\mathbf{a}_0$ the referential collagen fibril orientation, $\psi_{\mathrm{R}}^{m}$ the matrix free energy, and w the collagen fibril strain energy, a function of the fiber stretch $\lambda_f$. The integral in Eq. (26) is normalized by $m = \int_\Omega \rho(\mathbf{a}_0)\,d\Omega$.

We implemented the AI model numerically in Abaqus/Standard [36] as a UMAT. The simulated apical rise–pressure curve using the AI model, shown as the dotted black line in Fig. 7(a), agrees well with the experimental data. The CPU elapsed time of simulations for different numbers of unconstrained degrees of freedom $n_{\mathrm{dof}}$ was recorded. The CPU time ratio between the two models, $t_{\mathrm{AI}}/t_{\mathrm{GST}}$, is proportional to the number of unconstrained degrees of freedom $n_{\mathrm{dof}}$ (see Fig. 11). Since the double angular integration in Eq. (26) is evaluated by a Gaussian quadrature scheme, the number of Gauss points has a significant impact on the cost: the time ratio is nearly 100 at $n_{\mathrm{dof}} = 140$ when 64 Gauss points are used. Overall, the proposed model has almost the same features as the AI one but is cheaper in terms of simulation cost.
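For orientation, the following sketch shows one way the normalized angular integration of Eq. (26) might be evaluated with tensor-product Gauss–Legendre quadrature. The names, the user-supplied density rho(Theta, Phi) and fiber energy w_of_stretch, and the latitude convention (sphere measure $\cos\Phi\,d\Phi\,d\Theta$ for $\Phi \in [-\pi/2, \pi/2]$, matching Sec. 2.2) are our own assumptions; the paper's quadrature details are not given.

```python
import numpy as np

def ai_fibril_energy(C, rho, w_of_stretch, n_theta=16, n_phi=16):
    """Sketch of the normalized angular-integration energy of Eq. (26), fibril part only."""
    # Gauss-Legendre nodes mapped to Phi in (-pi/2, pi/2) and Theta in (0, 2*pi)
    xp, wp = np.polynomial.legendre.leggauss(n_phi)
    xt, wt = np.polynomial.legendre.leggauss(n_theta)
    Phi, wPhi = 0.5 * np.pi * xp, 0.5 * np.pi * wp
    Theta, wTheta = np.pi * (xt + 1.0), np.pi * wt

    num, den = 0.0, 0.0
    for P, wP in zip(Phi, wPhi):
        for T, wT in zip(Theta, wTheta):
            a0 = np.array([np.cos(P) * np.cos(T), np.cos(P) * np.sin(T), np.sin(P)])
            lam = np.sqrt(a0 @ C @ a0)        # fiber stretch along a0
            dOmega = np.cos(P) * wP * wT      # sphere measure, latitude convention
            num += rho(T, P) * w_of_stretch(lam) * dOmega
            den += rho(T, P) * dOmega         # normalization constant m
    return num / den

# e.g. rho = lambda Theta, Phi: 1.0 for a uniform density over the sphere
```

The nested loop over quadrature points, repeated at every integration point of every element and every iteration, is precisely why the AI route is so much costlier than the single tensor contraction of Eq. (13).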
5 Concluding Remarks

This work develops a continuum mechanics model considering collagen fibril out-of-plane dispersion across the corneal thickness. In particular, the function linking the out-of-plane dispersion parameter to the corneal depth is one of the important contributions of the current work. The proposed model is numerically implemented, and its capabilities are demonstrated by performing numerical simulations of inflation experiments using six different collagen fibril distributions: transversely isotropic, isotropic, perfect alignment, planar dispersion, planar isotropic, and full-thickness variation. The results show that the proposed model can replicate the experimental pressure–displacement curves very well. It also predicts a reasonable stress profile across the corneal thickness. A parametric study of the decay-rate constant indicates that the stress profile across the thickness is sensitive to the collagen fibril out-of-plane dispersion. From the perspective of computational expense, compared to a recent AI model [26], the proposed model stands out: it requires much less computational power while providing almost the same functionality.

Looking toward the future, more work yet remains. For example, the model could be strengthened by incorporating more detailed collagen fibril structural information, such as the interactions between collagen fibril layers. The model could also be extended by adding dissipative mechanisms such as viscoelasticity and by considering the corneal gel-like behavior (poroelasticity and fluid migration) [45,46].

Funding Data

• National Science Foundation, Division of Civil, Mechanical, and Manufacturing Innovation (Grant No. 1635290; Funder ID: 10.13039/100000001).

Appendix A: Verification of the Finite Element Implementation

To verify the numerical implementation, we compare our simulated results with analytically tractable solutions. We prescribe a simple shear motion to a matrix cube embedded with one family of collagen fibrils with mean direction $\mathbf{a}_{04} = [a_x, a_y, 0]^{\top}$ and out-of-plane direction $\mathbf{a}_n = [-a_y, a_x, 0]^{\top}$ (see Fig. 12(a)). To ensure these are unit vectors, the condition $a_x^2 + a_y^2 = 1$ must hold. Following Gurtin [47], the simple shear deformation is given by

$$[\mathbf{F}] = \begin{bmatrix} 1 & \gamma & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad [\mathbf{B}] = \begin{bmatrix} 1+\gamma^2 & \gamma & 0 \\ \gamma & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad [\mathbf{C}] = \begin{bmatrix} 1 & \gamma & 0 \\ \gamma & \gamma^2+1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \tag{A1}$$

with $\gamma = \tan\theta$ denoting the amount of shear. Referring to Eq. (5), the generalized structure tensor is given by

$$[\mathbf{H}] = \begin{bmatrix} A + B a_x^2 + C a_y^2 & D a_x a_y & 0 \\ D a_x a_y & A + B a_y^2 + C a_x^2 & 0 \\ 0 & 0 & A \end{bmatrix} \tag{A2}$$

with $C = 1 - 3A - B$ and $D = 2B + 3A - 1$. Next, the generalized invariant in Eq. (13) is given by

$$\bar{I}_4^{*} = 2A + B a_x^2 + C a_y^2 + 2 D a_x a_y \gamma + \left(A + B a_y^2 + C a_x^2\right)(\gamma^2 + 1) \tag{A3}$$

Since the simple shear deformation is a volume-preserving motion (i.e., J = 1), the Cauchy stress in Eq. (14) can be written as

$$\mathbf{T} = G_0\mathbf{B} + 2\left[k_1(\bar{I}_4^{*} - 1)\exp\left(k_2(\bar{I}_4^{*} - 1)^2\right)\right]\mathbf{F}\mathbf{H}\mathbf{F}^{\top} + P\,\mathbf{1} \tag{A4}$$

where P is a constitutively indeterminate pressure introduced to satisfy the incompressibility constraint. On the numerical side, a single element (U3D8) in Abaqus/Standard [36] is prescribed the same deformation. We also take $K = 10^3 G_0$ to enforce a nearly incompressible condition in the simulations. Figure 12(b) compares the analytical solution against the numerical solution for the shear stress given by

$$T_{12} = G_0\gamma + 2\left[k_1(\bar{I}_4^{*} - 1)\exp\left(k_2(\bar{I}_4^{*} - 1)^2\right)\right](\alpha + \beta\gamma) \tag{A5}$$

where the two constants are $\alpha = (2B + 3A - 1)a_x a_y$ and $\beta = A + B a_y^2 + (1 - 3A - B)a_x^2$. The stress is normalized by the matrix shear modulus $G_0$, and three different combinations of dispersion parameters are considered. The excellent agreement between the numerical and analytical solutions suggests that our numerical implementation is fully verified.
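The analytical result of Eq. (A5) is straightforward to script for cross-checking an implementation; a minimal sketch (names are ours) follows. For J = 1 it matches the stress sketch given after Eq. (16) up to the indeterminate pressure P, which affects only the normal components.

```python
import numpy as np

def analytic_T12(gamma, ax, ay, G0, k1, k2, kip, kop):
    """Analytical shear stress of Eq. (A5) for simple shear of the GST model."""
    A  = 2.0 * kip * kop                     # Eq. (6)
    B  = 2.0 * kop * (1.0 - 2.0 * kip)
    Cc = 1.0 - 3.0 * A - B
    D  = 2.0 * B + 3.0 * A - 1.0
    I4s = (2.0 * A + B * ax**2 + Cc * ay**2 + 2.0 * D * ax * ay * gamma
           + (A + B * ay**2 + Cc * ax**2) * (gamma**2 + 1.0))          # Eq. (A3)
    alpha = D * ax * ay
    beta  = A + B * ay**2 + Cc * ax**2
    fib = 2.0 * k1 * (I4s - 1.0) * np.exp(k2 * (I4s - 1.0)**2) if I4s > 1.0 else 0.0
    return G0 * gamma + fib * (alpha + beta * gamma)                    # Eq. (A5)
```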
References

1. Hatami-Marbini, H., and Etebu, E., 2013, "Hydration Dependent Biomechanical Properties of the Corneal Stroma," Exp. Eye Res., 116, pp. 47–54. doi:10.1016/j.exer.2013.07.016
2. Meek, K. M., and Knupp, C., 2015, "Corneal Structure and Transparency," Prog. Retinal Eye Res., 49, pp. 1–16. doi:10.1016/j.preteyeres.2015.07.001
3. Cogan, D. G., 1951, "Applied Anatomy and Physiology of the Cornea," 55, pp. 329–359. https://pubmed.ncbi.nlm.nih.gov/14835711/
4. Maurice, D. M., 1957, "The Structure and Transparency of the Cornea," J. Physiol., 136(2), pp. 263–286. doi:10.1113/jphysiol.1957.sp005758
5. Meek, K. M., Blamires, T., Elliott, G. F., Gyi, T. J., and Nave, C., 1987, "The Organisation of Collagen Fibrils in the Human Corneal Stroma: A Synchrotron X-Ray Diffraction Study," Curr. Eye Res., 6(7), pp. 841–846. doi:10.3109/02713688709034853
6. Newton, R. H., and Meek, K. M., 1998, "Circumcorneal Annulus of Collagen Fibrils in the Human Limbus," Invest. Ophthalmol. Visual Sci., 39(7), pp. 1125–1134. https://iovs.arvojournals.org/article.aspx?articleid=2161745
7. Newton, R. H., and Meek, K. M., 1998, "The Integration of the Corneal and Limbal Fibrils in the Human Eye," Biophys. J., 75(5), pp. 2508–2512. doi:10.1016/S0006-3495(98)77695-7
8. Aghamohammadzadeh, H., Newton, R. H., and Meek, K. M., 2004, "X-Ray Scattering Used to Map the Preferred Collagen Orientation in the Human Cornea and Limbus," Structure, 12(2), pp. 249–256. doi:10.1016/j.str.2004.01.002
9. Boote, C., Dennis, S., and Meek, K., 2004, "Spatial Mapping of Collagen Fibril Organisation in Primate Cornea: An X-Ray Diffraction Investigation," J. Struct. Biol., 146(3), pp. 359–367. doi:10.1016/j.jsb.2003.12.009
10. Hayes, S., Boote, C., Lewis, J., Sheppard, J., Abahussin, M., Quantock, A. J., Purslow, C., Votruba, M., and Meek, K. M., 2007, "Comparative Study of Fibrillar Collagen Arrangement in the Corneas of Primates and Other Mammals," Anat. Rec., 290(12), pp. 1542–1550. doi:10.1002/ar.20613
11. Nguyen, T., and Boyce, B., 2011, "An Inverse Finite Element Method for Determining the Anisotropic Properties of the Cornea," Biomech. Model. Mechanobiol., 10(3), pp. 323–337. doi:10.1007/s10237-010-0237-3
12. Abahussin, M., Hayes, S., Cartwright, N. E. K., Kamma-Lorger, C. S., Khan, Y., Marshall, J., and Meek, K. M., 2009, "3D Collagen Orientation Study of the Human Cornea Using X-Ray Diffraction and Femtosecond Laser Technology," Invest. Ophthalmol. Visual Sci., 50(11), pp. 5159–5164. doi:10.1167/iovs.09-3669
13. Morishige, N., Wahlert, A. J., Kenney, M. C., Brown, D. J., Kawamoto, K., Chikama, T.-I., Nishida, T., and Jester, J. V., 2007, "Second-Harmonic Imaging Microscopy of Normal Human and Keratoconus Cornea," Invest. Ophthalmol. Visual Sci., 48(3), pp. 1087–1094. doi:10.1167/iovs.06-1177
14. Winkler, M., Chai, D., Kriling, S., Nien, C. J., Brown, D. J., Jester, B., Juhasz, T., and Jester, J. V., 2011, "Nonlinear Optical Macroscopic Assessment of 3-D Corneal Collagen Organization and Axial Biomechanics," Invest. Ophthalmol. Visual Sci., 52(12), pp. 8818–8827. doi:10.1167/iovs.11-8070
15. Bryant, M. R., Velinsky, S. A., Plesha, M. E., and Clarke, G. P., 1987, Eye Contact Lens, 13(4), pp. 238–242. https://pubmed.ncbi.nlm.nih.gov/3453772/
16. Hanna, K. D., Jouve, F. E., and Waring, G. O., 1989, "Preliminary Computer Simulation of the Effects of Radial Keratotomy," Arch. Ophthalmol., 107(6), pp. 911–918. doi:10.1001/archopht.1989.01070010933044
17. Pinsky, P. M., and Datye, D. V., 1991, "A Microstructurally-Based Finite Element Model of the Incised Human Cornea," J. Biomech., 24(10), pp. 907–922. doi:10.1016/0021-9290(91)90169-N
18. Bryant, M. R., and McDonnell, P. J., 1996, "Constitutive Laws for Biomechanical Modeling of Refractive Surgery," ASME J. Biomech. Eng., 118(4), pp. 473–481. doi:10.1115/1.2796033
20. Pandolfi, A., and Manganiello, F., 2006, "A Model for the Human Cornea: Constitutive Formulation and Numerical Analysis," Biomech. Model. Mechanobiol., 5(4), pp. 237–246. doi:10.1007/s10237-005-0014-x
21. Lanir, Y., 1983, "Constitutive Equations for Fibrous Connective Tissues," J. Biomech., 16(1), pp. 1–12. doi:10.1016/0021-9290(83)90041-6
22. Pinsky, P. M., van der Heide, D., and Chernyak, D., 2005, "Computational Modeling of Mechanical Anisotropy in the Cornea and Sclera," J. Cataract Refractive Surg., 31(1), pp. 136–145. doi:10.1016/j.jcrs.2004.10.048
23. Boyce, B., Jones, R., Nguyen, T., and Grazier, J., 2007, "Stress-Controlled Viscoelastic Tensile Response of Bovine Cornea," J. Biomech., 40(11), pp. 2367–2376. doi:10.1016/j.jbiomech.2006.12.001
24. Pandolfi, A., and Holzapfel, G. A., 2008, "Three-Dimensional Modeling and Computational Analysis of the Human Cornea Considering Distributed Collagen Fibril Orientations," ASME J. Biomech. Eng., 130(6), p. 061006. doi:10.1115/1.2982251
25. Pandolfi, A., Fotia, G., and Manganiello, F., 2009, "Finite Element Simulations of Laser Refractive Corneal Surgery," Eng. Comput., 25(1), pp. 15–24. doi:10.1007/s00366-008-0102-5
26. Petsche, S. J., and Pinsky, P. M., 2013, "The Role of 3-D Collagen Organization in Stromal Elasticity: A Model Based on X-Ray Diffraction Data and Second Harmonic-Generated Images," Biomech. Model. Mechanobiol., 12(6), pp. 1101–1113. doi:10.1007/s10237-012-0466-8
27. Sacks, M. S., 2003, "Incorporation of Experimentally-Derived Fiber Orientation Into a Structural Constitutive Model for Planar Collagenous Tissues," ASME J. Biomech. Eng., 125(2), pp. 280–287. doi:10.1115/1.1544508
28. Driessen, N. J., Bouten, C. V., and Baaijens, F. P., 2005, "A Structural Constitutive Model for Collagenous Cardiovascular Tissues Incorporating the Angular Fiber Distribution," ASME J. Biomech. Eng., 127(3), pp. 494–503. doi:10.1115/1.1894373
29. Alastrué, V., Martinez, M., Menzel, A., and Doblaré, M., 2009, "On the Use of Non-Linear Transformations for the Evaluation of Anisotropic Rotationally Symmetric Directional Integrals. Application to the Stress Analysis in Fibred Soft Tissues," Int. J. Numer. Methods Eng., 79(4), pp. 474–504. doi:10.1002/nme.2577
30. Raghupathy, R., and Barocas, V. H., 2009, "A Closed-Form Structural Model of Planar Fibrous Tissue Mechanics," J. Biomech., 42(10), pp. 1424–1428. doi:10.1016/j.jbiomech.2009.04.005
31. Federico, S., and Gasser, T. C., 2010, "Nonlinear Elasticity of Biological Tissues With Statistical Fibre Orientation," J. R. Soc. Interface, 7(47), pp. 955–966. doi:10.1098/rsif.2009.0502
32. Ateshian, G. A., Rajan, V., Chahine, N. O., Canal, C. E., and Hung, C. T., 2009, "Modeling the Matrix of Articular Cartilage Using a Continuous Fiber Angular Distribution Predicts Many Observed Phenomena," ASME J. Biomech. Eng., 131(6), p. 061003. doi:10.1115/1.3118773
33. Gasser, T. C., Gallinetti, S., Xing, X., Forsell, C., Swedenborg, J., and Roy, J., 2012, "Spatial Orientation of Collagen Fibers in the Abdominal Aortic Aneurysm's Wall and Its Relation to Wall Mechanics," Acta Biomater., 8(8), pp. 3091–3103. doi:10.1016/j.actbio.2012.04.044
34. Gasser, T. C., Ogden, R. W., and Holzapfel, G. A., 2006, "Hyperelastic Modelling of Arterial Layers With Distributed Collagen Fibre Orientations," J. R. Soc. Interface, 3(6), pp. 15–35. doi:10.1098/rsif.2005.0073
35. Holzapfel, G. A., Niestrawska, J. A., Ogden, R. W., Reinisch, A. J., and Schriefl, A. J., 2015, "Modelling Non-Symmetric Collagen Fibre Dispersion in Arterial Walls," J. R. Soc. Interface, 12(106), p. 20150188. doi:10.1098/rsif.2015.0188
36. Abaqus/Standard, 2019, Abaqus Reference Manuals, Dassault Systemes Simulia, Providence, RI.
37. Anderson, K., El-Sheikh, A., and Newson, T., 2004, "Application of Structural Analysis to the Mechanical Behaviour of the Cornea," J. R. Soc. Interface, 1(1), pp. 3–15. doi:10.1098/rsif.2004.0002
38. Bosnjak, N. S., Wang, S., and Chester, S. A., 2017, "Modeling Deformation-Diffusion in Polymeric Gels," Poromechanics VI, Paris, France, July 9–13, pp. 141–148. doi:10.1061/9780784480779.017
39. Bosnjak, N., Wang, S., Han, D., Lee, H., and Chester, S. A., 2019, "Modeling of Fiber-Reinforced Polymeric Gels," Mech. Res. Commun., 96, pp. 7–18. doi:10.1016/j.mechrescom.2019.02.002
40. Holzapfel, G. A., 2000, Nonlinear Solid Mechanics: A Continuum Approach for Engineering, Wiley, New York.
41. Niestrawska, J. A., Viertler, C., Regitnig, P., Cohnert, T. U., Sommer, G., and Holzapfel, G. A., 2016, "Microstructure and Mechanics of Healthy and Aneurysmatic Abdominal Aortas: Experimental Analysis and Modelling," J. R. Soc. Interface, 13(124), p. 20160620. doi:10.1098/rsif.2016.0620
42. Meek, K. M., and Newton, R. H., 1999, "Organization of Collagen Fibrils in the Corneal Stroma in Relation to Mechanical Properties and Surgical Practice," J. Refractive Surg., 15(6), pp. 695–699. doi:10.3928/1081-597X-19991101-18
43. Riveros, F., Chandra, S., Finol, E. A., Gasser, T. C., and Rodriguez, J. F., 2013, "A Pull-Back Algorithm to Determine the Unloaded Vascular Geometry in Anisotropic Hyperelastic AAA Passive Mechanics," Ann. Biomed. Eng., 41(4), pp. 694–708. doi:10.1007/s10439-012-0712-3
44. Ariza-Gracia, M. Á., Zurita, J., Piñero, D. P., Calvo, B., and Rodríguez-Matas, J. F., 2016, "Automatized Patient-Specific Methodology for Numerical Determination of Biomechanical Corneal Response," Ann. Biomed. Eng., 44(5), pp. 1753–1772. doi:10.1007/s10439-015-1426-0
45. Hatami-Marbini, H., 2014, "Hydration Dependent Viscoelastic Tensile Behavior of Cornea," Ann. Biomed. Eng., 42(8), pp. 1740–1748. doi:10.1007/s10439-014-0996-6
46. Hatami-Marbini, H., and Maulik, R., 2016, "A Biphasic Transversely Isotropic Poroviscoelastic Model for the Unconfined Compression of Hydrated Soft Tissue," ASME J. Biomech. Eng., 138(3), p. 031003. doi:10.1115/1.4032059
47. Gurtin, M. E., Fried, E., and Anand, L., 2010, The Mechanics and Thermodynamics of Continua, Cambridge University Press, Cambridge, UK.
https://www.gradesaver.com/textbooks/math/applied-mathematics/elementary-technical-mathematics/chapter-3-section-3-4-volume-and-area-exercise-page-147/9
# Chapter 3 - Section 3.4 - Volume and Area - Exercise: 9

1,000,000

#### Work Step by Step

We know that there are 100 centimeters in 1 meter. Using this, we find how many cubic centimeters are in a cubic meter: $1\,\text{m}^3=(100\,\text{cm})^3=1{,}000{,}000\,\text{cm}^3$
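A one-line check of the arithmetic (a trivial sketch, not part of the textbook solution):

    cm_per_m = 100
    print(cm_per_m ** 3)  # 1000000 -> cubic centimeters in one cubic meter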
https://math.stackexchange.com/questions/3033895/solutions-to-a-b-c-fracab-fracbc-fracca-fracba-f
# Solutions to $a,\ b,\ c,\ \frac{a}{b}+\frac{b}{c}+\frac{c}{a},\ \frac{b}{a} + \frac{c}{b} + \frac{a}{c} \in \mathbb{Z}$

I came across a puzzle in a Maths Calendar I own. Most of them I can do fairly easily, but this one has me stumped, and I was hoping for a hint or solution. The question is: What are the solutions to

$$\left \{ a,\ b,\ c,\ \dfrac{a}{b}+\dfrac{b}{c}+\dfrac{c}{a},\ \dfrac{b}{a} + \dfrac{c}{b} + \dfrac{a}{c} \right \} \subset \mathbb{Z}$$

I've tried a few things, but I don't think I've made any meaningful progress, besides determining that $a = \pm b = \pm c$ are the only obvious solutions. My hope is to prove that no other solution can exist.

I don't know if it helps, but I also did a brute-force search for coprime numbers $a,b,c$ for which $\dfrac{a}{b}+\dfrac{b}{c}+\dfrac{c}{a} \in \mathbb{Z}$, with $1 \leq a \leq b \leq c$, and $a \leq 100$, $b \leq 1000$, $c \leq 10000$. The reason for restricting to coprime triples is that if a solution has a common factor, we can divide through by that factor and obtain another solution that satisfies the conditions. The triplets I found which satisfy this are:

$$(a, b, c) = (1, 1, 1), (1,2,4), (2, 36, 81), (3, 126, 196), (4, 9, 162), (9, 14, 588), (12, 63, 98), (18, 28, 147), (98, 108, 5103)$$

None of these except the first satisfy $\dfrac{b}{a} + \dfrac{c}{b} + \dfrac{a}{c} \in \mathbb{Z}$.

• Surely, one solution is obvious. – user608030 Dec 10 '18 at 13:10
• How is $(a,b,c)=(1,2,4)$ a solution? $\frac{b}{a} + \frac{c}{b} + \frac{a}{c} = \frac{2}{1} + \frac{4}{2} + \frac{1}{4} = 4 + 1/4$, not an integer. – hellHound Dec 10 '18 at 13:22
• @hellhound It does not satisfy $b/a + c/b + a/c \in \mathbb{Z}$, only $a/b+b/c+c/a \in \mathbb{Z}$. – Shakespeare Dec 10 '18 at 13:25
• Oh, I noticed the line about your search now, my bad. Although, if I had to guess, none of the triples from your search except $(1,1,1)$ would satisfy the problem's requirement. – hellHound Dec 10 '18 at 13:27
• @hellhound Correct, none of them do :) – Shakespeare Dec 10 '18 at 13:28

Suppose that $a,b,c,\frac{a}{b}+\frac{b}{c}+\frac{c}{a},\frac{a}{c}+\frac{b}{a}+\frac{c}{b} \in \mathbb Z$. Consider the polynomial $$P(x)=\left(x-\frac{a}{b}\right)\left(x-\frac{b}{c}\right)\left(x-\frac{c}{a}\right) = x^3-\left(\frac{a}{b}+\frac{b}{c}+\frac{c}{a}\right)x^2+\left(\frac{a}{c}+\frac{b}{a}+\frac{c}{b}\right)x-1.$$ Its coefficients are integers. Since the leading coefficient is $1$, all rational roots of $P$ are integers. Since the constant term is $-1$, all integer roots of $P$ are $1$ or $-1$ (they must divide the constant term). Since $\dfrac ab, \dfrac bc, \dfrac ca$ are rational roots of $P$, it follows that $\dfrac ab, \dfrac bc, \dfrac ca \in \{-1,1\}$.

Let $(a,b,c)$ satisfy the requirements, and assume $a$, $b$, $c$ are coprime. Then $abc$ divides both $a^2c + b^2a + c^2b$ and $a^2b+b^2c+c^2a$. Let $p$ be a prime factor of $a$, and let $d$ be the largest integer such that $p^d$ divides $a$. Then $p$ divides $b^2c$ and $c^2b$, so $p$ divides $b$ or $c$; assume $p$ divides $b$ (and, by coprimality, does not divide $c$). Since $p^{d+1}$ divides $a^2$, $ab$, and $abc$, where the latter divides $a^2c + b^2a + c^2b$, it follows that $p^{d+1}$ divides $c^2b$, and since $p$ does not divide $c$, that $p^{d+1}$ divides $b$. This in turn implies that $p^{d+1}$ divides $a^2b+b^2c+c^2a$, and hence divides $c^2a$. Now, $p$ does not divide $c$ by the assumption of coprimality, hence $p^{d+1}$ divides $a$, a contradiction to the maximality of $d$. Hence none of $a,b,c$ has a prime factor, so all these numbers equal $\pm1$.
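For reference, here is a sketch of the kind of brute-force search the question describes, with the bounds shrunk so it runs quickly (the original search used much larger limits, so it finds more triples):

    from math import gcd

    # Find coprime triples (a, b, c) with a <= b <= c such that
    # a/b + b/c + c/a is an integer; this holds iff abc divides
    # a^2*c + b^2*a + c^2*b (the numerator over the common denominator abc).
    hits = []
    for a in range(1, 10):
        for b in range(a, 100):
            for c in range(b, 1000):
                if gcd(gcd(a, b), c) != 1:
                    continue
                if (a*a*c + b*b*a + c*c*b) % (a*b*c) == 0:
                    hits.append((a, b, c))
    print(hits)  # recovers several of the triples listed above within these bounds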
https://economics.stackexchange.com/questions/42939/deriving-aggregate-output-from-labor-demand-and-supply
# Deriving aggregate output from labor demand and supply

I was reading the following paper: http://eml.berkeley.edu//~moretti/growth.pdf

I got stuck at equation (7). The firm's production function is

$$Y_{i}=A_{i}L_{i}^{\alpha}K_{i}^{\eta}T_{i}^{1-\alpha-\eta}$$

Labor supply is

$$W_{i}=V\frac{P_{i}^{\beta}}{Z_{i}}=V\frac{\bar{P_{i}}^{\beta}L_{i}^{\beta \gamma_{i}}}{Z_{i}}$$

Labor demand is

$$L_{i}=\left(\frac{\alpha^{1-\eta}\eta^{\eta}}{R^{\eta}}\frac{A_{i}}{W_{i}^{1-\eta}}\right)^{\frac{1}{1-\alpha-\eta}}T_{i}$$

The paper says that if we impose that aggregate labor demand equals aggregate labor supply (normalized to one), then the aggregate output $Y=\sum_{i}Y_{i}$ is

$$Y=\left(\frac{\eta}{R}\right)^{\frac{\eta}{1-\eta}}\left[\sum_{i}\left(A_{i}\left[\frac{\bar{Q}}{Q_{i}}\right]^{1-\eta}\right)^{\frac{1}{1-\alpha-\eta}}T_{i}\right]^{\frac{1-\alpha-\eta}{1-\eta}}$$

This step looks drastic to me. How can the aggregate output be derived from the above conditions?

Use $W_{i}=V \cdot \frac{P_{i}^{\beta}}{Z_{i}}=VQ_i$; then

$$L_{i}=\left(\frac{\alpha^{1-\eta} \eta^{\eta}}{R^{\eta}} \cdot \frac{A_{i}}{(V Q_{i})^{1-\eta}}\right)^{\frac{1}{1-\alpha-\eta}} \cdot T_{i}$$

and

$$\sum L_i = {\left({\frac{\alpha}{V}}\right)}^{\frac{1-\eta}{1-\alpha-\eta}} {\left({\frac{\eta}{R}}\right)}^{\frac{\eta}{1-\alpha-\eta}} \sum\left(\frac{A_i}{Q_i^{1-\eta}}\right)^{\frac{1}{1-\alpha-\eta}}T_i = 1$$

so that

$$\frac{V}{\alpha} = {\left({\frac{\eta}{R}}\right)}^{\frac{\eta}{1-\eta}} \left(\sum\left(\frac{A_i}{Q_i^{1-\eta}}\right)^{\frac{1}{1-\alpha-\eta}}T_i\right)^{\frac{1-\alpha-\eta}{1-\eta}}.$$

Use the FOC on labor, $W_i=\alpha \frac{Y_i}{L_i}$; then $\sum Y_i = \frac{V}{\alpha}\sum L_iQ_i = \frac{V}{\alpha} \bar{Q}$. Replace $\frac{V}{\alpha}$ with the equation above and you get equation (7).
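A numerical cross-check can make the algebra less mysterious. The sketch below uses made-up parameter values and, like the answer, treats the $Q_i$ as given; $V$ is pinned down by the normalization $\sum_i L_i = 1$. It builds $Y$ firm by firm from the production function and compares it with the closed form in equation (7):

    import numpy as np
    from scipy.optimize import brentq

    rng = np.random.default_rng(0)
    n = 5
    alpha, eta, R = 0.6, 0.2, 1.3          # assumed; alpha + eta < 1
    A = rng.uniform(1, 2, n)               # productivity
    T = rng.uniform(1, 2, n)               # land
    Q = rng.uniform(1, 2, n)               # cost shifters, W_i = V * Q_i

    def labor(V):
        W = V * Q
        return (alpha**(1-eta) * eta**eta / R**eta
                * A / W**(1-eta))**(1/(1-alpha-eta)) * T

    # Impose aggregate labor demand = aggregate labor supply = 1
    V = brentq(lambda v: labor(v).sum() - 1.0, 1e-6, 1e6)
    L = labor(V)

    # Optimal capital from the FOC on capital: eta * Y_i / K_i = R
    K = (eta * A * L**alpha * T**(1-alpha-eta) / R)**(1/(1-eta))
    Y_direct = (A * L**alpha * K**eta * T**(1-alpha-eta)).sum()

    Qbar = (L * Q).sum()
    Y_formula = (eta/R)**(eta/(1-eta)) * (
        ((A * (Qbar/Q)**(1-eta))**(1/(1-alpha-eta)) * T).sum()
    )**((1-alpha-eta)/(1-eta))

    print(Y_direct, Y_formula)   # agree up to floating-point error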
https://math.stackexchange.com/questions/2202752/what-symbol-is-this-mathscrs
# What symbol is this? $\mathscr{S}$

What symbol is this? $\mathscr{S}$

    \mathscr{S}

I drew it on Detexify to find out how to write it in MathJax, but it doesn't tell me what it is... Context: "...call the resulting polyhedron $\mathscr{S}$." I knew that it probably didn't have any specific mathematical meaning, but I was just wondering where it was from, like $\theta$ is from the Greek alphabet.

• It's just S in a different (obviously fancy) font. Did you encounter it in any particular context, and is that what you're asking about? – pjs36 Mar 25 '17 at 17:59
• The context given in the edit is the definition of what $\mathscr S$ is going to mean in the following text. – hmakholm left over Monica Mar 25 '17 at 18:02

It's just an upper-case S in a "script" typeface. This doesn't have any fixed meaning any more than $S$ does, but can mean whatever a writer decides to call by a capital script S. This meaning ought to be defined in whatever context you find it used. If you give more of the context you have found the letter in, we might be able to make a more informed guess at what it means there.

If you needed to go to Detexify, it's also possible that what you're really looking at is $\varphi$ (\varphi), a lower-case Greek letter phi. This has a number of conventional meanings, but is also often used as "just a letter" that an author picks at random to represent something specific to the context.

• Oh I see, thanks – suomynonA Mar 25 '17 at 18:00

That is a variation of the capital 'S'. The symbol $\mathscr{S}$ is useful in mathematical writing. For example, if $S$ is a nonempty set, then the symbol $\mathscr{S}$ is a convenient choice to denote a sigma-algebra over $S$. In this case it is also informative.

It is simply a letter used to denote any value, set, vector, or anything else. It is written in a different font; there is nothing special about it.
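For completeness, in plain LaTeX (as opposed to MathJax) `\mathscr` is provided by the `mathrsfs` package, so a minimal working document looks like this:

    \documentclass{article}
    \usepackage{mathrsfs}  % provides \mathscr
    \begin{document}
    Call the resulting polyhedron $\mathscr{S}$.
    \end{document}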
https://www.physicsforums.com/threads/economics-elasticity-of-substitution-between-capital-and-labor.626716/
Economics: Elasticity of substitution between capital and labor

1. Aug 9, 2012

Cinitiator

1. The problem statement, all variables and given/known data

Am I right or wrong on the following? The more capital is needed to replace one unit of labor while attaining the same production level, the lower the elasticity of substitution between capital and labor. It can measure how productive the capital in question is, and/or how much has been invested, given the condition of diminishing returns.

This is the way I understood this concept. However, I'm not completely sure that my understanding of it is right. If it isn't, please tell me.

2. Relevant equations

-

3. The attempt at a solution
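For readers who want to experiment with the definition rather than answer the homework question above: the elasticity of substitution is $\sigma = d\ln(K/L)/d\ln(\mathrm{MRTS})$, and for a CES technology $f(K,L) = (K^\rho + L^\rho)^{1/\rho}$ it equals $1/(1-\rho)$. A small symbolic sketch (my illustration, not part of the thread):

    import sympy as sp

    K, L, rho = sp.symbols('K L rho', positive=True)
    f = (K**rho + L**rho)**(1/rho)        # CES production function

    # MRTS = f_L / f_K; elasticity of substitution = d ln(K/L) / d ln(MRTS),
    # computed here by varying K with L held fixed
    mrts = sp.diff(f, L) / sp.diff(f, K)
    sigma = sp.diff(sp.log(K / L), K) / sp.diff(sp.log(mrts), K)
    print(sp.simplify(sigma))             # should reduce to 1/(1 - rho)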
https://www.gradesaver.com/textbooks/math/statistics-probability/elementary-statistics-12th-edition/chapter-3-statistics-for-describing-exploring-and-comparing-data-3-3-measures-of-variation-basic-skills-and-concepts-page-109/26
# Chapter 3 - Statistics for Describing, Exploring, and Comparing Data - 3-3 Measures of Variation - Basic Skills and Concepts - Page 109: 26

Range: 37 minutes; variance: 85.5 minutes²; standard deviation: 9.247 minutes.

#### Work Step by Step

The range is the difference between the maximum and minimum of the set: range = 49 - 12 = 37 minutes. Calculated by calculator, the mean is 20.98 minutes. The sample variance is the sum of the squared differences between each value and the mean, divided by the number of values minus 1: variance = $\frac{(30-20.98)^2+(19-20.98)^2+...+(30-20.98)^2}{50-1}=85.5$ minutes². The standard deviation is the square root of the variance: standard deviation = $\sqrt{85.5}=9.247$ minutes.
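The same computations in a few lines of Python (the textbook's 50 data values aren't reproduced here, so this sketch uses a small made-up sample just to show the formulas):

    import numpy as np

    data = np.array([30.0, 19.0, 12.0, 49.0, 25.0])   # placeholder sample
    r = data.max() - data.min()                       # range
    var = data.var(ddof=1)                            # sample variance: divide by n - 1
    sd = np.sqrt(var)                                 # standard deviation
    print(r, var, sd)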
https://calculus.subwiki.org/w/index.php?title=Parametric_derivative&mobileaction=toggle_view_desktop
# Parametric derivative

## Definition

### Algebraic definition

The parametric derivative $dy/dx$ for a parametric curve $x = f(t), y = g(t)$ at a point $t = t_0$ is given as follows, where $f'(t_0)$ and $g'(t_0)$ both exist and $f'(t)$ is nonzero and of constant sign on an open interval around $t_0$:

$\frac{dy}{dx}\Big|_{t = t_0} = \frac{g'(t_0)}{f'(t_0)}$

As a general function of $t$, the parametric derivative $dy/dx$ is defined as $g'(t)/f'(t)$.

NOTE: When calculating the general expression for the parametric derivative, before canceling any factors between $g'(t)$ and $f'(t)$, it is important to separate out the cases where that common value is zero. For any points where $f'(t) = 0$, the parametric derivative is not defined (the ordinary derivative may still be defined, but we would need another method to calculate it).

MORE ON THE WAY THIS DEFINITION OR FACT IS PRESENTED: We first present the version that deals with a specific point (typically with a ${}_0$ subscript) in the domain of the relevant functions, and then discuss the version that deals with a point that is free to move in the domain, by dropping the subscript. Why do we do this? The purpose of the specific point version is to emphasize that the point is fixed for the duration of the definition, i.e., it does not move around while we are defining the construct or applying the fact. However, the definition or fact applies not just for a single point but for all points satisfying certain criteria, and thus we can get further interesting perspectives on it by varying the point we are considering. This is the purpose of the second, generic point version.

## Relation with ordinary derivative

### Parametric expressions aren't necessarily functions

The notation $dy/dx$ should be used only in the context that $y$ is a function of $x$, i.e., a single value of $x$ gives rise to a single value of $y$. Generally speaking, this is not guaranteed with parametric curves. For instance, for a parametric curve $x = f(t), y = g(t)$, $y$ is expressible as a function of $x$ only if $g(t)$ is completely determined by $f(t)$. This is the case, for instance, when $f$ is a one-one function. But it's not always the case. Consider a circle:

$x = \cos t, \quad y = \sin t$

In this case, a single value of $x$, in most cases, corresponds to two values of $y$ that are negatives of each other. That's because if $x_0 = \cos t_0$, we also have $x_0 = \cos (-t_0)$, so both $\sin t_0$ and $-\sin t_0$ work. The only cases where $y$ is unique in terms of $x$ are $x = 1$ and $x = -1$. Geometrically, vertical secant lines intersect the upper semicircle and lower semicircle, and these are the two $y$-values for a given $x$-value.

### "Nice" parametric expressions define functions locally at most points

As described above, for a parametric curve, $y$ need not globally be a function of $x$. However, even in the presented example of a circle, for most $t_0$, if we restrict $t$ to a small enough open interval around $t_0$, $y$ is a function of $x$ for the part of the curve where $t$ is restricted to that interval. So it's "locally" a function. Differentiation being a local operation, it still makes sense to take the derivative at the point. The parametric derivative should therefore be understood as the derivative for the function obtained by taking the local restriction.
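To make the "locally a function" point concrete, here is a small numerical check (my own sketch, not part of the original wiki page): for the circle at $t_0 = 1$, the parametric derivative $g'(t_0)/f'(t_0)$ matches the ordinary derivative of the local branch $y = \sqrt{1-x^2}$.

    import numpy as np

    # Circle x = cos t, y = sin t: parametric derivative is cos(t) / (-sin(t))
    t0 = 1.0
    param = np.cos(t0) / -np.sin(t0)

    # Ordinary derivative of the local function y = sqrt(1 - x^2)
    # (valid on the upper semicircle around t0 = 1), via central differences
    x0, h = np.cos(t0), 1e-6
    ordinary = (np.sqrt(1 - (x0 + h)**2) - np.sqrt(1 - (x0 - h)**2)) / (2 * h)
    print(param, ordinary)   # the two values agree closely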
In the circle example, the only points at which $y$ is locally not a function of $x$ are the ones at the far left and far right: $x = 1$ and $x = -1$ (ironically, these are the only points with unique global $y$-values).

### If $f'(t)$ is nonzero (and of constant sign) for $t$ around $t_0$, the parametric expression defines a function locally around $t_0$

If $f'(t)$ is nonzero and of constant sign around $t_0$, then (depending on the sign) $f$ is an increasing or decreasing function around $t_0$. In particular, within that open interval, $f$ is a one-one function, so no value of $x$ repeats. Thus locally $y$ is a function of $x$ on that interval.

NOTE: To deduce increasing/decreasing behavior, it is not enough to assume that $f'(t_0) \ne 0$. For a counterexample, see positive derivative at a point not implies increasing around the point.

NOTE 2: The assumption of constant sign is not necessary; it can be deduced from the derivative being nonzero. That's because the derivative of a differentiable function satisfies the intermediate value property.

### The parametric derivative makes sense and is the correct expression when it is defined

We have now come full circle. In the case that $f'(t) \ne 0$ around $t_0$, $y$ is locally a function of $x$ around $t_0$. In such cases, if $g'(t_0)$ also exists, the expression $\frac{g'(t_0)}{f'(t_0)}$ gives the parametric derivative. The proof that the parametric derivative expression is correct follows from the chain rule for differentiation. As established above, $y$ is locally a function of $x$ around $t_0$. Let's call this local function $h$. Locally around $t_0$, we have:

$g = h \circ f$

$g'(t) = h'(f(t))f'(t)$

At $t = t_0$, we get:

$g'(t_0) = h'(f(t_0))f'(t_0)$

Rearranging, we get:

$h'(f(t_0)) = \frac{g'(t_0)}{f'(t_0)}$

The left side is the expression we are trying to calculate, and the right side is the expression that we want to prove it to be.

### When the parametric derivative is undefined, the ordinary derivative may or may not be defined

Consider the example of the astroid as a parametric curve:

$x = f(t) = \cos^3t, \quad y = g(t) = \sin^3t$

The derivatives are as follows:

$f'(t) = -3\cos^2t \sin t, \quad g'(t) = 3\sin^2t \cos t$

The parametric derivative is therefore:

$\frac{3\sin^2 t \cos t}{-3\cos^2 t \sin t}$

We can cancel the 3:

$\frac{\sin^2 t \cos t}{-\cos^2 t \sin t}$

However, we should not cancel the $\sin t$ and $\cos t$ until we have isolated the cases where either of them is zero.

#### Cases of undefined parametric derivative

The cases where the derivative is undefined correspond precisely to the cases where $\cos t = 0$ or $\sin t = 0$, which are precisely the integer multiples of $\pi / 2$. At these points, the parametric derivative is not defined. The actual point coordinates are $(1,0)$, $(0,1)$, $(-1,0)$, and $(0,-1)$.

#### Cases of defined parametric derivative

When the parametric derivative is defined, we can cancel the common factors from the numerator and denominator. After simplifying, we get:

$\frac{\sin^2 t \cos t}{-\cos^2 t \sin t} = \frac{\sin t}{-\cos t} = - \tan t$

#### Limiting behavior on undefined cases

When $t = 0$ or $t = \pi$, the limit of the parametric derivative expression ($- \tan t$) equals zero. However, despite the limit existing, the parametric derivative is undefined at the point. In fact, the parametric curve is not locally a function at these points: geometrically, the $x$-coordinate doubles back, forming a horizontal cusp.
When $t = \pi /2$ or $t = 3\pi / 2$, the parametric derivative does not have a defined limit, with the left-hand and right-hand limits being opposite signs of infinity. Geometrically, we have vertical cusps at these points.

NOTE: In general, when the parametric derivative is not defined at a point, but there is a limit for it at that point, then the ordinary derivative, if defined, must equal that value.
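A quick symbolic check of the astroid computation above (a SymPy sketch, not part of the original wiki page):

    import sympy as sp

    t = sp.symbols('t')
    f = sp.cos(t)**3          # x(t) of the astroid
    g = sp.sin(t)**3          # y(t)

    dydx = sp.diff(g, t) / sp.diff(f, t)
    print(sp.simplify(dydx))   # -tan(t) (or an equivalent form)
    print(sp.limit(dydx, t, 0))  # 0: the limit exists even though dy/dx is undefined at t = 0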
http://cs.brown.edu/courses/csci1430/2011/results/proj3/psastras/
“AHEM! There's SAND on my boots!”

## CS143 / Project 3 / Scene Classification

In this project we implement scene classification using the bag-of-words model. For each scene, a set of features is extracted and clustered to create a visual vocabulary. Using this visual vocabulary, we then define a histogram for each scene type and use these histograms, coupled with an SVM, to classify input scenes into one of several types (suburb, kitchen, industrial, etc.). The dataset used for this project is the 15-scene (category) database introduced by Lazebnik et al. 2006.

### 0 / Pipeline

In the training phase, dense SIFT features are first extracted from each training image and then clustered using k-means. Since the total number of features can be quite large, a random subset of ~100k features is chosen from the original set before beginning clustering. While the randomized feature selection decreased recognition performance, the improvement in speed balanced the loss out. The number of clusters, k, was varied depending on the desired vocabulary size, but k = 800 to 1600 tended to yield the best results for the dataset.

Once the vocabulary is formed, histograms of vocabularies for each of the training images are extracted. These histograms are then used to train multiple one-vs-all SVM scene classifiers. In the test phase, histograms of the vocabulary are created for each of the test images, which are then classified as one of several scene types using the multiclass support vectors. Performance is evaluated using a confusion matrix. The diagonals represent the correct classification rate; accuracy is determined by taking the mean of the diagonals.

### 1 / Results and Extensions

Instead of focusing on evaluating and trying to improve the performance of the classifier, for this project I focused on evaluating the performance of the classification over varying kernels, kernel parameters, and vocabulary sizes. The classification using bags of words was evaluated over several different vocabulary sizes and different SVM kernels. Vocabulary sizes included [10 20 50 100 200 400 800 1600 3200]. The three different kernels were linear, sigmoid, and Gaussian RBF.

#### 1.1 / Extended Kernels and Vocabulary

The project baseline, using a linear SVM with a lambda of 0.1 and randomly selecting 100k features to build a vocabulary of 200 words, yielded accuracies slightly greater than 0.6. Varying the linear SVM regularization parameter (lambda) along with the vocabulary size showed that the regularization parameter greatly affected overall performance, especially for larger vocabulary sizes (note that the vocabulary size is plotted on a log scale and that the regularization parameter should be multiplied by 1e-3).

Substituting the linear SVM with a radial basis function SVM showed a similar trend. However, the RBF performed better than the linear SVM for lower vocabulary sizes, although the linear SVM started to match and beat the RBF SVM for larger vocabularies. (In the confusion matrix below, a gamma of 32 yielded an accuracy of 0.6973 with a vocabulary size of 800.) The radial basis function used was Gaussian, where $k(x,y) = \exp(-\gamma \|x-y\|^2)$. Using Euclidean distance as the distance metric and varying the gamma parameter gave the following result.
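As an aside, a minimal scikit-learn sketch of this kind of RBF-gamma sweep (my illustration on random stand-in data, not the project's original MATLAB pipeline; random histograms yield only chance accuracy, so the point here is the API shape):

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.random((300, 200))        # 300 images, bag-of-words histograms, vocab size 200
    y = rng.integers(0, 15, 300)      # 15 scene categories

    for gamma in [1, 32, 64]:
        clf = SVC(kernel='rbf', gamma=gamma).fit(X[:200], y[:200])
        print(gamma, clf.score(X[200:], y[200:]))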
A similar experiment was done using a sigmoid kernel. For smaller vocabulary sizes, the kernel matched the performance of the linear kernel, although at larger vocabularies its performance varied much more. The sigmoid kernel is defined as $k(x,y) = \tanh(\alpha x^\top y+c)$. Alpha was chosen to be 1 over the data dimensionality (128), and the intercept constant, c, was varied.

The figure below compares the accuracy versus the vocabulary size for each of the kernel types, and plots the area between the curve of lowest accuracy and the curve of highest accuracy (note that the comparison chart is also plotted on a log scale). The comparison shows that the RBF generally does better than the linear or sigmoid kernels at lower vocabulary sizes (200-400). As vocabulary size increases, the linear kernel appears to catch up to the other kernels, and eventually starts to surpass them for very large vocabulary sizes. The wide variance and downturn of the sigmoid kernel as vocabulary size increases may be due to a poor choice of free parameters; however, due to time constraints [1] this was difficult to fix. These results appear to match (to some extent) those found by Yang, Jiang, Hauptmann, Ngo (2007).

For most of the following tests, a linear SVM was used since it uses much less memory and is faster to train and test, while not suffering from any performance loss for vocabulary sizes greater than about 800 (based on the above evaluation).

#### 1.2 / Spatial Information

To incorporate spatial information into the image classification process, instead of using different histograms for each portion of an image (as suggested by Lazebnik et al. 2006), the x and y coordinate information was added to the end of the SIFT feature vector, forming a 130-dimensional vector. While simple, the extended feature vector performed better than without spatial information (accuracy of about 0.73 using a vocab size of 800 and an RBF with a gamma of 64; figure below left). Using a vocabulary size of 1600 and a gamma of 128, accuracy is 0.7653; these values were chosen since the RBF appears to plateau around this area in the above analysis (figure below right). This simple extension to the SIFT vector to incorporate spatial information gave surprisingly good results at very little cost or added complexity; a sketch of the idea is shown below.
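A minimal sketch of the spatial extension (my own Python illustration; the write-up doesn't say how the coordinates were scaled, so the normalization here is an assumption):

    import numpy as np

    def append_spatial(descriptors, keypoints_xy, image_wh):
        """Append normalized (x, y) to each 128-D SIFT descriptor -> 130-D.

        descriptors: (n, 128) array; keypoints_xy: (n, 2) pixel coordinates;
        image_wh: (width, height) used to normalize coordinates to [0, 1].
        """
        xy = keypoints_xy / np.asarray(image_wh, dtype=float)
        return np.hstack([descriptors, xy])

    desc = np.zeros((5, 128))
    pts = np.array([[10, 20], [30, 40], [50, 60], [70, 80], [90, 100]], float)
    print(append_spatial(desc, pts, (640, 480)).shape)   # (5, 130)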
#### 1.3 / Extended Features

Adding and combining other features with the SIFT feature seems to improve the performance of the classifier (better than using any one feature by itself; see Xiao, Hays, Ehinger, Oliva, Torralba). Adding on the GIST scene descriptor and LBP (local binary pattern) features improved performance to 0.7840, using a linear classifier and a vocabulary size of 1600 (see figure below left). These two extra features were chosen since the features they encode do not overlap (that much) with the baseline SIFT descriptor, or with each other. The GIST descriptor aims to encode the general structure of the scene by encoding frequencies, while the LBP feature tries to encode texture information about image regions. For the LBP feature, the best performance seemed to result from using a uniformly sampled rotationally invariant mapping. The LBP code used was taken from http://www.cse.oulu.fi/MVG/Downloads/LBPMatlab, and the GIST code was taken from http://people.csail.mit.edu/torralba/code/spatialenvelope/.

#### 1.4 / Soft Histogram Binning

Histogram binning is inherently a very lossy operation, since features which are similar may get unlucky and be placed into two separate bins. Instead of simply binning the features into a histogram, a soft binning scheme is used. Following Philbin, Chum, Isard, Sivic, Zisserman (2008), an exponential weighting function over the distance of the k nearest neighbors was chosen, where $w = \exp(-d^2/2\sigma^2)$. I slightly modified this weighting function to penalize distances more rapidly, such that $w = \exp(-d^6/2\sigma^2)$. This modified weighting function had the slight side effect that $\sigma^2$ had to be very large (around 1e29). The histogram binning was run using the three nearest neighbors, weighting each according to distance. More neighbors tended not to give any performance gains, since scenes began to become confused with each other. Soft binning combined with extra features and spatial information and a vocabulary size of 800 with a linear SVM gave an accuracy of 0.8027 (figure below left), an improvement of about 1-2%. It might be possible to tune the binning parameters more to get a larger increase in performance. (A sketch of the soft-assignment step appears at the end of this page.)

#### 1.5 / Cross Validation Results

Final results were evaluated using k-fold cross validation (with k=5). The training and test images were combined and treated as one data set, which was then randomly partitioned into k different subsets. The classifier was then trained on k-1 of the subsets and tested on the last (kth) subset. This process was repeated k times, with each of the k subsets used exactly once as the test set. Accuracy was taken as the mean of all k runs. k was chosen to be five for cross validation; accuracies are below.

Modifying this scheme slightly to ensure 100 training images for a fair comparison (by randomly sampling 100 images again from the k-1 training partitions) and 80-90 test images per class, and evaluating results with a vocab size of 800, gave

    >> accuracies
    accuracies =
        0.8040    0.8412    0.8024    0.7974    0.8174
    >> mean(accuracies)
    ans =
        0.8125
    >> var(accuracies)
    ans =
       3.1345e-04

The confusion matrix (taken as the mean of the k trials) is in the figure below left. To the right is the same confusion matrix, except with the diagonals zeroed out to enhance the contrast between incorrect classifications. The diagonal of the confusion matrix is

    >> diag(conf)'
    ans =
      Columns 1 through 11
        0.9793    0.8278    0.9238    0.8462    0.8214    0.8262    0.7098    0.8664    0.8904    0.9442    0.5926
      Columns 12 through 15
        0.7749    0.7333    0.6609    0.7905

More training data can improve performance. If we make use of all of the dataset (200-300 training images per class for each k, i.e., standard k-fold cross validation, using all images in the k-1 folds for training instead of just 100 and testing on the last, kth fold), accuracies improve significantly:

    accuracies =
        0.8550    0.8520    0.8601    0.8374    0.8558
    >> mean(accuracies)
    ans =
        0.8521
    >> var(accuracies)
    ans =
       7.5507e-05

### 2 / Notes

It would be interesting to see if the spatial pyramid match kernel gives a significant performance gain over the simple addition of two parameters to each SIFT feature vector - I think it might. It would also be interesting to see how much performance improves by fusing more features (like the SUN database paper), since right now this method is only fusing 3 features together (SIFT, GIST, and LBP), whereas the SUN database paper fuses many more.
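Circling back to the soft binning of section 1.4, here is a rough Python sketch of the soft-assignment step (my own illustration; the project itself was in MATLAB, and the per-descriptor weight normalization here is one possible design choice, not something stated above):

    import numpy as np

    def soft_bin(descriptors, vocab, k=3, sigma2=1.0):
        """Soft histogram: weight the k nearest words by exp(-d**6 / (2*sigma2)).

        sigma2 is a placeholder; the write-up notes it had to be very large
        (~1e29) for raw SIFT distances with the d**6 penalty.
        """
        # pairwise distances between descriptors (n, D) and vocab words (V, D)
        d = np.linalg.norm(descriptors[:, None, :] - vocab[None, :, :], axis=2)
        hist = np.zeros(len(vocab))
        for row in d:
            idx = np.argsort(row)[:k]                 # k nearest words
            w = np.exp(-row[idx]**6 / (2.0 * sigma2))
            hist[idx] += w / w.sum()                  # one unit of mass per descriptor
        return hist

    rng = np.random.default_rng(1)
    h = soft_bin(rng.random((10, 128)), rng.random((50, 128)))
    print(h.sum())   # ~10.0: total mass equals the number of descriptors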
https://artofproblemsolving.com/wiki/index.php?title=1987_AIME_Problems/Problem_6&diff=prev&oldid=13745
# 1987 AIME Problems/Problem 6

## Problem

Rectangle $ABCD$ is divided into four parts of equal area by five segments as shown in the figure, where $XY = YB + BC + CZ = ZW = WD + DA + AX$, and $PQ$ is parallel to $AB$. Find the length of $AB$ (in cm) if $BC = 19$ cm and $PQ = 87$ cm.

## Solution

Since $XY = WZ$ and $PQ = PQ$, and the areas of the trapezoids $PQZW$ and $PQYX$ are the same, the heights of the trapezoids are the same, namely $\frac{19}{2}$. Thus both trapezoids have area $\frac{1}{2} \cdot \frac{19}{2}(XY + PQ) = \frac{19}{4}(XY + 87)$. This number is also equal to one quarter the area of the entire rectangle, which is $\frac{19\cdot AB}{4}$, so we have $AB = XY + 87$.

In addition, we see that the perimeter of the rectangle is $2AB + 38 = XA + AD + DW + WZ + ZC + CB + BY + YX = 4XY$, so $AB + 19 = 2XY$. Substituting $AB = XY + 87$ into this gives $XY + 106 = 2XY$, so $XY = 106$ and therefore $AB = 193$.
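For readers who like to double-check the final system, a two-line SymPy verification (my addition, not part of the wiki solution):

    import sympy as sp

    AB, XY = sp.symbols('AB XY', positive=True)
    print(sp.solve([sp.Eq(AB, XY + 87), sp.Eq(AB + 19, 2*XY)], [AB, XY]))
    # {AB: 193, XY: 106}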
http://math.stackexchange.com/questions/77737/compact-convergence
# Compact convergence

Suppose $X$ is a topological space, $Y$ is a metric space, $f$ is a function from $X$ to $Y$, and $\{f_n\}$ is a sequence of continuous functions from $X$ to $Y$ that converges uniformly to $f$ on all compact subsets of $X$. Must $f$ be continuous?

Here's a Hausdorff (in fact perfectly normal) counterexample. Let $X=\{p\}\cup(\omega\times\omega)$, where $p\notin\omega\times\omega$. For $n\in\omega$ let $V_n=\{n\}\times\omega$. Topologize $X$ as follows: points of $\omega\times\omega$ are isolated, and a set $U\subseteq X$ is a nbhd of $p$ iff $p\in U$ and there is an $m\in\omega$ such that $V_n\setminus U$ is finite for each $n\ge m$. (In other words, $U$ must contain all but finitely many points of all but finitely many of the 'columns' $V_n$.) The compact subsets of $X$ are precisely the finite subsets. That $X$ is perfectly normal follows from the fact that it is zero-dimensional, since it has a clopen base. (This $X$ is sometimes known as the Arens-Fort space.)

Let $Y=\{0,1\}$ with the metric inherited from $\mathbb{R}$. For $k\in\omega$ define $f_k:X\to Y$ as follows:
\begin{align*} f_k(p)&= 0\\ f_k(\langle n,m\rangle)&=\begin{cases} 1,&n\le k\\ 0,&n>k\;. \end{cases} \end{align*}
Then $\langle f_k:k\in\omega\rangle$ converges uniformly on compact sets to the function $g:X\to Y$ given by $g(p)=0$ and $g(\langle n,m\rangle)=1$ for all $\langle n,m\rangle\in\omega\times\omega$, which is clearly discontinuous at $p$.

• Very nice answer--thanks. – Richard Hevener Nov 1 '11 at 21:52

$f$ is clearly continuous on all compact subsets of $X$, so this is true whenever $X$ is compactly generated, which includes most cases of practical interest.

It is false in general. Let $X$ be an uncountable set equipped with the cocountable topology (the closed subsets are precisely the at most countable subsets and $X$). The compact subsets are precisely the finite subsets (exercise), each of which has the discrete topology, hence every function out of $X$ is continuous on compact subsets, and a sequence of functions $f_n : X \to Y$ converges uniformly to $f$ on compact subsets if and only if it converges pointwise to $f$. It is not hard to construct a counterexample from here.

I lied; my choice of $X$ above can't lead to a counterexample. The problem is that the preimage of every open set in $Y$ must be a cocountable subset of $X$, so if $Y$ is a Hausdorff space with more than one point in it, there are two disjoint open subsets $U, V$ of $Y$ whose preimages in $X$ cannot be disjoint, so every continuous function from $X$ to a Hausdorff space is constant.

• What is practical interest? – Asaf Karagila Nov 1 '11 at 8:02
• @Asaf: I would say that locally compact or first-countable already suffice to cover many topological spaces appearing in parts of mathematics other than general topology. (In other words, if the OP needed this result as a step in another proof, I would guess that $X$ is compactly generated in the OP's application.) – Qiaochu Yuan Nov 1 '11 at 14:32
http://kea-monad.blogspot.com/2008_04_01_archive.html
occasional meanderings in physics' brave new world

Name: Marni D. Sheppeard
Location: New Zealand

## Tuesday, April 29, 2008

### Job Hunting

Browsing jobs online, with no regard for location, I came across a fantastic opportunity at the Department of Physics in Cambridge: they need a new waitress, and lunch and overalls are provided! Actually, I have been spending a bit of time on a more exciting job application, which I submitted today. Even if I don't get the job, it was fun trying.

### M Theory Lesson 178

Recall that $2 \times 2$ spin matrices are associated with the quantum Fourier transform for $q = -1$. The Weyl rule $UV = - VU$ may be thought of as a square with paired edges marked $U$ and $V$, just like in the planar paths considered by Kapranov. In 3 dimensions one draws paths on a cubic lattice. The paths on a single cube form the vertices of one of our favourite hexagons. A simple braid on three strands is formed by composing two edges of this hexagon, which correspond to two faces on the cube. Since the Weyl edges $U$ and $V$ have become faces in 3D, this composition can represent fermionic spin, just as Bilson-Thompson said.

## Sunday, April 27, 2008

### Light Nostalgia

Louise Riofrio continues with excellent cosmology posts, and now Carl Brannen also weighs in on the subject. I was wondering what originally got me very interested in the subject of a varying $c$, and I decided it probably happened around 1995, when I spent a few months studying the early physics papers on quantum group fiber bundles. I seem to recall that these papers were not particularly mathematically sophisticated, but one element stood out: whereas a classical principal bundle looks the same at every point, the deformation parameter in a quantum bundle may easily vary from point to point. Even in those days, people thought a lot about relating deformation parameters to $\hbar$. This was all just a mathematical curiosity, until it became clear that some tough (and extremely interesting) algebraic geometry, and other mathematics, lay at the bottom of it. (Of course, all roads led to category theory in the end.)

Algebraic geometers love spaces with extra structure which varies from point to point. They talk about spectra (usually of rings), and we need not be afraid of these gadgets because they are naturally specified by a functor from a suitable category of algebras into a category of spaces. And it turns out that this functor is best understood from the point of view of a special topos, because the weird topologies that algebraic geometers like to use are neatly encoded by axioms of Grothendieck. (In fact, this is where the idea of a topos comes from in the first place.)

At the time, I believe it was Zamolodchikov who advised me to ditch lattice gauge theory (which I was supposed to be doing) for something more interesting. In the end, I did give up the lattice gauge theory, but I can't say it was because I listened to anybody's advice. (And as it turns out, lattice gauge theory has actually done rather well over the last decade.)

## Saturday, April 26, 2008

### Return of the Jedi

There's only one thing to say to the next restaurant patron who thinks they need to add the change for me, or the next guy who thinks he needs to point out to me that physical theories have to agree with experiment: I'll be back.
(Thanks to Backreaction for the picture)

## Thursday, April 17, 2008

### Ternary Geometry III

Topological field theory enthusiasts like extending the 1-categorical constructions to the world of 2-categories. A candidate source category is then a category of spaces with boundaries which themselves have boundaries. That is, the vertices are the objects, the edges the 1-arrows, and the surfaces 2-arrows. In the world of ternary geometry this brings to mind the three levels of the generalised Euler characteristics, which were seen as cube-root-of-unity analogues of the alternating signs that occur in the world of 2. Since the boundary of a boundary is not necessarily empty, it makes more sense to look at the cubic relation $D^3 = 0$ than the usual homological $D^2 = 0$ of duality. Since the latter arises from a fundamental categorical concept, namely monads, one would like to understand the ternary categorical construction. This is why M Theory looks at ternary structures such as Loday's algebras and higher dimensional monads.

## Wednesday, April 16, 2008

### Extra, Extra II

Motl continues with updates on the Bagger-Lambert (i.e., 3 is better than 2) M theory revolution, noting three new papers, including this one on a SUSY-preserving matrix theory deformation of the Bagger-Lambert action which breaks the $SO(8)$ symmetry to $SO(4) \times SO(4)$.

## Sunday, April 13, 2008

### Purple

Today's pretty picture, from the University of Bristol website, is a convergent beam electron diffraction pattern.

## Saturday, April 12, 2008

### The Dark Side III

For anybody who happens to be around next week, I will be giving a simple talk with the title:

## Thursday, April 10, 2008

### Achilles and the Tortoise

Zeno of Elea's lost book is said to have contained 40 paradoxes concerning the concept of the continuum. The paradoxes are mostly derived from the deduction that if an interval can be subdivided, it can be subdivided infinitely often. As an Eleatic, Zeno subscribed to a philosophy of unity rather than a materialist and sensual view of reality. This led to greater rigour in mathematics, since more emphasis was placed on logical statements than on physical axioms laid down arbitrarily on the basis of (inevitably deluded) experience.

Most famously, the paradoxes discuss Time as a continuum. If we have already laid out in our minds a notion of classical motion through a continuum, the infinite subdivisibility of Time must follow. But note the introduction here of a separation between object and background space. To the Eleatics, this is the source of the problem, not the mathematical necessity of infinity itself. By placing a fixed finite (relative to the observer) object in a continuum, we have allowed ourselves to ask questions about its motion which are physically unfeasible. But the resolution comes not from concrete physical axioms about an objective reality, based as they are on the very prejudices that lead to paradoxes in the first place. Rather, it comes from refining the mathematics until its definitions are capable of quantitatively describing the physical problem correctly. We have known this for thousands of years, but do many physicists really appreciate this today?

## Wednesday, April 09, 2008

### Knot Monkey

Carl has been playing with knots that cover a sphere. Rather, when a piece of cord or wool is used, its substantial thickness allows a covering of a sphere with a small finite number of crossings. In the mathematical world, ideal knots are drawn with an infinitely thin line.
Such lines can still fill a sphere (a la Thurston), but monkey knot curves with crossings are more interesting in the context of M theoretic quantum information, and it would take some (kind of) infinite number of crossings to properly fill out a sphere. But basically, the monkey knot is a set of Borromean rings in three dimensions (or Borromean ribbons). The rings form a 6 crossing planar diagram. Note that if the outer 3 crossings are smoothed, one obtains a trefoil knot from the centre of the rings (along with a separate unknotted loop). I can't help wondering what this means.

## Tuesday, April 08, 2008

### M Theory Lesson 177

Note that an intersection on the triangle plane arrangement becomes a square face on the cube. A (directed) cone from the top vertex will pick out the central horizontal edge of the cube, with the central point of the hexagon at one end representing the triangle. Observe that the number of edges in corresponding diagrams (planar arrangements to graphs) remains unchanged, whereas faces become vertices and vertices become faces. That is, this is a kind of Poincare duality.

## Sunday, April 06, 2008

### M Theory Lesson 176

In Ben-Zvi's notes of recent work by Ben Webster et al (which he calls the cutting edge of mirror symmetry math) there is this diagram of a triangular arrangement of planes and its associated graph. The vertices represent the 7 regions of the Euclidean space and the edges an adjacency via an edge segment. Notice how this looks like a centered hexagon, or one side of a cube. This is a kind of Cayley graph. The permutations of four letters (which label the vertices of the permutohedron) also give a cubical Cayley graph. Koszul duality is about the correspondence between intersections of the planes and cones emanating from such points in the plane arrangement.

## Tuesday, April 01, 2008

### Greetings

Greetings from Wanaka (not an April fools' joke).
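As a quick sanity check on the count of 7 regions quoted above, here is a tiny Python sketch; it uses the standard region-count formula for lines in general position rather than anything from Ben-Zvi's notes:

```python
def regions(n: int) -> int:
    # n lines in general position cut the plane into
    # 1 + n + n*(n-1)/2 regions
    return 1 + n + n * (n - 1) // 2

print(regions(3))  # 7, matching the 7 vertices of the adjacency graph
```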
https://xcorr.net/2011/10/03/removing-line-noise-from-lfps-wideband-signals/
### Removing line noise from LFPs, wideband signals

Trail of papers had a recent post on standard analyses in neuroscience, which reminded me that I've been meaning to post about signal preprocessing for a while now. If there's too much line noise (electrical noise at multiples of 60Hz in North America and 50Hz in Europe) in a signal, this will render the signal unusable. So it's important to try and eliminate line noise through proper grounding at the source. Yet some line noise is more or less inevitable, and you'll want to diminish this noise through digital filtering. This is useful, for example, as a preprocessing step for LFPs, or for cleaning up wideband signals prior to spike detection.

People sometimes use notch filters for this purpose, but this can remove too much signal, such that there's a gap in the frequency domain around 60Hz where there's no power at all. A better approach IMHO is to fit a function to the signal power around 60Hz and invert it to obtain a filter. Here's a function which does this. It uses a flexible exponential family of functions of the form $a\exp(-b|x-c|^d)+e$ for the fits. It works in little chunks of data (by default 60s) so it's able to track nonstationarities and doesn't take a boatload of memory. Results of fits to the peaks around 60 and 180Hz are shown at the top.

```matlab
function [newdata] = chunkwiseDeline(data,sr,freqs,freqrange,chunksize,showOutput)
%function [newdata] = chunkwiseDeline(data,sr,freqs,chunksize)
%Removes line noise from a signal in small chunks of data.
%
%data - an n x 1 vector containing continuous data
%sr   - scalar, the sampling rate of the signal
%freqs - a vector of frequencies around which to remove line noise,
%        for example [60,180]
%freqrange (optional) - a scalar specifying the range of frequencies
%                       to use for fitting a parametric function for
%                       line noise (default: 0.5 Hz)
%chunksize (optional) - the size of a chunk in seconds (default: 60)
%showOutput (optional) - if true, show plots of fits (default: false)
%
%Algo: for every block of data, take abs(fft(dat)). Then extract the
%      vector corresponding to [freq(1)-freqrange, freq(1)+freqrange].
%      Fit a function thefun = @(p,x) p(1)*exp(-p(2)*abs(x-p(4)).^(p(5)))+p(3)
%      to this range of freqs. Invert the function to obtain a delining
%      filter and apply it. Repeat for other frequencies. Repeat for
%      all blocks. Reassemble blocks.
data = data(:);
if nargin < 4 || isempty(freqrange)
    freqrange = .5;
end
if nargin < 5 || isempty(chunksize)
    chunksize = 60;
end
if nargin < 6 || isempty(showOutput)
    showOutput = false;
end

chunksize = chunksize*sr;
origlen = length(data);
data = [data(chunksize/2:-1:1);data];
nchunks = ceil((length(data)+chunksize/2)/chunksize);
newdata = zeros(size(data));
nchunks = nchunks*2-1;

%Triangular window for the 50%-overlap-add reassembly
thewin = (0:chunksize/2-1)'/(chunksize/2-1);
thewin = [thewin;thewin(end:-1:1)];

%Deline each chunk and reassemble
pss = nan(5,nchunks);
for ii = 1:nchunks
    thechunk = data((ii-1)*chunksize/2+(1:chunksize));
    [delinedchunk,pss] = delineChunk(thechunk,sr,showOutput,freqs,freqrange,pss);
    newdata((ii-1)*chunksize/2+(1:chunksize)) = ...
        newdata((ii-1)*chunksize/2+(1:chunksize)) + ...
        delinedchunk.*thewin;
end
newdata = newdata(chunksize/2 + (1:origlen));
end

%Deline a single chunk of data
function [y,pss] = delineChunk(dat,sr,showoutput,freqs,freqrange,pss)
ae = [];
if mod(length(dat),2) == 1
    ae = dat(end);
    dat = dat(1:end-1);
end
fftdat = fft(double(dat));
a = abs(fftdat);

%Eliminate line noise at target frequencies
thefilt = ones(size(a));
winlen = round(length(dat)/sr*freqrange);

%Fit a curve to this chunk of frequencies
opts = optimset('Display','Off','Jacobian','on','Algorithm','levenberg-marquardt');
n = 1;
for tgtr = freqs
    peak = tgtr/sr*length(dat);
    rg = round(((peak-winlen):(peak+winlen)))';
    datrg = a(rg);
    x = (-winlen:winlen)'/winlen*freqrange;

    %Only adjust a few parameters at a time;
    %convergence is better this way

    %Find the peak
    datrgsm = conv(datrg,ones(21,1),'same');
    [~,peakloc] = max(datrgsm);

    %Set the initial parameters
    x0 = [max(datrg)-median(datrg),1/.2^2,median(datrg),(peakloc-1-winlen)/winlen*freqrange,1]';
    if ~any(isnan(pss(:,n)))
        x0([2,4,5]) = pss([2,4,5],n);
    end

    %Everything but the exponent
    [ps] = lsqcurvefit(@(x,y) thefun([x;x0(5)],y,[1;1;1;1;0]),x0(1:4),x,datrg,[],[],opts);
    xd = ps(4);
    %Everything but the center
    [ps] = lsqcurvefit(@(x,y) thefun([x(1:3);xd;x(4)],y,[1;1;1;0;1]),[ps(1:3);x0(5)],x,datrg,[],[],opts);
    %Everything
    [ps] = lsqcurvefit(@(x,y) thefun(x,y,[1;1;1;1;1]),[ps(1:3);xd;ps(4)],x,datrg,[],[],opts);
    pss(:,n) = ps;

    %Good, now adjust the filter in this range accordingly
    thefilt(rg) = ps(3)./thefun(ps,x);
    b = thefilt(rg);
    thefilt(end-rg+2) = b;

    if showoutput
        subplot(length(freqs),1,n);
        plot(x+tgtr,datrg,x+tgtr,thefun(ps,x));
        title(sprintf('%3.1f Hz',tgtr));
        drawnow;
        [ps(2),ps(4),ps(5)]
    end
    n = n+1;
end

% y is dat with line noise removed
a = fftdat.*thefilt;
y = [real(ifft(a));ae];
end

%Peak model a*exp(-b*|x-c|^d)+e and its Jacobian; the optional third
%argument is a 0/1 mask marking which of the five parameters are free,
%so only the corresponding Jacobian columns are returned
function [y,J] = thefun(p,x,sel)
E = abs(x-p(4)).^(p(5));
M = exp(-p(2)*E);
y = p(1)*M+p(3);
if nargout > 1
    J = [ M,...
          -p(1)*E.*M,...
          ones(size(x)),...
          p(1)*p(2)*p(5)*sign(x-p(4)).*abs(x-p(4)).^(p(5)-1).*M,...
          -p(1)*p(2)*p(5)*E.*log(abs(x-p(4))+1e-6).*M];
    if nargin > 2
        J = J(:,logical(sel));
    end
end
end
```

## 5 thoughts on "Removing line noise from LFPs, wideband signals"

1. Florian says:

   First of all thank you for this post! It's really helpful! I know this post is pretty old now and you might not read this comment anymore, but I'm not sure about your Jacobian. My math skills are a little bit out of training, but I found:

   -p(1)*p(2)*E.*log(abs(x-p(4))+1e-6).*M

   as the derivative with respect to p(5), i.e. the same as yours without the p(5) factor.

2. Daniel says:

   Great function! Thanks for sharing. I found that it doesn't work with non-integer sampling rates (an annoyance with our data acquisition hardware). Anyway, just in case anyone is interested, your function works with non-integer sampling rates with the following modification to the code:

   % chunksize = chunksize*sr;
   chunksize = round(chunksize*sr);
   if mod(chunksize,2) == 1, chunksize = chunksize-1; end

3. Brian says:

   I'm guessing there's a minimum amount of spectral information you need in order to get a good fit at 60Hz? I have 3-second snippets (not from a continuous source, but collected every 10 seconds for an hour) and the noise was still present after running this code. But I really appreciate your contributions! By contrast, I am so sick of finding papers that describe their excellent processing/pre-processing strategies but don't give any code example of how to implement it. It really defeats the hope of standardizing our analyses (and therefore, realistic comparisons).
   Your code examples are easy to understand and very useful. If I (or others) use your code in published work, how would you want it cited? Thanks again!

   1. Brian says:

      *To be clear, I am referring to 3 second LFP snippets. So my question is better worded as: "Is there a minimum sampling duration needed for this type of filtering to work?"

      1. xcorr says:

         I think it should still work, though you might have to do some contortions, like averaging the power spectrum of all the snippets, fitting the 60Hz peak to a Gaussian, and then applying that filter to each snippet individually...
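For readers working in Python, here is a minimal sketch of the same fit-and-invert idea (an illustration assuming numpy and scipy, not a port of the MATLAB function above): fit the flexible peak $a\exp(-b|x-c|^d)+e$ to the amplitude spectrum around the line frequency, then divide the fitted peak out.

```python
import numpy as np
from scipy.optimize import curve_fit

def peak(x, a, b, c, d, e):
    # flexible exponential peak a*exp(-b*|x-c|**d) + e
    return a * np.exp(-b * np.abs(x - c) ** d) + e

def deline(data, sr, f0=60.0, frange=2.0):
    # fit the peak to |FFT| around f0 and invert it to flatten the line
    n = len(data)
    spec = np.fft.rfft(data)
    freqs = np.fft.rfftfreq(n, 1.0 / sr)
    sel = np.abs(freqs - f0) <= frange
    x, y = freqs[sel], np.abs(spec[sel])
    p0 = [y.max() - np.median(y), 1.0, f0, 2.0, np.median(y)]
    lo = [0.0, 1e-3, f0 - frange, 0.5, 0.0]
    hi = [np.inf, np.inf, f0 + frange, 4.0, np.inf]
    p, _ = curve_fit(peak, x, y, p0=p0, bounds=(lo, hi), maxfev=20000)
    filt = np.ones_like(freqs)
    filt[sel] = p[4] / peak(x, *p)   # bring the peak down to its baseline
    return np.fft.irfft(spec * filt, n)

# quick check on synthetic data: 5 Hz signal plus 60 Hz contamination
sr = 1000
t = np.arange(0, 10, 1.0 / sr)
sig = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)
clean = deline(sig, sr)
```

Because the filter returns to 1 at the edges of the fitted band, this attenuates only the excess power at the line frequency instead of notching out a whole band.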
https://www.physicsforums.com/threads/series-problem.23252/
# Series problem

This thing has me tearing my hair out: Let $\{a_0, a_1, \ldots\}$ be a sequence such that

$$\sum_{n=0}^{\infty}{\frac{1}{a_{n}}}$$

diverges. Does

$$\sum_{n=0}^{\infty}{\frac{1}{a_{a_{n}}}}$$

diverge? My first instinct was to say no, but then I couldn't find any counterexamples. Now I am thinking it might actually be true, but it has defied all the tests I've tried. Any ideas?

mathman

Let $a_0 = 1$ and $a_n = n$ for $n > 0$. Both series are the same and diverge.

Sorry, I guess I wasn't clear enough. Do ALL such series diverge? I already know all series of the form $a_n = kn + c$ do, since $a_{a_n} = k(kn+c)+c = k^2 n + kc + c$, but that doesn't cover all divergent series.

HallsofIvy
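As a side note on the linear case mentioned in the thread, the divergence follows by limit comparison with the harmonic series (a one-line sketch):

```latex
a_{a_n} = k^2 n + kc + c
\quad\Longrightarrow\quad
\lim_{n\to\infty}\frac{1/a_{a_n}}{1/n}=\frac{1}{k^2}>0,
\qquad\text{so } \sum_n \frac{1}{a_{a_n}} \text{ diverges together with } \sum_n \frac{1}{n}.
```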
https://eecsmt.com/graduate-school/exam/100-ncku-cs-co-os/
1. [10%] Translate the beq instruction shown in the following code into a 32-bit binary instruction, provided that the opcode of beq is 0x04.

L1: add $8, $9, $10
    add $8, $9, $10
    beq $0, $9, L2
    add $8, $9, $10
    add $8, $9, $10
L2: add $9, $0, $9

Reference answer: 000100 00000 01001 0000000000000010

2. [10%] Assume a virtual memory addressing space of 16 Gbytes and a physical memory addressing space of 4 Gbytes. Let the size of a page be 4 Kbytes. What is the size of a page table in terms of the number of entries?

Reference answer: 4M entries (2^34 / 2^12 = 2^22).

3. [15%] Consider a pipelined processor that executes the MIPS code shown in Figure 1 using the logic of the hazard detection and data forwarding unit shown in Figure 2. If the MIPS code cannot be executed correctly, then how do we revise the logic shown in Figure 2 so that the code can be correctly executed?

Figure 1: The MIPS code:

add $10, $10, $5
add $10, $10, $6
add $10, $10, $7

Figure 2: The logic of the hazard detection and data forwarding unit:

if (MEM/WB.RegWrite and (MEM/WB.RegisterRd != 0)
    and (EXE/MEM.RegisterRd = ID/EX.RegisterRs)
    and (MEM/WB.RegisterRd = ID/EX.RegisterRs)) then ForwardA = 01
if (MEM/WB.RegWrite and (MEM/WB.RegisterRd != 0)
    and (EXE/MEM.RegisterRd = ID/EX.RegisterRt)
    and (MEM/WB.RegisterRd = ID/EX.RegisterRt)) then ForwardB = 01

The logic should be revised as follows: forward from MEM/WB only when the EX/MEM stage is not already forwarding to the same source register, so that the most recent result wins for back-to-back dependences such as the code above:

if (MEM/WB.RegWrite and (MEM/WB.RegisterRd != 0)
    and not (EXE/MEM.RegWrite and (EXE/MEM.RegisterRd != 0)
             and (EXE/MEM.RegisterRd = ID/EX.RegisterRs))
    and (MEM/WB.RegisterRd = ID/EX.RegisterRs)) then ForwardA = 01
if (MEM/WB.RegWrite and (MEM/WB.RegisterRd != 0)
    and not (EXE/MEM.RegWrite and (EXE/MEM.RegisterRd != 0)
             and (EXE/MEM.RegisterRd = ID/EX.RegisterRt))
    and (MEM/WB.RegisterRd = ID/EX.RegisterRt)) then ForwardB = 01

4. [15%] Figure 3 shows the control of the multicycle MIPS processor. There are a number of typos in the plot. Identify and correct the typos.

(See the original exam file for the figure.)

5. [10%] Given a 10000-RPM disk with 80-MB/second bandwidth and 10-ms average seek time, please calculate the average time to read a 40-KB block from this disk.

Reference answer: 10 ms (seek) + 3 ms (average rotational latency, half of a 6 ms revolution) + 0.5 ms (40 KB / 80 MB/s transfer) = 13.5 ms.

6. [10%] On a virtual memory system, three frames are allocated to process P: one is used to accommodate the code page of P, and the other two are used to accommodate the data pages of P. The pseudo code of P is shown below, and the following are assumed. First, data in the array A are stored in row-major order, and each row of array A is stored in a virtual page. Second, the following code can be accommodated in a single page and there are no page faults for the code page accesses. Third, i, j, k are all stored in registers. Fourth, LRU is used as the page replacement policy for the data pages, and the two frames used to accommodate the data pages are initially empty. What is the page fault rate for the accesses to the array A?

int i, j, k;
int A[5,4];
k = obtain an integer from the input device;
for (i = 0; i < 5; i++)
    for (j = 0; j < 4; j++) {
        if ((i == 0) && (j == 0))
            A[i, j] = k;
        else
            A[i, j] = A[0, 0] + k;
    }

7. [10%] Consider a system with 64 MB of physical memory, 1024 TLB entries, 32-bit physical addresses, 32-bit virtual addresses, and 4 KB physical page frames.

(a) What is the TLB reach?
(b) What is the maximum number of page table entries in the inverted page table if 10 processes are present in the system?

(a) The memory size that the TLB can access: TLB reach = TLB size $\times$ page size = 1024 $\times$ 4 KB = 4 MB.
(b) $2^{14}$ entries (64 MB / 4 KB; an inverted page table has one entry per physical frame, independent of the number of processes).

8. [20%] Consider the following set of (single-threaded) processes with different CPU burst times, arrival times, and priorities:

| Process | Burst Time (ms) | Arrival Time (ms) | Priority |
|---------|-----------------|-------------------|------------|
| P1 | 30 | 0 | 1 (lowest) |
| P2 | 20 | 10 | 2 |
| P3 | 50 | 20 | 3 |
| P4 | 20 | 40 | 4 (highest) |

(a) Assume that there is only one processor in the system and the context switch time is 1 ms. Moreover, the idle process is executed before the arrival of process P1. What is the waiting time of the processes under the preemptive SJF scheduling algorithm?
(b) Assume that there are two processors in the system (with a single run queue) and the context switch time is 0 ms. What is the waiting time of the processes under the priority-based scheduling algorithm?

(a) P1: 23 ms, P2: 1 ms, P3: 55 ms, P4: 14 ms.
(b) P1: 10 ms, P2: 0 ms, P3: 0 ms, P4: 0 ms.

### 14 comments

1. **daniel**: For question 8(b), how do you get P1: 35 ms, P2: 0 ms, P3: 10 ms, P4: 0 ms? I compute P1: 10 ms, P2: 0 ms, P3: 0 ms, P4: 0 ms.

2. **daniel**: Sorry, I'd also like to ask how the 5/39 in question 6 is computed.

   - **mt** (post author): Sorry! Question 6 should be 8/39. Apart from the access to A[0][0], every iteration accesses memory twice, so the denominator is 1 + (2 × 19) = 39. The numerator is 8 because there are 8 page faults in total, as follows: A[0] is loaded into memory; A[1] is loaded into memory; A[2] is loaded, replacing A[0]; A[0] is loaded, replacing A[1]; A[3] is loaded, replacing A[0]; A[0] is loaded, replacing A[2]; A[4] is loaded, replacing A[0]; A[0] is loaded, replacing A[3].

   - **小吳**: For question 6, why does A[3] replace A[0] rather than A[2]? Under LRU shouldn't A[2] be replaced first? I'm not sure where my understanding goes wrong.

     - **mt** (post author): Although A[2] was loaded into memory earlier than A[0], A[2] is the one used last. Look at this line of code: A[i, j] = A[0, 0] + k; the value of A[0, 0] is fetched first, k is added, and the result is then assigned to A[2, j]. I hope this resolves your question.

3. **小O**: For question 7(a), do we multiply it out to get the answer 4 MB? And how is (b) computed? Thanks.

   - **mt** (post author): Yes, (a) is 4 MB, and (b) is 64 MB / 4 KB.

   - **小O**: Thank you.

4. **Jackson**: Hi, for question 2, does it only ask for the number of entries rather than the page table size? I computed page table size = (1 + 20) × 2^22 = 84 MB.

   - **mt** (post author): It asks for the number of entries; the question ends with "in terms of the number of entries", and there is too little information to know how large each entry should be!

   - **Jackson**: OK, thanks.

5. **Jackson**: Hi, I'd like to ask: for question 6, why is A[2] read before A[0]?

   - **mt** (post author): A[0] is read first and then A[2]; that is why, when A[3] is loaded into memory, it replaces A[0] rather than A[2].
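The part (a) numbers can be reproduced with a millisecond-stepped simulation. Below is a rough Python sketch (an independent check, not part of the original solutions), charging one 1 ms context switch per dispatch, including the initial switch away from the idle process:

```python
def srtf(procs, cs=1):
    """Millisecond-stepped preemptive SJF (SRTF) with a per-dispatch
    context-switch cost; returns waiting time per process."""
    remaining = {n: b for n, b, a in procs}
    arrival = {n: a for n, b, a in procs}
    burst = {n: b for n, b, a in procs}
    finish = {}
    t, running = 0, None
    while len(finish) < len(procs):
        ready = [n for n in remaining if arrival[n] <= t and n not in finish]
        if not ready:
            t += 1
            continue
        best = min(ready, key=lambda n: remaining[n])
        if best != running:
            t += cs              # context switch (also from the idle process)
            running = best
        remaining[best] -= 1     # run the dispatched process for 1 ms
        t += 1
        if remaining[best] == 0:
            finish[best] = t
            running = None
    return {n: finish[n] - arrival[n] - burst[n] for n in burst}

procs = [("P1", 30, 0), ("P2", 20, 10), ("P3", 50, 20), ("P4", 20, 40)]
print(srtf(procs))  # {'P1': 23, 'P2': 1, 'P3': 55, 'P4': 14}
```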
https://www.physicsforums.com/threads/pentration-of-light-in-cladding-from-a-waveguide.220453/
# Penetration of light in cladding from a waveguide

1. Mar 7, 2008
### Lee
As part of my project I want to calculate the theoretical value of the best thickness of cladding to use on my waveguide to prevent light from leaking through the cladding to the next layer. I'm aware I need to play with Maxwell's equations, and it's pretty much particle in a box, where I'll have a negative exp function in the cladding, but I'm not sure how to approach the problem. Does anyone have a useful link or advice?

2. Mar 7, 2008
### ZapperZ (Staff Emeritus)
Why is this a "quantum physics" topic?
Zz.

3. Mar 7, 2008
### Lee
Feel free to throw it wherever.

4. Mar 9, 2008
### Lee
anyone?

5. Mar 10, 2008
### Claude Bile
You need to solve the EM wave equation separately in the core and cladding regions and match the solutions at the boundary by calculating appropriate values for the arbitrary constants that pop up in your solution to the diff. eqn. The process of solving the equations will depend on what the shape of your waveguide is, specifically what symmetries it possesses. This will then determine what coordinate system you should use to expand the Laplacian. For a full derivation you can expect to write five or six pages minimum. It might be a sledgehammer approach to a problem that has a more elegant solution.

Essentially your problem reduces down to finding the Mode Field Diameter of your waveguide mode(s). See if you can't find an equation for the MFD that is suitable for your waveguide shape first, before tackling the not-insignificant task of deriving a full field solution.
Claude.

6. Mar 11, 2008
### Lee
So the waveguide is rectangular, and my only concern is the penetration in one dimension, so would I be able to discount the other dimensions and solve in 1-D, making the problem much simpler? Making this very similar to a first-year Schroedinger equation particle in a box?

7. Mar 11, 2008
### Claude Bile
Yes, you can discount other dimensions if you are just after the shape of the fields; the amplitude of the fields, though, will depend on the entire solution. The thing you have to be mindful of is that the electric and magnetic fields are vector fields, not scalar fields. If the refractive index contrast between your core and cladding is reasonably small (say, less than 0.5) you can apply the weak guiding approximation and reduce the problem to solving for a scalar field - otherwise you're stuck with working out all 3 vector field components.

It just struck me that if you're only interested in the penetration depth of the cladding, you don't need to bother with solving for the fields inside the core or matching the solutions at the boundary (since this would only change the amplitude of the field in the cladding).
Claude.

8. Mar 11, 2008
### Lee
So I can simply apply the equation to the cladding and be able to come out with the negative exponential function for the light in the cladding, and from that work out the penetration at different distances?

9. Mar 12, 2008
### Claude Bile
Yes, you need to evaluate the exponent, which, for a rectangular slab waveguide is

$$\beta^2 + n_{clad}k_0^2$$

Where:
- $\beta$ is the propagation constant of the guided mode
- $n_{clad}$ is the refractive index of the cladding
- $k_0$ is the magnitude of the wavevector in free space (i.e. $2\pi/\lambda_0$)

The hard bit is finding the propagation constant; to do this you need to solve for the modes of the guide. Fortunately you probably don't need to find the full field solutions, you will probably be fine deriving the modes from ray theory.
When I have time, I'll say a bit on how to solve for the modes using ray theory.
Claude.

10. Mar 13, 2008
### Lee
I had a bash using Maxwell's equations and BCs, and I'm down to making the wave function continuous at the boundaries. So I've got down to a set of equations that need to be equal, which I now have to solve. Though I'm currently normalizing my wave function to get terms for my constants.

11. Mar 16, 2008
### Claude Bile
You don't need to worry about making them continuous for your purposes, since you are only interested in the decay length. Making things continuous only changes the relative amplitudes of the core and cladding solutions.
Claude.

12. Mar 18, 2008
### Claude Bile
Further to post #9 - to solve for the modes of a slab waveguide, you need to find the angles that satisfy the following transverse resonance condition:

$$2dn_1k_0\cos(\theta_m) = 2m\pi$$

Where
- $d$ is the diameter of the guide.
- $n_1$ is the refractive index of the core.
- $k_0$ is the magnitude of the wavevector in free space = $2\pi/\lambda$.
- $\theta_m$ is the angle of incidence of the totally-internally-reflected beam for mode $m$.
- $m$ is the mode number.

Once you know the values of $\theta_m$, you can calculate $\beta_m$ as follows:

$$\beta_m = n_1k_0\sin(\theta_m)$$

Claude.

13. Mar 19, 2008
### Lee

14. Mar 19, 2008
### Claude Bile
Sorry, I made an error: in post #9, the top equation should be

$$w = -\sqrt{\beta^2 - n_{clad}k_0^2}$$

where $w$ is your exponent.
Claude.

15. Mar 29, 2008
### Lee
Thanks buddy, I finally got round to creating the graph I wanted and I'm really happy with it, and it looks like it agrees with my results (though I would have liked to create more samples or test the 540nm layer of silica).
http://img440.imageshack.us/img440/9472/awesomenesscs8.jpg
Do you have the reference I could use? As this is going to be in my report and I'll need one if I can include the results in my paper.

16. Mar 30, 2008
### Claude Bile
The numbers look sensible, nice work (though I would change the units on the vertical axis to microns rather than meters).
"Theory of Optical Waveguides" by A. Snyder and J. Love is what I use, but it is pretty full on. "Lasers and Electro-Optics - Fundamentals and Engineering" by C. Davis is more digestible for the non-expert.
Claude.
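Putting posts #9, #12 and #14 together, here is a small Python sketch of the recipe (illustrative parameter values, not Lee's actual device; note the decay constant is written here with the cladding index squared, $w=\sqrt{\beta^2-n_{clad}^2k_0^2}$, which keeps the units consistent):

```python
import numpy as np

# illustrative slab parameters (assumed, not from the thread)
wavelength = 1.55e-6        # free-space wavelength (m)
d = 5e-6                    # core thickness (m)
n1, nclad = 1.50, 1.45      # core and cladding refractive indices

k0 = 2 * np.pi / wavelength

# transverse resonance (post #12): 2*d*n1*k0*cos(theta_m) = 2*m*pi
m = np.arange(0, int(d * n1 * k0 / np.pi) + 1)
theta = np.arccos(m * np.pi / (d * n1 * k0))

# keep guided modes only: total internal reflection needs beta > nclad*k0
beta = n1 * k0 * np.sin(theta)
keep = beta > nclad * k0

# decay constant in the cladding and 1/e penetration depth
w = np.sqrt(beta[keep] ** 2 - (nclad * k0) ** 2)
for mode, depth in zip(m[keep], 1.0 / w):
    print(f"m = {mode}: 1/e penetration depth = {depth * 1e6:.2f} um")
```

This simple resonance condition ignores the reflection phase shifts, so it overestimates the mode count slightly, but it gives the right order of magnitude for the decay length.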
https://www.nature.com/articles/s41598-017-16115-9?error=cookies_not_supported&code=ee9259d4-840f-4a9e-aa77-dacda4362bbf
# Synchronization enhancement of indirectly coupled oscillators via periodic modulation in an optomechanical system

## Abstract

We study the synchronization behaviors of two indirectly coupled mechanical oscillators of different frequencies in a double-cavity optomechanical system. It is found that quantum synchronization is roughly vanishing though classical synchronization seems rather good when each cavity mode is driven by an external field in the absence of temporal modulations. By periodically modulating cavity detunings or driving amplitudes, however, it is possible to observe greatly enhanced quantum synchronization accompanied with nearly perfect classical synchronization. The level of quantum synchronization observed here is, in particular, much higher than that for two directly coupled mechanical oscillators. Note also that the modulation on cavity detunings is more appealing than that on driving amplitudes when the robustness of quantum synchronization is examined against the bath's mean temperature or the oscillators' frequency difference.

## Introduction

As one of the most attractive phenomena in physics and even the whole natural science, spontaneous synchronization of coupled oscillators has been explored with intense interest recently in various fields like nonlinear dynamics1,2,3,4,5,6, cavity optomechanics7,8,9,10,11, quantum information processing (QIP)12,13, Bose-Einstein condensates14, atomic ensembles15,16,17 and so on. The synchronization phenomenon was discovered earliest in a classical clock pendulum system by Huygens in the 17th century18 and has now been successfully extended to the quantum regime19, e.g., for realizing the synchronous manipulation of quantum information and quantum states. In particular, Yamada et al. proposed to use the Lyapunov index as a qualitative criterion in order to determine whether classical synchronization is reached for coupled oscillators20. Subsequently, Mari et al. put forward an effective synchronization measure for continuous variable (CV) quantum systems19, with two directly coupled microscopic oscillators taken as a good example. Investigations on quantum synchronization in optomechanical systems soon achieved great success, with relevant experiments done to verify the theoretical predictions21,22,23, which laid a favorable foundation for further studies and applications.

According to the existing studies, synchronization behaviors between mechanical oscillators usually occur in two ways: (i) they exchange energy directly owing to an effective coupling so that their oscillations tend to be accordant after a long enough time8,19; (ii) they are restricted to evolve towards a generalized synchronization, e.g., by the Lyapunov control of external fields in the absence of a direct coupling9,24. But a mechanical oscillator may also be synchronized to a reference drive25,26, thereby allowing the synchronization of uncoupled mechanical oscillators in the presence of identical driving fields. Generally speaking, optomechanical systems with directly coupled oscillators have a stronger maneuverability in achieving quantum synchronization than those with indirectly coupled oscillators. That is, indirectly coupled oscillators typically exhibit poorer synchronization behaviors and involve more complicated control strategies than directly coupled oscillators.
On the other hand, we note that proper time-periodic modulations can open new possibilities for achieving optimal quantum control strategies and have been used to enhance various quantum effects like squeezing and entanglement in optomechanical systems27,28. Then one essential question arises: may time-periodic modulations also help to enhance quantum synchronization of indirectly coupled oscillators? The main aim of this work is thus to seek a positive answer with the quantum synchronization measure approaching perfect (→1.0), far beyond that for directly coupled oscillators (~0.3)19.

Here we study the dynamic evolution of two mechanical oscillators interacting with different cavity modes via the radiation pressure in a double-cavity optomechanical system. The two cavities are coupled by an optical fiber through the inside mirrors and driven by two optical fields through the outside mirrors. It is shown that the two oscillators exhibit quite poor synchronization behaviors, with the quantum part being negligible though the classical part being passable, when the double-cavity optomechanical system suffers no temporal modulation. Exerting periodic modulations on detunings of both cavity modes or on amplitudes of both driving fields, we find that rather satisfactory synchronization behaviors can be observed, with the quantum part being greatly enhanced and the classical part approaching perfect. To be more specific, the optimal quantum synchronization can be ~0.92 (~0.74) in the case of double cavity-detuning (driving-amplitude) modulation when the oscillators' frequency difference is not too large at a low enough bath's mean temperature. The advantage of cavity-mode modulation over driving-field modulation is further confirmed by an examination of the robustness of quantum synchronization against the bath's mean temperature and the oscillators' frequency difference.

## Model and Methods

The optomechanical system under consideration is illustrated in Fig. 1. Two Fabry-Pérot cavities are coupled by an optical fiber between the inside mirrors and driven by two fields through the outside mirrors. Each cavity contains a tiny mechanical oscillator interacting with a corresponding cavity mode via the radiation pressure. A time-periodic modulation may be applied upon both external driving fields27,29 via the acousto-optical effect or both internal cavity modes via the piezo-electric effect30. Then it is straightforward to write down the total Hamiltonian in a rotating frame

$$H=\sum_{j=1,2}\Big\{-\Delta_j[1+\eta_C\cos(\Omega_C t)]a_j^\dagger a_j+\frac{\omega_{mj}}{2}(p_j^2+q_j^2)-g\,a_j^\dagger a_j q_j+iE[1+\eta_D\cos(\Omega_D t)](a_j^\dagger-a_j)\Big\}+\lambda(a_1^\dagger a_2+a_2^\dagger a_1) \tag{1}$$

where $\hbar=1$ has been set for convenience. We have also assumed that (i) the two driving fields have the same frequency $\omega$ and the same amplitude $E$; (ii) the two driving fields (cavity modes) are modulated in the same way with a common frequency $\Omega_D$ ($\Omega_C$) and amplitude $\eta_D$ ($\eta_C$).
In addition, $\Delta_j=\omega-\omega_{cj}$ is the detuning of the $j$-th cavity mode, with $\omega_{cj}$ being the mode frequency; $\omega_{mj}$ is the frequency of the $j$-th mechanical oscillator; $a_j^\dagger$ ($a_j$) is the creation (annihilation) operator of cavity mode $\omega_{cj}$, satisfying the commutation relation $[a_j,a_{j'}^\dagger]=\delta_{jj'}$; $q_j$ ($p_j$) is the dimensionless position (momentum) operator of mechanical oscillator $\omega_{mj}$, satisfying the commutation relation $[q_j,p_{j'}]=i\delta_{jj'}$; $g$ is the optomechanical coupling constant due to the radiation pressure, assumed to be equal in both cavities for simplicity; $\lambda$ is the coupling constant of cavity modes through an optical fiber. Using the above Hamiltonian and considering relevant dissipation processes, we can further attain the following quantum Langevin equations27,31,32

$$\begin{aligned}\dot{q}_j&=\omega_{mj}p_j\\ \dot{p}_j&=-\omega_{mj}q_j-\gamma_m p_j+g\,a_j^\dagger a_j+\xi_j\\ \dot{a}_j&=-\{\kappa-i\Delta_j[1+\eta_C\cos(\Omega_C t)]\}a_j+ig\,a_j q_j+E[1+\eta_D\cos(\Omega_D t)]-i\lambda a_{3-j}+\sqrt{2\kappa}\,a_j^{in}\end{aligned} \tag{2}$$

with $\kappa$ being the common decay rate of both cavity modes and $\gamma_m$ the common damping rate of both mechanical oscillators. Moreover, $a_j^{in}$ describes the input noise operator of one cavity mode, exhibiting a zero mean value and satisfying the correlation relation $\langle a_j^{in\dagger}(t)a_{j'}^{in}(t')+a_{j'}^{in}(t')a_j^{in\dagger}(t)\rangle=\delta_{jj'}\delta(t-t')$33,34; $\xi_j$ describes the stochastic noise operator of one mechanical oscillator, exhibiting a zero mean value and satisfying the correlation relation $\frac{1}{2}\langle\xi_j(t)\xi_{j'}(t')+\xi_{j'}(t')\xi_j(t)\rangle=\gamma_m(2n_b+1)\delta_{jj'}\delta(t-t')$ under the Markovian approximation. Here $n_b=[\exp(\hbar\omega_{m1}/k_bT)-1]^{-1}\simeq[\exp(\hbar\omega_{m2}/k_bT)-1]^{-1}$ is the mean phonon number determined by the mechanical bath's mean temperature $T$35,36,37. To solve Eq. (2), we adopt a mean-field approximation8,9,27,28 to express relevant operators as sums of the (large) mean values and the (small) fluctuation terms, i.e., $o_j=O_j+\delta o_j$ with $o_j\in(q_j,p_j,a_j)$.
In this way, the quantum Langevin equations can be divided into a set of classical nonlinear differential equations

$$\begin{aligned}\dot{Q}_j&=\omega_{mj}P_j\\ \dot{P}_j&=-\omega_{mj}Q_j-\gamma_m P_j+g|A_j|^2\\ \dot{A}_j&=-\{\kappa-i\Delta_j[1+\eta_C\cos(\Omega_C t)]\}A_j+ig\,A_j Q_j+E[1+\eta_D\cos(\Omega_D t)]-i\lambda A_{3-j}\end{aligned} \tag{3}$$

for the mean values $O_j$ and a set of quantum linear differential equations

$$\begin{aligned}\dot{\delta q}_j&=\omega_{mj}\,\delta p_j\\ \dot{\delta p}_j&=-\omega_{mj}\,\delta q_j-\gamma_m\,\delta p_j+g(A_j\,\delta a_j^\dagger+A_j^\ast\,\delta a_j)+\xi_j\\ \dot{\delta a}_j&=-\{\kappa-i\Delta_j[1+\eta_C\cos(\Omega_C t)]\}\delta a_j+ig(A_j\,\delta q_j+Q_j\,\delta a_j)-i\lambda\,\delta a_{3-j}+\sqrt{2\kappa}\,a_j^{in}\end{aligned} \tag{4}$$

for the fluctuation terms $\delta o_j$. In Eq. (4), we have neglected the second-order smaller terms including $\delta a_j^\dagger\delta a_j$ and $\delta a_j\delta q_j$. Further introducing $\delta x_j=(\delta a_j^\dagger+\delta a_j)/\sqrt{2}$ and $\delta y_j=i(\delta a_j^\dagger-\delta a_j)/\sqrt{2}$ as well as $x_j^{in}=(a_j^{in\dagger}+a_j^{in})/\sqrt{2}$ and $y_j^{in}=i(a_j^{in\dagger}-a_j^{in})/\sqrt{2}$, we can recast Eq. (4) into

$$\dot{u}=Mu+n \tag{5}$$

in terms of an $8\times1$ variable column vector $u=(\delta q_1,\delta p_1,\delta x_1,\delta y_1,\delta q_2,\delta p_2,\delta x_2,\delta y_2)^T$, an $8\times1$ noise column vector $n=(0,\xi_1,\sqrt{2\kappa}x_1^{in},\sqrt{2\kappa}y_1^{in},0,\xi_2,\sqrt{2\kappa}x_2^{in},\sqrt{2\kappa}y_2^{in})^T$, and an $8\times8$ coefficient matrix $M$ given as follows

$${\bf M}=\begin{pmatrix}M_1&M_0\\ M_0&M_2\end{pmatrix} \tag{6}$$

with

$${\bf M}_0=\begin{pmatrix}0&0&0&0\\ 0&0&0&0\\ 0&0&0&\lambda\\ 0&0&-\lambda&0\end{pmatrix} \tag{7}$$

and

$${\bf M}_{1,2}=\begin{pmatrix}0&\omega_{m1,2}&0&0\\ -\omega_{m1,2}&-\gamma_m&\sqrt{2}g\,\mathrm{Re}(A_{1,2})&\sqrt{2}g\,\mathrm{Im}(A_{1,2})\\ -\sqrt{2}g\,\mathrm{Im}(A_{1,2})&0&-\kappa&-F_{1,2}\\ \sqrt{2}g\,\mathrm{Re}(A_{1,2})&0&F_{1,2}&-\kappa\end{pmatrix} \tag{8}$$

where $F_{1,2}=\Delta_{1,2}[1+\eta_C\cos(\Omega_C t)]+gQ_{1,2}$. Note, in particular, that ${\bf M}_{1,2}$ and thus $M$ are intrinsically time-dependent via $F_{1,2}$ and therefore $A_{1,2}$ [see Eq. (3)].
Then it is appropriate to use

$$S_q(t)=\langle\delta q_-^2(t)+\delta p_-^2(t)\rangle^{-1} \tag{10}$$

as a measure of the pure quantum synchronization with the classical contributions excluded. This quantum figure of merit has the maximal value 1.0 corresponding to complete synchronization, as limited by the Heisenberg uncertainty principle. The calculation of $S_q(t)$ involves a few quadratic terms $\delta q_j^2(t)$, $\delta q_1\delta q_2(t)$, $\delta p_j^2(t)$, and $\delta p_1\delta p_2(t)$, so we have to introduce an $8\times8$ covariance matrix

$$V_{ij}(t)=\frac{1}{2}\langle u_i(t)u_j(t)+u_j(t)u_i(t)\rangle \tag{11}$$

and attain its dynamic equation8,29,38,39

$$\dot{V}=MV+VM^T+N \tag{12}$$

directly from Eq. (5). In the above, $N=\mathrm{diag}[0,\gamma_m(2n_b+1),\kappa,\kappa,0,\gamma_m(2n_b+1),\kappa,\kappa]$ is a diagonal $8\times8$ coefficient matrix accounting for the correlation relation of noise operators and satisfying $N_{ij}\delta(t-t')=\langle n_i(t)n_j(t')+n_j(t')n_i(t)\rangle/2$. Hence $S_q(t)$ can be expressed in a more concise form

$$S_q(t)=\Big\{\frac{1}{2}\big[V_{11}(t)+V_{55}(t)-V_{15}(t)-V_{51}(t)+V_{22}(t)+V_{66}(t)-V_{26}(t)-V_{62}(t)\big]\Big\}^{-1} \tag{13}$$

Solving Eqs (3), (5) and (12) together under a given initial condition, it is then easy to examine the quantum synchronization of indirectly coupled mechanical oscillators. Note, however, that a good quantum synchronization is meaningful only when the optomechanical system is asymptotically stable, i.e., when all eigenvalues of the coefficient matrix $M$ have negative real parts after a temporary evolutionary process, according to the Routh-Hurwitz criterion40. In this regard, we would have a stable limit-cycle solution, representing a periodic oscillation, for $Q_j(t)$ and $P_j(t)$.

Finally, we introduce a widely used measure known as the Pearson factor for the classical synchronization41,42,43,44:

$$C_{Q_1,Q_2}(t,\Delta t)=\frac{\overline{\delta Q_1\,\delta Q_2}}{\sqrt{\overline{\delta Q_1^2}\;\overline{\delta Q_2^2}}} \tag{14}$$

with $\delta Q_i=Q_i-\overline{Q}_i$ and $\overline{Q}_i=\frac{1}{\Delta t}\int_t^{t+\Delta t}Q_i(t')\,dt'$ ($i=1,2$). The Pearson factor is bounded between 1.0 and −1.0, corresponding to complete synchronization and complete anti-synchronization, respectively. In fact, $C_{Q_1,Q_2}$ and $S_q$ are regarded here as the first-order and second-order synchronization criteria, respectively, for the two indirectly coupled mechanical oscillators.

## Results and Discussion

In this section, we examine via numerical calculations how to enhance the quantum synchronization in the presence of a good classical synchronization by periodically modulating the cavity modes or the driving fields. In what follows, we will use $\overline{C}_{Q_1,Q_2}$ and $\overline{S}_q$ to represent the mean values of classical and quantum synchronizations after the system has evolved into the stable state19. We will also set $\Delta_j=\omega_{mj}$ to attain self-sustained mechanical oscillations (a prerequisite of synchronization) with blue-detuned driving fields25,26,45.
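To make the procedure concrete, here is a bare-bones numerical sketch (an illustration only; $E=100$, $\eta_C=2.6$, $\Omega_C=3$, $\omega_{m2}-\omega_{m1}=0.005$ and $n_b=0$ follow the text, while $g$, $\lambda$, $\kappa$ and $\gamma_m$ are placeholder values) that integrates the classical equations (3) together with the covariance equation (12) and evaluates the quantum synchronization of Eq. (13):

```python
import numpy as np
from scipy.integrate import solve_ivp

wm = np.array([1.0, 1.005])             # omega_m1, omega_m2
Delta = wm.copy()                        # Delta_j = omega_mj
g, lam, E = 0.005, 0.02, 100.0           # g, lam, kappa, gamma are assumed
kappa, gamma, nb = 0.15, 0.005, 0.0
etaC, OmC = 2.6, 3.0                     # double cavity-mode modulation

def drift_matrix(t, A, Q):
    # time-dependent 8x8 matrix M of Eqs. (6)-(8)
    M = np.zeros((8, 8))
    for j in range(2):
        F = Delta[j] * (1 + etaC * np.cos(OmC * t)) + g * Q[j]
        b = 4 * j
        M[b, b + 1] = wm[j]
        M[b + 1, b] = -wm[j]
        M[b + 1, b + 1] = -gamma
        M[b + 1, b + 2] = np.sqrt(2) * g * A[j].real
        M[b + 1, b + 3] = np.sqrt(2) * g * A[j].imag
        M[b + 2, b] = -np.sqrt(2) * g * A[j].imag
        M[b + 2, b + 2] = -kappa
        M[b + 2, b + 3] = -F
        M[b + 3, b] = np.sqrt(2) * g * A[j].real
        M[b + 3, b + 2] = F
        M[b + 3, b + 3] = -kappa
    M[2, 7] = M[6, 3] = lam              # fiber-coupling block M0
    M[3, 6] = M[7, 2] = -lam
    return M

N = np.diag([0, gamma * (2 * nb + 1), kappa, kappa] * 2)

def rhs(t, y):
    Q, P = y[0:2], y[2:4]
    A = y[4:6] + 1j * y[6:8]
    V = y[8:].reshape(8, 8)
    dQ = wm * P                                          # Eq. (3)
    dP = -wm * Q - gamma * P + g * np.abs(A) ** 2
    dA = (-(kappa - 1j * Delta * (1 + etaC * np.cos(OmC * t))) * A
          + 1j * g * A * Q + E - 1j * lam * A[::-1])
    M = drift_matrix(t, A, Q)
    dV = M @ V + V @ M.T + N                             # Eq. (12)
    return np.concatenate([dQ, dP, dA.real, dA.imag, dV.ravel()])

y0 = np.concatenate([np.zeros(8), (np.eye(8) / 2).ravel()])  # vacuum V(0)
sol = solve_ivp(rhs, (0, 200), y0, max_step=0.02)

V = sol.y[8:, -1].reshape(8, 8)
Sq = 2.0 / (V[0, 0] + V[4, 4] - V[0, 4] - V[4, 0]
            + V[1, 1] + V[5, 5] - V[1, 5] - V[5, 1])     # Eq. (13)
print(f"S_q(t = {sol.t[-1]:.0f}) = {Sq:.3f}")
```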
We start by considering the simple case without periodic modulations and illustrating relevant results in Fig. 2. It is clear that quantum synchronization is negligible in the absence of periodic modulations, though it is possible to have rather good classical synchronization when the two cavity modes are coupled by an optical fiber and driven by two optical fields of identical amplitudes. To be more specific, $\overline{C}_{Q_1,Q_2}$ may approach 1.0 when the coupling constant $\lambda$ and the driving amplitude $E$ are suitably chosen, while $\overline{S}_q$ always tends to vanish as long as $E$ is not too small. When $E$ is small enough, however, $\overline{S}_q$ may approach 1.0 while $\overline{C}_{Q_1,Q_2}$ decreases greatly, indicating that the classical phase-space trajectory is not a limit cycle. So we choose $E = 100$ in the following calculations to guarantee limit-cycle solutions for our optomechanical system. In the regime of limit-cycle solutions, we then examine whether periodic modulations on cavity modes and driving fields27,28,30 can be exploited to enhance quantum synchronization of mechanical oscillators.

### Modulation on cavity modes

We first consider the periodic modulation on cavity lengths and thus mode frequencies with, e.g., piezoelectric transducers attached to outside mirrors30. That is, the driving fields have a constant amplitude ($\eta_D=0$, $\Omega_D=0$) while the cavity detunings vary periodically in time. We plot in Fig. 3 mean values $\overline{C}_{Q_1,Q_2}$ and $\overline{S}_q$ for classical and quantum synchronizations as a function of $\eta_C$ or $\Omega_C$ for a single cavity-mode modulation (a, b) and a double cavity-mode modulation (c, d), respectively. Figure 3(a) and (b) show that the quantum synchronization can be slightly enhanced for appropriate values of $\eta_C$ or $\Omega_C$ in the presence of somewhat modified classical synchronization. Figure 3(c) and (d) show that quite good synchronization behaviors exist in both quantum and classical regimes for appropriate values of $\eta_C$ or $\Omega_C$. It is thus clear that double cavity-mode modulation offers a considerable improvement in enhancing quantum synchronization as compared to single cavity-mode modulation. In particular, the optimal values are $\overline{C}_{Q_1,Q_2}\approx 1.0$ and $\overline{S}_q=0.84$ at $\Omega_C=3$ with $\eta_C=2$ in Fig. 3(c); $\overline{C}_{Q_1,Q_2}\approx 1.0$ and $\overline{S}_q=0.92$ at $\eta_C=2.6$ with $\Omega_C=3$ in Fig. 3(d).

We also find from Fig. 3(c) that good quantum synchronization occurs when $\Omega_C$ is an integral multiple of $\omega_m$, because in this case it is easier to transfer energy from external modulations to mechanical oscillations. But the peak positions may change from $\Omega_C/\omega_m=3,4,5$ to other integers depending, e.g., on the value of $\eta_C$ (not shown). In addition, the modulation effect may suddenly fail, i.e., $\overline{S}_q$ and $\overline{C}_{Q_1,Q_2}$ become invariant, when $\Omega_C$ exceeds a critical value. Finally, we find from Fig. 3(d) that, when $\eta_C$ is large enough, $\overline{S}_q$ and $\overline{C}_{Q_1,Q_2}$ exhibit unstable oscillations as a result of the additional optomechanical instability due to parametric amplification27.

To have a deeper insight into the synchronization behaviors, we further examine in Fig.
4 the time evolutions of relevant mechanical variables and synchronization measures in the case of an optimal double cavity-mode modulation with $\Omega_C=3$ and $\eta_C=2.6$. Figure 4(a) and (b) show that both $C_{Q_1,Q_2}$ and $S_q$ reach a stable state of slight oscillation after a (different) transient evolution. As further evidence, classical positions $Q_1$ and $Q_2$ are found to oscillate exactly in phase when entering the stable state, as shown in Fig. 4(c). The same conclusion holds for classical momenta $P_1$ and $P_2$, as shown in Fig. 4(d). Therefore, by periodically modulating cavity detunings in a suitable way, it is viable to produce a rather ideal level of both quantum and classical synchronizations between two mechanical oscillators with different frequencies. Corresponding limit-cycle trajectories in the $P_1\rightleftharpoons Q_1$ (red) and $P_2\rightleftharpoons Q_2$ (blue) spaces are illustrated in the inset of Fig. 4(a).

### Modulation on driving fields

We then consider the periodic modulation on amplitudes of the driving fields, e.g., via acousto-optical modulators. That is, the cavity modes have a constant detuning ($\eta_C=0$, $\Omega_C=0$) while the driving amplitudes vary periodically in time. We plot in Fig. 5 mean values $\overline{C}_{Q_1,Q_2}$ and $\overline{S}_q$ for classical and quantum synchronizations as a function of $\eta_D$ or $\Omega_D$ for a single driving-amplitude modulation (a, b) and a double driving-amplitude modulation (c, d), respectively. Once again we find that (i) quantum synchronization can be slightly enhanced with somewhat modified classical synchronization in the case of single driving-amplitude modulation; (ii) both quantum and classical synchronizations are quite satisfactory in the case of double driving-amplitude modulation. In particular, the optimal values are $\overline{C}_{Q_1,Q_2}\approx 1.0$ and $\overline{S}_q=0.57$ in Fig. 5(c); $\overline{C}_{Q_1,Q_2}\approx 1.0$ and $\overline{S}_q=0.74$ in Fig. 5(d). By comparing Fig. 5(c, d) with Fig. 3(c, d), it is clear that double cavity-mode modulation is more favorable than double driving-field modulation for achieving an ideal level of quantum and classical synchronizations. One common feature of double driving-field and cavity-mode modulations is that optimal quantum synchronization occurs when the modulation frequency is an integral multiple of the oscillator frequency, up to a critical value.

We further show in Fig. 6 the time evolution of relevant mechanical variables and synchronization measures in the case of an optimal double driving-amplitude modulation with $\Omega_D=4$ and $\eta_D=0.8$. From Fig. 6(a) and (b) we can see that both $\overline{C}_{Q_1,Q_2}$ and $S_q$ reach a stable state of slight oscillation after a (different) transient evolution, longer than that in Fig. 4(a) and (b). As further evidence, classical positions $Q_1$ and $Q_2$ and classical momenta $P_1(t)$ and $P_2(t)$ are found to oscillate exactly in phase when entering the stable state, as shown in Fig. 6(c) and (d). Therefore, by periodically modulating driving amplitudes in a suitable way, it is also viable to produce very good quantum and classical synchronizations between two mechanical oscillators with different frequencies. Two corresponding limit-cycle trajectories are illustrated in the inset of Fig. 6(a).
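For completeness, the windowed Pearson factor of Eq. (14) is straightforward to evaluate on sampled trajectories; a short check on synthetic signals (an illustration, not part of the reported calculations):

```python
import numpy as np

def pearson(Q1, Q2):
    # windowed Pearson factor of Eq. (14) on uniformly sampled trajectories
    d1, d2 = Q1 - Q1.mean(), Q2 - Q2.mean()
    return (d1 * d2).mean() / np.sqrt((d1 ** 2).mean() * (d2 ** 2).mean())

t = np.linspace(0, 50, 5000)
print(pearson(np.sin(t), np.sin(t)))    # -> 1.0, complete synchronization
print(pearson(np.sin(t), -np.sin(t)))   # -> -1.0, complete anti-synchronization
```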
### Comparison of two modulations

Now we examine the robustness of quantum synchronization in both cases of cavity-mode and driving-field modulations against the bath's mean temperature $T$ and the oscillators' frequency difference $\Delta_m$. This is based on the consideration that a slight increase of $T$ and $\Delta_m$ may result in a large decrease of $\overline{S}_q$, so it is meaningful to check how $\overline{S}_q$ decays until it becomes negligible. We plot $\overline{S}_q$ versus mean temperature $T$ in Fig. 7(a) and frequency difference $\Delta_m$ in Fig. 7(b) for the optimal modulations on cavity detunings (red solid) or driving amplitudes (blue dashed). That is, each point represents the maximal value of $\overline{S}_q$, for a given value of $T$ or $\Delta_m$, obtained by choosing the optimal values of $\eta_C$ and $\Omega_C$ or $\eta_D$ and $\Omega_D$.

Figure 7(a) shows that the quantum synchronization is quite robust (i.e., does not change too much) against the temperature before $T \sim \hbar\omega_{m1}/k_b$. However, it decays quickly after this point and tends to vanish when the temperature is around $T=1000\,\hbar\omega_{m1}/k_b$. It is also clear that the optimal modulation on cavity modes always results in a better quantum synchronization than that on driving fields. Figure 7(b) shows that the quantum synchronization $\overline{S}_q$ is quite robust against the frequency difference $\Delta_m$ for an optimal cavity-mode modulation, because $\overline{S}_q$ doesn't decrease too much even if $\Delta_m$ increases from 0.005 to 0.045. However, the quantum synchronization $\overline{S}_q$ decays in a much quicker way for an optimal driving-field modulation and already exhibits a vanishing value around $\Delta_m \sim 0.045$. It is also worth noting that the optimal level of quantum synchronization observed here (~0.92 or ~0.74) is much higher than that for two directly coupled oscillators (~0.3)19 for the same frequency difference $\Delta_m = 0.005$.

## Conclusions

In summary, we have considered a double-cavity optomechanical system containing two independent mechanical oscillators for enhancing both quantum and classical synchronizations with two kinds of temporal periodic modulation. Our numerical results show that appropriate modulations on cavity detunings or driving amplitudes can result in greatly enhanced quantum and classical synchronizations. To be more specific, the quantum synchronization $\overline{S}_q$ can be up to ~0.92 (~0.74) in the case of cavity-detuning (driving-amplitude) modulation, accompanied with a roughly perfect classical synchronization $\overline{C}_{Q_1,Q_2}\approx 1$, when the oscillators' frequency difference is $\Delta_m = 0.005$ and the bath's mean temperature is $T = 0$. An examination of the robustness of $\overline{S}_q$ against $\Delta_m$ and $T$ shows that the cavity-mode modulation is always more appealing for achieving a preferable quantum synchronization behavior than the driving-field modulation. We expect that our results may be extended to more complicated multi-cavity optomechanical systems, in which an array of highly synchronized mechanical oscillators can serve as a useful resource of, e.g., quantum communication and quantum control.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## References

1. Ameri, V. et al. Mutual information as an order parameter for quantum synchronization. Phys. Rev. A 91, 012301 (2015).
2. Lee, T. E. & Sadeghpour, H. R. Quantum Synchronization of Quantum van der Pol Oscillators with Trapped Ions. Phys. Rev. Lett. 111, 234101 (2013).
3. Lee, T. E., Chan, C. K. & Wang, S. S. Entanglement tongue and quantum synchronization of disordered oscillators. Phys. Rev. E 89, 022913 (2014).
4. Walter, S., Nunnenkamp, A. & Bruder, C. Quantum Synchronization of a Driven Self-Sustained Oscillator. Phys. Rev. Lett. 112, 094102 (2014).
5. Shirasaka, S., Watanabe, N., Kawamura, Y. & Nakao, H. Optimizing stability of mutual synchronization between a pair of limit-cycle oscillators with weak cross coupling. Phys. Rev. E 96, 012223 (2017).
6. Lörch, N., Nigg, S. E., Nunnenkamp, A., Tiwari, R. P. & Bruder, C. Quantum Synchronization Blockade: Energy Quantization Hinders Synchronization of Identical Oscillators. Phys. Rev. Lett. 118, 243602 (2017).
7. Ying, L., Lai, Y.-C. & Grebogi, C. Quantum manifestation of a synchronization transition in optomechanical systems. Phys. Rev. A 90, 053810 (2014).
8. Li, W. L., Li, C. & Song, H. S. Criterion of quantum synchronization and controllable quantum synchronization based on an optomechanical system. J. Phys. B 48, 035503 (2015).
9. Li, W. L., Li, C. & Song, H. S. Quantum synchronization in an optomechanical system based on Lyapunov control. Phys. Rev. E 93, 062221 (2016).
10. Bemani, F., Motazedifard, A., Roknizadeh, R., Naderi, M. H. & Vitali, D. Synchronization dynamics of two nanomechanical membranes within a Fabry-Perot cavity. Phys. Rev. A 96, 023805 (2017).
11. Li, W. L., Zhang, W. Z., Li, C. & Song, H. S. Properties and relative measure for quantifying quantum synchronization. Phys. Rev. E 96, 012211 (2017).
12. Galindo, A. & Martín-Delgado, M. A. Information and computation: Classical and quantum aspects. Rev. Mod. Phys. 74, 347 (2002).
13. Quan, R. et al. Demonstration of quantum synchronization based on second-order quantum coherence of entangled photons. Sci. Rep. 6, 30453 (2016).
14. Samoylova, M., Piovella, N., Robb, G. R. M., Bachelard, R. & Courteille, P. W. Synchronization of Bloch oscillations by a ring cavity. Opt. Express 23, 014823 (2015).
15. Xu, M. H., Tieri, D. A., Fine, E. C., Thompson, J. K. & Holland, M. J. Synchronization of Two Ensembles of Atoms. Phys. Rev. Lett. 113, 154101 (2014).
16. Xu, M. H. & Holland, M. J. Conditional Ramsey Spectroscopy with Synchronized Atoms. Phys. Rev. Lett. 114, 103601 (2015).
17. Hush, M. R., Li, W., Genway, S., Lesanovsky, I. & Armour, A. D. Spin correlations as a probe of quantum synchronization in trapped-ion phonon lasers. Phys. Rev. A 91, 061401(R) (2015).
18. Huygens, C. OEuvres complètes de Christiaan Huygens (Martinus Nijhoff, The Hague, 1893).
19. Mari, A., Farace, A., Didier, N., Giovannetti, V. & Fazio, R. Measures of Quantum Synchronization in Continuous Variable Systems. Phys. Rev. Lett. 111, 103605 (2013).
20. Yamada, T. & Fujisaka, H. Stability theory of synchronized motion in coupled-oscillator systems. II. Prog. Theor. Phys. 70(5), 1240–1248 (1983).
21. Zhang, M. et al. Synchronization of Micromechanical Oscillators Using Light. Phys. Rev. Lett. 109, 233906 (2012).
22. Bagheri, M., Poot, M., Fan, L., Marquardt, F. & Tang, H. X. Photonic Cavity Synchronization of Nanomechanical Oscillators. Phys. Rev. Lett. 111, 213902 (2013).
23. Matheny, M. H. et al. Phase Synchronization of Two Anharmonic Nanomechanical Oscillators. Phys. Rev. Lett. 112, 014101 (2014).
24. Li, W. L., Li, C. & Song, H. S.
Quantum synchronization and quantum state sharing in an irregular complex network. Phys. Rev. E 95, 022204 (2017). 25. 25. Shlomi, K. et al. Synchronization in an optomechanical cavity. Phys. Rev. E 91, 032910 (2015). 26. 26. Amitai, E., Lörch, N., Nunnenkamp, A., Walter, S. & Bruder, C. Synchronization of an optomechanical system to an external drive. Phys. Rev. A 95, 053858 (2017). 27. 27. Farace, A. & Giovannetti, V. Enhancing quantum effects via periodic modulations in optomechanical systems. Phys. Rev. A 86, 013820 (2012). 28. 28. Mari, A. & Eisert, J. Opto- and electro-mechanical entanglement improved by modulation. New J. Phys. 14, 075014 (2012). 29. 29. Mari, A. & Eisert, J. Gently Modulating Optomechanical Systems. Phys. Rev. Lett. 103, 213603 (2009). 30. 30. Liao, J. Q., Law, C. K., Kuang, L. M. & Nori, F. Enhancement of mechanical effects of single photons in modulated two-mode optomechanics. Phys. Rev. A 92, 013822 (2015). 31. 31. Genes, C., Mari, A., Vitalii, D. & Tombesi, S. Quantum Effects in Optomechanical Systems. Adv. At. Mol. Opt. Phys. 57, 33 (2009). 32. 32. Bai, C.-H., Wang, D.-Y., Wang, H.-F., Zhu, A.-D. & Zhang, S. Classical-to-quantum transition behavior between two oscillators separated in space under the action of optomechanical interaction. Sci. Rep. 7, 2545 (2017). 33. 33. Wang, D.-Y., Bai, C.-H., Wang, H.-F., Zhu, A.-D. & Zhang, S. Steady-state mechanical squeezing in a double-cavity optomechanical system. Sci. Rep. 6, 38559 (2016). 34. 34. Jin, L., Guo, Y., Ji, X. & Li, L. Reconfigurable chaos in electrooptomechanical system with negative Duffing resonators. Sci. Rep. 7, 4822 (2017). 35. 35. Giovannetti, V. & Vitalii, D. Phase-noise measurement in a cavity with a movable mirror undergoing quantum Brownian motion. Phys. Rev. A 63, 023812 (2001). 36. 36. Liu, Y. C., Shen, Y. F., Gong, Q. H. & Xiao, Y. F. Optimal limits of cavity optomechanical cooling in the strong-coupling regime. Phys. Rev. A 89, 053821 (2014). 37. 37. Xu, X. W. & Li, Y. Optical nonreciprocity and optomechanical circulator in three-mode optomechanical systems. Phys. Rev. A 91, 053854 (2015). 38. 38. Wang, G. L., Huang, L., Lai, Y. C. & Grebogi, C. Nonlinear Dynamics and Quantum Entanglement in Optomechanical Systems. Phys. Rev. Lett. 112, 110406 (2014). 39. 39. Larson, J. & Horsdal, M. Photonic Josephson effect, phase transitions, and chaos in optomechanical systems. Phys. Rev. A 84, 021804(R) (2011). 40. 40. De Jesus, E. X. & Kaufman, C. Routh-Hurwitz criterion in the examination of eigenvalues of a system of nonlinear ordinary differential equations. Phys. Rev. A 35, 5288 (1987). 41. 41. Li, W. L., Li, C. & Song, H. S. Quantum synchronization of chaotic oscillator behaviors among coupled BEC–optomechanical systems. Quant. Inf. Pro. 16, 80 (2017). 42. 42. Pikovsky, A., Rosenblum, M. & Kurths, J. Synchronization: A Universal Concept in Nonlinear Sciences. (Cambridge edition, 2001). 43. 43. Galve, F., Giorgi, G. L. & Zambrini, R. Quantum correlations and synchronization measures. arXiv: 1610, 05060 (2016). 44. 44. Boccaletti, S., Kurths, J., Osipov, G., Valladares, D. L. & Zhou, C. S. The synchronization of chaotic systems. Phys. Rep. 366, 1–101 (2002). 45. 45. Ludwig, M., Kubala, B. & Marquardt, F. The optomechanical instability in the quantum regime. New J. Phys. 10, 095013 (2008). ## Acknowledgements This work is supported by the National Natural Science Foundation of China (No. 61378094, 11534002 and 11674049). ## Author information ### Affiliations 1. 
#### Center for Quantum Sciences and School of Physics, Northeast Normal University, Changchun, 130117, China • Lei Du • , Chu-Hui Fan • , Han-Xiao Zhang •  & Jin-Hui Wu ### Contributions L. Du and J.-H. Wu conceived the idea and wrote the main manuscript text. L. Du, C.-H. Fan and H.-X. Zhang performed the calculations. All authors reviewed the manuscript. ### Competing Interests The authors declare that they have no competing interests. ### Corresponding author Correspondence to Jin-Hui Wu.
2018-10-21 04:07:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7597441077232361, "perplexity": 1922.8769837121467}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583513686.3/warc/CC-MAIN-20181021031444-20181021052944-00516.warc.gz"}
http://ptp.ipap.jp/cgi-bin/findarticle?journal=PTP&author=H.Fukuda
## Search Result

### Search Conditions

Years: All Years, for journal 'PTP', author 'H.* Fukuda' : 34 total : 34

### Search Results : 34 articles were found.

1. Progress of Theoretical Physics Vol. 4 No. 1 (1949) pp. 47-59 : A Self-Consistent Subtraction Method in the Quantum Field Theory. II-1 (Hiroshi Fukuda, Yonezi Miyamoto and Sin-itirô Tomonaga)
2. Progress of Theoretical Physics Vol. 4 No. 2 (1949) pp. 121-129 : A Self-Consistent Subtraction Method in the Quantum Field Theory. II-2 (Hiroshi Fukuda, Yoneji Miyamoto and Sin-itirô Tomonaga)
3. Progress of Theoretical Physics Vol. 4 No. 2 (1949) pp. 235-236 : On the $\gamma$-Decay of Neutral Meson (H. Fukuda and Y. Miyamoto)
4. Progress of Theoretical Physics Vol. 4 No. 3 (1949) pp. 347-357 : On the $\gamma$-Decay of Neutral Meson (Hiroshi Fukuda and Yoneji Miyamoto)
5. Progress of Theoretical Physics Vol. 4 No. 3 (1949) pp. 385b-386 : Application of Pauli's Regulator to the $\gamma$-Decay of Neutrettos (H. Fukuda, Y. Miyamoto, T. Miyazima and S. Tomonaga)
6. Progress of Theoretical Physics Vol. 4 No. 3 (1949) pp. 388-389 : On the Nature of $\tau$-Meson (H. Fukuda, S. Hayakawa and Y. Miyamoto)
7. Progress of Theoretical Physics Vol. 4 No. 3 (1949) pp. 389-391 : Selection Rule for Meson Problem (H. Fukuda and Y. Miyamoto)
8. Progress of Theoretical Physics Vol. 4 No. 3 (1949) pp. 391-392 : On the Electron-Positron Pair Disintegration (H. Fukuda and Y. Miyamoto)
9. Progress of Theoretical Physics Vol. 4 No. 3 (1949) pp. 392-394 : The Three Quanta Disintegration of the Neutral Meson (H. Fukuda and Y. Miyamoto)
10. Progress of Theoretical Physics Vol. 4 No. 4 (1949) pp. 477-484 : Applicability of Pauli's Regulator to the $\gamma$-Decay of Neutrettos (H. Fukuda, Y. Miyamoto, T. Miyazima, S. Tomonaga, S. Ôneda, S. Ozaki and S. Sasaki)
11. Progress of Theoretical Physics Vol. 5 No. 1 (1950) pp. 147-148 : The Decay of a $\tau^\pm$ Meson into $\pi^\pm$ and $\pi_0$ Meson (H. Fukuda and Y. Miyamoto)
12. Progress of Theoretical Physics Vol. 5 No. 1 (1950) pp. 148-150 : The Decay of a $\tau^\pm$ Meson into a $\pi^\pm$ Meson and a Photon (H. Fukuda and Y. Miyamoto)
13. Progress of Theoretical Physics Vol. 5 No. 2 (1950) pp. 283-304 : On the Nature of $\tau$-Mesons. I (Hiroshi Fukuda, Satio Hayakawa and Yoneji Miyamoto)
14. Progress of Theoretical Physics Vol. 5 No. 3 (1950) pp. 352-372 : On the Nature of $\tau$-Mesons. II (Hiroshi Fukuda, Satio Hayakawa and Yoneji Miyamoto)
15. Progress of Theoretical Physics Vol. 5 No. 4 (1950) pp. 669-681 : On the Production of Cosmic Ray Mesons (Yoichi Fujimoto, Hiroshi Fukuda, Satio Hayakawa and Yoshio Yamaguchi)
16. Progress of Theoretical Physics Vol. 5 No. 4 (1950) pp. 740-747 : Analysis on the Two Meson Theory (Seitaro Nakamura, Hiroshi Fukuda, Ken-ichi Ono, Muneo Sasaki and Mituo Taketani)
17. Progress of Theoretical Physics Vol. 5 No. 5 (1950) pp. 800-812 : Production of $\pi$-Mesons in Nucleon-Nucleon Collisions near the Threshold Energy (Hiroshi Fukuda and Gyô Takeda)
18. Progress of Theoretical Physics Vol. 5 No. 6 (1950) pp. 931-947 : On the Negative $\pi$-Meson Capture (K. Aidzu, Y. Fujimoto, H. Fukuda, S. Hayakawa, K. Takayanagi, G. Takeda and Y. Yamaguchi)
19. Progress of Theoretical Physics Vol. 5 No. 6 (1950) pp. 957-976 : The Multiple Production of Mesons by High Energy Nucleon-Nucleon Collisions (Hiroshi Fukuda and Gyo Takeda)
20. Progress of Theoretical Physics Vol. 5 No. 6 (1950) pp. 993-996 : On the Decay of a Heavy Dirac Meson (Hiroshi Fukuda and Satio Hayakawa)
21. Progress of Theoretical Physics Vol. 5 No. 6 (1950) pp. 1024-1032 : Ambiguities in Quantized Field Theories (Hiroshi Fukuda and Toichiro Kinoshita)
22. Progress of Theoretical Physics Vol. 6 No. 2 (1951) pp. 193-196 : On the Production of Mesons by X-rays (Kô Aidzu, Yoichi Fujimoto and Hiroshi Fukuda)
23. Progress of Theoretical Physics Vol. 6 No. 5 (1951) pp. 788-800 : Nuclear Interaction of $\mu$-Meson (Hiroshi Fukuda, Yoichi Fujimoto and Masatoshi Koshiba)
24. Progress of Theoretical Physics Vol. 17 No. 2 (1957) pp. 241-287 : Hydrodynamical Treatment of Multiple Meson Production in High Energy Nucleon-Nucleus Collisions (Saburo Amai, Hiroshi Fukuda, Chikashi Iso and Masatomo Sato)
25. Progress of Theoretical Physics Vol. 21 No. 1 (1959) pp. 29-73 : A Nucleonic Cascade Theory and an Analysis of Extensive Air Showers (Hiroshi Fukuda, Naofumi Ogita and Akira Ueda)
26. Progress of Theoretical Physics Vol. 52 No. 3 (1974) pp. 1013-1027 : (5) Relations between Correlation Functions of Produced Particles and the Dynamics of High Energy Multi-Particle Production (Hiroshi Fukuda)
27. Progress of Theoretical Physics Vol. 57 No. 2 (1977) pp. 483-498 : (5) Unified Analysis of Inclusive Spectra of Mesons and Baryons in High Energy $pp$, $\pi p$ and $e^+ e^-$ Collisions by Quark Cascade Model (Hiroshi Fukuda and Chikashi Iso)
28. Progress of Theoretical Physics Vol. 57 No. 5 (1977) pp. 1663-1678 : (5) Meson Spectra in Meson's Fragmentation Region in Meson-Nucleon Collisions by New Type of Quark Cascade Model (Hiroshi Fukuda and Chikashi Iso)
29. Progress of Theoretical Physics Vol. 58 No. 5 (1977) pp. 1472-1485 : (5) Deep-Inelastic Lepton Nucleon Scattering by New Type of Quark Cascade Model. I (Hiroshi Fukuda and Chikashi Iso)
30. Progress of Theoretical Physics Vol. 58 No. 5 (1977) pp. 1486-1493 : (5) Deep-Inelastic Lepton Nucleon Scattering by New Type of Quark Cascade Model. II (Hiroshi Fukuda and Chikashi Iso)
31. Progress of Theoretical Physics Vol. 60 No. 5 (1978) pp. 1439-1456 : (5) Parton Distribution Functions in Hadron and Decay Distribution Functions in the New Type of Quark Cascade Model (Hiroshi Fukuda and Chikashi Iso)
32. Progress of Theoretical Physics Vol. 60 No. 5 (1978) pp. 1457-1470 : (5) Inclusive Spectra and Two Particle Correlations in Small $p_T$ Hadron Jets from Quark Cascade Model with Recombination Mechanism (Hiroshi Fukuda and Chikashi Iso)
2013-05-26 02:48:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40312227606773376, "perplexity": 3737.208104579477}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706499548/warc/CC-MAIN-20130516121459-00076-ip-10-60-113-184.ec2.internal.warc.gz"}
http://openstudy.com/updates/500e10f0e4b0ed432e10787e
• anonymous: Nicole spent \$32 on shirts at the mall. She spent a total of \$80 that day. What percentage of the total did she spend on shirts? (Mathematics)
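A quick arithmetic check: the share of the day's total spent on shirts is the shirt amount divided by the total,

$$\frac{32}{80}\times 100\% = 40\%.$$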
2017-04-24 13:25:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24849165976047516, "perplexity": 2691.187801096036}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119361.6/warc/CC-MAIN-20170423031159-00402-ip-10-145-167-34.ec2.internal.warc.gz"}
https://www.projecteuclid.org/euclid.jdg/1324476750
## Journal of Differential Geometry ### Einstein spaces as attractors for the Einstein flow #### Abstract In this paper we prove a global existence theorem, in the direction of cosmological expansion, for sufficiently small perturbations of a family of $n + 1$-dimensional, spatially compact spacetimes, which generalizes the $k = -1$ Friedmann-Lemaître-Robertson-Walker vacuum spacetime. This work extends the result from Future complete vacuum spacetimes. The background spacetimes we consider are Lorentz cones over negative Einstein spaces of dimension $n \ge 3$. We use a variant of the constant mean curvature, spatially harmonic (CMCSH) gauge introduced in Elliptic-hyperbolic systems and the Einstein equations. An important difference from the $3+1$ dimensional case is that one may have a nontrivial moduli space of negative Einstein geometries. This makes it necessary to introduce a time-dependent background metric, which is used to define the spatially harmonic coordinate system that goes into the gauge. Instead of the Bel-Robinson energy used in Future complete vacuum spacetimes, we here use an expression analogous to the wave equation type of energy introduced in Elliptic-hyperbolic systems and the Einstein equations for the Einstein equations in CMCSH gauge. In order to prove energy estimates, it turns out to be necessary to assume stability of the Einstein geometry. Further, for our analysis it is necessary to have a smooth moduli space. Fortunately, all known examples of negative Einstein geometries satisfy these conditions. We give examples of families of Einstein geometries which have nontrivial moduli spaces. A product construction allows one to generate new families of examples. Our results demonstrate causal geodesic completeness of the perturbed spacetimes, in the expanding direction, and show that the scale-free geometry converges toward an element in the moduli space of Einstein geometries, with a rate of decay depending on the stability properties of the Einstein geometry. #### Article information Source J. Differential Geom., Volume 89, Number 1 (2011), 1-47. Dates First available in Project Euclid: 21 December 2011 https://projecteuclid.org/euclid.jdg/1324476750 Digital Object Identifier doi:10.4310/jdg/1324476750 Mathematical Reviews number (MathSciNet) MR2863911 Zentralblatt MATH identifier 1256.53035 #### Citation Andersson, Lars; Moncrief, Vincent. Einstein spaces as attractors for the Einstein flow. J. Differential Geom. 89 (2011), no. 1, 1--47. doi:10.4310/jdg/1324476750. https://projecteuclid.org/euclid.jdg/1324476750
2019-12-12 09:41:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8427540063858032, "perplexity": 642.1566953779749}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540542644.69/warc/CC-MAIN-20191212074623-20191212102623-00108.warc.gz"}
https://www.physicsforums.com/threads/can-anyone-help-me-with-these-problems.126422/
# Can anyone help me with these problems?

1) Prove: the product of all of the positive divisors of n (including n itself) is n^(d(n)/2).

2) Suppose you have a game in which there are two kinds of scoring events. One event gives a score of m points, and the other gives a score of n points. Assume that m and n are relatively prime, and derive a formula for the largest unattainable score. Prove your answer is correct.

shmoe, Homework Helper: Hi, you'll find you get more help if you post what you have tried, so we can see where you are stuck and advise from there. So just a couple of hints for now:

logic2b1 said: 1) Prove: the product of all of the positive divisors of n (including n itself) is n^(d(n)/2).

You might find it easier to break this into two cases, n a perfect square, and n not a perfect square.

logic2b1 said: 2) Suppose you have a game in which there are two kinds of scoring events. One event gives a score of m points, and the other gives a score of n points. Assume that m and n are relatively prime, and derive a formula for the largest unattainable score. Prove your answer is correct.

I can't think of any good hints that don't give away too much here. Have you tried working out some examples and attempting to guess a formula? The formula will be fairly simple in terms of m and n, so this shouldn't be a hopeless way to start.

Office_Shredder, Staff Emeritus, Gold Member: I want to try the first one (I'm doing the second one right now)... what is d(n)?

shmoe, Homework Helper: d(n) = the number of divisors of n

benorin, Homework Helper: $$n=p_{1}^{\alpha_{1}}p_{2}^{\alpha_{2}}\cdots p_{r}^{\alpha_{r}}$$ where the $$p_{i}'s$$ are primes. Now use the numbers that divide the above to form the product. You can find some info on the function d(n) here.

Sorry, I am on a short vacation and it is not so convenient to log into the internet. I will be back home two days later. If you guys have more ideas, please do advise me. Thank you very much for your help.

logic, look up two concepts in number theory: the Euler Phi Function and the Euler Sigma Function (this latter may just be called Euler Sigma, or the Sigma Function). It'll tell you how to find the product of all divisors.

Gokul43201, Staff Emeritus
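For anyone who wants to experiment before proving anything, here is a small brute-force sketch (not a proof, and the helper names `divisors` and `unattainable` are illustrative): it verifies the identity in problem 1 for small n using exact integer arithmetic, and lists unattainable scores for problem 2 so you can try to guess the formula, as shmoe suggests.

```python
from math import isqrt, prod

def divisors(n):
    """All positive divisors of n, found by trial division up to sqrt(n)."""
    divs = set()
    for i in range(1, isqrt(n) + 1):
        if n % i == 0:
            divs.update((i, n // i))
    return sorted(divs)

def divisor_product_formula(n):
    """n^(d(n)/2) in exact integer arithmetic.

    d(n) is odd only when n is a perfect square, so the result is always an
    integer: either n^(d/2) for even d, or sqrt(n)^d for odd d.
    """
    d = len(divisors(n))
    return n ** (d // 2) if d % 2 == 0 else isqrt(n) ** d

# Problem 1: the product of all divisors equals n^(d(n)/2).
for n in range(1, 200):
    assert prod(divisors(n)) == divisor_product_formula(n)

# Problem 2: list scores NOT attainable as a*m + b*n (a, b >= 0)
# for small coprime m, n, to help guess the largest unattainable score.
def unattainable(m, n, limit=200):
    reachable = {a * m + b * n
                 for a in range(limit // m + 1)
                 for b in range(limit // n + 1)}
    return [s for s in range(1, limit + 1) if s not in reachable]

for m, n in [(3, 5), (3, 7), (4, 9)]:
    print(m, n, "largest gap:", max(unattainable(m, n)))
```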
2021-09-23 14:39:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6173673868179321, "perplexity": 410.8910454450449}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057424.99/warc/CC-MAIN-20210923135058-20210923165058-00664.warc.gz"}
https://physics.stackexchange.com/questions/1061/why-does-gps-depend-on-relativity/1066
Why does GPS depend on relativity?

I am reading A Brief History of Time by Stephen Hawking, and in it he mentions that without compensating for relativity, GPS devices would be out by miles. Why is this? (I am not sure which relativity he means, as I am several chapters ahead now and the question just came to me.)

• astronomy.ohio-state.edu/~pogge/Ast162/Unit5/gps.html Nov 18, 2010 at 13:52
• I'm trying to locate my sources on this, but I have read that even if you don't account for general relativity (by slowing down the clocks prior to launch) your GPS would work just fine because the error is the same for all satellites. The only issue would be that the clocks would not be synchronized with the ground, but that is not necessary for calculating your current position. Can anyone confirm this? Nov 13, 2012 at 11:27
• Found something: physicsmyths.org.uk/gps.htm can anyone comment on this? Nov 13, 2012 at 11:29
• Found something else on this same site: physics.stackexchange.com/q/17814/3177 (some answers mention this) Nov 13, 2012 at 11:38
• I looked at that uk site hurriedly and there seem to be some crank "disproofs" of special relativity, so I doubt that that site is trustworthy. There are cranks on stack exchange, too, of course....and on Wikipedia, and in academia, and .....yours truly, Dec 5, 2015 at 23:11

The error margin for a position predicted by GPS is about $15\,\text{m}$, so the GPS system must keep time to an accuracy of at least $15\,\text{m}/c$, which is roughly $50\,\text{ns}$. A $50\,\text{ns}$ error in timekeeping thus corresponds to a $15\,\text{m}$ error in the predicted position, and a $38\,\text{μs}$ error in timekeeping corresponds to an $11\,\text{km}$ error. If we do not apply general-relativistic corrections to GPS, a $38\,\text{μs}$ timekeeping error is introduced every day. You can check this yourself using the following formulas:

$T_1 = \frac{T_0}{\sqrt{1-\frac{v^2}{c^2}}}$ ... a clock runs relatively slower if it is moving at high velocity.

$T_2 = \frac{T_0}{\sqrt{1-\frac{2GM}{c^2 R}}}$ ... a clock runs relatively faster where gravity is weaker.

The velocity effect works out to about 7 microseconds/day (the satellite clock runs slow) and the gravitational effect to about 45 microseconds/day (the satellite clock runs fast), leaving a net drift of about 38 microseconds/day. Use the values given in this very good article, and for the equations refer to HyperPhysics. So Stephen Hawking is right! :-)

• Is $R$ the radius of the earth, or the orbit radius? May 11, 2014 at 16:32
• But what's relevant for GPS is the difference between timestamps from different satellites, right? And since they are at the same altitude they should be time-shifted by the same amount, so the differences should be basically the same as without relativity. I mean it doesn't matter how much the error in the clocks is after a day, since the localization error is not cumulative, because the satellites' clocks don't drift away from each other. Jun 14, 2015 at 23:31
• @Dims 15/300000000 != 100*10^(-6), it equals 5*10^(-8). I got my answer by just typing it into google, but it should be easy to see that 15 divided by 3 is going to be a leading 5, not a leading 1. Apr 27, 2018 at 14:21
• Lots of misinformation here. As per the US Naval Observatory (the creators of GPS to replace LORAN): GPS does NOT use relativity calculations at all (repeat, it does NOT use relativity calculations). Jan 16, 2019 at 0:26
• @MC9000 - This is actually a common misconception about GPS. See for example here.
The point is that instead of calculations directly based on general relativity, much simpler corrections are used to approximate those, as long as the receiver is only slowly moving on the surface of the planet. May 6, 2019 at 10:03

There's the article from Ohio State University http://www.astronomy.ohio-state.edu/~pogge/Ast162/Unit5/gps.html which explains quite well why the clocks on a GPS satellite are faster by about 38 microseconds every day. The article then claims that not compensating for these 38 microseconds per day would cause a GPS to be off by about 11 km per day, plainly unusable, and claims that this (the fact that we need to compensate for the 38 microseconds to get GPS working) is proof of General Relativity.

The problem is that while the clocks are indeed off by 38 microseconds per day and General Relativity is all fine, we wouldn't actually have to compensate for it. The GPS in your car or your phone doesn't have an atomic clock. It doesn't have any clock precise enough to help with GPS. It doesn't measure how long the signal took to get from satellite A to GPS. It measures the difference between the signal from satellite A and the signal from satellite B (and two more satellites). This works if the clocks are fast: as long as they are all fast by the exact same amounts, we still get the right results.

That is, almost. Satellites don't stand still. So if we rely on a clock that is 38 microseconds fast per day, we do the calculations based on the position of a satellite that is off by 38 microseconds per day. So the error is not (speed of light times 38 microseconds) per day, it is (speed of satellite times 38 microseconds) per day. This is about 15 cm per day. Well, satellite positions get corrected once a week. I hope nobody thinks we could predict the position of a satellite for a long time without any error.

Back to the original assumption, that without compensation the error would be 11 km per day: the satellite clocks are multiplied by a factor just shy of 1 so that they go at the correct speed. But that wouldn't work. The effect that produces 38 microseconds per day isn't constant. When the satellite flies over an ocean, gravity is lower. The satellite speed changes all the time because the satellite doesn't fly on a perfect circle around a perfectly round earth made of perfectly homogeneous material. If GR created an error of 11 km per day uncompensated, then it is quite inconceivable that a simple multiplication of the clock speed would be good enough to reduce this to make GPS usable.

• Nice. But I have to say that from the philosophical position of an experimenter, a machine that makes its operators tear their hair out (which GPS would in the absence of GR) isn't working until those behaviors are understood (which would happen when someone invented GR to explain the anomaly). But that's a philosophical point. Dec 6, 2015 at 1:07
• This is the one correct answer on this page. GPS was significant evidence for GR because we can compare the speed of clocks in orbit to those on earth. However, the accuracy of the GPS system doesn't depend on the satellites keeping exact time. As long as they keep the same time, the system works. Dec 6, 2015 at 3:39
• Actually, GPS is a poor "proof" of GR for the reason you state.
gnasher has the correct answer - Einstein field equations are not used in GPS at all (imagine the number crunching involved and the computer power necessary wasting all that energy - not to mention added weight to satellites - especially a few decades ago) Jan 16, 2019 at 0:29

• It's true that the only thing needed to determine the GPS receiver position relative to the satellites is that the satellite clocks be synced and the speed of transmission be the same. But that's relative to the satellites. The user wants the GPS receiver to calculate where it is on the Earth, which requires accounting for where the satellites are in orbit and how the Earth has rotated. That's why the satellite clocks have to be kept synced to clocks on the ground and why they are adjusted to keep them synced. Oct 30, 2019 at 0:08
• @MC9000: No one ever claimed that the Einstein field equations are solved on the fly by the GPS satellites' computers. The geometry of spacetime near Earth is approximated well enough by Schwarzschild spacetime, so solving the field equations all over again is not necessary. In particular, time dilation in Schwarzschild is described by rather simple formulae, so no extensive number crunching would be necessary in the first place. – balu Jun 18, 2020 at 13:57

You can find out about this in great detail in the excellent summary over here: What the Global Positioning System Tells Us about Relativity?

In a nutshell:

1. General Relativity predicts that clocks go slower in a stronger gravitational field. That is, the clock aboard the GPS satellites "ticks" faster than the clock down on Earth.
2. Also, Special Relativity predicts that a moving clock is slower than a stationary one. So this effect will slow the clock compared to the one down on Earth.

As you see, in this case the two effects act in opposite directions, but their magnitudes are not equal, so they don't cancel each other out.

Now, you find out your position by comparing the time signal from a number of satellites. They are at different distances from you, and the signal then takes a different time to reach you. Thus the signal "Satellite A says right now it is 22:31:12" will be different from what you hear from Satellite B at the same moment. From the time differences of the signals and knowing the satellites' positions (your GPS knows that) you can triangulate your position on the ground.

If one does not compensate for the different clock speeds, the distance measurement would be wrong and the position estimation could be hundreds or thousands of meters or more off, making the GPS system essentially useless.

The effect of gravitational time dilation can even be measured if you go from the surface of the earth to an orbit around the earth. Therefore, as GPS satellites measure the time its messages take to reach you and come back, it is important to account for the real time that the signal takes to reach the target.

• GPS signals do not return to the satellite, they only go to the receiver AFAIK... Nov 18, 2010 at 13:53
• But the main point still holds, and it is that more time passes on the satellite's clock than on your clock back on earth, with respect to either one of you. – Cem Nov 18, 2010 at 13:59
• Interestingly, general relativity is not used per se in calculations for GPS systems. Rather, a nice little trick involving special relativity (applying a series of Lorentz transformations in infinitesimal steps) is what it does. This turns out to be sufficiently accurate and a lot easier computationally.
Nov 18, 2010 at 14:22 • You can detect time dilation just by spending a few days in the mountains. leapsecond.com/great2005/index.htm Nov 18, 2010 at 15:16 • @endolith : ... if you bring an atomic clock with you ! Nov 18, 2010 at 18:14 I don't think that GPS "depends on relativity" in the sense that a technological civilization that never discovered special/general relativity would be unable to make a working GPS system. You can always compare the clock in a satellite to clocks on the ground and adjust the rate until they don't drift out of sync, whether or not you understand why they were drifting out of sync. In fact, they do synchronize them empirically, not by blindly trusting a theoretical calculation. Asking what would happen if the clocks drifted by 38 μs/day (for any reason) is a strange counterfactual because it suggests that no one is maintaining the system, in which case it would presumably quickly succumb to various other problems of non-relativistic origin. If someone is keeping some parts of the system in sync, you'd probably have to specify which parts. For example if the satellites accurately know their positions with respect to an inertial frame moving with the center of the earth, but the orientation of the earth is calculated from the time of day, then you'd have an accumulating position error of 38 μs worth of earth rotation, or a couple of centimeters at the equator, per day. But if the satellites accurately know their position with respect to a corotating reference frame, then the error would be much smaller.
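A small numerical sketch of the two effects discussed in the answers above. It assumes a circular GPS orbit of radius roughly 26,571 km (about 20,200 km altitude), uses the standard value of Earth's gravitational parameter, and ignores Earth's rotation and orbital eccentricity; with those simplifications it reproduces the roughly -7 μs/day velocity effect, +45 μs/day gravitational effect, and +38 μs/day net drift quoted above.

```python
import math

# Physical constants (SI units)
GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
c = 299_792_458.0     # speed of light, m/s
R_earth = 6.371e6     # mean Earth radius, m (ground clock)
r_orbit = 2.6571e7    # assumed GPS orbital radius (~20,200 km altitude), m
day = 86_400.0        # seconds per day

v = math.sqrt(GM / r_orbit)  # circular orbital speed, ~3.87 km/s

# Special relativity: the moving clock runs slow; to first order the
# fractional rate offset is -v^2 / (2 c^2).
sr = -v**2 / (2 * c**2)

# General relativity: the clock higher in the potential runs fast; with
# Phi = -GM/r, the fractional offset is (Phi_orbit - Phi_ground) / c^2.
gr = GM * (1 / R_earth - 1 / r_orbit) / c**2

print(f"velocity effect : {sr * day * 1e6:+.1f} microseconds/day")
print(f"gravity effect  : {gr * day * 1e6:+.1f} microseconds/day")
print(f"net drift       : {(sr + gr) * day * 1e6:+.1f} microseconds/day")
```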
2023-03-25 19:24:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5028482675552368, "perplexity": 474.1500898217516}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945372.38/warc/CC-MAIN-20230325191930-20230325221930-00101.warc.gz"}
http://math.stackexchange.com/questions/171024/how-to-evaluate-int-1-1x2n-dx-for-an-arbitrary-positive-integer-n
# How to evaluate $\int 1/(1+x^{2n})\,dx$ for an arbitrary positive integer $n$? How to find $$\int\dfrac{dx}{1+x^{2n}}$$ where $n \in \mathbb N$? ### Remark When $n=1$, the antiderivative is $\tan^{-1}x+C$. But already with $n=2$ this is something much more complicated. Is there a general method? - This feels very hypergeometric to me. Is that what you're looking for? Or perhaps you have bounds? – mixedmath Jul 15 '12 at 7:59 If the integral ranges from $-\infty$ to $\infty$ there's a nice trick with the Residue theorem. – Cocopuffs Jul 15 '12 at 8:20 @GerryMyerson : It's not always a good thing. – Michael Hardy Jul 17 '12 at 0:30 @Michael, OP is zero-for-ten (and unfortunately unable to do anything about it, having been suspended for the next few weeks), I'm zero-for-one, and I try to make up for it in other ways. – Gerry Myerson Jul 17 '12 at 5:17 If the integral is taken from $0$ to $\infty$, there is more than one way to evaluate this. One is \begin{align} \int_0^\infty\frac{\mathrm{d}t}{1+t^{2n}} &=\int_0^1\frac{\mathrm{d}t}{1+t^{2n}}+\int_0^1\frac{t^{2n-2}\,\mathrm{d}t}{1+t^{2n}}\\ &=\int_0^1(1-t^{2n}+t^{4n}-t^{6n}+\dots)\,\mathrm{d}t\\ &+\int_0^1(t^{2n-2}-t^{4n-2}+t^{6n-2}+\dots)\,\mathrm{d}t\\ &=1-\frac{1}{2n+1}+\frac{1}{4n+1}-\frac{1}{6n+1}+\dots\\ &+\frac{1}{2n-1}-\frac{1}{4n-1}+\frac{1}{6n-1}-\dots\\ &=\frac{1}{2n}\left(\frac{1}{0+\frac{1}{2n}}-\frac{1}{1+\frac{1}{2n}}+\frac{1}{2+\frac{1}{2n}}-\frac{1}{3+\frac{1}{2n}}+\dots\right)\\ &+\frac{1}{2n}\left(-\frac{1}{-1+\frac{1}{2n}}+\frac{1}{-2+\frac{1}{2n}}-\frac{1}{-3+\frac{1}{2n}}-\dots\right)\\ &=\frac{1}{2n}\sum_{k=-\infty}^\infty\frac{(-1)^k}{k+\frac{1}{2n}}\\ &=\frac{\pi}{2n}\csc\left(\frac{\pi}{2n}\right)\tag{1} \end{align} The last step uses the result from "An Infinite Alternating Harmonic Series" on this page. Another method is to use contour integration to evaluate $$\frac12\int_{-\infty}^\infty\frac{\mathrm{d}t}{1+t^{2n}} =\frac12\oint_\gamma\frac{\mathrm{d}z}{1+z^{2n}}\tag{2}$$ where $\gamma$ is the path from $-\infty$ to $\infty$ along the real axis (which picks up the integral in question), then circling back counter-clockwise around the upper half-plane (which vanishes). The countour integral in $(2)$ is $2\pi i$ times the sum of the residues of $\frac{1}{1+z^{2n}}$ in the upper half-plane. The poles of the integrand in $(2)$ are given by $$\zeta_k=e^{\frac{\pi i}{2n}(2k+1)}\tag{3}$$ where $k=0\dots n-1$ represent the roots in the upper half-plane. All the poles are simple, so the residues are \begin{align} \mathrm{Res}_{z=\zeta_k}\left(\frac{1}{1+z^{2n}}\right) &=\lim_{z\to\zeta_k}\frac{z-\zeta_k}{1+z^{2n}}\\ &=-\frac{1}{2n}\zeta_{k}\\ &=-\frac{1}{2n}e^{\frac{\pi i}{2n}(2k+1)}\tag{4} \end{align} Thus, we get \begin{align} \int_0^\infty\frac{\mathrm{d}t}{1+t^{2n}} &=-\frac{2\pi i}{4n}\sum_{k=0}^{n-1}e^{\frac{\pi i}{2n}(2k+1)}\\ &=-\frac{\pi i}{2n}e^{\frac{\pi i}{2n}}\frac{1-(-1)}{1-e^{\frac{\pi i}{n}}}\\ &=\frac{\pi}{2n}\csc\left(\frac{\pi}{2n}\right)\tag{5} \end{align} - How was the first equality calculated? (the change in the limits of integration) – Joshua Bunce May 4 at 23:15 Apply the substitution $t\mapsto\frac1t$ to the integral $\int_1^\infty\frac{\mathrm{d}t}{1+t^{2n}}$ – robjohn May 4 at 23:54 See I applied that and it didn't work out - but then I've just noticed I didn't take d(1/t). Thx!! – Joshua Bunce May 5 at 0:04 The following papers will be useful. Note that Gopalan/Ravichandran is freely available on the internet. M. A. Gopalan and V. 
Ravichandran, Note on the evaluation of $\int \frac{1}{1+t^{2^{n}}}\,dt$, Mathematics Magazine 67 #1 (February 1994), 53-54. Judith A. Palagallo and Thomas E. Price, Some remarks on the evaluation of $\int \frac{dt}{t^{m}+1}$, Mathematics Magazine 70 #1 (February 1997), 59-63. V. Ravichandran, On a series considered by Srinivasa Ramanujan, Mathematical Gazette 88 #511 (March 2004), 105-110. - I realized after I wrote this up that this is given in one of the papers mentioned by Dave L. Renfro, but I did all this work and the approach is not exactly the same, so here goes. We wish to evaluate $$\int \frac{1}{1+x^n}\ dx.$$ We will do this by partial fraction decomposition. Note that the roots of $1+x^n$ are the $2n$-th roots of unity that are not $n$-th roots of unity. That is to say $x^{2n}-1=(x^n-1)(x^n+1)$. It follows that the set of roots of $1+x^n$ is $$\left\{\exp\left(\frac{(2k+1)\pi i}{n} \right):0\leq k\leq n-1\right\}.$$ If we consider the roots (excluding $-1$ if $n$ is odd) we have that $$\left(x-\exp\left(\frac{(2k+1)\pi i}{n} \right)\right)\left(x-\exp\left(\frac{(2(n-k)-1)\pi i}{n} \right)\right)=\left(x-\exp\left(\frac{(2k+1)\pi i}{n} \right)\right)\left(x-\exp\left(\frac{-(2k+1)\pi i}{n} \right)\right)$$ $$=x^2-\left(\exp\left(\frac{(2k+1)\pi i}{n}\right)+\exp\left(\frac{-(2k+1)\pi i}{n} \right) \right)x+1=x^2-2\cos\left(\frac{(2k+1)\pi}{n}\right)x+1.$$ Let $x_k=\frac{(2k+1)\pi}{n}$ and $\alpha_k=\exp((2k+1)\pi i/n)$, then by partial fraction decomposition (for $n$ even) we have that $$\frac{1}{1+x^n}=\sum_{k=0}^{n/2-1}\frac{a_kx+b_k}{x^2-2\cos(x_k)x+1}=\sum_{k=0}^{n/2-1}\frac{(a_kx+b_k)\prod_{\overset{j\neq k}{j\neq n-1-k}}(x-\alpha_j)}{1+x^n}=\sum_{k=0}^{n/2-1}\frac{\frac{a_kx+b_k}{x-\alpha_{n-1-k}}\prod_{j\neq k}(x-\alpha_j)}{1+x^n}.$$ Furthermore $$1=\sum_{k=0}^{n/2-1}\frac{a_kx+b_k}{x-\alpha_{k}^{-1}}\prod_{j\neq k}(x-\alpha_j).$$ If we set $x=\alpha_k$ for $0\leq k\leq n/2-1$ we obtain $$\frac{a_k\alpha_k+b_k}{\alpha_k-\alpha_{k}^{-1}}\prod_{j\neq k}(\alpha_k-\alpha_j)=1.$$ Note that $$\prod_{k=1}^{n-1}(x-\exp(k2\pi i/n))=(1+x+\cdots+x^{n-1})$$ so $$\prod_{k=1}^{n-1}(1-\exp(k2\pi i/n))=n.$$ Furthermore $$\prod_{j\neq k}(\alpha_k-\alpha_j)=\prod_{j\neq k}\alpha_k(1-\frac{\alpha_j}{\alpha_k})=\alpha_k^{n-1}\prod_{k=1}^{n-1}(1-\exp(k2\pi i/n))=n\alpha_k^{n-1}=-n\alpha_k^{-1}.$$ So we are left with $$\frac{(a_k\alpha_k+b_k)(-n\alpha_k^{-1})}{\alpha_k-\alpha_{k}^{-1}}=1,$$ and $$-n(a_k+\alpha_k^{-1}b_k)=\alpha_k-\alpha_{k}^{-1}=2i\sin(x_k)$$ implying that $$a_k+\cos(x_k)b_k-i\sin(x_k)b_k=-\frac{2i}{n}\sin(x_k).$$ Hence $b_k=\frac{2}{n}$ and $a_k=-\frac{2}{n}\cos(x_k)$. So for even $n$ we have $$\frac{1}{1+x^n}=-\frac{1}{n}\sum_{k=0}^{n/2-1}\frac{2\cos(x_k)x-2}{x^2-2\cos(x_k)x+1}.$$ If $n$ is odd we have the additional term $$\frac{a}{1+x}$$ and it follows that $a\prod_{\alpha_k\neq -1}(x-\alpha_k)=a(1-x+\cdots-x^{n-2}+x^{n-1})=1$. Setting $x=-1$ we obtain $a=\frac{1}{n}$.
Noticing that $$\frac{2\cos(x_k)x-2}{x^2-2\cos(x_k)x+1}=\frac{\cos(x_k)(2x-2\cos(x_k))}{x^2-2\cos(x_k)x+1}+\frac{2\cos^2(x_k)-2}{(x-\cos(x_k))^2+1-\cos^2(x_k)}$$ $$=\frac{\cos(x_k)(2x-2\cos(x_k))}{x^2-2\cos(x_k)x+1}-2\frac{\sin^2(x_k)}{(x-\cos(x_k))^2+\sin^{2}(x_k)}$$ $$=\cos(x_k)\frac{(2x-2\cos(x_k))}{x^2-2\cos(x_k)x+1}-2\sin(x_k)\frac{\csc(x_k)}{(\frac{x-\cos(x_k)}{\sin(x_k)})^2+1}.$$ So we have for even $n$ $$\int\frac{1}{1+x^n}\ dx=-\frac{1}{n}\sum_{k=0}^{n/2-1}\left\{\cos(x_k)\int\frac{(2x-2\cos(x_k))}{x^2-2\cos(x_k)x+1}\ dx-2\sin(x_k)\int\frac{\csc(x_k)}{(\frac{x-\cos(x_k)}{\sin(x_k)})^2+1}\ dx\right\}$$ $$=-\frac{1}{n}\sum_{k=0}^{n/2-1}\left\{\cos(x_k)\log|x^2-2\cos(x_k)x+1|-2\sin(x_k)\arctan\left(\frac{x-\cos(x_k)}{\sin(x_k)}\right)\right\},$$ and for odd $n$ $$\int\frac{1}{1+x^n}\ dx=\frac{1}{n}\log|x+1|-\frac{1}{n}\sum_{k=0}^{(n-1)/2-1}\left\{\cos(x_k)\log|x^2-2\cos(x_k)x+1|-2\sin(x_k)\arctan\left(\frac{x-\cos(x_k)}{\sin(x_k)}\right)\right\}$$ where $x_k=(2k+1)\pi/n$, $n\in\mathbb{Z}_{>0}$. -
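As a numerical cross-check of the closed form $\int_0^\infty\frac{\mathrm{d}t}{1+t^{2n}}=\frac{\pi}{2n}\csc\left(\frac{\pi}{2n}\right)$ derived in the first answer, here is a short sketch (assuming SciPy is available) that compares adaptive quadrature against the formula for several values of $n$:

```python
import numpy as np
from scipy.integrate import quad

for n in range(1, 7):
    # Bind n as a default argument so each lambda keeps its own value.
    numeric, err = quad(lambda t, n=n: 1.0 / (1.0 + t**(2 * n)), 0.0, np.inf)
    closed = np.pi / (2 * n) / np.sin(np.pi / (2 * n))
    print(f"n={n}: quad={numeric:.12f}  (pi/2n)csc(pi/2n)={closed:.12f}  |err|<{err:.1e}")
```

For $n=1$ both columns give $\pi/2 \approx 1.5708$, and the agreement holds to quadrature accuracy for the larger $n$ as well.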
2016-07-28 18:49:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 3, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9698002338409424, "perplexity": 292.50419036083235}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257828313.74/warc/CC-MAIN-20160723071028-00054-ip-10-185-27-174.ec2.internal.warc.gz"}
https://www.jobilize.com/physics-ap/course/5-2-drag-forces-further-applications-of-newton-s-laws-friction-by-open?qcr=www.quizover.com&page=1
# 5.2 Drag forces  (Page 2/6)

Drag coefficient values

| Object | C |
| --- | --- |
| Airfoil | 0.05 |
| Toyota Camry | 0.28 |
| Ford Focus | 0.32 |
| Honda Civic | 0.36 |
| Ferrari Testarossa | 0.37 |
| Dodge Ram pickup | 0.43 |
| Sphere | 0.45 |
| Hummer H2 SUV | 0.64 |
| Skydiver (feet first) | 0.70 |
| Bicycle | 0.90 |
| Skydiver (horizontal) | 1.0 |
| Circular flat plate | 1.12 |

Substantial research is under way in the sporting world to minimize drag. The dimples on golf balls are being redesigned, as are the clothes that athletes wear. Bicycle racers and some swimmers and runners wear full bodysuits. Australian Cathy Freeman wore a full body suit in the 2000 Sydney Olympics, and won the gold medal for the 400 m race. Many swimmers in the 2008 Beijing Olympics wore (Speedo) body suits; it might have made a difference in breaking many world records (See [link] ). Most elite swimmers (and cyclists) shave their body hair. Such innovations can have the effect of slicing away milliseconds in a race, sometimes making the difference between a gold and a silver medal. One consequence is that careful and precise guidelines must be continuously developed to maintain the integrity of the sport.

Some interesting situations connected to Newton's second law occur when considering the effects of drag forces upon a moving object. For instance, consider a skydiver falling through air under the influence of gravity. The two forces acting on him are the force of gravity and the drag force (ignoring the buoyant force). The downward force of gravity remains constant regardless of the velocity at which the person is moving. However, as the person's velocity increases, the magnitude of the drag force increases until the magnitude of the drag force is equal to the gravitational force, thus producing a net force of zero. A zero net force means that there is no acceleration, as given by Newton's second law. At this point, the person's velocity remains constant and we say that the person has reached his terminal velocity ($v_t$). Since $F_{\text{D}}$ is proportional to the speed squared, a heavier skydiver must go faster for $F_{\text{D}}$ to equal his weight. Let's see how this works out more quantitatively.

At the terminal velocity,

$F_{\text{net}} = mg - F_{\text{D}} = ma = 0.$

Thus,

$mg = F_{\text{D}}.$

Using the equation for drag force, we have

$mg = \frac{1}{2}\rho C A v^2.$

Solving for the velocity, we obtain

$v = \sqrt{\frac{2mg}{\rho C A}}.$

Assume the density of air is $\rho = 1.21\ \text{kg/m}^3$. A 75-kg skydiver descending head first will have an area of approximately $A = 0.18\ \text{m}^2$ and a drag coefficient of approximately $C = 0.70$. We find that

$v = \sqrt{\frac{2(75\ \text{kg})(9.80\ \text{m/s}^2)}{(1.21\ \text{kg/m}^3)(0.70)(0.18\ \text{m}^2)}} = 98\ \text{m/s} = 350\ \text{km/h}.$

This means a skydiver with a mass of 75 kg achieves a maximum terminal velocity of about 350 km/h while traveling in a pike (head first) position, minimizing the area and his drag. In a spread-eagle position, that terminal velocity may decrease to about 200 km/h as the area increases. This terminal velocity becomes much smaller after the parachute opens.
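For readers who want to plug in other entries from the drag-coefficient table, here is a minimal Python sketch of the terminal-velocity formula above, evaluated for the head-first skydiver example from the text:

```python
import math

def terminal_velocity(m, C, A, rho=1.21, g=9.80):
    """Terminal speed from mg = (1/2) rho C A v^2, i.e. v = sqrt(2mg/(rho C A)).

    m   -- mass in kg
    C   -- drag coefficient (dimensionless, from the table above)
    A   -- frontal area in m^2
    rho -- air density in kg/m^3
    """
    return math.sqrt(2 * m * g / (rho * C * A))

v = terminal_velocity(m=75, C=0.70, A=0.18)   # head-first skydiver
print(f"{v:.0f} m/s  ({v * 3.6:.0f} km/h)")   # ~98 m/s, ~350 km/h as in the text
```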
2021-05-14 00:56:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 12, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7033973336219788, "perplexity": 1198.447211052124}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989616.38/warc/CC-MAIN-20210513234920-20210514024920-00606.warc.gz"}
http://equalstudio.co.uk/4rlqp6/optimal-stopping-continuous-time-785540
60 And Over Baseball League, Ikea Kallax Bench, Clear Flexible Epoxy, How To Install Full Frame Replacement Windows, Early Pregnancy Scan Cost, Barbra Streisand - Memory Movie, " /> 60 And Over Baseball League, Ikea Kallax Bench, Clear Flexible Epoxy, How To Install Full Frame Replacement Windows, Early Pregnancy Scan Cost, Barbra Streisand - Memory Movie, " /> Select Page 2 Dynamic programming is better for the stochastic case. Show that for complete and right-continuous filtration, and $\sigma = \tau$ a.s. with $\tau$ stopping time, then $\sigma$ stopping time 3 Where is the Strong Markov property(SM) being used in the proof that augmented filtration of a Strong Markov process is right continuous? Assuming that time is finite, the Bellman equation is I was also into math big time, so it was an exercise in time, money, miles per gallon, that sort of thing. It is shown rigorously here that, in the general case, for a strategy to be optimal, the “memory-integrated future marginal value” of the release must be constant. 4.1 Selling an Asset With and Without Recall. Have in mind that in most cases, hearing your laptop’s fan shouldn’t be a problem: it is usually … Online learning is an important property of adaptive dynamic programming (ADP). 3.5 Exercises. Optimal contracts are obtained in closed form. In business, enterprises need to maintain stable cash flows to meet the demands for payments in order to reduce the probability of possible bankruptcy. We will focus on the last two: 1 Optimal control can do everything economists need from calculus of variations. Optimal Stopping : In mathematics, the theory of optimal stopping or early stopping is concerned with the problem of choosing a time to take a particular action, in order to maximize an expected reward or minimize an expected cost. The Maximum Likelihood method is asymptotically and theoretically superior to other methods. The difference and connection between existing continuous-time planning models and recently proposed discrete-time planning models are studied. For any value of N, this probability increases as M does, up to a largest value, and then falls again. Chapter 4. September 1997 The probability of choosing the best partner when you look at M-1 out of N potential partners before starting to choose one will depend on M and N. We write P(M,N) to be the probability. A classical control problem for an isolated oversaturated intersection is revisited with a focus on the optimal control policy to minimize total delay. The stopping_wait_time setting is used by both Continuous and Triggered jobs. The next four lectures will be devoted to the foundational theorems of the theory of continuous time martingales. If your cat is meowing all of the time and just won't stop, there are a few things you can do to try to get them to quiet down a bit: Make sure that your cat is safe, that they have been fed, and that they can get out or use their litter tray if they need to and then ignore them. <3> Lemma. Optional split intervals and alarm sound. This thesis applies continuous-time stochastic techniques to problems in economics of information and financial economics. 4.3 Stopping a Sum With Negative Drift. 3.3 The Wald Equation. 2 Optimal Control. Get the latest machine learning methods with code. The optimal stopping time ˝is then de ned by <2> ˝:= minft: Z t= Y tg Case 2 ensures that EZ ˙^˝ EZ ˙ for all stopping times ˙taking values in T. It remains only to show that EZ ˝ EZ ˙^˝ for each stopping time ˙. 
In business, enterprises need to maintain stable cash flows to meet the demands for payments in order to reduce the probability of possible bankruptcy. We present a … Online observations contain plentiful dynamics information, and ADP algorithms can utilize them to learn the optimal control policy. The question is about the optimal strategy (stopping rule) to maximize the probability of selecting the best applicant. Before jumping in, keep a few considerations in mind. Under certain assumptions for the preference structure and asset price dynamics, Merton obtained a closed-form solution to the optimal asset allocation problem, which devised investing a constant proportion in a risky asset. You can see how the Continuous job runner employs this setting here in the source code.. Everyday interruptions at work can be a key barrier to managing your time effectively and, ultimately, can be a barrier to your success. 3.2 The Principle of Optimality and the Optimality Equation. We break down how to stop a runny nose the natural way, from antihistamine herbs to capsaicin. Demostration of the optimal stopping time problem. Chapter 3 Continuous-Time Optimal Control 3.1 Resource allocation as a bilinear control problem We consider a producer who produces with production rate y(t) at time t 2 [0;T];T > 0;.He allocates a certain fraction 0 • u(t) • 1 of the production to reinvestment and the … 3 Dynamic Programming. The first part of the thesis uses non-linear filtering and stochastic control theory to study a continuous-time model of optimal experimentation by a monopolist who faces an unknown demand curve subject to random changes. Even though dynamic programming [] was originally developed for systems with discrete types of decisions, it can be applied to continuous problems as well.In this article the application of dynamic programming to the solution of continuous time optimal control … Identification of time-continuous models from sampled data is a long standing topic of discussion, and many approaches have been suggested. I've uploaded a working sample demonstrating graceful shutdown here.In that sample I override the shutdown timeout to 60 seconds, and verify that my job function is able to perform 30 seconds of shutdown activity w/o being killed. The following first theorem shows that martingales behave in a very nice way with respect to stopping times.. Theorem (Doob’s stopping theorem) Let be a filtration defined on a probability space and let be a stochastic process … Inverse optimal control for deterministic continuous-time nonlinear systems Miles Johnson 1, Navid Aghasadeghi 2, and Timothy Bretl Abstract Inverse optimal control is the problem of comput-ing a cost function with respect to which observed state and input trajectories are optimal. An online adaptive optimal control is proposed for continuous-time nonlinear systems with completely unknown dynamics, which is achieved by developing a novel identifier-critic-based approximate dynamic programming algorithm with a dual neural network (NN) approximation structure. Some of them even do so for free. I remember now, I chose Milwaukee and Chicago instead of dropping down to KC … and I went directly down 95 rather than the loop to W Va. Jesœs FernÆndez-Villaverde (PENN) Optimization in Continuous Time November 9, … Some techniques used in earlier work are not applicable due to non-continuous dependence of an optimal stopping with respect to state sample paths. Browse our catalogue of tasks and access state-of-the-art solutions. 
Runny noses don't discriminate: everyone gets one from time to time.

With $Y$ as defined in <1> and $\tau$ as in <2>, the process …

"But the first studies published on this were in 1977, so it's something that's been done for a long time." All of these theorems are due to Joseph Doob.

Massé and Varlet found that the optimal strategy is the one that maintains the marginal value of the release constant in time whenever the reservoir is neither full nor empty.

Online stopwatch. This defines a stopping problem.

The longest period of time that researchers have officially followed women continuously taking the birth control pill is three years, says Black.

4.2 Stopping a Discounted Sum.

Small Stops and Slow Cycles: for most equipment it is impossible to manually track slow cycles and small stops.

… time as a kid … and it would have still been up for part of the early 70s when I was in college.

How to stop a laptop fan that's running non-stop? Now that we know what causes a laptop fan to start running, it's time to see what can be done to fix one that doesn't stop.

Markov Models.

Optimal stopping has found many applications in switching control, such as the introduction of a new product or the timing of investment in a large project.

Stop Time: the accuracy of manual unplanned stop time tracking is typically in the range of 60 to 80% (based on real-world experience across many companies). With automatic Run/Down detection, this accuracy can approach 100%.

Applications. Think back to your last workday, and consider for a minute the many interruptions that occurred.

Contribute to IsumiF/stopping-time development by creating an account on GitHub.

The Existence of Optimal Rules. 3.1 Regular Stopping Rules.

… measures with optimal cutpoints have been documented in the statistical and medical literature, and will be discussed throughout this report. The goal of this Technical Report is to consolidate the extant literature and describe in detail a unified strategy for finding optimal cutpoints with respect to binary and time …

In particular, Merton (1969, 1971) pioneered the use of stochastic optimal control theory to study an optimal asset allocation problem in a continuous-time economy.

Easy to use and accurate stopwatch with lap times and alarms. That could be a great deal, depending on your goals.

Standard secretary problem; variants of the secretary problem; sum the odds to one and stop (see the sketch after this passage). Nov 1: Stopping times in continuous time; Snell envelope; Doob–Meyer decomposition; ε-optimal stopping times; regular processes; smallest optimal stopping time; largest optimal stopping time.

An optimal stopping problem is a Markov decision process with two actions: one meaning to stop, and one meaning to continue.

3.4 Prophet Inequalities.

This article studies the contracting problem between an individual investor and a professional portfolio manager in a continuous-time principal-agent framework.
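"Sum the odds to one and stop" refers to Bruss's odds algorithm. Applied to the secretary problem, it recovers the classical cutoff rule, and it also gives a numerical check of the earlier claim that P(M, N) rises with M, peaks, and then falls. The sketch below is illustrative; the choice N = 20 is an assumption made here.

from fractions import Fraction

def p_best(m, n):
    # P(M, N): skip the first M-1 of N candidates, then accept the first
    # one that beats everything seen so far; probability it is the best.
    if m == 1:
        return Fraction(1, n)
    return Fraction(m - 1, n) * sum(Fraction(1, k - 1) for k in range(m, n + 1))

def odds_threshold(n):
    # Candidate k is a "relative best" with probability p_k = 1/k; the odds
    # are r_k = p_k / (1 - p_k) = 1/(k - 1). Sum the odds backwards from
    # k = n and stop once the running sum reaches 1.
    r_sum, s = Fraction(0), 2
    for k in range(n, 1, -1):
        r_sum += Fraction(1, k - 1)
        if r_sum >= 1:
            s = k
            break
    return s

n = 20
s = odds_threshold(n)
best_m = max(range(1, n + 1), key=lambda m: p_best(m, n))
# P(M, N) is unimodal in M; here its argmax coincides with the odds threshold.
print(f"odds threshold s = {s}, argmax_M P(M, {n}) = {best_m}, "
      f"P = {float(p_best(best_m, n)):.4f}")

For N = 20 this prints a threshold of 8 and a success probability of roughly 0.384, consistent with the familiar "skip about N/e candidates" rule.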
ECE7850 (Wei Zhang), Discrete-Time Optimal Control Problem.
• DT nonlinear control system: $x(t+1) = f(x(t), u(t))$, $x \in X$, $u \in U$, $t \in \mathbb{Z}_+$ (1)
• For a traditional system, $X \subseteq \mathbb{R}^n$ and $U \subseteq \mathbb{R}^m$ are continuous variables.
• A large class of DT hybrid systems can also be written in (or "viewed" as) the above form; switched systems: $U \subseteq \mathbb{R}^m \times Q$, with mixed continuous/discrete control input.

The optimal control problem can now be formulated: given the continuous-time system, the set $\mu \in \Psi(\Omega)$ of admissible control policies, and the infinite-horizon cost functional, find an admissible control policy such that the cost index associated with the system is minimized. (A value-iteration sketch for this kind of formulation follows below.)

Paid vs. Free Real-Time Stock Charts. A number of websites and platforms provide real-time stock charting capabilities for one-minute, five-minute, and other intraday charting time frames.

This paper reviews the research on online ADP algorithms for the optimal control of continuous-time systems.

Continuous-time Stochastic Control and Optimization with Financial Applications (Stochastic …), by Huyên Pham.

Here there are two types of costs.
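Returning to the discrete-time formulation above: the sketch below runs value iteration for a system $x(t+1) = f(x(t), u(t))$ with a discounted infinite-horizon cost. Everything concrete in it (the linear dynamics, quadratic stage cost, grids, and discount factor) is an assumption chosen for illustration, not material from the course notes.

import numpy as np

gamma = 0.95                               # discount factor (assumed)
xs = np.linspace(-2.0, 2.0, 81)            # discretized state grid X (assumed)
us = np.linspace(-1.0, 1.0, 21)            # discretized input grid U (assumed)

def f(x, u):
    return 0.9 * x + 0.5 * u               # assumed dynamics x(t+1) = f(x, u)

def cost(x, u):
    return x**2 + 0.1 * u**2               # assumed stage cost

def snap(v):
    # Map next states back onto the grid (nearest neighbor).
    return np.abs(xs[None, :] - v[:, None]).argmin(axis=1)

V = np.zeros_like(xs)
for _ in range(1000):                      # value iteration to a fixed point
    Q = np.empty((xs.size, us.size))
    for j, u in enumerate(us):
        nxt = snap(np.clip(f(xs, u), xs[0], xs[-1]))
        Q[:, j] = cost(xs, u) + gamma * V[nxt]
    V_new = Q.min(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

policy = us[Q.argmin(axis=1)]              # greedy policy from the converged Q
i0 = xs.size // 2                          # index of x = 0
print(f"V(0) ~ {V[i0]:.4f}, u*(0) ~ {policy[i0]:+.2f}")

The contraction induced by the discount factor is what makes this fixed-point iteration converge; finer grids trade runtime for accuracy.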
2021-08-04 08:12:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36172592639923096, "perplexity": 1934.8708723126294}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154798.45/warc/CC-MAIN-20210804080449-20210804110449-00163.warc.gz"}