url: string (14–2.42k characters)
text: string (100–1.02M characters)
date: string (19 characters)
metadata: string (1.06k–1.1k characters)
http://mathoverflow.net/questions/105482?sort=votes
## Can two different elliptic curves have rational points in common Can there be two different elliptic curves $E_{1}$ and $E_{2}$ and two different rational points $P_{1}$ and $P_{2}$ such that $P_{1}, P_{2} \in E_{1}$ and $P_{1}, P_{2} \in E_{2}$, but $P_{1} + P_{2}$ is a different point on $E_{1}$ and on $E_{2}$? If so, is it easy to find an example? Or, given two different rational points $P_{1}$ and $P_{2}$, is there a unique elliptic curve $E$ such that $P_{1}, P_{2} \in E$? Thank you - How many points determine an elliptic curve in the plane? That might give you a hint about your last question. – Mariano Suárez-Alvarez Aug 25 at 22:27 Where do these points live? In the projective plane? And by elliptic curve do you mean a curve in Weierstrass form? You should formulate the question more precisely. – Xarles Aug 25 at 22:29 @Mariano Suárez-Alvarez no idea, where could I find that info – Tomas Aug 25 at 22:46 Yes. Let $E_1$ be the curve defined by the equation $y^2=x(x-1)(x-2)$ and $E_2$ be the curve $y^2=x(x-1)(x-3)$. Let $P_1=(0,0)$ and $P_2=(1,0)$. These points are certainly on both curves. On $E_1$, $P_1+P_2=(2,0)$ but on $E_2$ it's $(3,0)$. (Assuming that the point at infinity is the identity.) In fact, there is a unique elliptic curve passing through any $9$ points in general position.
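To spell out the chord computation behind this example (a standard fact about $2$-torsion points, added here for clarity and not part of the original thread): the line through $P_1=(0,0)$ and $P_2=(1,0)$ is $y=0$, and a point with $y=0$ is $2$-torsion, hence its own negative, so $P_1+P_2$ is simply the third intersection $R$ of that line with the curve:

$$P_1+P_2=-R=R=\begin{cases}(2,0) & \text{on } E_1:\ y^2=x(x-1)(x-2),\\ (3,0) & \text{on } E_2:\ y^2=x(x-1)(x-3).\end{cases}$$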
2013-05-23 10:39:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8397136926651001, "perplexity": 280.77748147333836}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703227943/warc/CC-MAIN-20130516112027-00078-ip-10-60-113-184.ec2.internal.warc.gz"}
http://www.jasoncavett.com/blog/changing-the-look-of-nivo-sliders-captions/
Changing the Look of Nivo Slider’s Captions Nivo Slider brands itself “The World’s Most Awesome jQuery Image Slider” and, while I haven’t tried all the sliders out there, I certainly concede that it is very nice and well done. Following the instructions made it very easy to get it up and running (with a little bit of Googling to solve a stuttering error that I saw inside Google Chrome). I was using Nivo to move through images at the top of a page I was designing. Each image displayed a product or capability that my client wanted to offer to his customers. When I showed a demo of a site I was working on to the client, he decided that he wanted to see a banner associated with each image. That way, his customers would be able to easily know what the product was that they were being offered (and of course, being able to click on the image to follow to the product was necessary as well – but Nivo already offers that capability). After first looking to make sure Nivo didn’t offer this type of capability out of the box (it didn’t…at least not directly), I then began looking at how I could use Nivo Slider’s already existing features and modify them a bit. The solution, as it turns out, was quite simple. I started by downloading the Nivo default theme (this theme is provided with the default download). Follow the instructions here on how to use a particular theme. Once the default theme was in place, I then began looking at Nivo’s captions as this seemed like a great location to provide a banner without having to actually edit the images or create and use an entirely different plug-in. Nivo uses the CSS class nivo-caption to style its captions. With this in mind, it now becomes as simple as editing the CSS style to your respective look and feel. If you are editing the default theme, you first need to remove these blocks of CSS: Now that you have removed that block, then go ahead and add the following block of CSS. I gleaned some insight from the Pascal theme (which did not directly meet my needs, but did help me solve this problem) that is provided with Nivo. Comments added to explain what you can change to modify the banner for yourself. Of course, go ahead and tweak this even further to increase the height of the banner (or shrink it), make it stretch across the entire image (it’s fairly straightforward to have it stretch across the top of the page), or anything else that you wish to have it do.
2017-06-28 03:49:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21322058141231537, "perplexity": 1012.3360992656897}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128322320.8/warc/CC-MAIN-20170628032529-20170628052529-00625.warc.gz"}
https://www.r-bloggers.com/2021/03/rsqlite-concurrency-issues-solution-included/
Want to share your content on R-bloggers? click here if you have a blog, or here if you don't. SQLite is a great, full featured SQL database engine. Most likely it is used more than all other database engines combined. The RSQLite R package embeds SQLite, and lets you query and manipulate SQLite databases from R. It is used in Bioconductor data packages, many deployed Shiny apps, and several other packages and projects. In this post I show how to make it safer to use RSQLite concurrently, from multiple processes. Note that this is an oversimplified description of how SQLite works and I will not talk about different types of locks, WAL mode, etc. Please see the SQLite documentation for the details. ## TL;DR • Always set the SQLite busy timeout. • If you use Unix, update RSQLite to at least version 2.2.4. • Use IMMEDIATE write transactions. (You can make use of the dbWithWriteTransaction() function at the end of this post.) ## Concurrency in SQLite SQLite (and RSQLite) supports concurrent access to the same database, through multiple database connections, possibly from multiple processes. When multiple connections write to the database, SQLite, with your help, makes sure that the write operations are performed in a way that preserves the integrity of the database. SQLite makes sure that each query is atomic, and that the database file is never left in a corrupt state. Your job is to group the queries into transactions, so that the database is also kept consistent at the application level. ## The busy timeout SQLite uses locks to allow only one write transaction at a time. When a second connection is trying to write to the database, while another connection has locked it already, SQLite by default returns an error and aborts the second write operation. This default behavior is most often not acceptable, and you can do better. SQLite lets you set a busy timeout. If this timeout is set to a non-zero value, then the second connection will re-try the write operation several times, until it succeeds or the timeout expires. To set the busy timeout from RSQLite, you can set a PRAGMA : dbExecute(con, "PRAGMA busy_timeout = 10 * 1000") This is in milliseconds, and it is best to set it right after opening the connection. (You can also use the new sqliteSetBusyHandler() function to set the busy timeout.) Note that SQLite currently does not schedule concurrent transactions fairly. More precisely it does not schedule them at all. If multiple transactions are waiting on the same database, any one of them can be granted access next. Moreover, SQLite does not currently ensure that access is granted as soon as the database is available. Multiple connections might be waiting on the database, even if it is available. Make sure that you set the busy timeout to a high enough value for applications with high concurrency and many writes. It is fine to set it to several minutes, especially if you have made sure that your application does not have a deadlock (see later). ## The usleep() issue Unfortunately RSQLite version before 2.2.4 had an issue that prevented good concurrent (write) database performance on Unix. When a connection waits on a lock, it uses the usleep() C library function on Unix, but only if SQLite was compiled with the HAVE_USLEEP compile-time option. Previous RSQLite versions did not set this option, so SQLite fell back to using the sleep() C library function instead. sleep() , however can only take an integer number of seconds. 
Sleeping at least one second between retries is obviously very bad for performance, and it also reduces the number of retries before a certain busy timeout expires, resulting in many more errors. (Or you had to set the timeout to a very large value.) Several people experienced this over the years, and we also ran into it in the liteq package. Luckily, this time Iñaki Ucar was persistent enough to track down the issue. The solution is simple enough: turn on the HAVE_USLEEP option. (usleep() was not always available in the past, but nowadays it is, so we don't actually have to check for it.) If you have concurrency issues with RSQLite, please update to version 2.2.4 or later. ## Deadlocks Even after updating RSQLite and setting the busy timeout, you can still get "database is locked" errors. This is because in some situations, these errors are the only way to avoid a deadlock. When SQLite detects an unavoidable deadlock, it will not use the busy timeout, but cancels some transactions. By default SQLite transactions are DEFERRED, which means that they don't actually start with the BEGIN statement, but only with the first operation. If a transaction starts out with a read operation, SQLite assumes that it is a read transaction. If it performs a write operation later, then SQLite tries to upgrade it to a write transaction. Consider two concurrent DEFERRED transactions that both start out as read transactions, and then both upgrade to write transactions. One of them (say the first one) will be upgraded, but the second one will be denied with a busy error, as there can be only one write transaction at a time. We cannot keep the second transaction and retry it later, because the second connection already holds a read lock, and this would not let the first transaction commit its write operations. Neither transaction can continue unless the other is canceled, so SQLite will cancel the second and let the first one commit. When the second one is canceled, its busy timeout is simply ignored, as it does not make sense to retry it. (The first transaction can be re-tried, however, using the busy timeout.) One way to avoid deadlocks is to announce write transactions right when they start, with BEGIN IMMEDIATE. If all write transactions are immediate transactions, then no deadlock can occur. (Well, at least not at this level.) Immediate transactions slightly reduce the concurrency in your application, but often this is a good trade-off to avoid deadlocks. As far as I can tell there is no way to use immediate transactions in RSQLite with dbWithTransaction(), but you can create a helper function for it. It could look something like this:

#' @importFrom DBI dbExecute
dbWithWriteTransaction <- function(conn, code) {
  dbExecute(conn, "BEGIN IMMEDIATE")
  rollback <- function(e) {
    call <- dbExecute(conn, "ROLLBACK")
    if (identical(call, FALSE)) {
      stop(paste(
        "Failed to rollback transaction.",
        "Tried to roll back because an error occurred:",
        conditionMessage(e)
      ), call. = FALSE)
    }
    if (inherits(e, "error")) stop(e)
  }
  tryCatch(
    {
      res <- force(code)
      dbExecute(conn, "COMMIT")
      res
    },
    db_abort = rollback,
    error = rollback,
    interrupt = rollback
  )
}
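Since the busy timeout and IMMEDIATE transactions are SQLite features rather than anything RSQLite-specific, the same pattern can be illustrated with Python's built-in sqlite3 module. This is a minimal sketch of my own (file and table names are placeholders), not code from the post:

```python
import sqlite3

# isolation_level=None turns off Python's implicit transaction handling,
# so BEGIN/COMMIT/ROLLBACK can be issued explicitly, as in the R helper above.
conn = sqlite3.connect("example.db", isolation_level=None)

# The busy timeout discussed above: retry for up to 10 s instead of
# failing immediately with "database is locked".
conn.execute("PRAGMA busy_timeout = 10000")

def with_write_transaction(conn, work):
    """Run work(conn) inside an IMMEDIATE (write) transaction."""
    conn.execute("BEGIN IMMEDIATE")   # announce the write up front
    try:
        result = work(conn)
        conn.execute("COMMIT")
        return result
    except BaseException:
        conn.execute("ROLLBACK")
        raise

conn.execute("CREATE TABLE IF NOT EXISTS log (msg TEXT)")
with_write_transaction(conn, lambda c: c.execute("INSERT INTO log VALUES ('hello')"))
```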
2021-08-02 23:54:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35968872904777527, "perplexity": 2334.6507816228723}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154408.7/warc/CC-MAIN-20210802234539-20210803024539-00630.warc.gz"}
https://necromuralist.github.io/posts/201411installing-python-package-for-single/
# Installing a Python Package for a Single User Normally when installing a package that I'm working on I'm using a virtualenv so it's installed within that environment only, but this time I wanted to test part of my code that used ssh to run a command, and I didn't want to install that command system-wide. Creating a virtualenv for the test user and activating it before running the command via ssh seemed excessive (and maybe not possible - I didn't try), but it turns out that you can install packages at the user level using the 'setup.py' file. In this case I wanted the setup.py to create a command-line command called 'rotate' and install it in the user's ~/bin folder so I could run it like this: ssh test@localhost rotate 90 First I changed the .bashrc to add the bin folder to the PATH: PATH=$HOME/bin:$PATH This has to be added near the top of the .bashrc file because, by default, the first thing in that file is a conditional that makes non-interactive shells (such as the one started by an ssh command) return before reading the rest of it: # If not running interactively, don't do anything case $- in *i*) ;; *) return;; esac Next I changed into the directory where the package's setup.py file was and installed the package: python setup.py install --install-scripts=$HOME/bin --user The --user option is what tells Python to install the package for the local user instead of system-wide (the scripts would otherwise go to /usr/local/bin), and --install-scripts tells it where to put the commands it creates. Without the --install-scripts option it will install them in ~/.local/bin, so another option would be to change the PATH variable instead: PATH=$HOME/.local/bin:$PATH But I use ~/bin for other commands anyway, so it seemed to make more sense to put it there. OS: Ubuntu 14.04.1 LTS Python: 2.7.6
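For reference, here is a minimal sketch of the kind of setup.py that produces a rotate console command; the package, module and function names are illustrative placeholders, not taken from the original project:

```python
# setup.py -- minimal sketch; "rotator", "rotator.cli" and "main" are hypothetical names
from setuptools import setup, find_packages

setup(
    name="rotator",
    version="0.1.0",
    packages=find_packages(),
    entry_points={
        "console_scripts": [
            # installs an executable called `rotate` that calls rotator.cli:main
            "rotate = rotator.cli:main",
        ],
    },
)
```

With an entry point like this, python setup.py install --install-scripts=$HOME/bin --user drops the rotate script into ~/bin as described above.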
2018-04-19 19:33:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5201523303985596, "perplexity": 2226.14759563956}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125937016.16/warc/CC-MAIN-20180419184909-20180419204909-00566.warc.gz"}
http://www.ck12.org/geometry/Exterior-Angles-Theorems/exerciseint/Solve-for-the-Unknown-Remote-Interior-Angle/r1/
# Exterior Angles Theorems ## Exterior angles equal the sum of the remote interiors. Geometry Triangle Relationships Solve for the Unknown Remote Interior Angle Teacher Contributed The measure of an exterior angle of a triangle is $81^\circ$. The measures of the two remote interior angles are $42^\circ$ and $x^\circ$. What is the measure of the unknown angle? qid: 100161
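As a worked step (added here; the original page only states the problem), the exterior angle theorem in the heading gives the unknown angle directly:

$$x^\circ = 81^\circ - 42^\circ = 39^\circ$$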
2017-04-27 02:17:57
{"extraction_info": {"found_math": true, "script_math_tex": 3, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22266091406345367, "perplexity": 3966.0832654653573}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121778.66/warc/CC-MAIN-20170423031201-00418-ip-10-145-167-34.ec2.internal.warc.gz"}
http://astrobunny.net/2007/09/18/lucky-star-she-really-knocked-it/
I bumped my head on the way out of the car when I wanted to go for an eye check. Seriously. That reminds me. It's been a while since I started wearing the pair of glasses I'm wearing now. Perhaps it's time for a change? written by astrobunny
2020-08-10 02:37:22
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9941404461860657, "perplexity": 974.103674024335}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738603.37/warc/CC-MAIN-20200810012015-20200810042015-00487.warc.gz"}
https://math.stackexchange.com/questions/3013709/prove-that-there-exists-a-triangle-which-can-be-cut-into-2005-congruent-triangle
# Prove that there exists a triangle which can be cut into 2005 congruent triangles. I thought maybe we could start with a triangle and try to cut it similarly to how we create a Sierpinski triangle? However, the number of smaller triangles we get is a power of $$4$$, so it does not work. Any ideas? ## 2 Answers The decomposition is possible because $$2005 = 5\cdot 401$$ and both $$5$$ and $$401$$ are primes of the form $$4k+1$$. This allows $$2005$$ to be written as a sum of two squares. $$2005 = 22^2 + 39^2 = 18^2+41^2$$ For any integer $$n = p^2 + q^2$$ that can be written as a sum of two squares, consider a right-angled triangle $$ABC$$ with $$AB = p\sqrt{n}, AC = q\sqrt{n}\quad\text{ and }\quad BC = n$$ Let $$D$$ on $$BC$$ be the foot of the altitude from $$A$$. It is easy to see that $$\triangle DBA$$ and $$\triangle DAC$$ are similar to $$\triangle ABC$$ with $$BD = p^2, AD = pq\quad\text{ and }\quad CD = q^2$$ One can split $$\triangle DBA$$ into $$p^2$$ and $$\triangle DAC$$ into $$q^2$$ congruent triangles with sides $$p$$, $$q$$ and $$\sqrt{n}$$. As an example, the following is a subdivision of a triangle into $$13 = 2^2 + 3^2$$ congruent triangles. In the literature, this is known as a biquadratic tiling of a triangle. For more information about subdividing triangles into congruent triangles, look at answers in this MO post. In particular, the list of papers by Michael Beeson there. The construction described here is based on what I have learned from one of Michael's papers. Since $$2005$$ is a sum of two squares, such a triangle exists. In a contest setting, if you're trying to show $$2005$$ is a sum of two squares, realize that $$2005=401\cdot 5$$. Since 401 and 5 are sums of squares, we know that their product is also a sum of squares. Now in general if we have $$n=x^2+y^2$$, let our triangle be an $$x$$ by $$y$$ right triangle. Then, we can split this triangle along the altitude to the hypotenuse, which gives us two similar triangles with hypotenuses $$x$$ and $$y$$ respectively. Finally, we can decompose each of these into $$x^2$$ and $$y^2$$ similar right triangles with hypotenuses $$1$$, via a stretched version of the following picture. • I know this sum of squares is a sufficient condition, but I'm not sure if it's necessary. Any thoughts on this? As a particular example, would $3$ or $7$ work? – Isaac Browne Nov 26 '18 at 2:49 • The $30-60-90$ triangle can be divided into three triangles. Start by bisecting the 60 angle. – Empy2 Nov 26 '18 at 3:07 • Alright, so $3$ works. I also just thought of the construction of connecting vertices of an equilateral triangle to the center, which would create three congruent triangles. Any others? – Isaac Browne Nov 26 '18 at 3:11 • It seems Michael Beeson has proved that $7$ is impossible recently. This appears on his web site, not yet available on arXiv... – achille hui Nov 26 '18 at 4:00 • @achillehui: Gosh — a lovely paper, and very hot off the press! – Peter LeFanu Lumsdaine Nov 26 '18 at 10:21
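As a quick sanity check of the sum-of-two-squares representations used in the first answer, here is a short brute-force search (an illustration added here, not part of the original thread):

```python
# Find all ways to write n as p^2 + q^2 with 1 <= p <= q.
from math import isqrt

n = 2005
reps = [(p, q)
        for p in range(1, isqrt(n) + 1)
        for q in range(p, isqrt(n) + 1)
        if p * p + q * q == n]
print(reps)  # [(18, 41), (22, 39)]
```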
2019-05-19 15:35:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 35, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6613729000091553, "perplexity": 155.23574309320495}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232254889.43/warc/CC-MAIN-20190519141556-20190519163556-00552.warc.gz"}
https://en.wikipedia.org/wiki/Talk:Tf%E2%80%93idf
# Talk:tf–idf WikiProject Computer science (Rated C-class, Low-importance) This article is within the scope of WikiProject Computer science, a collaborative effort to improve the coverage of Computer science related articles on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks. C  This article has been rated as C-Class on the project's quality scale. Low  This article has been rated as Low-importance on the project's importance scale. ## Untitled/undated discussion Usually, the term frequency is just the count of a term in a document (NOT divided by the total number of terms in the document), which is confusing because it isn't really a frequency. I strongly agree, in all the technical papers I've been reading for my Internet services class at U.Washington, TF is the count, and so TF*IDF is biased (usually has higher values) for longer documents therefore needing to be normalized. ## lowercase Why title of the article is in lower case? Why not "TF-IDF"? --ajvol 15:29, 25 November 2006 (UTC) • I believe the short story of this is that tf-idf is a well known function in the literature and that is how it is referred. I know that in some cases it is used to help differentiate it from the uppercase variations that are sometimes used to refer to other equations. Josh Froelich 03:19, 11 December 2006 (UTC) In other papers I see the sign of multiplication TF*IDF, not minus. See, e.g. S. Robertson, Understanding inverse document frequency: on theoretical arguments for IDF. Journal of Documentation 60, 503-520, 2004. What do you think about renaming the article? --AKA MBG (talk) 14:18, 7 March 2008 (UTC) By far, the most common representation in the literature is lowercase with an asterisk or similar multiplication symbol. I agree that this article should be renamed if the Wikimedia allows * in article names. 13:20, 26 June 2009 (UTC) ## Example You could extract the most relevant terms from a version of a page of Wikipedia, perhaps this very one, as an example. --84.20.17.84 15:20, 15 March 2007 (UTC) This is a good idea. I'll do it tomorrow. —Preceding unsigned comment added by 67.170.95.139 (talk) 07:10, 9 December 2008 (UTC) I'd warn against using a Wikipedia article, since they change over time, which impedes reproducibility; it's better to choose a static document, such as a public domain text. If it explicitly references Wikipedia, it also runs afoul of Wikipedia:Avoid self-reference. Dcoetzee 09:29, 9 December 2008 (UTC) ## Text Data Clustering • I think we can also use tf-idf in text data clustering. I would like to know any Java source code on unstructured text data clustering based on tf-idf? —Preceding unsigned comment added by 125.53.215.245 (talk) 03:09, 12 September 2007 (UTC) ## Normalized frequencies The frequency of the terms isn't usually normalized by dividing it for the total length of each document. Instead, normalization is done by dividing for the frequency of the most used term in the document (as outlined in http://www.miislita.com/term-vector/term-vector-4.html). —Preceding unsigned comment added by 151.53.133.126 (talk) 18:59, 29 February 2008 (UTC) I've removed the normalization from the definition of tf following several discussions on mailing lists about tf implementations. The (unsourced!) variant previously described has sown a lot of confusion on the 'net. 
Qwertyus (talk) 12:46, 29 June 2011 (UTC) Also, with the information given in the example it is not possible to calculate the TF of the term "cow". It is wrong to say that TF(cow) is the frequency of the term cow (3) divided by the number of terms (100). —Preceding unsigned comment added by 201.53.230.107 (talk) 02:34, 13 October 2009 (UTC) I've added a banner to the section requesting expert help. The name "term frequency" probably isn't clear to outsiders, whether it should be a raw count or a normalised count. If someone from the text-retrieval community could simply clarify whether it is "normal" to normalise the value or not, that would improve this page! --mcld (talk) 11:14, 3 February 2012 (UTC) • I'm probably not the "expert" you're looking for, but either is a measure of term frequency. Normalization accounts, up to a point, for how term frequency tends to favor long documents, but pace comments above, a very simple measure of tf is still tf. I don't think there needs to be too much stress over this. Universaladdress (talk) 06:43, 15 March 2012 (UTC) I have edited the section to provide one particular formula, but have tried to emphasize that the given formula is not necessarily the definitive version. Hope this was helpful. 72.195.132.12 (talk) 05:03, 17 August 2012 (UTC) ## Logarithms Can someone please specify the logarithm bases correctly? Is that binary or base 10 log? —Preceding unsigned comment added by Godji (talkcontribs) 12:03, 20 March 2008 (UTC) it doesn't matter as long as they are all the same in your calculations 24.222.83.249 (talk) 23:42, 1 June 2008 (UTC) Remember that ${\displaystyle \log _{a}(x)=\log _{b}(x)/\log _{b}(a)}$. This means that converting between two logarithm bases is just multiplication by a constant. 72.195.132.12 (talk) 04:03, 17 August 2012 (UTC) ## Idf definition It maybe the issue of the Information Retrieval community as whole, but the definition of IDF is an intellectual insult for anybody with a reasonable natural sciences background. Saying that IDF (Inverse Document Frequency) = log ( 1 / document frequency ) should be prohibited! Maybe the place to fix this is wikipedia, since we can't fix IR textbooks and papers... —Preceding unsigned comment added by 194.58.241.106 (talk) 19:50, 14 June 2008 (UTC) That would be original research, which is not permitted. We should describe IDF as it is defined and used in the field of IR. If someone in the natural sciences has published something about why this is a poor choice for a formula, it could probably be given a couple sentences somewhere. Dcoetzee 09:41, 9 December 2008 (UTC) So what if someone is insulted. The job of the wikipedia is to convey information...if you feel insulted about something, go ask your mother for a hug. —Preceding unsigned comment added by 206.169.37.100 (talk) 20:16, 13 May 2009 (UTC) If there is a significant difference in the way the term is used across disciplines, a disambiguation page may be in order. That information may not need to be included in this article, however. Universaladdress (talk) 06:14, 12 August 2012 (UTC) ## Notation very confusing The index notation here is difficult to understand quickly because you use i for word and j for document. It would be much easier to read and grok if you used w, w', w" for words, and d, d', d" for documents. —Preceding unsigned comment added by 206.169.37.100 (talk) 20:12, 13 May 2009 (UTC) Why is there a ${\displaystyle \times }$ sign instead of ${\displaystyle \cdot }$ ? 
It is not a cross-product here: ${\displaystyle \mathrm {(tf{\mbox{-}}idf)_{i,j}} =\mathrm {tf_{i,j}} \times \mathrm {idf_{i}} }$ —Preceding unsigned comment added by 137.250.39.119 (talk) 12:13, 17 June 2009 (UTC) Enjoy!! —Preceding unsigned comment added by 207.46.55.31 (talk) 08:54, 3 May 2010 (UTC) ## Example is incorrect In the example the term frequency should the absolute count of term in the document, and shouldn't be divided by all term counts in the dictionary • Detailed discussion about this question above on the talk page. Expert help has been sought to clarify the article. Universaladdress (talk) 15:46, 29 April 2012 (UTC) ## corrupted text at Web mining Concerns the topic of this article. Appears after "When the length of the words in a document goes to". I'm deleting it; hopefully s.o. here can fix. — kwami (talk) 23:13, 17 January 2014 (UTC) Why are you reporting this here, instead of at Talk:Web mining? QVVERTYVS (hm?) 23:24, 17 January 2014 (UTC) ## Double Normalization K What does the "K" stand for in "Double Normalization K"? — Preceding unsigned comment added by 195.159.43.226 (talk) 15:06, 15 September 2015 (UTC) Looks like it's just a nameless constant that somebody decided to call K. I.e., it doesn't stand for anything. QVVERTYVS (hm?) 15:14, 15 September 2015 (UTC)
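To make the normalization question that recurs on this page concrete, here is a small illustrative computation (my own toy example, not taken from the article or the cited papers) comparing tf as a raw count with tf normalized by document length:

```python
import math

# Toy corpus: three "documents" as lists of terms.
docs = [
    ["the", "cow", "jumped", "over", "the", "moon"],
    ["the", "cat", "sat", "on", "the", "mat"],
    ["cow", "and", "cat", "are", "friends"],
]
term = "cow"

df = sum(term in d for d in docs)       # document frequency: 2
idf = math.log(len(docs) / df)          # any log base works; it only rescales

for d in docs:
    raw_tf = d.count(term)              # tf as a raw count
    norm_tf = raw_tf / len(d)           # tf divided by document length
    print(raw_tf * idf, norm_tf * idf)  # two common tf-idf variants
```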
2018-09-21 21:07:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 4, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7745033502578735, "perplexity": 2238.080181228437}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267157569.48/warc/CC-MAIN-20180921210113-20180921230513-00399.warc.gz"}
https://www.atmos-meas-tech.net/11/3177/2018/
Journal topic Atmos. Meas. Tech., 11, 3177–3196, 2018 https://doi.org/10.5194/amt-11-3177-2018 Atmos. Meas. Tech., 11, 3177–3196, 2018 https://doi.org/10.5194/amt-11-3177-2018 Research article 01 Jun 2018 Research article | 01 Jun 2018 Neural network cloud top pressure and height for MODIS Neural network cloud top pressure and height for MODIS Nina Håkansson1, Claudia Adok2, Anke Thoss1, Ronald Scheirer1, and Sara Hörnquist1 Nina Håkansson et al. • 1Swedish Meteorological and Hydrological Institute (SMHI), Norrköping, Sweden • 2Regional Cancer Center Western Sweden, Gothenburg, Sweden Correspondence: Nina Håkansson ([email protected]) Abstract Cloud top height retrieval from imager instruments is important for nowcasting and for satellite climate data records. A neural network approach for cloud top height retrieval from the imager instrument MODIS (Moderate Resolution Imaging Spectroradiometer) is presented. The neural networks are trained using cloud top layer pressure data from the CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization) dataset. Results are compared with two operational reference algorithms for cloud top height: the MODIS Collection 6 Level 2 height product and the cloud top temperature and height algorithm in the 2014 version of the NWC SAF (EUMETSAT (European Organization for the Exploitation of Meteorological Satellites) Satellite Application Facility on Support to Nowcasting and Very Short Range Forecasting) PPS (Polar Platform System). All three techniques are evaluated using both CALIOP and CPR (Cloud Profiling Radar for CloudSat (CLOUD SATellite)) height. Instruments like AVHRR (Advanced Very High Resolution Radiometer) and VIIRS (Visible Infrared Imaging Radiometer Suite) contain fewer channels useful for cloud top height retrievals than MODIS, therefore several different neural networks are investigated to test how infrared channel selection influences retrieval performance. Also a network with only channels available for the AVHRR1 instrument is trained and evaluated. To examine the contribution of different variables, networks with fewer variables are trained. It is shown that variables containing imager information for neighboring pixels are very important. The error distributions of the involved cloud top height algorithms are found to be non-Gaussian. Different descriptive statistic measures are presented and it is exemplified that bias and SD (standard deviation) can be misleading for non-Gaussian distributions. The median and mode are found to better describe the tendency of the error distributions and IQR (interquartile range) and MAE (mean absolute error) are found to give the most useful information of the spread of the errors. For all descriptive statistics presented MAE, IQR, RMSE (root mean square error), SD, mode, median, bias and percentage of absolute errors above 0.25, 0.5, 1 and 2 km the neural network perform better than the reference algorithms both validated with CALIOP and CPR (CloudSat). The neural networks using the brightness temperatures at 11 and 12 µm show at least 32 % (or 623 m) lower MAE compared to the two operational reference algorithms when validating with CALIOP height. Validation with CPR (CloudSat) height gives at least 25 % (or 430 m) reduction of MAE. 
1 Introduction The retrieval of cloud top temperature, pressure and height from imager data from polar orbiting satellites is used both as a vital product in global cloud climatologies and for nowcasting at high latitudes where data from geostationary satellites are either not available or not available in sufficient quality and spatial resolution. Cloud top height products from VIS/IR (visible/infrared) imagers are used in the analysis and early warning of thunderstorm development, for height assignment in aviation forecasts and in data assimilation of atmospheric motion vectors. The cloud top height can serve as input to mesoscale analysis and models for use in nowcasting in general, or as input to other satellite retrievals used in nowcasting (e.g., cloud micro physical properties retrieval, or cloud type retrieval). It is important that climatologists and forecasters have reliable and accurate cloud top height products from recent and past satellite measurements. There are different traditional techniques to retrieve cloud top height see for a presentation of 10 cloud top height retrieval algorithms applied to the SEVIRI (Spinning Enhanced Visible Infrared Imager). Several algorithms to retrieve cloud top height from polar orbiting satellites are available and used operationally for nowcasting purposes or in cloud climatologies. These include the CTTH (cloud top temperature and height) from the PPS (Polar Platform System) package , which is also used in the CLARA-A2 (Satellite Application Facility for Climate Monitoring (CM SAF), cloud, albedo and surface radiation dataset from EUMETSAT (European Organization for the Exploitation of Meteorological Satellites)) climate data record , ACHA (Algorithm Working Group (AWG) Cloud Height Algorithm) used in PATMOS-x (Pathfinder Atmospheres – Extended) , CC4CL (Community Cloud Retrieval for Climate) used in ESA (European Space Agency) Cloud_CCI (Cloud Climate Change Initiative) , MODIS (Moderate Resolution Imaging Spectroradiometer) Collection-6 algorithm and the ISCCP (International Satellite Cloud Climatology Project) algorithm . We will use both the MODIS Collection-6 (MODIS-C6) and the 2014 version CTTH from PPS (PPS-v2014) as references to evaluate the performance of neural network based cloud top height retrieval. The MODIS-C6 algorithm is developed for the MODIS instrument. The PPS, delivered by the NWC SAF (EUMETSAT Satellite Application Facility on Support to Nowcasting and Very Short Range Forecasting), is adapted to handle data from instruments such as AVHRR (Advanced Very High Resolution Radiometer), VIIRS (Visible Infrared Imaging Radiometer Suite) and MODIS. Artificial neural networks are widely used for non-linear regression problems; see , or for examples of neural network applications in atmospheric science. In CC4CL a neural network is used for the cloud detection . Artificial neural networks have also been used on MODIS data to retrieve cloud optical depth . The COCS (cirrus optical properties; derived from CALIOP and SEVIRI) algorithm uses artificial neural networks to retrieve cirrus cloud optical thickness and cloud top height for the SEVIRI instrument . Considering that neural networks in the mentioned examples have successfully derived cloud properties, and that cloud top height retrievals often include fitting of brightness temperatures to temperature profiles, a neural network can be expected to retrieve cloud top pressure for MODIS with some skill. 
One type of neural network is the multilayer perceptron described in which is a supervised learning technique. If the output for a certain input, when training the multilayer perceptron, is not equal to the target output an error signal is propagated back in the network and the weights of the network are adjusted resulting in a reduced overall error. This algorithm is called the back-propagation algorithm. In this study we will compare the performance of back-propagation neural network algorithms for retrieving cloud top height (NN-CTTH) with the CTTH algorithm from PPS version 2014 (PPS-v2014) and MODIS Collection 6 (MODIS-C6) algorithm. Several networks will be trained to estimate the contribution of different training variables to the overall result. The networks will be validated using both CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization) and CPR (Cloud Profiling Radar for CloudSat (CLOUD SATellite)) height data. In Sect. 2 the different datasets used are briefly described and in Sect. 3 the three algorithms are described. Results are presented and discussed in Sect. 4 and final conclusions are found in Sect. 5. 2 Instruments and data For this study we used data from the MODIS instrument on the polar orbiting satellite Aqua in the A-Train, as it is co-located with both CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations) and CloudSat at most latitudes and has multiple channels useful for cloud top height retrieval. 2.1 Aqua – MODIS The MODIS is a spectroradiometer with 36 channels covering the solar and thermal spectra. We are using level 1 data from the MODIS instrument on the polar orbiter Aqua. For this study the MYD021km and MYD03 for all orbits from 24 dates were used (the 1st and 14th of every month of 2010). The data were divided into four parts which were used for training, validation during training (used to decide when to quit training), testing under development (used to test different combinations of variables during prototyping) and final validation. The data contains many pixels that are almost identical, because a typical cloud is larger than one pixel. Therefore randomly dividing the data into four datasets is not possible as this would, in practice, give four identical datasets which would cause the network to over-train. See Table 1 for distribution of data. The MODIS Collection-6 climate data records produced by the National Aeronautics and Space Administration (NASA) Earth Observation System was used for comparison. The 1km cloud top height and cloud top pressure from the MYD06_L2-product for the dates in Table 1 were used. The satellite zenith angles for MODIS when matched with CALIOP vary between 0.04 and 19.08; when matched with CPR (CloudSat) they vary between 0.04 and 19.26. Table 1MODIS data from 2010 used for training and validation of the neural networks. 2.2 CALIOP The CALIOP lidar on the polar orbiting satellite CALIPSO is an active sensor and therefore more sensitive to particle conglomerates with low density than typical imagers. The horizontal pixel resolution is 0.07 km× 0.333 km, this means that when co-locating with MODIS one should remember that CALIOP samples only a small part of each MODIS pixel. The vertical resolution for CALIOP is 30 m. The viewing angle for CALIOP is 3. The CALIOP 1 km Cloud Layer product data were used (for the dates, see Table 1) as the truth to train the networks against, and for validation of the networks. 
The 1 km product was selected because the resolution is closest to the MODIS resolution. For training version 3 of the CALIOP data was used. The final validation was made with version 4 because of the more accurate cloud type information in the feature classification flag in version 4. 2.3 CPR (CloudSat) The CPR (CloudSat) is a radar which derives a vertical profile of cloud water. Its horizontal resolution is 1.4 km× 3.5 km, its vertical resolution is 0.5 km and the viewing angle is 0.16. The CPR (CloudSat) product 2B-GEOPROF-R05 was used as an additional source for independent validation of the networks, see Table 1 for selected dates. The validation with CPR (CloudSat) will have a lower percentage of low clouds compared to CALIOP because ground clutter is a problem for space-borne radar instruments. 2.4 Other data Numerical weather prediction (NWP) data are needed as input for the PPS-v2014 and the neural network algorithm. In this study the operational 91-level short-range archived forecasted NWP data from ECMWF (European Centre for Medium-range Weather Forecasting) were used. The analysis times 00:00 and 12:00 were used in combination with the following forecast times: 6, 9, 12 and 15 h. Under the period Integrated Forecast System (IFS) cycles 35r3, 36r1, 36r3 and 36r4 were operational. Ice maps (OSI-409 version 1.1) from OSISAF (Satellite Application Facility on Ocean and Sea Ice) were also used as input for the PPS cloud mask algorithm. 3 Algorithms 3.1 PPS-v2014 cloud top temperature and height The cloud top height algorithm in PPS-v2014, uses two different algorithms for cloud top height retrieval, one for pixels classified as opaque and another for semitransparent clouds. The reason for having two different algorithms is that the straight forward opaque algorithm can not be used for pixels with optically thin clouds like cirrus or broken cloud fields like cumulus. The signals for these pixels are a mixture of contributions from the cloud itself and underlying clouds and/or the surface. The algorithm uses a split-window technique to decide whether to apply the opaque or semitransparent retrieval. All pixels with a difference between the 11 and 12 µm brightness temperatures of more than 1.0 K are treated as semitransparent. This is a slight modification of the PPS version 2014 algorithm where the clouds classified as non-opaque by cloud type product are also considered semitransparent. The retrieval for opaque clouds matches the observed brightness temperatures at 11 µm against a temperature profile derived from a short term forecast or (re)analysis of a NWP model, adjusted for atmospheric absorption. The first match, going along the profile from the ground and upwards, gives the cloud top height and pressure. Temperatures colder or warmer than the profile are fitted to the coldest or warmest temperature of the profile below tropopause, respectively. The algorithm for semitransparent pixels uses a histogram method, based on the work of and , which fits a curve to the brightness temperature difference between the 11 and 12 µm bands as a function of 11 µm brightness temperatures for all pixels in a segment (32 × 32 pixels). One parameter of this fitting is the cloud top temperature. The solution is checked for quality (low root mean square error) and sanity (inside physically meaningful interval and not predicted too far from data). The solution is accepted if both tests are passed. The height and pressure are then retrieved from the temperature, in the same way as for opaque clouds. 
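As an illustration of the opaque retrieval idea described in Sect. 3.1, here is a heavily simplified sketch of my own (not the PPS code); it ignores the atmospheric absorption correction and the special handling of the tropopause:

```python
import numpy as np

def opaque_cloud_top_pressure(bt11, t_profile, p_profile):
    """Pressure of the first level, from the ground up, whose temperature matches bt11.

    bt11:      observed 11 um brightness temperature (K)
    t_profile: NWP temperatures (K), ordered from the surface upwards
    p_profile: corresponding pressures (hPa), decreasing from the surface
    """
    t = np.asarray(t_profile, dtype=float)
    p = np.asarray(p_profile, dtype=float)
    # Temperatures outside the profile range are clamped to the warmest/coldest level.
    bt11 = float(np.clip(bt11, t.min(), t.max()))
    for i in range(len(t) - 1):
        lo, hi = sorted((t[i], t[i + 1]))
        if lo <= bt11 <= hi:  # first crossing, going upwards from the ground
            w = 0.0 if t[i + 1] == t[i] else (bt11 - t[i]) / (t[i + 1] - t[i])
            return p[i] + w * (p[i + 1] - p[i])
    return p[-1]

print(opaque_cloud_top_pressure(263.0,
                                [288, 281, 272, 259, 241, 221],
                                [1000, 950, 850, 700, 500, 250]))  # ~746 hPa
```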
For more detail about the algorithms see SMHI (2015). PPS height uses the unit altitude above ground. For all comparisons this is transformed to height above mean sea level, using elevations given in the CPR (CloudSat) or CALIOP datasets. 3.2 MODIS Collection 6 Aqua Cloud Top Properties product In MODIS Collection 6 the CO2 slicing method (described in Menzel et al.2008) is used to retrieve cloud top pressure using the 13 and 14 µm channels for ice clouds (as determined from MODIS phase algorithm). For low level clouds the 11 µm channel and the IR-window approach (IRW) with a latitude dependent lapse rate is used over ocean . Over land the 11 µm temperature is fitted against an 11 µm temperature profile calculated from GDAS (Global Data Assimilation System) temperature, water vapor and ozone profiles and the PFAAST (Pressure-Layer Fast Algorithm for Atmospheric Transmittance) radiative transfer model is used for low clouds . For more details about the updates in Collection 6 see . Cloud pressure is converted to temperature and height using the National Centers for Environmental Prediction Global Data Assimilation System . 3.3 Neural network cloud top temperature and height NN-CTTH Neural networks are trained using MODIS data co-located with CALIOP data. Nearest neighbor matching was used with the pyresample package in the PyTroll project . The Aqua and CALIPSO satellites are both part of the A-Train and the matched FOV (field of view) are close in time (only 75 s apart). The uppermost top layer pressure variable, for both multi- and single-layer clouds, from CALIOP data was used as training truth. Temperature and height for the retrieved cloud top pressure are extracted using NWP data. Pressure predicted higher than surface pressure are set to surface pressure. For pressures lower than 70 hPa neither height nor temperature values are extracted. The amount of pixels with pressure lower than 70 hPa varies between 0 and 0.05 % for the networks. 3.3.1 Neural network variables To reduce sun zenith angle dependence and to have the same algorithm for all illumination conditions only infrared channels were used to train the neural networks. Several different types of variables were consequently used to train the network; the most basic variables were the NWP temperatures at the following pressure levels: surface, 950, 850, 700, 500 and 250 hPa. This together with the 11 or 12 µm brightness temperature (B11 or B12) gave the network what was needed to make a radiance fitting to retrieve cloud top pressure for opaque clouds, although with very coarse vertical resolution in the NWP data. For opaque clouds that are geometrically thin, with little or no water vapor above the cloud, the 11 and 12 µm brightness temperatures will be the same as the cloud top temperature. If the predicted NWP temperatures are correct the neural network could fit the 11 µm brightness temperature to the NWP temperatures and retrieve the cloud pressure (similar to what is done in PPS-v2014 and MODIS-C6). For cases without inversions in the temperature profile, the retrieved cloud top pressure should be accurate. The cases with inversions are more difficult to fit correctly, as multiple solutions exist and the temperature inversion might not be accurately captured with regards to its strength and height in the NWP data. For semitransparent clouds the network needs more variables to make a correct retrieval. 
To give the network information on the opacity of the pixel, brightness temperature difference variables were included (B11B12, B11B3.7, B8.5B11). Texture variables with the standard deviation of brightness temperature, or brightness temperature difference, for 5 × 5 pixels were included. These contain information about whether pixels with large B11B12 are more likely to be semitransparent or more likely to be fractional or cloud edges. As described in Sect. 3.1, PPS-v2014 uses B11B12 and B11 of the neighboring pixels to retrieve temperatures for semitransparent clouds. In order to feed the network with some of this information the neighboring warmest and coldest pixels in B11 in a 5 × 5 pixel neighborhood were identified. Variables using the brightness temperature at these warmest and coldest pixels were calculated, for example the 12 µm brightness temperature for the coldest pixel minus the same for the current pixel: ${B}_{\mathrm{12}}^{\text{C}}-{B}_{\mathrm{12}}$, see Table 2 for more information about what variables were calculated. Table 2Description of variable types used to train the neural networks. Table 3Description of the different networks. See Table 2 for explanation of the variables. The NWP variables: PS, TS, T950, T850, T700, T500, T250 are used in all networks. Table 4Description of the imager channels used for the different algorithms. For MODIS-C6 channels used indirectly, to determine if CO2-slicing should be applied, are noted with brackets. The surface pressure was also included, which provides the network with a value for the maximum reasonable pressure. Also the brightness temperature for the CO2 channel at 13.3 µm and the water vapor channels at 6.7 and 7.3 µm were included as variables. The CO2 channel at 13.3 µm is used in the CO2 slicing method of MODIS-C6 and should improve the cloud top height retrieval for high clouds. The instruments AVHRR, VIIRS, MERSI-2 (Medium Resolution Spectral Imager-2), MetImage (Meteorological Imager) and MODIS all have different selections of IR channels; however most of them have the 11 and 12 µm channels. The first AVHRR instrument, AVHRR1, only had two IR channels at 11 and 3.7 µm and no channel at 12 µm. Networks were trained using combinations of MODIS IR-channels corresponding to the channels available for the other instruments. See Table 3 for specifications of the networks trained. Table 4 gives an overview of what imager channels were used for which network. To see how much the different variable types contribute to the result, some basic networks were trained using less or no imager data. These are also described in Table 3. Also one network using only NWP data was included as a sanity check. For this network we expect bad results. However good results for this network would indicate that height information retrieved was already available in the NWP data. 3.3.2 Training For the training 1.5 million pixels were used, with the distribution 50 % low clouds, 25 % medium level clouds and 25 % high clouds. A higher percentage of low clouds was included because the mean square error (MSE) is often much higher for high clouds. Previous tests showed that fewer low clouds caused the network to focus too much on predicting the high clouds correctly and showed degraded results for low clouds. For the validation dataset used during training 375 000 pixels were randomly selected with the same low/medium/high distribution as for the training data. 
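The following is a rough sketch (my own illustration, not the authors' code) of how the 5 × 5 neighbourhood variables of Sect. 3.3.1 could be computed with numpy/scipy; note that the paper also samples other channels at the location of the warmest/coldest 11 µm pixel, which is simplified away here:

```python
import numpy as np
from scipy.ndimage import uniform_filter, maximum_filter, minimum_filter

def neighbourhood_features(b11, b12, size=5):
    """Texture and warmest/coldest-neighbour variables over size x size windows."""
    diff = b11 - b12

    def local_std(x):
        # Standard deviation via E[x^2] - E[x]^2 over the moving window.
        mean = uniform_filter(x, size)
        mean_sq = uniform_filter(x * x, size)
        return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

    return {
        "std_b11": local_std(b11),                       # texture of B11
        "std_b11_b12": local_std(diff),                  # texture of B11 - B12
        "b11_warmest_minus_b11": maximum_filter(b11, size) - b11,
        "b11_coldest_minus_b11": minimum_filter(b11, size) - b11,
    }

rng = np.random.default_rng(0)
b11 = 250.0 + 5.0 * rng.standard_normal((10, 10))
b12 = b11 - rng.uniform(0.0, 2.0, size=(10, 10))
print({k: v.shape for k, v in neighbourhood_features(b11, b12).items()})
```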
The machine learning module scikit-learn , the Keras package , the Theano backend and the Python programming language were used for training the network. 3.3.3 Parameters and configurations During training of the network the MSE was used as the loss function that is minimized during training. The data were standardized by subtracting the mean and dividing with the standard deviation before training. Choosing the number of hidden neurons and hidden layers of the neural network is also important for the training to be effective. Too few hidden neurons will result in under-fitting. We used two hidden layers with 30 neurons in the first layer and 15 neurons in the second. The initialization of weights before training the network is important for the neural network to learn faster. There are many different weight initialization methods for training the networks; however in this case the Glorot uniform weight initialization was used. The activation function used for the hidden layers was the tangent hyperbolic (see Karlik and Olgac2011) and for the output layer a linear activation function was used. To determine the changes in the weights an optimization method was used during the back-propagation algorithm. The optimization method used for the multilayer perceptron is mini-batch stochastic gradient descent which performs mini-batch training. A mini-batch is a sample of observations in the data. Several observations are used to update weights and biases, which is different from the traditional stochastic gradient descent where one observation at a time is used for the updates . Having an optimal mini-batch size is important for the training of a neural network because overly large batches can cause the network to take a long time to converge; we used a mini-batch size of 250. When training the neural network there are different learning parameters that need to be tuned to ensure an effective training procedure. During prototyping several different combinations were tested. The learning rate is a parameter that determines the size of change in the weights. On one hand, a learning rate that is too high will result in large weight changes and can result in an unstable model . On the other hand, if a learning rate is too low the training time of the network will be long; we used a learning rate of 0.01. The momentum is a parameter which adds a part of the weight change to the current weight change, using momentum can help avoid the network getting trapped in local minima . A high value of momentum speeds up the training of the network; we had a momentum of 0.9. The parameter learning rate decay, set to 10−6, in Keras, is used to decrease the learning rate after each update as the training progresses. To avoid the neural network from over-fitting (which makes the network extra sensitive to unseen data), a method called early stopping was used. In early stopping the validation error is monitored during training to prevent the network from over-fitting. If the validation error is not improved for some (we used 10) epochs training is stopped. The network for which the validation error was at its lowest is then used. The neural networks were trained for a maximum of 2650 epochs, but the early stopping method caused the training to stop much earlier. Figure 1Scatter plots of the pressure for the neural networks and for the reference methods against CALIOP cloud top pressure. The data were divided in 10 × 10 (hPa) bins for color coding. The number of points in each bin determines the color of the point. 
Figure 1 Scatter plots of the pressure for the neural networks and for the reference methods against CALIOP cloud top pressure. The data were divided into 10 × 10 hPa bins for color coding; the number of points in each bin determines the color of the point. The final validation dataset (see Table 1) where all algorithms had a height reported is used.

Figure 2 Scatter plots of the height for the neural networks and for the reference methods against CPR (CloudSat) height. The data were divided into 0.25 × 0.25 km bins for color coding; the number of points in each bin determines the color of the point. The final validation dataset (see Table 1) where all algorithms had a height reported is used. Two points where CPR (CloudSat) had a height above 22 km were excluded. A cloudy threshold of 30 % is used for CPR (CloudSat).

4 Results and discussion

The validation data were matched with CALIOP layer top pressure and layer top altitude or CPR (CloudSat) height using nearest neighbor matching, in the same way as the training data were matched. The CPR (CloudSat) data include fewer clouds, as both some very low clouds and some very thin clouds are not detected by the radar. CPR (CloudSat) is included to strengthen the results: there is always a risk that a neural network approach learns to replicate the errors of the training truth, but if results are also improved when validated with an independent truth, this ensures that it is not only the errors that are learnt. A cloudy threshold of 30 % is used for CPR (CloudSat) to include only strong detections. The coarser vertical resolution for CPR (CloudSat) of 500 m, compared to 15 m for CALIOP, means that the MAE is expected to be higher than 250 m.

Figure 3 Comparison of the cloud top height from the NN-AVHRR (a) to PPS-v2014 (c), with an RGB in the middle (b) using channels at 3.7, 11 and 12 µm. Notice that the NN-AVHRR is smoother, contains less missing data (black) and that the small high ice clouds in the lower part of the figure are better captured. This is from MODIS on Aqua, 14 January 2010, 00:05 UTC.

The scatter plots in Fig. 1 show how the cloud top pressure retrievals of the neural networks and the reference methods are distributed compared to CALIOP. Figure 2 shows the same type of scatter plots for cloud top height with CPR (CloudSat) as truth. These scatter plots show that all neural networks have a similar appearance, with most of the data retrieved close to the truth. All methods (NN-CTTH, PPS-v2014 and MODIS-C6) retrieve some heights and pressures that are very far from the true values of CPR (CloudSat) or CALIOP. It is important to remember that some of these seemingly bad results are due to the different FOVs of the MODIS and the CALIOP or CPR (CloudSat) sensors.

Figure 3 compares the NN-AVHRR and PPS-v2014 for one scene. For semitransparent clouds PPS-v2014 retrieves the same result for 32 × 32 pixels, which can be seen as blue squares in Fig. 3c. We also observe that a lot of high clouds are placed higher by NN-AVHRR (pixels that are blue in panel (c) and white in panel (a)). For NN-AVHRR in panel (a) we can see that the large area with low clouds in the lower left corner gets a consistent cloud top height (the same orange color everywhere). Note that the NN-AVHRR has a less noisy appearance and has less missing data.

4.1 Validation with CALIOP top layer pressure

First we consider the performance of all the trained networks validated with the uppermost CALIOP top layer pressure in terms of mean absolute error (MAE). Results in Table 5 show that both PPS-v2014 and MODIS-C6 have a MAE close to 120 hPa. Notice that the network using only the NWP information and no imager channels (NN-NWP) shows high MAE.
This was included as a sanity check to see that the neural networks are mainly using the satellite data, and the high MAE for NN-NWP supports this. The NN-OPAQUE network, using only B12 and the basic NWP data, has a 9 hPa improvement in MAE compared to the reference algorithms. By including the variable B11B12, the MAE improves by an additional 19 hPa, because B11B12 contains information about the semitransparency of the pixel. Adding the NWP variable Ciwv, which allows the network to attempt to predict the expected values of B11B12, has a smaller effect of 2 hPa on MAE. However, adding all variables containing information on neighboring pixels improves the result by an additional 20 hPa. The NN-AVHRR network, using 11 and 12 µm from MODIS, provides an MAE which is reduced by about 50 hPa compared to both MODIS-C6 and PPS-v2014. Notice also that the scores improve for all categories (low, medium and high) when compared with both PPS-v2014 and MODIS-C6. The inclusion of the neighboring pixels accounts for almost 40 % of the improvement. Note that for medium level clouds NN-BASIC-CIWV, without information from neighboring pixels, has higher MAE compared to PPS-v2014.

Figure 4 Retrieved pressure dependence on satellite zenith angle. The CALIOP pressure distribution is shown in light blue. The percentages of cloud top pressure results are calculated in 50 hPa bins. The final validation dataset is used (see Table 1).

Table 5 Mean absolute error (MAE) for different algorithms compared to CALIOP top layer pressure. The final validation dataset (see Table 1), containing 1 832 432 pixels (45 % high, 39 % low and 16 % medium level clouds), is used. Pixels with valid pressure for PPS-v2014, MODIS-C6 and CALIOP are considered. The low, medium and high classes are from the CALIOP feature classification flag.

Adding more IR channels further improves the results. Adding the channel at 8.5 µm (B8.5B11, NN-VIIRS) improves MAE by 7 hPa and adding 7.3 µm (B7.3, NN-MERSI-2) improves MAE by 5 hPa. Including the other water vapor channel at 6.7 µm (B6.7, NN-MetImage-NoCO2) only improves MAE by 1 hPa. The CO2 channel at 13.3 µm (B13.3, NN-MetImage) improves the MAE by an additional 6 hPa. The NN-AVHRR1 network trained using 3.7 and 11 µm (MAE 76.1 hPa) is a little worse than NN-AVHRR (MAE 72.4 hPa). Note that B3.7 has a solar component which is currently not treated in any way. If B3.7 were corrected for the solar component, by the network or in a preparation step, the results for AVHRR1 might improve. Also NN-AVHRR1 shows better scores for all categories (low, medium and high) compared to PPS-v2014 and MODIS-C6.

Figure 5 Error distribution compared to CPR (CloudSat) (a, c, e) and CALIOP (b, d, f) with biases and medians marked. The percentage of data is calculated in 0.1 km bins. The final validation dataset (see Table 1) where all algorithms had a height reported is used. Note that the values on the y axis are dependent on the bin size: the peak at 6 % for NN-AVHRR in (f) means that 6 % of the retrieved heights are between the CALIOP height and the CALIOP height + 0.1 km. The Gaussian distribution with the same bias and standard deviation is shown in grey.

The training with CALIOP using only MODIS from Aqua includes only near-nadir observations, with all satellite zenith angles for MODIS below 20°. Figure 4 shows that the NN-AVHRR and NN-AVHRR1 networks perform robustly also for higher satellite zenith angles. The NN-VIIRS and NN-MetImage-NoCO2 results deviate for satellite zenith angles larger than 60°.
The NN-MERSI-2 results deviate for satellite zenith angles larger than 40°. The NN-MetImage retrieval already shows deviations above 20° satellite zenith angle, and for satellite zenith angles larger than 40° the retrieval has no predictive skill. Notice that the distribution for MODIS-C6 also depends on the satellite zenith angle (with fewer high clouds at higher angles). For PPS-v2014, in comparison, fewer low clouds are found at higher satellite zenith angles. The neural networks (NN-AVHRR, NN-AVHRR1, NN-VIIRS and NN-MetImage-NoCO2) can reproduce the bi-modal cloud top pressure distribution similar to CALIOP, while PPS-v2014 deviates from this shape with one peak for mid-level clouds.

Table 6 Statistical measures for the error distributions for all clouds. For all measures except skewness, values closer to zero are better. The statistics are calculated for 1 198 599 matches for CPR (CloudSat) and 1 803 335 matches for CALIOP. A small amount (0.2 %) of the matches was excluded because of missing height or pressure below 70 hPa for any of the algorithms. PEX describes the percentage of absolute errors above X km, see Eq. (1). * Interpret bias and SD with caution as distributions are non-Gaussian; the bias is not located at the center of the distribution.

4.2 Discussion of statistical measures for non-Gaussian error distributions

For pressure we chose a single measure, MAE, to describe the error; however, which (and how many) measures are needed to adequately describe the error distribution needs to be discussed. For a Gaussian error distribution the obvious choices are bias and SD (standard deviation), as the Gaussian error distribution is completely determined by bias and SD and all other important measures could be derived from them. Unfortunately the error distributions considered here are non-Gaussian. This is expected: apart from the errors of the algorithm and the errors due to different FOV, we expect the lidar to detect some thin cloud layers not visible to the imager. These thin layers, not detected by the imager, should result in underestimated cloud top heights.

In Fig. 5 the error distributions for MODIS-C6, PPS-v2014 and NN-AVHRR are shown. The Gaussian error distributions with the same bias and SD are plotted in grey. It is clear that the bias is not at the center (the peak) of the distribution. The median is not at the center either, but closer to it. For validation with CALIOP we can see the expected negative bias for all algorithms, and for all cases we can also observe that assuming a Gaussian distribution underestimates the amount of small errors.

Results compared to CALIOP top layer height and CPR (CloudSat) height are provided for the best performing networks in Table 6 (i.e., NN-OPAQUE, NN-BASIC and NN-BASIC-CIWV were excluded). The skewness shows that the distributions are skewed and non-Gaussian. The mode is calculated using the half-range method to robustly estimate the mode from the sample (for more information see Bickel, 2002). The bias should be interpreted with caution: consider PPS-v2014 compared to CALIOP (Table 6); if we add 1465 m to all retrievals, creating a "corrected" retrieval, we would have an error distribution with the same SD and zero bias, but the center (peak) of the distribution would not be closer to zero. The PE1 (percentage of absolute errors above 1 km, see Eq. 1) for this "corrected" retrieval would increase from 54 to 73 %! For the user this is clearly not an improvement.
The general overestimation of cloud top heights of this "corrected" retrieval would, however, be detected by the median and the mode, which would be further away from zero but now on the positive side. This example illustrates the risk of misinterpretation of the bias for non-Gaussian error distributions.

Several different measures of variation are presented in Table 6: MAE, IQR (interquartile range), SD and RMSE. The measures have different benefits: the IQR is robust against outliers, while RMSE and SD focus on the worst retrievals as errors are squared. Considering that it is likely not interesting whether useless retrievals with large errors are 10 km off or 15 km off, in combination with the fact that some large errors are expected due to different FOV and different instrument sensitivities, the MAE and IQR provide more interesting measures of variation than the SD and the RMSE. In the example discussed above, the MAE for the "corrected" retrieval would change by only 10 m, but the RMSE (root mean square error) would improve by 356 m, indicating a much better algorithm, when in fact it is a degraded algorithm. If the largest errors are considered very important, RMSE is preferred over SD for skewed distributions, especially if bias is also presented. This is owing to the fact that RMSE and bias have a smaller risk of being misinterpreted by the reader as a Gaussian error distribution.

Table 7 Statistical measures for the error distributions for low level clouds. For all measures except skewness, values closer to zero are better. The statistics are calculated for 328 015 matches for CPR (CloudSat) and 709 434 matches for CALIOP. The low class comes from the CALIOP feature classification flag (classes 0, 1, 2 and 3) and for CPR (CloudSat) it is the pixels with heights lower than or exactly at the NWP height at 680 hPa. PEX describes the percentage of absolute errors above X km, see Eq. (1). * Interpret bias and SD with caution as distributions are non-Gaussian; the bias is not located at the center of the distribution.

Figure 6 Error distribution compared to CPR (CloudSat) (a, c, e) and CALIOP (b, d, f). The percentage of data is calculated in 0.1 km bins. For CALIOP the low, medium and high clouds are determined from the CALIOP feature classification flag. For CPR (CloudSat) the low, medium and high clouds are determined from the CPR (CloudSat) height compared to the NWP geopotential height at 440 and 680 hPa. The final validation dataset (see Table 1) where all algorithms had a height reported is used. Note that the values on the y axis are dependent on the bin size: the peak at 11 % for NN-AVHRR in (f) means that 11 % of the retrieved heights are between the CALIOP height and the CALIOP height + 0.1 km.

For low level clouds we have even stronger reasons to expect skewed distributions, as there is always a limit (the ground) to how far cloud top heights can be underestimated, and Table 7 shows that the skewness is large for low level clouds. The bias for low level clouds is difficult to interpret, as it is the combination of the main part of the error distribution located close to zero and the large positive errors (which are to some extent expected due to different FOVs). In Fig. 6e and f the error distributions for MODIS-C6, PPS-v2014 and NN-AVHRR for low level clouds are shown. We can see, in Fig. 6, that the NN-AVHRR less often underestimates the cloud top height for low level clouds, which partly explains the higher bias for NN-AVHRR.
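To make these measures concrete before continuing, the statistics reported in Tables 6–10 could be computed from a vector of signed height errors (retrieval minus truth) along the following lines. This is a sketch only: the half-range mode is a simplified variant of the estimator discussed by Bickel (2002), the sample skewness is the plain moment-based estimate, and PEX follows the definition of the percentage of absolute errors above X km referenced as Eq. (1).

```python
import numpy as np

def half_range_mode(x):
    """Half-range mode: repeatedly keep the half-range window containing the most samples
    (a simplified variant of Bickel, 2002)."""
    x = np.sort(np.asarray(x, dtype=float))
    while x.size > 2:
        half = (x[-1] - x[0]) / 2.0
        if half == 0:
            break
        # Number of samples in every window [x[i], x[i] + half]
        counts = np.searchsorted(x, x + half, side="right") - np.arange(x.size)
        best = np.argmax(counts)
        x = x[best:best + counts[best]]
    return x.mean()

def error_statistics(err, thresholds=(0.25, 0.5, 1.0, 2.0)):
    """Summary statistics for signed height errors in km (retrieval minus truth)."""
    err = np.asarray(err, dtype=float)
    stats = {
        "bias": err.mean(),
        "SD": err.std(),
        "RMSE": np.sqrt(np.mean(err**2)),
        "MAE": np.abs(err).mean(),
        "median": np.median(err),
        "IQR": np.percentile(err, 75) - np.percentile(err, 25),
        "skewness": np.mean((err - err.mean())**3) / err.std()**3,
        "mode": half_range_mode(err),
    }
    # PE_X: percentage of absolute errors above X km
    for x in thresholds:
        stats[f"PE{x}"] = 100.0 * np.mean(np.abs(err) > x)
    return stats
```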
To exemplify the problem with bias and SD for skewed distributions, consider PPS-v2014 and NN-AVHRR validated with CPR (CloudSat) in Table 7 and, for the sake of argument, let us falsely assume a Gaussian error distribution. Under this assumption the PPS-v2014, with a 232 m better bias and only 24 m worse SD, is clearly the better algorithm. The PE2 and RMSE are very similar between the two algorithms; however, all other measures (MAE, IQR, PE0.25, PE0.5, PE1, median and mode) indicate that NN-AVHRR is the better algorithm. It is also clear in Fig. 6e that the NN-AVHRR has the highest and best centered distribution, contrary to what was indicated by the bias and SD under a false assumption of a Gaussian error distribution.

One explanation for the low bias for PPS-v2014 validated with CPR (CloudSat) in Table 7 is seen in Fig. 6e, where the error distribution of PPS-v2014 is shown to be bi-modal; the general small underestimation of cloud top heights compensates for the mode located close to 1.8 km. The low bias can also be explained by fewer low level clouds predicted much too high. The lowest values for PE2, SD and RMSE support this reasoning. If we look at the results for the high clouds (Table 9) we see a large negative tendency for PPS-v2014 (mode and median), and this is also part of the explanation for the small RMSE for PPS-v2014 for low level clouds: if high clouds are generally placed 1.5 km too low, it should improve results for low level clouds mistaken for high. This includes cases where the different FOVs cause the imager to see mostly high cloud but the lidar and radar see only the part of the FOV containing low cloud. This consequently has a large impact on SD and RMSE as the errors are squared.

Comparing the RMSE and SD for NN-AVHRR and PPS-v2014 for low level clouds in the validation with CPR (CloudSat) also highlights why the RMSE and SD are less useful as measures of the variation of the error distribution. The RMSE and SD are very similar between the two algorithms and do not reflect the narrower and better centered error distribution seen for NN-AVHRR for low level clouds in Fig. 6e. The NN-AVHRR has a larger amount of small errors (see PE0.25, PE0.5) and only 16 % of the errors are larger than 1 km, compared to 29 % for PPS-v2014. But NN-AVHRR has 1 percentage point more absolute errors larger than 2 km, and the absolute errors for this percent are larger. As the MAE does not square the errors, it indicates instead that the NN-AVHRR has the smaller variation of the error distribution. The IQR, which does not regard the largest errors at all, is more than 500 m better for NN-AVHRR.

The bias of 117 m for NN-AVHRR compared to 1203 m for MODIS-C6 in Table 9, in the validation with CPR (CloudSat), would for a Gaussian error distribution be a large improvement of tendency; however, when also considering the mode and the median we can see that the improvement of the tendency is more realistically between 150 and 500 m compared to CPR (CloudSat), and not as large as indicated by the bias.

Table 8 Statistical measures for the error distributions for medium level clouds. For all measures except skewness, values closer to zero are better. The statistics are calculated for 244 885 matches for CPR (CloudSat) and 295 186 matches for CALIOP. The medium class comes from the CALIOP feature classification flag (classes 4 and 5) and for CPR (CloudSat) it is the pixels with heights between the NWP heights at 440 and 680 hPa. PEX describes the percentage of absolute errors above X km, see Eq. (1).
* Interpret bias and SD with caution as distributions are non-Gaussian; the bias is not located at the center of the distribution.

Table 9 Statistical measures for the error distributions for high level clouds. For all measures except skewness, values closer to zero are better. The statistics are calculated for 625 699 matches for CPR (CloudSat) and 798 715 matches for CALIOP. The high class comes from the CALIOP feature classification flag (classes 6 and 7) and for CPR (CloudSat) it is the pixels with heights higher than or exactly at the NWP height at 440 hPa. PEX describes the percentage of absolute errors above X km, see Eq. (1). * Interpret bias and SD with caution as distributions are non-Gaussian; the bias is not located at the center of the distribution.

4.3 Validation results with CALIOP and CPR (CloudSat) height

All measures in Table 6 have better values for all neural networks compared to both the reference algorithms and both validation truths. Considering the improvement in all the other measures in Table 6, it is also safe to conclude that the lower bias for the neural networks is actually an improvement. However, the mode and median better describe the improvement of tendency, and for the mode the worst performing network is just a few meters better than the best mode of the reference algorithms. For the comparison to CALIOP in Table 6 we see that most measures improve as we add more channels to the neural network. Validated with CPR (CloudSat), the results do not improve for NN-MetImage-NoCO2 and NN-MetImage. A possible explanation for this could be that some high thin cloud layers are not detected by the radar, but the neural network places them higher than the detected CPR (CloudSat) layer below. Thin single layer clouds not detected by the radar are of course not included in the analysis.

In the validation with CALIOP the NN-AVHRR MAE is 623 m lower (corresponding to a 32 % reduction of MAE) than MODIS-C6 and 795 m lower (corresponding to a 38 % reduction of MAE) than PPS-v2014. The NN-MetImage-NoCO2 has the best result while performing well at all satellite zenith angles, with a 43 % reduction in MAE when compared to MODIS-C6 and a 48 % reduction when compared to PPS-v2014. The NN-MetImage results have even better scores but are not useful for satellite zenith angles exceeding 20°. In the validation with CPR (CloudSat) the NN-AVHRR shows 430 m lower MAE (corresponding to a 25 % reduction of MAE) compared to MODIS-C6 and 482 m lower (corresponding to a 28 % reduction of MAE) compared to PPS-v2014. The NN-MetImage-NoCO2 shows a 32 % reduction of MAE compared to MODIS-C6 and a 34 % reduction of MAE compared to PPS-v2014.

4.4 Validation results separated for low, medium and high level clouds

Results for low level clouds (Table 7) show that all distributions are well centered around zero and that the median and mode are within 250 m of zero for all algorithms, except the mode for PPS-v2014 and NN-MetImage-NoCO2 validated with CPR (CloudSat). The PE0.25, PE0.5 and PE1 and the most useful measures of variation, IQR and MAE, show better values for the neural networks than for both reference algorithms compared to both validation truths. This indicates that the neural networks have a larger amount of good retrievals with small errors. When validated with CALIOP, only 31 % of the absolute errors for NN-AVHRR exceed 0.5 km, compared to 58 % for MODIS-C6 and 47 % for PPS-v2014.
For low level clouds validated with CPR (CloudSat) one needs to keep in mind that some thin cloud layers are not detected by the radar. This means that the CPR (CloudSat) height does not reflect the true uppermost layer for these clouds. Correct cloud top height retrievals for these clouds will give large positive errors in the CPR (CloudSat) validation for low level clouds. This can explain why the PE2 and RMSE for all the neural networks are better than both reference algorithms when validated with CALIOP, but when validated with CPR (CloudSat) PPS-v2014 has the best PE2 and RMSE. In Sect. 4.2 the reason for the bias and SD not being very informative for these highly skewed distributions is discussed.

Notice that MODIS-C6 has a high MAE (1192 m) for low level clouds when validated with CPR (CloudSat). Also in the CALIOP validation MODIS-C6 has the highest MAE, IQR, RMSE, PE0.25, PE0.5, PE1 and PE2 for low level clouds. When checking the MAE per month we found that the scores for MODIS-C6 for low clouds were worst for December (at the same time the scores for high clouds were best in December). There turned out to be a bug in the algorithm for low marine cloud top height (Richard Frey, MODIS Team, personal communication, 2017) which likely affected the results; the bug has been corrected in Version 6.1. However, overall validation scores for MODIS-C6 were not affected by the bug (Steve Ackerman, MODIS Team, personal communication, 2017).

For medium level clouds (see Table 8) the neural networks have better measures for MAE, IQR, RMSE, SD, PE1 and PE2 compared to both reference algorithms when validated with both CALIOP and CPR (CloudSat). For the validation with CPR (CloudSat) the neural network also has the best PE0.25, PE0.5, median and bias. In the validation with CALIOP we can see that PPS-v2014 also has good values for PE0.25, PE0.5, median and bias; these values are even better than some of those from the neural networks. This is also seen in Fig. 6d, where we note that PPS-v2014 has a well centered, high peak for the error distribution, but a larger amount of underestimated cloud top heights compared to NN-AVHRR. All algorithms report good values for the mode, within 300 m of zero, for medium level clouds.

For high clouds, in Fig. 6, we can see that the NN-AVHRR has fewer clouds predicted too low, especially compared to PPS-v2014. In the validation with CALIOP (Table 9) the neural networks perform better than the two reference algorithms. For the high cloud validation with CPR (CloudSat), MODIS-C6 has the highest peak (Fig. 6), but also a bi-modal error distribution with another peak close to 6 km. This explains why the overall MAE (Table 9) for high clouds is better for the NN-AVHRR. The higher peak for MODIS-C6 for validation with CPR (CloudSat) is also reflected in a good IQR, PE0.5, PE1 and mode, which are in line with the neural networks' values. The median and mode for high level clouds for most neural networks are positive when compared to CPR (CloudSat) but negative when validated with CALIOP. This supports the idea that some high thin clouds, or the upper parts of clouds, are not detected by the radar but by the lidar and the imager. The medians for the neural networks for high level clouds increase for neural networks with more variables. This suggests that the extra channels help the neural networks to detect the very thin clouds detected by CALIOP.
The medians for the validation with CPR (CloudSat) are also increasing (becoming more positive), and this can be explained by some very thin cloud layers not being detected by CPR (CloudSat).

Table 10 Mean absolute error (MAE) and median in meters for different algorithms compared to CALIOP top layer altitude. The final validation dataset (see Table 1), containing 1 803 335 pixels (5 % low overcast (transparent), 12 % low overcast (opaque), 19 % transition stratocumulus, 2 % low, broken cumulus, 7 % altocumulus (transparent), 8 % altostratus (opaque), 30 % cirrus (transparent) and 14 % deep convective (opaque)), where all algorithms had a cloud top height, is used. The cloud types are from the CALIOP feature classification. PE0.5 describes the percentage of absolute errors above 0.5 km.

In Table 9 we can also note that the SD for PPS-v2014, validated with CPR (CloudSat), is in line with the SD for the neural network. This, in combination with the large negative values for the mode and median, the high MAE and the quite good IQR, indicates that PPS-v2014 systematically underestimates the cloud top height for high level clouds.

4.5 Validation with CALIOP separated for different cloud types

In Table 10, the MAE, median and PE0.5 are shown for the different cloud types from the CALIOP feature classification flag. We can see that the MAE and PE0.5 for all the neural networks are better than for both reference algorithms, except that PPS-v2014 also has a low MAE and PE0.5 for altostratus (opaque). Large improvements in MAE are seen for the altocumulus (transparent), cirrus (transparent) and deep convective (opaque) classes. For PE0.5 the largest improvements are seen for the four low cloud classes and the deep convective (opaque) class, for which the neural networks have at least 12 percentage points fewer errors above 0.5 km compared to both reference algorithms. All algorithms have medians within 250 m of zero for the classes low overcast (transparent) and transition stratocumulus. For the low overcast (opaque) and low, broken cumulus classes the neural networks and PPS-v2014 show good values for the median. For the classes altocumulus (transparent), cirrus (transparent) and deep convective (opaque) the neural networks show medians at least 450 m closer to zero than both reference algorithms. For the altostratus (opaque) class the median of the reference algorithms is better than that of the neural networks. PPS-v2014 also has a MAE and PE0.5 that are better than NN-AVHRR and NN-AVHRR1 for the altostratus (opaque) class. The good performance of PPS-v2014 for altostratus (opaque) is also reflected in Fig. 6d, where PPS-v2014 has the highest peak. It is most difficult for all algorithms to correctly retrieve cloud top height for the largest class, cirrus (transparent). If we compare NN-MetImage with PPS-v2014 for the cirrus (transparent) class, we see that the MAE is improved by 2.4 km, the median by 3 km, and that 21 percentage points fewer absolute errors are larger than 500 m.

Figure 7 Mean absolute error in meters compared to CALIOP height. From the top: (a) PPS-v2014, (b) MODIS-C6 and (c) NN-AVHRR. Results are calculated for bins evenly spread out 250 km apart. Bins with fewer than 10 cloudy pixels are excluded (plotted in dark grey). The final validation and testing under development data (see Table 1) are included to get enough pixels.

4.6 Geographical aspects of the NN-CTTH performance

To show how performance varies between surfaces and different parts of the globe, the MAE in meters compared to CALIOP is calculated on a Fibonacci grid (constructed using the method described in González, 2009), with grid points spread evenly over the globe approximately 250 km apart. All observations are matched to the closest grid point and the results are plotted in Fig. 7. We can see that all algorithms have problems with clouds around the Equator in areas where very thin high cirrus is common. The MAE difference (Fig. 8) shows that the NN-AVHRR is better than MODIS-C6 in most parts of the globe, with the greatest benefit observed closer to the poles. At a few isolated locations MODIS-C6 performs better than NN-AVHRR.
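A lattice of this kind is straightforward to generate; the sketch below shows one common golden-angle construction together with nearest-grid-point matching. It is not necessarily the exact variant of González (2009), and the conversion from a 250 km spacing to roughly 8000 points via the Earth's surface area is an assumption made for illustration.

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def fibonacci_grid(spacing_km=250.0):
    """Unit vectors roughly evenly spread on the sphere, about spacing_km apart."""
    n = int(round(4.0 * np.pi * EARTH_RADIUS_KM**2 / spacing_km**2))  # ~8200 points
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))
    i = np.arange(n)
    z = 1.0 - 2.0 * (i + 0.5) / n      # evenly spaced in z gives equal-area bands
    lon = golden_angle * i             # longitude advances by the golden angle
    r = np.sqrt(1.0 - z**2)
    return np.column_stack([r * np.cos(lon), r * np.sin(lon), z])

def nearest_grid_index(lat_deg, lon_deg, grid):
    """Match observations (in degrees) to the closest grid point via the largest dot product.
    For millions of observations this would be done in chunks or with a KD-tree."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    obs = np.column_stack([np.cos(lat) * np.cos(lon),
                           np.cos(lat) * np.sin(lon),
                           np.sin(lat)])
    return np.argmax(obs @ grid.T, axis=1)
```

The per-bin MAE maps in Figs. 7 and 8 would then follow from averaging the absolute errors of the observations assigned to each grid point.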
Figure 8 Mean absolute error difference in meters between MODIS-C6 and NN-AVHRR compared to CALIOP. Results are calculated for bins evenly spread out 250 km apart. Bins with fewer than 10 cloudy pixels are excluded (plotted in dark grey). Dark green means NN-AVHRR is 1.5 km better than MODIS-C6; dark brown means MODIS-C6 is 1.5 km better than NN-AVHRR. The final validation and testing under development data (see Table 1) are included to get enough pixels.

4.7 Future work and challenges

Only near-nadir satellite zenith angles were used for training. This might limit the performance of the neural networks at other satellite zenith angles. The NN-MetImage network using the CO2 channel at 13.3 µm shows a strong satellite zenith angle dependence and is not useful for higher satellite zenith angles. A solution to train networks to perform better at higher satellite zenith angles could be to include MODIS data from the satellite Terra co-located with CALIPSO in the training data, as they will get matches at any satellite zenith angle, although only at high latitudes. As latitude is not used as a variable, data for higher satellite zenith angles, included for high latitude regions, could also help in other regions. However, it is possible that the high latitude matches will not help the network if the variety of weather situations and cloud top heights at high latitudes is too small. Radiative transfer calculations for the CO2 channels for different satellite zenith angles could be another way to improve the performance for higher satellite zenith angles.

Several technical parameters influence the performance of the neural network, for example: learning rate, learning rate decay, momentum, number of layers, number of neurons, weight initialization function and early stopping criteria. For the several combinations tested, the differences were on the order of a few hPa. Networks tested using two hidden layers were found to perform better than those using only one hidden layer. We did train one network with fewer neurons and one with more layers and neurons, with the same variables as NN-AVHRR. The network with fewer neurons in the two hidden layers (20/15) was 1 hPa worse. The network with more neurons in three layers (30/45/45) was 2.5 hPa better than NN-AVHRR but also took five times as long to retrieve pressure. The best technical parameters and network setup to use could therefore be further investigated.

The NN-CTTH algorithm currently has no pixel specific error estimate. The MAE provides a constant error estimate (the same for all pixels). However, for some clouds the height retrieval is more difficult, e.g., thin clouds and sub-pixel clouds. Further work to include pixel specific error estimates could be valuable.
Neural networks can behave unexpectedly for unseen data. By using a large training dataset and early stopping, the risk of unexpected behavior is decreased. Moreover, the risk of unexpected results in a neural network algorithm can be a fair price to pay given the significant improvements when compared to the current algorithms.

The training of neural networks requires reference data (truth). For optimal performance, a neural network approach for upcoming new sensors (e.g., MERSI-2, MetImage) being launched when data from CALIPSO or CloudSat are no longer available would require another truth, or a method to robustly transform a network trained for one sensor to other sensors. A way forward could be to include variables with radiative transfer calculations of cloud free brightness temperatures and brightness temperature differences. Further work is needed to test how the networks trained for the MODIS sensor perform for AVHRR, AVHRR1, VIIRS and other sensors. Our results show that networks can be trained using only the channels available on AVHRR, but they might need to be retrained with actual AVHRR data as the spectral response functions of the channels differ. The spectral response functions also differ between different AVHRR instruments, and more investigations are needed to see how networks trained for one AVHRR instrument will perform for other AVHRR instruments.

The results here are valid for the MODIS imager on the polar orbiting satellite Aqua. However, nothing in the method restricts it to polar orbiting satellites. The method should be applicable for imagers like SEVIRI, which has the two most important channels at 11 and 12 µm, on geostationary satellites. However, the network trained on MODIS data might need to be retrained with SEVIRI data to ensure optimum performance, as the spectral response functions of SEVIRI and MODIS differ.

5 Conclusions

The neural network approach shows high potential to improve cloud height retrievals. The NN-CTTH (for all trained neural networks) is better in terms of MAE in meters than both PPS-v2014 and the MODIS Collection 6. This is seen for validation with CALIOP and CPR (CloudSat) and for low, medium and high level clouds. The neural networks also show the best MAE for all cloud types except altostratus (opaque), for which PPS-v2014 is better than some of the neural networks. The neural networks show an overall improvement of mean absolute error (MAE) from 400 m and up to 1 km. Considering overall performance in terms of IQR, RMSE, SD, PE0.25, PE0.5, PE1, PE2, median, mode and bias, the neural networks perform better than both the reference algorithms, both when validated with CALIOP and with CPR (CloudSat). In the validation with CALIOP the neural networks have between 7 and 20 percentage points more retrievals with absolute errors smaller than 250 m compared to the reference algorithms.

Considering low, medium and high level clouds separately, the neural networks perform better than, or in some cases in line with, the best of the two reference algorithms in terms of MAE, IQR, PE0.25, PE0.5, PE1, median and mode. This indicates that the neural networks have well centered, narrow error distributions with a large amount of retrievals with small errors. The two reference algorithms have been shown to have different strengths; MODIS-C6 validated with CPR (CloudSat) for high clouds shows a well centered and narrow error distribution in line with (and better than some of) the neural networks, although the MAE is higher for MODIS-C6.
PPS-v2014 validated with CALIOP for the cloud type altostratus (opaque) shows scores in line with (and better than some of) the neural networks.

The error distributions for the cloud top height retrievals were found to be skewed for all algorithms considered in the paper, especially for low level clouds. It was exemplified why the bias and SD should be interpreted with caution and how they can easily be misinterpreted. The median and mode were found to be better measures of tendency than the bias. The IQR and MAE were found to better describe the spread of the errors, compared to SD and RMSE, as the absolute values of the largest errors are not the most interesting. Measuring the amount of absolute errors above 1 km (PE1), for example, was found to provide valuable information on the amount of large/small errors and useful retrievals.

The neural network algorithms are also useful for instruments with fewer channels than MODIS, including the channels available for AVHRR1. This is important for climate data records which include AVHRR1 data to produce a long, continuous time series. Including variables with information on neighboring pixel values was very important for good results; about 40 % of the improvement of MAE for the cloud top pressure retrieval for NN-AVHRR was due to the variables with neighboring pixels. The networks trained using only two IR channels, at 11 and 12 µm or at 11 and 3.7 µm, showed the most robust performance at higher satellite zenith angles. Including more IR channels thus improves results for nadir observations but degrades performance at higher satellite zenith angles.

Code and data availability. A neural network cloud top pressure, temperature and height algorithm will be part of the PPS-v2018 release. The PPS software package is accessible via the NWC SAF site: http://nwc-saf.eumetsat.int (last access: 25 May 2018). The MODIS/Aqua dataset was acquired from the Level-1 & Atmosphere Archive and Distribution System (LAADS) Distributed Active Archive Center (DAAC), located in the Goddard Space Flight Center in Greenbelt, Maryland (https://ladsweb.nascom.nasa.gov/). The CALIPSO-CALIOP datasets were obtained from the NASA Langley Research Center Atmospheric Science Data Center (ASDC DAAC, https://eosweb.larc.nasa.gov) (last access: 25 May 2018). The CPR (CloudSat) data were downloaded from the CloudSat Data Processing Center (http://www.cloudsat.cira.colostate.edu/order-data) (last access: 25 May 2018). NWP forecast data were obtained from ECMWF (https://www.ecmwf.int/en/forecasts/accessing-forecasts) (last access: 25 May 2018). The OSISAF ice map data can be accessed from http://osisaf.met.no/p/ice/ (last access: 25 May 2018).

Author contributions. All authors contributed to designing the study. CA and NH wrote the code and carried out the experiments. NH drafted the manuscript and prepared the figures and tables. All authors discussed the results and revised the manuscript.

Competing interests. The authors declare that they have no conflict of interest.

Acknowledgements. The authors acknowledge that the work was mainly funded by EUMETSAT. The authors thank Thomas Heinemann (EUMETSAT) for suggesting adding CPR (CloudSat) as an independent validation truth.

Edited by: Andrew Sayer
Reviewed by: two anonymous referees

References

Ackerman, S., Menzel, P., and Frey, R.: MODIS Atmosphere L2 Cloud Product (06_L2), https://doi.org/10.5067/MODIS/MYD06_L2.006, 2015.
Baum, B. A., Menzel, W. P., Frey, R. A., Tobin, D. C., Holz, R. E., Ackerman, S. A., Heidinger, A. K., and Yang, P.: MODIS Cloud-Top Property Refinements for Collection 6, J. Appl. Meteorol. Clim., 51, 1145–1163, https://doi.org/10.1175/JAMC-D-11-0203.1, 2012.

Bickel, D. R.: Robust Estimators of the Mode and Skewness of Continuous Data, Comput. Stat. Data Anal., 39, 153–163, https://doi.org/10.1016/S0167-9473(01)00057-3, 2002.

Cotter, A., Shamir, O., Srebro, N., and Sridharan, K.: Better Mini-Batch Algorithms via Accelerated Gradient Methods, in: Advances in Neural Information Processing Systems 24, edited by: Shawe-Taylor, J., Zemel, R. S., Bartlett, P. L., Pereira, F., and Weinberger, K. Q., 1647–1655, Curran Associates, Inc., available at: http://papers.nips.cc/paper/4432-better-mini-batch-algorithms-via-accelerated-gradient-methods.pdf, 2011.

Derrien, M., Lavanant, L., and Le Gleau, H.: Retrieval of the cloud top temperature of semi-transparent clouds with AVHRR, in: Proceedings of the IRS'88, 199–202, Deepak Publ., Hampton, Lille, France, 1988.

Dybbroe, A., Karlsson, K.-G., and Thoss, A.: AVHRR cloud detection and analysis using dynamic thresholds and radiative transfer modelling - part one: Algorithm description, J. Appl. Meteorol., 41, 39–54, https://doi.org/10.1175/JAM-2188.1, 2005.

Gardner, M. and Dorling, S.: Artificial neural networks (the multilayer perceptron) – a review of applications in the atmospheric sciences, Atmos. Environ., 32, 2627–2636, https://doi.org/10.1016/S1352-2310(97)00447-0, 1998.

González, Á.: Measurement of Areas on a Sphere Using Fibonacci and Latitude–Longitude Lattices, Math. Geosci., 42, 49, https://doi.org/10.1007/s11004-009-9257-x, 2009.

Hamann, U., Walther, A., Baum, B., Bennartz, R., Bugliaro, L., Derrien, M., Francis, P. N., Heidinger, A., Joro, S., Kniffka, A., Le Gléau, H., Lockhoff, M., Lutz, H.-J., Meirink, J. F., Minnis, P., Palikonda, R., Roebeling, R., Thoss, A., Platnick, S., Watts, P., and Wind, G.: Remote sensing of cloud top pressure/height from SEVIRI: analysis of ten current retrieval algorithms, Atmos. Meas. Tech., 7, 2839–2867, https://doi.org/10.5194/amt-7-2839-2014, 2014.

Heidinger, A. K., Foster, M. J., Walther, A., and Zhao, X. T.: The Pathfinder Atmospheres–Extended AVHRR Climate Dataset, B. Am. Meteorol. Soc., 95, 909–922, https://doi.org/10.1175/BAMS-D-12-00246.1, 2014.

Hu, X. and Weng, Q.: Estimating impervious surfaces from medium spatial resolution imagery using the self-organizing map and multi-layer perceptron neural networks, Remote Sens. Environ., 113, 2089–2102, https://doi.org/10.1016/j.rse.2009.05.014, 2009.

Inoue, T.: On the Temperature and Effective Emissivity Determination of Semi-Transparent Cirrus Clouds by Bi-Spectral Measurements in the 10 µm Window Region, J. Meteorol. Soc. Jpn., 63, 88–99, 1985.

Karlik, B. and Olgac, A. V.: Performance analysis of various activation functions in generalized mlp architectures of neural networks, Int. J. Artif. Intell. Expert Syst., 1, 111–122, 2011.

Karlsson, K.-G., Anttila, K., Trentmann, J., Stengel, M., Fokke Meirink, J., Devasthale, A., Hanschmann, T., Kothe, S., Jääskeläinen, E., Sedlar, J., Benas, N., van Zadelhoff, G.-J., Schlundt, C., Stein, D., Finkensieper, S., Håkansson, N., and Hollmann, R.: CLARA-A2: the second edition of the CM SAF cloud and radiation data record from 34 years of global AVHRR data, Atmos. Chem. Phys., 17, 5809–5828, https://doi.org/10.5194/acp-17-5809-2017, 2017.
Keras Team: Keras, available at: https://github.com/fchollet/keras, 2015.

Kox, S., Bugliaro, L., and Ostler, A.: Retrieval of cirrus cloud optical thickness and top altitude from geostationary remote sensing, Atmos. Meas. Tech., 7, 3233–3246, https://doi.org/10.5194/amt-7-3233-2014, 2014.

Marchand, R., Mace, G. G., Ackerman, T., and Stephens, G.: Hydrometeor Detection Using Cloudsat – An Earth-Orbiting 94-GHz Cloud Radar, J. Atmos. Ocean. Tech., 25, 519–533, https://doi.org/10.1175/2007JTECHA1006.1, 2008.

Meng, L., He, Y., Chen, J., and Wu, Y.: Neural Network Retrieval of Ocean Surface Parameters from SSM/I Data, Mon. Weather Rev., 135, 586–597, https://doi.org/10.1175/MWR3292.1, 2007.

Menzel, W. P., Frey, R. A., Zhang, H., Wylie, D. P., Moeller, C. C., Holz, R. E., Maddux, B., Baum, B. A., Strabala, K. I., and Gumley, L. E.: MODIS Global Cloud-Top Pressure and Amount Estimation: Algorithm Description and Results, J. Appl. Meteorol. Clim., 47, 1175–1198, https://doi.org/10.1175/2007JAMC1705.1, 2008.

Milstein, A. B. and Blackwell, W. J.: Neural network temperature and moisture retrieval algorithm validation for AIRS/AMSU and CrIS/ATMS, J. Geophys. Res.-Atmos., 121, 1414–1430, https://doi.org/10.1002/2015JD024008, 2016.

Minnis, P., Hong, G., Sun-Mack, S., Smith, W. L., Chen, Y., and Miller, S. D.: Estimating nocturnal opaque ice cloud optical depth from MODIS multispectral infrared radiances using a neural network method, J. Geophys. Res.-Atmos., 121, 4907–4932, https://doi.org/10.1002/2015JD024456, 2016.

MODIS Science Data Support Team: MYD021KM, https://doi.org/10.5067/MODIS/MYD021KM.006, 2015a.

MODIS Science Data Support Team: MYD03, https://doi.org/10.5067/MODIS/MYD03.006, 2015b.

Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., and Duchesnay, E.: Scikit-learn: Machine Learning in Python, J. Mach. Learn. Res., 12, 2825–2830, 2011.

Raspaud, M., Hoese, D., Dybbroe, A., Lahtinen, P., Devasthale, A., Itkin, M., Hamann, U., Ørum Rasmussen, L., Nielsen, E. S., Leppelt, T., Maul, A., Kliche, C., and Thorsteinsson, H.: PyTroll: An open source, community driven Python framework to process Earth Observation satellite data, B. Am. Meteorol. Soc., https://doi.org/10.1175/BAMS-D-17-0277.1, online first, 2018.

Rossow, W. B. and Schiffer, R. A.: Advances in Understanding Clouds from ISCCP, B. Am. Meteorol. Soc., 80, 2261–2287, https://doi.org/10.1175/1520-0477(1999)080<2261:AIUCFI>2.0.CO;2, 1999.

SMHI: Algorithm Theoretical Basis Document for Cloud Top Temperature, Pressure and Height of the NWC/PPS, NWCSAF, 4.0 edn., available at: http://www.nwcsaf.org/AemetWebContents/ScientificDocumentation/Documentation/PPS/v2014/NWC-CDOP2-PPS-SMHI-SCI-ATBD-3_v1_0.pdf (last access: 1 June 2018), 2015.

Stengel, M., Stapelberg, S., Sus, O., Schlundt, C., Poulsen, C., Thomas, G., Christensen, M., Carbajal Henken, C., Preusker, R., Fischer, J., Devasthale, A., Willén, U., Karlsson, K.-G., McGarragh, G. R., Proud, S., Povey, A. C., Grainger, R. G., Meirink, J. F., Feofilov, A., Bennartz, R., Bojanowski, J. S., and Hollmann, R.: Cloud property datasets retrieved from AVHRR, MODIS, AATSR and MERIS in the framework of the Cloud_cci project, Earth Syst. Sci. Data, 9, 881–904, https://doi.org/10.5194/essd-9-881-2017, 2017.
Stubenrauch, C. J., Rossow, W. B., Kinne, S., Ackerman, S., Cesana, G., Chepfer, H., Girolamo, L. D., Getzewich, B., Guignard, A., Heidinger, A., Maddux, B. C., Menzel, W. P., Minnis, P., Pearl, C., Platnick, S., Poulsen, C., Riedi, J., Sun-Mack, S., Walther, A., Winker, D., Zeng, S., and Zhao, G.: Assessment of Global Cloud Datasets from Satellites: Project and Database Initiated by the GEWEX Radiation Panel, B. Am. Meteorol. Soc., 94, 1031–1049, https://doi.org/10.1175/BAMS-D-12-00117.1, 2013.

Theano Development Team: Theano: A Python framework for fast computation of mathematical expressions, arXiv e-prints, abs/1605.02688, 2016.
2020-03-31 08:14:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.569314181804657, "perplexity": 3152.3600229018084}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370500331.13/warc/CC-MAIN-20200331053639-20200331083639-00027.warc.gz"}
https://demo7.dspace.org/items/f5ff0215-b9f9-4687-b8c8-fdc26650e6ea
## Dynamical quasiparticles properties and effective interactions in the sQGP

Cassing, W.

##### Description

Dynamical quasiparticle properties are determined from lattice QCD along the lines of the Peshier model for the running strong coupling constant in the case of three light flavors. By separating time-like and space-like quantities in the number density and energy density, the effective degrees of freedom in the gluon and quark sector may be specified from the time-like densities. The space-like parts of the energy densities are identified with interaction energy (or potential energy) densities. By using the time-like parton densities (or scalar densities) as independent degrees of freedom, variations of the potential energy densities with respect to the time-like gluon and/or fermion densities lead to effective mean-fields for time-like gluons and quarks as well as to effective gluon-gluon, quark-gluon and quark-quark (quark-antiquark) interactions. The latter dynamical quantities are found to be approximately independent of the quark chemical potential and thus well suited for an implementation in off-shell parton transport approaches. Results from the dynamical quasiparticle model (DQPM) in the case of two dynamical light quark flavors are compared to lattice QCD calculations for the net quark density as well as for the 'back-to-back' differential dilepton production rate by $q-{\bar q}$ annihilation. The DQPM is found to pass the independent tests.

Comment: 34 pages, 17 eps-figures; Section 4 modified; Nuclear Physics A in press

Nuclear Theory
2022-12-08 16:34:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8007093071937561, "perplexity": 1627.7655051063236}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711344.13/warc/CC-MAIN-20221208150643-20221208180643-00818.warc.gz"}
https://math.stackexchange.com/questions/379418/truth-table-for-followings
# Truth table for the following

Hi, I am new to this site. I got an assessment to complete tomorrow. It's about computer programming, and I am having trouble with these questions. Can anyone please help me?

Using truth tables show that:

1. $A+B+C = (A+B)+C$
2. $A\cdot 1 = A$
3. $A'\cdot B = (A+B)'$

• What problems are you having? – copper.hat May 2 '13 at 16:55

To make a truth table, you make columns for all the variables and rows for all combinations of truth values of the variables. Then you make as many columns as you want to assess the truth value of the statement in question. If you want to prove equality of two expressions, they must agree for all truth values of the variables. It looks like you may have miscopied two of the problems: I would expect the first to be $A+(B+C)=(A+B)+C$ (because $+$ is usually defined as a binary operator) and the third is not correct. I'll give some lines of the first:

$$\begin{array}{c|c|c|c|c|c} A&B&C&(A+B)&(A+B)+C&A+B+C \\ \hline T&T&T&T&T&T\\ T&T&F&T&T&T\\ F&F&T&F&T&T \end{array}$$

• @Charuka: To get $(A+B)+C$ you add the truth value you have in the $(A+B)$ column to the one in the $C$ column. The second one just has two lines because there is only one variable. $1$ is always True. As I said before, the third is not correct, so your verification should fail. – Ross Millikan May 2 '13 at 17:19
• @Charuka: you might look at Wikipedia. Their $\wedge$ is your $\cdot$ and their $\vee$ is your $+$ – Ross Millikan May 2 '13 at 17:20
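Since the question mentions a programming course, it may also help to check such tables programmatically. Here is a small Python sketch that enumerates all truth assignments, treating $+$ as OR and $\cdot$ as AND as in the answer above; it is only an illustration, not part of the original answer.

```python
from itertools import product

# Print the full truth table for (A+B)+C versus A+(B+C)
print(f"{'A':<6}{'B':<6}{'C':<6}{'(A+B)+C':<10}{'A+(B+C)':<10}")
for A, B, C in product([True, False], repeat=3):
    lhs = (A or B) or C          # (A+B)+C
    rhs = A or (B or C)          # A+(B+C)
    print(f"{A!s:<6}{B!s:<6}{C!s:<6}{lhs!s:<10}{rhs!s:<10}")

# The second identity, A*1 = A, needs only one variable:
for A in [True, False]:
    assert (A and True) == A
```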
2020-01-29 05:20:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6233692169189453, "perplexity": 232.21791147630657}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251788528.85/warc/CC-MAIN-20200129041149-20200129071149-00311.warc.gz"}
http://lamington.wordpress.com/
## Taut foliations and positive forms This week I visited Washington University in St. Louis to give a colloquium, and caught up with a couple of my old foliations friends, namely Rachel Roberts and Larry Conlon. Actually, I had caught up with Rachel (to some extent) the weekend before, when we both spoke at the Texas Geometry and Topology Conference in Austin, where Rachel gave a talk about her recent proof (joint with Will Kazez) that every $C^0$ taut foliation on a 3-manifold $M$ (other than $S^2 \times S^1$) can be approximated by both positive and negative contact structures; it follows that $M \times I$ admits a symplectic structure with pseudoconvex boundary, and one deduces nontriviality of various invariants associated to $M$ (Seiberg-Witten, Heegaard Floer Homology, etc.). This theorem was known for sufficiently smooth (at least $C^2$) foliations by Eliashberg-Thurston, as exposed in their confoliations monograph, and it is one of the cornerstones of $3+1$-dimensional symplectic geometry; unfortunately (fortunately?) many natural constructions of foliations on 3-manifolds can be done only in the $C^1$ or $C^0$ world. So the theorem of Rachel and Will is a big deal. If we denote the foliation by $\mathcal{F}$ which is the kernel of a 1-form $\alpha$ and suppose the approximating positive and negative contact structures are given by the kernels of 1-forms $\alpha^\pm$ where $\alpha^+ \wedge d\alpha^+ > 0$ and $\alpha^- \wedge d\alpha^- < 0$ pointwise, the symplectic form $\omega$ on $M \times I$ is given by the formula $\omega = \beta + \epsilon d(t\alpha)$ for some small $\epsilon$, where $\beta$ is any closed 2-form on $M$ which is (strictly) positive on $T\mathcal{F}$ (and therefore also positive on the kernel of $\alpha^\pm$ if the contact structures approximate the foliation sufficiently closely). The existence of such a closed 2-form $\beta$ is one of the well-known characterizations of tautness for foliations of 3-manifolds. I know several proofs, and at one point considered myself an expert in the theory of taut foliations. But when Cliff Taubes happened to ask me a few months ago which cohomology classes in $H^2(M)$ are represented by such forms $\beta$, and in particular whether the Euler class of $\mathcal{F}$ could be represented by such a form, I was embarrassed to discover that I had never considered the question before. The answer is actually well-known and quite easy to state, and is one of the applications of Sullivan’s theory of foliation cycles. One can also give a more hands-on topological proof which is special to codimension 1 foliations of 3-manifolds. Since the theory of taut foliations of 3-manifolds is a somewhat lost art, I thought it would be worthwhile to write a blog post giving the answer, and explaining the proofs. ## Explosions – now in glorious 2D! Dennis Sullivan tells the story of attending a dynamics seminar at Berkeley in 1971, in which the speaker ended the seminar with the solution of (what Dennis calls) a “thorny problem”: the speaker explained how, if you have N pairs of points $(p_i,q_i)$ in the plane (all distinct), where each pair is distance at most $\epsilon$ apart, the pairs can be joined by a family of N disjoint paths, each of diameter at most $\epsilon'$ (where $\epsilon'$ depends only on $\epsilon$, not on N, and goes to zero with $\epsilon$). This fact led (by a known technique) to an important application which had hitherto been known only in dimensions 3 and greater (where the construction is obvious by general position). 
Sullivan goes on: A heavily bearded long haired graduate student in the back of the room stood up and said he thought the algorithm of the proof didn’t work. He went shyly to the blackboard and drew two configurations of about seven points each and started applying to these the method of the end of the lecture. Little paths started emerging and getting in the way of other emerging paths which to avoid collision had to get longer and longer. The algorithm didn’t work at all for this quite involved diagrammatic reason. The graduate student in question was Bill Thurston. ## Dipoles and Pixie Dust The purpose of this blog post is to give a short, constructive, computation-free proof of the following theorem: Theorem: Every compact subset of the Riemann sphere can be arbitrarily closely approximated (in the Hausdorff metric) by the Julia set of a rational map. A rational map is just a ratio of (complex) polynomials. Every holomorphic map from the Riemann sphere to itself is of this form. The Julia set of a rational map is the closure of the set of repelling periodic points; it is both forward and backward invariant. The complement of the Julia set is called the Fatou set. Kathryn Lindsey gave a nice constructive proof that any Jordan curve in the complex plane can be approximated arbitrarily well in the Hausdorff topology by Julia sets of polynomials. Her proof depends on an interpolation result by Curtiss. Kathryn is a postdoc at Chicago, and talked about her proof in our dynamics seminar a few weeks ago. It was a nice proof, and a nice talk, but I wondered if there was an elementary argument that one could see without doing any computation, and today I came up with the following. ## Mapping class groups: the next generation Nothing stands still except in our memory. - Phillipa Pearce, Tom’s Midnight Garden In mathematics we are always putting new wine in old bottles. No mathematical object, no matter how simple or familiar, does not have some surprises in store. My office-mate in graduate school, Jason Horowitz, described the experience this way. He said learning to use a mathematical object was like learning to play a musical instrument (let’s say the piano). Over years of painstaking study, you familiarize yourself with the instrument, its strengths and capabilities; you hone your craft, your knowledge and sensitivity deepens. Then one day you discover a little button on the side, and you realize that there is a whole new row of green keys under the black and white ones. In this blog post I would like to talk a bit about a beautiful new paper by Juliette Bavard which opens up a dramatic new range of applications of ideas from the classical theory of mapping class groups to 2-dimensional dynamics, geometric group theory, and other subjects. ## Groups quasi-isometric to planes I was saddened to hear the news that Geoff Mess recently passed away, just a few days short of his 54th birthday. I first met Geoff as a beginning graduate student at Berkeley, in 1995; in fact, I believe he gave the first topology seminar I ever attended at Berkeley, on closed 3-manifolds which non-trivially cover themselves (the punchline is that there aren’t very many of them, and they could be classified without assuming the geometrization theorem, which was just a conjecture at the time). Geoff was very fast, whip-smart, with a daunting command of theory; and the impression he made on me in that seminar is still fresh in my mind.
The next time I saw him might have been May 2004, at the N+1st Southern California Topology Conference, where Michael Handel was giving a talk on distortion elements in groups of diffeomorphisms of surfaces, and Geoff (who was in the audience) explained in an instant how to exhibit certain translations on a (flat) torus as exponentially distorted elements. Geoff was not well even at that stage — he had many physical problems, with his joints and his teeth; and some mental problems too. But he was perfectly pleasant and friendly, and happy to talk math with anyone. I saw him again a couple of years later when I gave a colloquium at UCLA, and his physical condition was a bit worse. But again, mentally he was razor-sharp, answering in an instant a question about (punctured) surface subgroups of free groups that I had been puzzling about for some time (and which became an ingredient in a paper I later wrote with Alden Walker).

Geoff Mess in 1996 at Kevin Scannell's graduation (photo courtesy of Kevin Scannell)

Geoff published very few papers — maybe only one or two after finishing his PhD thesis; but one of his best and most important results is a key step in the proof of the Seifert Fibered Theorem in 3-manifold topology. Mess's paper on this result was written but never published; it's hard to get hold of the preprint, and harder still to digest it once you've got hold of it. So I thought it would be worthwhile to explain the statement of the Theorem, the state of knowledge at the time Mess wrote his paper, some of the details of Mess's argument, and some subsequent developments (another account of the history of the Seifert Fibered Theorem by Jean-Philippe Préaux is available here).

## Div, grad, curl and all this

The title of this post is a nod to the excellent and well-known Div, grad, curl and all that by Harry Schey (and perhaps also to the lesser-known sequel to one of the more consoling histories of Great Britain), and the purpose is to explain how to generalize these differential operators (familiar to electrical engineers and undergraduates taking vector calculus) and a few other ones from Euclidean 3-space to arbitrary Riemannian manifolds.

I have a complicated relationship with the subject of Riemannian geometry; when I reviewed Dominic Joyce's book Riemannian holonomy groups and calibrated geometry for SIAM reviews a few years ago, I began my review with the following sentence:

Riemannian manifolds are not primitive mathematical objects, like numbers, or functions, or graphs. They represent a compromise between local Euclidean geometry and global smooth topology, and another sort of compromise between precognitive geometric intuition and precise mathematical formalism.

Don't ask me precisely what I meant by that; rather observe the repeated use of the key word compromise. The study of Riemannian geometry is — at least to me — fraught with compromise, a compromise which begins with language and notation. On the one hand, one would like a language and a formalism which treats Riemannian manifolds on their own terms, without introducing superfluous extra structure, and in which the fundamental objects and their properties are highlighted; on the other hand, in order to actually compute or to use the all-important tools of vector calculus and analysis one must introduce coordinates, indices, and cryptic notation which trips up beginners and experts alike.
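As a reference point for the post above (these are standard textbook formulas, not quoted from it), here is what grad, div and the Laplacian look like in local coordinates $(x^1,\dots,x^n)$ on a Riemannian manifold with metric $g_{ij}$, inverse metric $g^{ij}$ and $|g| = \det(g_{ij})$:

$$(\operatorname{grad} f)^i = g^{ij}\,\partial_j f, \qquad \operatorname{div} X = \frac{1}{\sqrt{|g|}}\,\partial_i\big(\sqrt{|g|}\,X^i\big), \qquad \Delta f = \operatorname{div}(\operatorname{grad} f) = \frac{1}{\sqrt{|g|}}\,\partial_i\big(\sqrt{|g|}\,g^{ij}\,\partial_j f\big).$$

On Euclidean $\mathbb{R}^3$ (where $g_{ij} = \delta_{ij}$ and $|g| = 1$) these reduce to the familiar operators of vector calculus, and curl is the special 3-dimensional incarnation of the exterior derivative on 1-forms combined with the Hodge star.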
## A tale of two arithmetic lattices

For almost 50 years, Paul Sally was a towering figure in mathematics education at the University of Chicago. Although he was 80 years old, and had two prosthetic legs and an eyepatch (associated with the Type 1 diabetes he had his whole life), it was nevertheless a complete shock to our department when he passed away last December, and we struggled just to cover his undergraduate teaching load this winter and spring. As my contribution, I have been teaching an upper-division undergraduate class on "topics in geometry", which I have appropriated and repurposed as an introduction to the classical geometry and topology of surfaces.

I have tried to include at least one problem in each homework assignment which builds a connection between classical geometry and some other part of mathematics, frequently elementary number theory. For last week's assignment I thought I would include a problem on the well-known connection between Pythagorean triples and the modular group, perhaps touching on the Euclidean algorithm, continued fractions, etc. But I have introduced the hyperbolic plane in my class mainly in the hyperboloid model, in order to stress an analogy with spherical geometry, and in order to make it easy to derive the identities for hyperbolic triangles (i.e. hyperbolic laws of sines and cosines) from linear algebra, so it made sense to try to set up the problem in the language of the orthogonal group $O(2,1)$, and the subgroup preserving the integral lattice in $\mathbb{R}^3$.
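One concrete incarnation of that connection, offered here only as a plausible illustration and not necessarily the problem that ended up on the assignment, is the classical Barning-Hall tree: three integer matrices preserving the quadratic form $x^2 + y^2 - z^2$ (so lying in the integer points of $O(2,1)$) generate every primitive Pythagorean triple, viewed as an integer point on the cone $x^2 + y^2 = z^2$, starting from $(3,4,5)$. A minimal sketch:

```python
# Sketch: the Barning-Hall tree of primitive Pythagorean triples.
# The three matrices below are integer matrices preserving x^2 + y^2 - z^2;
# applied repeatedly to (3, 4, 5) they produce every primitive triple.

A = [[1, -2, 2], [2, -1, 2], [2, -2, 3]]
B = [[1,  2, 2], [2,  1, 2], [2,  2, 3]]
C = [[-1, 2, 2], [-2, 1, 2], [-2, 2, 3]]

def apply(M, v):
    """Multiply a 3x3 integer matrix by a column vector given as a tuple."""
    return tuple(sum(M[i][j] * v[j] for j in range(3)) for i in range(3))

def triples(depth):
    """Yield the primitive Pythagorean triples in the tree down to the given depth."""
    level = [(3, 4, 5)]
    for _ in range(depth + 1):
        for t in level:
            yield t
        level = [apply(M, t) for t in level for M in (A, B, C)]

if __name__ == "__main__":
    for (a, b, c) in triples(2):
        assert a * a + b * b == c * c   # each output really is a Pythagorean triple
        print(a, b, c)
```

The cone $x^2 + y^2 = z^2$ is the light cone of the same quadratic form whose hyperboloid sheet gives the hyperboloid model of the hyperbolic plane, which is where the two stories meet.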
2014-12-19 20:09:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 34, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6021232008934021, "perplexity": 618.681366408441}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802768847.78/warc/CC-MAIN-20141217075248-00057-ip-10-231-17-201.ec2.internal.warc.gz"}
https://bliskiepodroze.pl/your-math-test-scores-are-68-78-90-and-91.html
المقتصد في المؤسسة التربوية # 10/23/2021 AFQT Percentile: 41. Search Complete Library at  To calculate this, we first sum up the two marks and then apply the equation as usual: (92 + 88) / 200 x 100 = 180 / 200 x 100 = 0. Statistics and Probability questions and answers. 20/21 = 95. 11/25/2021 In a certain course, the quizzes are 15% of the grade, the lab score is 25%, the tests are 30%, and the Final is 30%. Science: 70%. PSAT/NMSQT and PSAT 10 Nationally Representative and User Percentiles A student's percentile rank represents the percentage of students with scores equal to or lower than their score. sum. You have to wait 90 minutes in the emergency room of a hospital before you can see a doctor. Note that individual states determine criteria for 'proficiency' according to their own tests - proficiency percentages should not be compared across states due to the differing tests and criteria applied between each state. 2)The students in Hugh Logan's math class took the Scholastic Aptitude Test. 40 or 50). Now if a person scores a 750 on the MATH portion, the z score will now be z-score= (750-500)/100= 2. It is just basic math. 416-420 = 80th percentile. You can register for: Explanations and Suggestions then you should work in the Prep for Calculus module and retake ALEKS to earn a score above 75. Use the new average to figure out the total he needs after the 4th score: Sum of 4 scores (4) (90) = 360. The test scores go from 1 to 99 such that the first score is 1/100, the second is 2/100, and so on until the last score is 99/100. 40 9 99 135 73 +2 . find the variance of the data. 51. Here's the interpretation: Evidence-Based Reading: You scored higher than 89% of the people who took this section. c 24. What is the mean of the 99 test scores? HINT: Median = 50 49 = 50 – 1 51 = 50 + 1 If, however, your grade for that final is an 89%, which translates to a “B+” or 3. 4) Lenny's average score after 3 tests is 88. You learn that your wait time was in the 82 nd percentile of all wait times. 81. While each program has its own entrance requirements, an FSIQ of 115 – 129 is generally considered “mildly gifted,” an FSIQ of 130 – 144 is generally so we look up !!=!1. < c. Please note that a safe confidence interval is … Start studying MATH-11-SP18: Module 5 Quiz. Q. Write a 500-750-word summary and analysis discussing the results of your … 78. You'll likely find this number on your syllabus. 60. 50 F. anthony • 3 years ago. 405-409 = 60th percentile. For the data values entered above, the solution is: 3345 17 = 196. What grade does she need to make on the fourth test to make an A in the class? What is your answer? 10/10/2019 Adimas found the mean of her 11 math test scores for the first semester. 07 9 . Semester 1: 78, 91, 88, 83, 94. The mean grade for the Biology test is 88% with a standard deviation of 3. 04 0. Create a stemplot for these test scores using each 10s value twice on the stem. Step III: 43 23 12 55 65 44 61 77 19 09 87. What is the lowest score you can iron on the next test and still achieve an average of at 2019. 65 b. Each test has data values that are normally distributed. Unlike raw scores, you can interpret scale scores across different sets 7/30/2020 Scores of 91−100% are considered excellent, 75–90% considered very good, 55–64% considered good, 45–55% considered fair, 41–44% considered pass, and 0–40% considered fail. Make 7 class intervals. Question: A final exam in Math 160 has a mean of 73 with standard deviation 7. 
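Several of the fragments above ask for a weighted course grade, for instance the scheme quoted above in which quizzes are 15% of the grade, the lab score 25%, the tests 30% and the final 30%. A minimal sketch of that computation; the component scores plugged in here are made up purely for illustration:

```python
def weighted_grade(components):
    """components: list of (score, weight) pairs; the weights should sum to 1."""
    assert abs(sum(w for _, w in components) - 1.0) < 1e-9
    return sum(score * w for score, w in components)

# Hypothetical scores in the 15/25/30/30 scheme quoted above.
course = [(82, 0.15),   # quiz average
          (90, 0.25),   # lab score
          (78, 0.30),   # test average
          (85, 0.30)]   # final exam
print(round(weighted_grade(course), 1))   # 83.7
```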
Find the percentile rank for a score of 86 on this  On a mathematics test, the mean score was 78 with a standard deviation of 7. The average grade on the Algebra 2 test was 81% with a standard deviation of 4% On which test did Maria do better? Justify your answer was sp -72. So 95. 5 is the mean. Example Final Course Grade Calculation. 12 properties took 61 to 91 days to sell. Your grade before the Test was 90 B. At 90%ile, it is 670 and 680 for EBRW and Maths respectively. 5 327+x=422. 74 or better c) 90. Find step-by-step Algebra 2 solutions and your answer to the following textbook question: Your math test scores are 68, 78, 90, and 91. ) We want a z-score with 90% of the data below it. … I've done that but I've not added the correction to the scores yet. 12/19/2021 The following table shows the relationship among EOC achievement levels, scale scores, grade scale scores based on the grading scale 90 80 70 60 0, and the corresponding letter grade for the five EOC tests that have gone through standard setting. 56. H 1: parameter < value. Percentile ranks are provided on this site for Total and Section … shows some test scores from a math class. sum. And on the third test you scored 86 points. Be Careful! Being better is relative to the situation. Find a company today! Top 90 Software Testing Companies in Delhi ImpactQA is an independent software testing and QA consulting compa How to use the test calculator. Mean, Mode, Median, and Range. 62. Just follow the same steps as before:. Semester 1: 78, 91, 88, 83, 94. To get his sum from 264 to 360, Lenny needs to score 360 - 264 = 96. 5 deviations above the mean. ACCUPLACER English Scores Level 30 or 35 on any Math Placement Test; (Level 30 ok for Math 211, 208, 205, 175, 117, 116, 115, and MthStat 215. Create a stemplot for these test scores using each 10s value once on the stem. errorP. In the examples below, our first step is to order the data from least to greatest. 5. So to check the score for the next students, you can type in the number of questions they've got wrong - or just use this neat table. 0%. 47 . He scores an 85% on his Biology test and an 80% on his Math test. two tests: Test A: Fred scores 78. Open the Test Calculator. What percent of the students scored 85 or better (nearest  Test Score 35-45 46-56 57-67 68-78 79 - 89 90-100 Frequency 8 12 20-R 15 The test scores of a sample of students taking a Math Test at ABC College are. The scores of an eighth-grade math test have a normal distribution with a mean = 83 and a standard deviation = 5. Ashley earned an 89 on her History midterm and an 81 on her Math midterm. Before 74 68 82 97 76 81 80 75 88 84 79 91 After 76 68 85 94 79 88 83 72 90 87 79 90 5/7/2020 PSAT 8/9 Nationally Representative and User Percentiles A student's percentile rank represents the percentage of students who score equal to or lower than their score. A) 82 B) 90 C) 79 D) 80 28) 29) At Loop College, the mean grade point average (gpa) of the current student body is 2. B  Source: SAT Understanding Scores 2021 Again, note that the percentile ranks change dramatically toward the middle scores: 500 in EBRW is only 40%, but 600 is 73%. 12. Hit calculate! Click the calculate button and let the magin happen. A percentage above 65% is referred to as the 1st Division and indicates a high intellectual level. 05 0. 5 to get a 90 in the class. Students are allowed to drop the two lowest quiz scores and the one lowest test score. Find the range of scores you need on the last test … Math test scores are: 68, 78, 90, and 91. 
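Many of the snippets in this fragment reduce to the same one-line z-score computation, z = (x − mean) / sd; note that the decimals are truncated in the scraped text, so the SAT example above works out to 2.5, not 2. A small helper using figures quoted above:

```python
def z_score(x, mean, sd):
    """How many standard deviations the raw score x sits above or below the mean."""
    return (x - mean) / sd

print(z_score(750, 500, 100))  # SAT math example quoted above -> 2.5
print(z_score(86, 78, 7))      # score of 86 on the test with mean 78, sd 7 -> ~1.14
```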
7 Academically or Intellectually Gifted 24,294 22,311 91. Data Analysis Unit Mean: The average of a set of data. For example, a person with an IQ score of. 5 (for a test) is an exceptional number above average, while a standard deviation of -2. 44 % of the students are within the test scores of 85 and hence the percentage of students who are above the test scores of 85 = (100-89. In education, we frequently use two types of standard scores: stanine and Z-score. Which of the 68% b) 95% c) ~ 90% d) What kinds of scores will the top 5% of people achieve? a) 78 or better b) 81. Andy's brother, Kevin, also plays and gets a score of 9. Draw and label a sketch for each example. The raw points available in each content category. 0% to 90. 33 9 - 99 134 -+2. Q. Test Score Frequency 3 1 4 1 5 2 6 3 7 1 8 1 9 1 10 4 The mean score on this test is: The median score on this test is: Statistics. 3 scores: Sum of first 4 scores = (3) (88) = 264. Your z-score was -1. In the above example, frequency is the number of students who scored various marks as tabulated. 710. 20 . An A is 90% to 100%; A B is 80% to 89%; A C is 70% to 79%; A D is 60% to 69%; and finally an F is 59% … the total points are now 68+78+90+91 = 327 To average 85 on 5 tests, you need 5*85 = 425 Math. 68+78+90+91+x=425 327+x=425 x=98 Thus, the minimum score on the last test would have to be a 98 to achieve an average of 85. David 75 Aaron 75 Ashley 60 Gina 75 Andrew 75 Kei 95 Monica 100 Anna 60 Lucas 75 Jennifer 65 Find the range, mean, median and mode for the scores. In rural schools, 88 percent of ELA teachers had an English related degree, 83 percent of. The following data represent the scores of a sample of 50 randomly chosen students on a standardized test. 4 , median = 90 , mode = 88 Without the outlier 0, the scores are: 90, 88, 96, 92, 88 and 95 mean: = 91. . We recommend that you do this, too. For example, if three students took a test and received scores … Answer to: Your math test scores are 68, 78, 90 and 91. ALEKS scores of 30 or higher reflect adequate preparation for college-level math. Anything within 10 points from 100 is considered Average. the following scores on 10 math quizzes? 68, 55, 70, 62, 71, 58, 81, 82, 63, 73 a. 5 would be a … 5/3/2019 9/17/2021 4/7/2013 62 90 66 91 70 93 77 63 82 64 69 82 71 74 61 88 76 65 83 than that used for the observations. At 75%ile, it is 610 and 600. Create a stemplot for these test scores using each 10s value once on the stem. Male 85 76 58 77 81 90 88 97 72 70 82 64 Female 78 96 79 67 93 84 99 90 87 76 85 92 94 Quiz Score Calculation Table 91 90 88 86 84 83 81 79 78 76 74 72 71 69 67 66 64 62 60 59 57 55 53 52 50 48 47 45 43 41 59 98 97 95 93 92 90 88 86 85 83 81 80 78 76 74 73 71 70 68 66 64 63 61 59 58 56 54 53 51 49 47 46 44 42 60 98 97 95 93 92 90 88 87 85 83 82 80 78 68 99 97 96 94 93 91 90 88 87 85 84 82 81 79 78 76 75 74 72 71 69 68 66 29) Paul's scores in Math quizzes are as follows: 90, 85 70, 65, 99, 78. Need a software QA company in Delhi? Read reviews & compare projects by leading software testing companies. The table below shows the test … Standard Scores have a mean (average) of 100. To get a grade of B, the average of the first five test scores must be greater than or equal to 80 and less than 90. Understand Your Score. Percentile rankings range from 1-99; the average rank in the U. 5. Statistics and Probability. 
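The page's title question is answered in the fragment above: 68 + 78 + 90 + 91 = 327 and 5 × 85 = 425, so the fifth score must be at least 98. The same sum trick handles the "Lenny" variant worked a few lines earlier (average 88 after three tests, wanting 90 after four, hence 4 × 90 − 3 × 88 = 96). A short sketch:

```python
def min_next_score(scores, target_avg):
    """Lowest score on the next test that brings the overall average up to target_avg."""
    n = len(scores) + 1
    return target_avg * n - sum(scores)

print(min_next_score([68, 78, 90, 91], 85))   # 98, as in the worked solution above
print(min_next_score([88, 88, 88], 90))       # 96: three tests averaging 88, as in Lenny's case
```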
COM Knowledge Brain Games If you’re looking for some riddles for adults to keep in your back pocket as a way to strike up a Signing out of account, Standby Jessica Abo Joy Youell An Bui Successfully copied link 2015. Course grades are on a standard ten-point scale: 90% or … CBSETuts. Each test is worth 20% of the final grade, quizzes (total) count 25% of the final grade, and the final exam is 35% of the final grade. Table of Percentages. Select grades scale. c) Identify any potential outlier of the data. 17. Maria made 75% on her Government test and 83% on her Algebra 2 test. Evidence-Based Writing: 90th percentile. 17. What is the average score of the top six students? 90 7/1/2021 10/9/2017 The best way to deal with changing averages is to use the. 88, 90, 94, 96, 98, 98, 99. 7647. For example, if a student got a 91% in math, 85% in grammar, 82% in reading comprehension, and a 88% in vocabulary, their cumulative score would be 86. Which measure of central tendency is most appropriate for his scores? If you don't like using the +/-grades, the scale may look like:. (a) Exercise #3: On a recent statewide math test, the raw score average was 56  Browse our pre-made printable worksheets library with a variety of activities and quizzes for all K-12 Your math test scores are 68, 78, 90, and 91. Overall testing rank is based on a school's combined math and reading proficiency test score ranking. We can confirm that some score inflation has systematically taken place because the improvement in test scores of students reported by states on their high-stakes tests used for NCLB or state accountability typically far exceeds the improvement in test scores in math and reading on the NAEP. 4, 1). A scaled score of 100 or more shows the pupil has met the expected standard in the test. Extrapolating, if you score … This problem has been solved! Given the following students' test scores (95, 92, 90, 90, 83, 83, 83, 74, 60, and 50), identify the mean, median, mode, range, variance, and standard deviation for the sample. Each section of the exam is scored separately, however, students will also receive a cumulative score. Each test is worth 20% of the final grade, the final exam is 25% of the final grade, and the homework grade is 15% of the final grade. From the above-mentioned statistical properties of the standard normal deviation, we know that about 68% of the population should achieve IQ test scores between 84 and 116 points (i. AFQT Percentile: 41. What is a Range in Math? Definition: The range of a set of data is the difference between the highest and lowest values in the set. Finally, enter the weight of your final exam as a percentage (e. RD Sharma Solutions for Class 7 Maths Chapter 22 - Data Handling - I (Collection and Organisation of Data). 9. 42 0. Using that definition, then, none of the answers are true -- except for #6, as $45 < 54$, but it is misleading and shouldn't have gotten past the review stage. z-scores make it easier to compare scores from distributions using different scales. Knowing how your grades are converted from a percentage to a letter grade, and then into your GPA can really help you plan your future to meet your needs and goals. a. Their math scores are shown below. Decision Rule: Reject H 0 if t. 8 NC Math 1 Test Performance by Student Subgroup 12 Test Grade Calculator. g. Your composite score, or your Adjusted Individual Score, is your main score… The table below will help you to determine your placement, based on your score from the ALEKS Placement Exam. 44)% = 10. 8. 
Since there are 10 people in the set, to get the median, we have to add the 5 th and 6 th values (Kat and Luigi’s annual income) and divide it by 2. 120 (and a percentile rank of 91) has scored as. 1 with a standard deviation of 6. Average = 98 is the lowest score that you can earn on the next test and still achieve an average of at least 85. 55, 59, 59, 60, 61, 63, 64, 64,  Classroom spelling and math tests are criterion-referenced tests. 78, 77, 64 Enrique received scores of 90, 61, 79, 73, and 87. = 47,500. The mean grade for the Math test is 78… Question 53092: "In your Math Class, you have the scores of 68, 82, 87, and 89 on the forst four of five tests. Percentile ranks are provided on this site for Total and … 7/3/2019 Female Pushup Standards. Test Scores XII (Bonus) Consider a set of 99 test scores. For example, z = 1. 4/24/2021 Math; Statistics and Probability; Statistics and Probability questions and answers; Examples: Finding Percentiles 1. STAAR Raw Score Conversion Tables. 38. Schools are ranked and compared within their own state only. Your daughter brings home test scores showing that she scored in the 80 th percentile in math and the 76 th percentile in reading for her grade. The table shows the game scores of five players. Correctly matching 45 of 100 cities 8/6/2021 Mathematics Test 14,666 669 153 28 72 Physics Test 19,955 717 165 23 76 Psychology 880 94 91 77 860 90 87 74 840 86 83 70 820 81 79 67 99 800 77 75 64 97 780 71 72 61 95 760 66 67 58 For that test taker, scores much above or below 68 on a subscore After your child completes the WISC-V, you will receive a numerical score for each index AND an age percentile rank. 0. Learn vocabulary, terms, and more with flashcards, games, and other study tools. 68), but unfortunately the rise isn't necessarily due to smarter students. Joe takes a test in Biology and Math. What is the lowest score that you can earn on the next test and still achieve an average of at least 85? 98. -2-3= Practice 5. BP 140/90. What is the percentile rank of the student who scored an 87? Round to the nearest whole percentile. C. On his five tests for the semester, Andrew earned the following scores: 83, 75, 90, 92, and 85. A … An IQ test score is calculated based on a norm group with an average score of 100 and a standard deviation of 15. You can also visualize this by noting that his 68% was 4% higher than his If she scores 90 points on her eighth test, what is the average of all eight  If you earn the grades of 81, 84, 78, 80 in four tests what should your final Time 1 65 87 Time 2 90 70 84 92 83 85 91 68 72 81 100 75 89 80 81 91 96 84  56, 68, 72, 72, 75, 78, 80, 84, 84, 85, 88, 88, 90, 93, 95, 99, 100, 100. 9 ~ 97 128 -+1. 9 >99 139 76 +2. There are three tests in the math section: Arithmetic, Elementary Algebra, and College-Level Math. BP 160/100. 720. Each test has data values that are normally distributed. Use the old average to figure out the total of the first. In Harold's English class, a recent test has a mean of 74 and a standard deviation of 16. Math teachers had a math related degree and 91 percent  On a history test, the mean score was 82 with a standard deviation of 6. In this method, an IQ score of 100 means that the test-taker's performance on the test is at the median level of performance in the sample of test-takers of about the same age used to … Take unlimited online tests to prepare for Mathematics Olympiad. 5 , median = 91 , mode = 88 The outlier 0 tends to affect the mean as the difference between the means with (78. 
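The rule stated at the start of this fragment (for an even number of values, average the two middle ones) in code, together with mean and mode, applied to the ten grades summed elsewhere on the page as (94+62+88+85+95+90+85+100+85+91)/10, which the scrape truncates to "87." but which is really 87.5:

```python
from statistics import mean, median, mode

# The ten grades from the "(94+62+88+85+95+90+85+100+85+91)/10" fragment on this page.
grades = [94, 62, 88, 85, 95, 90, 85, 100, 85, 91]

print(mean(grades))    # 87.5 (the page truncates this to "87.")
print(median(grades))  # 89.0 -- even count, so the two middle values (88 and 90) are averaged
print(mode(grades))    # 85  -- the most frequent value
```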
4) and without (91… Your ALEKS score is the percentage of the 314 mathematical topics that you have If you complete the ALEKS math placement test during the academic year,  7)The#following#data#give#the#number#of#students#suspended#intheTriOCitySchool# District#foreachofthepast#12weeks. Excellent 739-800 90-100 A 2/1/2019 11/21/2015 From the given information, Let the random variable represents the test scores and it is normally distributed with mean 78 and the standard deviation 11. median Teacher grader tool is showing the percentage and grade for that score. More recent studies suggest that number could now be as high as 3. Here's the formula for calculating a z-score: Here's the same formula written with symbols: Here are some important facts about z-scores: A positive z-score says the data point is above average. Your scores are: average quiz mean = 85, exams =78, 81, 92, homework mean = 85 and your final = 89. But if this test is 20% of y How to convert key stage 2 raw scores to scaled scores The tables show each of the possible raw scores on the 2017 key stage 2 tests. Firstly, enter “Number of Questions and Wrong Answers”. 91-6=85. From the question, the math test scores are 68, 78, 90, and 91. A1C; Blood Pressure; Without an interpretation of your score, at-home medical tests become BP 150/90. A student’s raw score on the SAT Critical Reading section, and SAT Math section, are converted into two scaled … The following results describe the scores from a pre-test (a test given before a chapter is taught) in two math classes. Scores from about 90-110 are considered Average. 48) D. Use the Female Situp Standards scoresheet below to get your score, or to see how many situps you need to do to get a 100% score in this APFT event! To learn the APFT rules and standards for performing a proper Situp, see our “Army PT Test” page. Draw the normal distribution with the proper labels. The scores from an Algebra class were: 80, 95, 72, 90, 85, 79, 69, 78, 82, 71. 50,000 high scorers, measured by Selection  Class 8 Maths MCQs; Class 9 Maths MCQs; Class 10 Maths MCQs; Maths Quizzes and Answers. Step II: 23 12 55 65 78 34 44 61 77 19 09. We can probably do it all on the same example. 80 9 >99 141 77 +2. The mean grade for the Biology test is 88% with a standard deviation of 3. The following test scores below are from the most recent math test. 100 – This is the expected standard for children (and essentially means a ‘pass’). These solutions for Statistics are extremely popular among Class 8 students for Math Statistics Solutions come handy for quickly completing your homework and preparing for exams. 421-425 = 90th percentile. 5 standard deviations above the mean. 49 Because no school can anticipate far in advance that it will be asked to … Assume that a randomly selected subject is given a bone density test. 129 . Find the mean score. v. The studentʹs final exam score is 88 and homework score is 76. Right Tailed Test. 12: The circumference of the circle is also sometimes called: Answer: Perimeter of a circle. 7 Rule 8/3/2021 Armed Forces Qualifying Test score, or AFQT, is the military term for minimal enlistment requirements. . (68+78+90+91+lowestgrade)=85*5. 93 You need to understand that you're solving for the average, which you already know: 90. , the scores in mathematics, language, reading, verbal, and quantitative). 24. 
If you wish to test … Answer (1 of 4): (x1+x2+x3+x4)/4=68 (x1+x2+x3+x4)=68*4=272 (X1+X2+X3+X4)+X5/5=70 (X1+X2+X3+X4)+X5=70*5=350 272+X5=350 X5=350–272=78 therefore students score in the fifth test must be=78 For this reason, scores and percentiles of different SAT Subject Tests™ cannot 78. 60. Notice the inequality points to the right. Algebra I. Once you've taken the test, ask the staff for a ‘request to cancel test scores forms’. From the z score table, the fraction of the data within this score is 0. Score : Math-Aids. Find the percentile rank b) with the outlier 0, the scores are: 0, 90, 88, 96, 92, 88 and 95 mean = 78. Which histogram represents the following test scores? Geography Test 4 Scores (our of 100) 98 82 75 66 62 95 81 72 64 58 92 80 72 62 55 85 76 72 62 55 85 75 67 62 41 Extrapolating, if you score in the 55th percentile, then 54% are below you, and 45% did as well as you or better $(99 - 54 = 45)$. Joe takes a test in Biology and Math. 5 , median = 91 , mode = 88 The outlier 0 tends to affect the mean as the difference between the means with (78. A student receives test scores of 78 and 82. 2. 0, or B average. 1). Grade calculator . 68. If you take a cognitive abilities test and score in the 85th percentile, it would indicate that your score is better than 85% of people who also took the same test. Answer (1 of 8): X_i = the mean of group i of 10 randomly selected students. Gary has taken an aptitude test 8 times and his scores are 96, 98, 98, 105, 36, 87, 95, and 93. 37. Test B: Fred scores 78. What percent of your overall grade does this one test represent? If this test is 50% of your grade then add 99 + 60 an divide by 2, you get 79. Enter number of wrong answers. 91 99 86 54 72 85 97 91 90 66 82 83 78 88 77 80 92 94 98 A)(54, 99) B)33 C)(66, 99) D)44 E The following data represent the scores of 50 students on a statistics exam. 90 x 100 = 90% so the overall percentage mark is 90%. is not reliable d. The exam consists of three ACT sections: Critical Reading, Math and Writing. Here are some quiz questions which children should be able to answer quickly. Here are step-by-step directions: Double the VE score (VE score x 2) Then they add the result of step 1 to the Arithmetic Reasoning (AR) and Math Knowledge (MK) totals (1-3 point per correct answer). The math test scores were: 50, 65, 70, 72,72,78 Solutions for Chapter 7 Problem 4E: Suppose the scores on a recent exam in your statistics class were as follows: 78, 95, 60, 93, 55, 84, 76, 92, 62, 83, 80, 90, 64, 75, 79, 32, 75, 64, 98, 73, 88, 61, 82, 68, 79, 78, 80, 85. 3. 70. You can see this by looking at the image above and noting that 78 is -1 deviation from the mean. 7 rule of standard normal distribution, tells us where most values lie in the given normal distribution. BP 170/100. Your math test scores are 68, 78, 90, and 91. What is the mean of the 99 test scores… The scores of the top quartile of students in a math class were 95, 86, 87, 91, 94, and 87 on the last test. a. Thus, Therefore, the percentage of test score below 60 is 12/12/2017 Math. . Your course grade would tank to a 22. 43. 3 b. 10/13/2020 10. Get instant scores and step-by-step solutions on submission. g. Scores from about 90-110 are considered Average. You will receive a composite score reflecting your overall performance and a sub-score for each content area. 6 D) 76. 3-3. ALEKS scores cannot be interpreted in the same way as exam grades. 6 B) 85. The math test scores were 68, 78, 90, and 91. 
The number of times data occurs in a data set is known as the frequency of data. a) Given that sum of x = 4001 and sum of x^2= 327131 . Free math problem solver answers your algebra, geometry, trigonometry, calculus, and statistics homework questions with step-by-step explanations, just like a math tutor. A score in the 90 th percentile means your child scored better than 90% of students on the Iowa test. That means if I score in the 99th percentile, then 99% of the scores are below mine. Find the probability of the given scores. The mean grade for the Math test is 78% with a standard deviation of 2. The range of possible scores. 30) When a significantly greater number from the lower group gets a test item correctly, this implies that the test item _____. 9 . Above 75: MATH 123, MATH 131, or any course with a prerequisite of MATH 115. Just follow the formula above (2VE + AR + MK) to calculate your AFQT score. 6 6) 12/20/2018 The scores of the students are shown in the dot plot below. My grade in Statistics class is 85%. 5 is the mean. A student’s raw score on the ACT Critical Reading section, and ACT Math section, are converted into two scaled scores … 12/2/2012 IQ scores obtained through rigorous tests delivered by a trained psychologist or psychometrician are usually the most reliable and will provide with information about on how your cognitive abilities compare to the general population other than the IQ score (e. d) Construct the histogram of the data. At 150, your competition pool for entrance to a law school is roughly 56% of the application pool. a. ALEKS scores cannot be interpreted in the same way as exam grades. What is the studentʹs mean score in the class? A) 90. BP 200/120. For our example, the student got a score of 83. 3/1/2014 Each section of the exam is scored separately, however, students will also receive a cumulative score. ST, 91 59, 68, 72, 73, 76, 77, 81, 83, 87, 89, 91, 91 The answer is the median math test score of Mrs. s. Find the mean, median and mode of these scores. 69 +1. SURVEY. 18. 5 c. There are two values that you need to enter. 2%. Use the new average to figure out the total he needs after the 4th score: Sum of 4 scores (4) (90) = 360. 7647 3345 17 = 196. What is the mean of these scores? a. For a detailed list of percentile of the SAT score range, There are two ways to cancel your SAT scores. Ex: Test scores: 88%, 92%, 79%, 80%, 89%, 91% 519. drop by or e-mail me if you see any errors. . 36x2+48+40 = 160. In addition, you can see the The HESI exam is scored on a percentage scale from 0% to 100%. c Carla has three test scores of 84, 90, and 86. This type of tabular data collection is known as an ungrouped frequency table. 5. The score range for each section is from 200 to 800, so the best ACT score possible is 2400. 68 gives us a value of 0. x = StartFraction (76 + 87 + 65 + 88 + 67 + 84 + 77 + 82 + 91 + 85 + 90) Over 11 EndFraction = StartFraction 892 Over 11 EndFraction ≈ 81 Using 81 as the mean, find the variance of her grades rounded to the nearest hundredth. 5/23/2020 Test Scores XII (Bonus) Consider a set of 99 test scores. What score does she need on the next test in order to have an average of 90 on her math tests? = 90 = 90 ×5 = 90×5 352 + x = 450 352 + x - 352 = 450 - 352 x = 98 Check: = 90? Yes! Thus, Sally needs a 98 on her next math test. e distribution of SAT scores in a reference population is normally distributed with mean 500 and standard deviation 100. 
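The frequency-distribution recipe quoted in this fragment (class width = (largest value − smallest value) / number of classes, rounded up) in code form. The eighteen scores used below are the ones listed in one of the fragments above, and the choice of 7 classes simply mirrors the "Make 7 class intervals" instruction that appears on the page:

```python
import math
from collections import Counter

def frequency_distribution(data, num_classes):
    """Group data into num_classes intervals of equal width (width rounded up)."""
    lo, hi = min(data), max(data)
    width = math.ceil((hi - lo) / num_classes)   # CW = (largest - smallest) / classes
    counts = Counter((x - lo) // width for x in data)
    return [(lo + k * width, lo + (k + 1) * width - 1, counts.get(k, 0))
            for k in range(num_classes)]

scores = [56, 68, 72, 72, 75, 78, 80, 84, 84, 85, 88, 88, 90, 93, 95, 99, 100, 100]
for low, high, count in frequency_distribution(scores, 7):
    print(f"{low}-{high}: {count}")
```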
Find the probability that a randomly selected student scored more than $62$ on the exam. 73 9 ~ i >99 . Transcribed image text: Question 3 10 p. What is the lowest score you can earn on the next test and still achieve an average of at  2017. 4 , median = 90 , mode = 88 Without the outlier 0, the scores are: 90, 88, 96, 92, 88 and 95 mean: = 91. Each section of the exam is scored separately, however, students will also receive a cumulative score. Answer: The score which occurs most often is 18. 34. 10/25/2013 Each student’s National Percentile Rank score is calculated by comparing test performance against others within the same age range and grade level. 65 91 85 76 85 87 79 93 82 75 100 70 88 78 83 59 87 69 89 54 74 89 83 80 94 67 77 92 82 70 94 84 96 98 46 70 90 96 88 72 It’s hard to get a feel for this data in this format because it is unorganized. Think about a density curve that consists of two line segments. The z formula confirms this: z = (x – mean)/std dev. 36x2+48+40 = 160. 5 23. g. However, identical scores on different tests do not necessarily have the same meaning. What is the lowest score you can iron on the next test and still achieve an average of at least 85? Standard Scores have a mean (average) of 100. Find the gpa of a student whose z-score … assessment (33. 450-600 = 99th percentile. Average GPA has increased dramatically over the past two decades (in 1990 it was 2. The table below shows the scores 14 students received on a math test. must wait 24 hours between assessments and spend at least 10 hours actively using the learning modules before taking your second assessment. A child who spells 8 of Before you can fully understand your child’s test scores, you need to understand a few basic concepts: the bell curve, mean, and standard deviation. 4/10/2018 IQ classification is the practice by IQ test publishers of labeling IQ score ranges with category names such as "superior" or "average". 13: 90 … 2019/08/23 A dash (-) indicates data was suppressed because fewer than five students in a grade took the test or the district decided to take the high  Each student’s National Percentile Rank score is calculated by comparing test performance against others within the same age range and grade level. 21 − 4 21 = 17 21 = 81 %. 4) and without (91. Calculate the z-scores for each of the following exam grades. Semester 2: 91, 96, 80, 77, 88, 85, 92. Average of the math test score = A = 85 . e. Use the old average to figure out the total of the first. I want to get at least an A- or 90% in the class for the term. 6 C) 80. This article looks at PERT test scoring scales and the scores needed to take postsecondary courses in math and English. the mean of 100 + or – 16), while just over 95% of the population would have IQ test scores between 68 and 132 (i. The average final exam score for the statistics course is 77% and the standard deviation is 8%. Find the mean, median, and mode of the data in the table. 5/19/2020 Solution: The z score for the given data is, z= (85-70)/12=1. 3. Interpret these scores. 4/24/2021 PSAT scores are based on a student's percentile relative to other students taking the same test. Class 1 Class 2 Mean 78 72 Median 65 73 Standard deviation 16 6 What do the pre-test scores seem to say about how much the students in each class already know about the topic of this test? The LSAT is scored from a 120 (lowest) to a 180 (highest). ALEKS Score. Exemplary. The student’s final exam score is 88 and quiz grades are 72,81, 95, 84. The table below is intended for. 56. 
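The "scored more than 62" question at the start of this fragment has lost its setup in the scraping. A standard setup for this exercise, assumed here rather than taken from an adjacent sentence, is that the exam scores are normal with mean 58 and standard deviation 4, so 62 is exactly one standard deviation above the mean and the tail probability is about 0.16:

```python
import math

def normal_tail(x, mean, sd):
    """P(X > x) for X ~ Normal(mean, sd), via the complementary error function."""
    z = (x - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

# Assumed setup (not adjacent in the scraped text): mean 58, standard deviation 4.
print(round(normal_tail(62, 58, 4), 4))   # ~0.1587, i.e. about a 16% chance
```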
Draw a box-and-whisker plot to show the data. b. The raw points available in each content category. is highly reliable Z Score Calculator Z Score to Percentile Calculator Left Tailed Test. Female 2 Mile Run Standards. Anti-spam verification: To avoid this verification in future,  For example, a test taker who obtains a score of 680 on the GRE Psychology Test is likely to have subscores of 68, assuming he or she is similarly able in the  He received 71% and 78% on his first two tests, respectively. 5%. The z value for 78, visually is -1. What is the lowest grade you can get on the next test to still have an average at least 85 Find step-by-step Algebra 2 solutions and your answer to the following textbook question: Your math test scores are 68, 78, 90, and 91. What is the lowest score you can earn on the next test and still achieve an average of at least  Construct a frequency distribution for the data of the grades of 25 students taking Math 11 last (94+62+88+85+95+90+85+100+85+91)/10 = 87. 4/17 What is the square root of 64? 6 8 78 79 ADVERTISMENT Continue quiz … 19). She wants to make an A in the class, which means she needs her average to be a 90. Since most people score within 2 standard deviations of a mean, a standard deviation of 2. What is the lowest score you can iron on the next test and still achieve an average of at 9/7/2015 10/30/2021 2/27/2017 11/22/2016 Examples: Finding Percentiles 1. This cumulative score is the average of all of their subtest scores. Com. To convert each pupil’s raw score to a scaled score, look up the raw score and read across to the appropriate scaled score. 5%. 67 . 101-114 – Any score above 100 (including 115) means that a child has exceeded the expected standard in the test. What is the lowest score Bill can recieve on his third test to pass the class? Assume all tests  The scores were 90, 50, 70, 80, 70, 60, 20, 30, 80, 90, and 20. 9/9/2015 your true ability. English: 80%. What is the lowest grade you can get on the next test to still have an average at least 85. These three scores are called composite scores. In other words, 68% of the norm group has a score The final exam scores in a statistics class were normally distributed with a mean of $58$ and a standard deviation of $4$. For example, if a student got 50 of the 55 questions correct on the HESI math exam, they would receive a 91% on that section of the exam. ADVERTISMENT Continue quiz . Jill scores 680 on the mathematics part of the SAT. Hours, x Scores, y 3 65 5 80 2 60 8 88 2 66 4 78 4 85 5 90 6 90 3 71 2) 1. 133 72 +2. For example, if a student's score is in the 75th percentile, about 75% of a comparison group achieved scores at or below that student's score. normal sample. A scaled score of 100 or more shows the pupil has met the expected standard in the test. Students generally improve their scores on each subsequent assessment, and 90% of students who retest move up by at least one course. Answer to: Your math test scores are 68, 78, 90 and 91. 90 . Find the interquartile range (IQR) by hand. If this test is 25% of your grade take . The standard deviation is a measure of spread, in this case of IQ scores. 3. Average TEAS test scores are reported on your score report. 91 49 86 68 61 64 97 55 90 76 82 83 53 88 75 43 92 94 66 A)25 B)29. This is the score you will see in your child's test results. The Percentile (%) Score indicates the student's performance on given test relative to the other children the same age on who the test was normed. 
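This fragment describes the IQ scale (norm group mean 100, standard deviation 15, with roughly 68% of the norm group between 85 and 115), and an earlier fragment pairs a score of 120 with a percentile rank of 91. That pairing is just the normal CDF evaluated at z = (120 − 100)/15 ≈ 1.33; a quick check, as a sketch only, since real IQ percentiles come from published norm tables rather than a formula:

```python
import math

def percentile_from_score(x, mean, sd):
    """Approximate percentile rank of x under a Normal(mean, sd) model."""
    z = (x - mean) / sd
    return 100 * 0.5 * (1 + math.erf(z / math.sqrt(2)))

print(round(percentile_from_score(120, 100, 15)))  # ~91, matching the figure quoted on the page
```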
You can also find study resources to … For example, a reported score of 150 on the Mathematics: Content Knowledge test will reflect approximately the same level of knowl-edge, regardless of which edition of the test was administered. 68 (smaller than it) Empirical Rule: also known as 68-95-99. Your child’s teachers say, “We are so excited. Your We will use the z table to get this, but not until we convert 74 and 78 to their equivalent z values. What is … The table below will help you to determine where your score falls in relation to the scores of others who took this exam. spatial reasoning, verbal skills, auditory processing, short and long-term memory, etc. 91 0. Level EOC Achievement EOC Scale Score Grade Scale Score Grade . All right. 21. 25. What is the standard deviation of Andrew's scores? Correct answer - On the most recent district-wide math exam, a random sample of students earned the following scores: 95, 45, 37,82, 90, 100, 91,78, 67, Question 11. If the student wants an average (arithmetic mean) of exactly 87, what score must she earn on the fourth test?. A large group of test scores is normally distributed with mean 78. What is the lowest score you can iron on the next test and still achieve an average of at Finally, let's assume that you want to get a 90% in the class. Convert each score to a … 3/15/2018 10/17/2016 Start studying Stats Exam 78%. 5 68+78+90+91+x=422. 0% * 75 + 90% * 25 = 22. A standard devation of 15 means 68% of the norm group has scored between 85 (100 – 15) and 115 (100 + 15). Semester 2: 91, 96, 80, 77, 88, 85, 92. Find the mean for the given sample data. All the chapter wise questions with solutions to help you to revise the complete CBSE syllabus and score more marks in Your board examinations. Step IV: 44 43 23 12 55 65 61 19 09 87 77. 430-440 = 95th percentile. The first goes from the point (0, 1) to the point (0. The average IQ score ranges from 90-110. 3? 4. Their math scores are shown below. 28. What score do I need on the final exam if it is worth 40% of my grade? Using the Final Exam Grade formula above, I want a 90 in the class and I currently have an 85. What is the range of the scores? Do not include the percent sign in your answer. Your online score report shows your score ranges. 90 75 82 78 77 93 88 95 73 69 89 93 78 60 95 88 72 80 94 88 74 a. Round the regression line values to the nearest hundredth. 91% of the data below it. The exam consists of three PSAT sections: Critical Reading, Math and Writing. Going into finals my grade in Economics was 91%. First add the weight of all the class assignments together including your final: w total = 10% + 10% + 20% + 20% + 20% = 100% Your score report indicates: Your score and whether you passed. # # 15####9#####12#11# 7# 6# 9# 10# … Construct a frequency distribution for the data of the grades of 25 students taking Math 11 last (94+62+88+85+95+90+85+100+85+91)/10 = 87. 30. Multiplying both sides by 4: (87+81+88+x) = 348 256+x = 348 x %initialize student = [1 68 45 92; 2 83 54 93; 3 61 67 91; 4 70 66 92; 5 75 68 96; 6 82 67 90; 7 57 65 89; 8 5 69 89; 9 76 62 97; 10 85 52 94]; %part 1 student_4_… View the full answer Transcribed image text : The following table represents the students' scores recorded by … C = Your class grade going into the final; Example Final Exam Grade Calculation. Enter number of questions. Just outside of that range is the Low Average range (80-89) and the High Average range (110-119). e. 
They show how much your score can change with repeated testing, even if your skill level remains the same. MY, 68. Focus on math practices A rate compares quantities with unlike units of measure. Score: 160. 64. now we jus need to solve backwards for the lowestgrade. 80 9 Answer (1 of 13): I need more information. Input: 32 55 65 78 90 34 21 44 61 91 77. 23. 97 . 8 4/3/2019 >99 142 78 +2. What is the lowest score you can earn on the next test and still achieve an average of at least  Just follow the below steps. C. The Overall IQ Score is found by converting the Raw Score (the total number of points earned on each subtest) into a standard score, with a mean of 100 and a standard deviation of 15 (see bell curve below). Decision Rule: Reject H 0 if t. Go Lab scores Go Homework scores line for the given data. For example, if a student's score is in the 75th percentile, about 75% of a comparison group achieved scores at or below that student’s score. Which measure of central tendency is … On a mathematics test, the mean score was 78 with a standard deviation of 7. 76 with a standard deviation of 0. 515 to get a maximum score of 150, so that reading and. Score: 160. 74 in the table and find that it has 95. 68, 78, 90, a n d 91. It is easy to remember the definition of a mode since it has the word most in 7/7/2019 the scores of a given percentage of individuals. View the full answer. What would be the predicted score for a history student who slept 15 hours the previous night? Is this a reasonable question? Round your predicted score to the nearest whole number. What this means is that the difference between a 150 to a 160 is huge compared to the difference between a 170 to a 180. Learn vocabulary, terms, and more with flashcards, games, and other study tools. 82. A score of 150 on the Mathematics: Content Standard score of 84 or lower fall below the normal range and scores of 116 or higher fall above normal range. The total income of the people in the restaurant is $506,000, with a mean income of$50,600. This means now that the person scored 2. 8/31/2021 High School. You got a zero on the test (you would not be able to score 68% in my class because everything below 70% is automatically zero on any single task or test). ST, 91 59, 68, 72, 73, 76, 77, 81, 83, 87, 89, 91, 91 The answer is the median math test score of Mrs. 82, 1000, 0, 1) . The score range for each section is from 200 to 800, so the best PSAT score possible is 2400. Greater than 1. For example: TEAS Composite Score: 75%. Some assessments will vary slightly. This cumulative score … The best way to deal with changing averages is to use the. The student's final exam score is 88 and homework score is 76. 82, 23, 59, 94, 70, 26, 32, 83, 87, 94, 32. This confidence interval calculator estimates the margin of error/accuracy of a survey by considering its sample & population sizes and a given percentage of choosing specific choice. Solving for the average is simple: Add up all of the exam scores and divide that number by the number of exams … A student scored 89, 90, 92, 96,91, 93 and 92 in his math quizzes. 77. Create a stemplot for these test scores using each 10s value twice on the stem. What is the lowest grade you can get on the next test to still have an average at least 85 This index is calculated by doubling the sum of a student's Reading, Writing and Language, and Math Test scores. 
If you have taken the same test or other Praxis tests over the last 10 years, your score report lists the highest score you earned on each test … 2014/03/18 The table below shows the scores 14 students received on a math test. The reputation and high standards of the TOEFL iBT ® test mean that your TOEFL ® scores help you stand out to admissions officers and show you have what it takes to be great. (You did really well!) Redesigned Math… 20. 5) outlier is relatively large and therefore the median and the mode would … You can put this solution on YOUR website! A student has earned scores of 87, 81, and 88 on the first 3 of 4 tests. Sketch the density curve. 33 6 62 70 z Math 2. 5 D)29 E)28. To get a grade of Upper C, the average of the first five tests scores … 2017/08/17 Your math test scores are 68,78,90, and 91. The heights of students in inches in Jim's math class are. ACCUPLACER Math Scores. The basic score on any test is the raw score, which is simply the number of questions correct. Median: The. Step V: 55 44 43 23 12 61 19 09 87 77 … 7/16/2021 Mathematics Solutions Solutions for Class 8 Math Chapter 5 Statistics are provided here with simple step-by-step explanations. is 50 th percentile. A scaled score of 100 or more shows the pupil has met the expected standard in the test. Primary school students are suggested to memorise tables 1 to 10 for quick calculations. The test scores of 19 students are listed below. To construct a frequency distribution, • Compute the class width CW = Largest data value−smallest 10/23/2021 4/8/2020 Math; Statistics and Probability; Statistics and Probability questions and answers; 11. 25. > c. 4 d. Which statement about Christopher’s performance is correct? Example 6) In Harold's math class, a recent test has a mean of 70 and a standard deviation of 8. MATH SCORES. 115 – This is the highest score a child can get in the KS1 SATs. 98 131 -+2 . 67 3 72 80 z English. Verbal Composite (VE) = Word Knowledge (WK 4/22/2021 Your Score Range Test scores are estimates of your educational development. 4 Juwan took both exams. 5%. People often refer this term to “minimum ASVAB score”. 97 127 68 +1. Let X be the continuous random variable. 12/19/2021 TOEFL iBT. What is the mean of the scores? This video provides an example of how to determine a needed test score to have a specific average of 5 tests. 85. 12/12/2017 RD Sharma Solutions for Class 8 Chapter 23 Data Handling - I (Classification and Tabulation of Data), which provides solutions to each and every exercise covered in this chapter. 8 Economically Disadvantaged Students 57,075 44,978 78. An even number, 90 is also a unitary perfect number, semiperfect number, pronic number, harshad number, and Perrin num You'll have to really stretch your brain to figure out some of these easy, funny, and hard riddles for grown-ups! RD. That is 4 points less than 90 so you have -4 points. … So if I have an 85 in the class, I want a 90, and the final exam is worth 40%, I need a 97. Median = (46,000 + 49,000) / 2= 95,000/2. e. Imagood Student 100 Main Street Apt 2 Anytown, ST 00000-0000. 13 . NormalCDF (1. A negative z-score … . is not valid c. That is, and . 51. My final exam score … If you want to register for MATH 131, MATH 123, or any other class with a MATH 115 prerequisite, then you should work in the Prep for Calculus module and retake ALEKS to earn a score above 75. 
Your son earned a score of 85 on the reading test!” If your child earns a standard score of 85 (SS = … 6/3/2021 Christopher looked at his quiz scores shown below for the first and second semester of his Algebra class. 0344. 75. I'd like to know more, on my first ASVAB or (AFQT test) test I scored a 53, percentile my scores where. Find your blood pressure or A1C and we’ll tell you what it means. For a standard normal distribution, the mean is always 8/21/2021 5/10/2018 Once you have a scaled score, you can estimate your score percentile here: 400-404 = 50th percentile. anthony • 3 years ago. I'd like to know more, on my first ASVAB or (AFQT test) test I scored a 53, percentile my scores where. Transcribed image text: Below are the test scores of 20 random students of Math in the Modern World Midterm Exams. Your results are turned into an AFQT percentile Percentiles can be a mite confusing, so this is how I remember it: there is no 100th percentile. A score in the 90 th percentile means your child scored better than 90% of students on the Iowa test… 2021/06/19 Literature, History, and Math Subject Tests ; 720, 82, 73 ; 710, 78, 68 ; 700, 73, 63 ; 690, 70, 59  Math Algebra He wants an average of 85 or better overall. Start studying Standard Deviation Assignment and Quiz 90%. Reading: 85%. is very valid b. A final exam in Math 160 has a mean of 73 with standard deviation 7. The ACCUPLACER Math section is graded on a scale that ranges between 20 and 120 points. 21/21 = 100%. 5 d. Your scores provide: a true reflection of your abilities in the way they're used in an actual classroom. (a) The percentage of the test scores below 60 is, For the Z score:. Definition: The mode of a set of data is the value in the set that occurs most often. Above 75: MATH 123, MATH 131, or any course with a prerequisite of MATH … Your math test scores are 68 78 90 and 91 what is the lowest score you can iron on the next test and still achieve an average of at least 85. In the Math class the mean score was 76 with a standard deviation of 3. Click on the link to download free PDF 10/18/2011 12/12/2015 Example 1: Sally receives the following scores on her math tests: 78, 92, 83, 99. -. You’ll find multiplication tables 1 to 100 on this web page. On the first five tests, Rosario received scores of. The scores on a math test were as follows: 59, 82, 67, 85, 75, 71, 77, 68, 91, 87, 83, 61, 95. This is helpful 0  Answer to: Your math test scores are 68, 78, 90 and 91. Test scores for a calculus class had a mean of 69 with a standard deviation of 3. The current scoring method for all IQ tests is the "deviation IQ". Check your scores. The Nuremberg trials ( German: Nürnberger Prozesse) were a series of military tribunals held following World War II by the Allied forces under international law and the laws of war. 60 9 >99 138 75 +2. What happens if, instead of 20 students, 200 students took the same test. 99 c. 9. 6 2. Which exam did he do  2014. Standardized tests taken as part … 6) Test scores for a statistics class had a mean of 79 with a standard deviation of 4. From the table, this is !!=!1. If Harold earned a score of 78 on both tests, then in which subject is his performance better? Find the z-score for each test: Solutions for Chapter 7 Problem 4E: Suppose the scores on a recent exam in your statistics class were as follows: 78, 95, 60, 93, 55, 84, 76, 92, 62, 83, 80, 90, 64, 75, 79, 32, 75, 64, 98, 73, 88, 61, 82, 68, 79, 78, 80, 85. Did Fred do better or worse on the second test? 
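The fragment above contains a truncated MATLAB snippet ("%initialize student = [1 68 45 92; …]; %part 1 student_4_…"); whatever it went on to compute is cut off. The following is only a hedged Python re-creation of the visible part: the 10-row table of student IDs and three scores exactly as printed (including the odd "8 5" row), plus the most obvious guess at what "student_4" was heading toward, namely pulling out student 4's scores:

```python
# Rows copied from the truncated MATLAB fragment verbatim
# (row 8 really is printed as "8 5 69 89" in the scraped text).
students = [
    (1, 68, 45, 92), (2, 83, 54, 93), (3, 61, 67, 91), (4, 70, 66, 92),
    (5, 75, 68, 96), (6, 82, 67, 90), (7, 57, 65, 89), (8,  5, 69, 89),
    (9, 76, 62, 97), (10, 85, 52, 94),
]

# A guess at "student_4_...": the three scores belonging to student 4.
student_4 = next(row[1:] for row in students if row[0] == 4)
print(student_4)            # (70, 66, 92)
print(sum(student_4) / 3)   # that student's average, 76.0
```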
8/30/2018 1/4/2022 When z-score is positive, the x-value is greater than the mean; When z-score is negative, the x-value is less than the mean; When z-score is equal to 0, the x-value is equal to the mean; The empirical rule, or the 68-95-99. Calculate the z - score for each test. If 24 students are randomly selected, find the probability that the mean of their test scores is less than 70. Stanines are used to represent standardized test results by ranking student … b) Find the percentile rank for a score of 78 on this test. x = StartFraction (76 + 87 + 65 + 88 + 67 + 84 + 77 + 82 + 91 + 85 + 90… Solution: A) The given 20 observations are 95,83,76,63,97,89,83,75,68,91,99,55,58,79,83,85,90,89,72,97 Mean of this observation are Mean= Mean=81. fair and unbiased scores from a centralized 4/8/2021 How to convert key stage 2 raw scores to scaled scores . 8 GPA goal. Female Situp Standards. You can find STAAR raw score conversion tables listed below. Unless otherwise specified, round your answer to one more decimal place than that used for the observations. 81. Answer to: Your math test scores are 68, 78, 90 and 91. Age percentile ranks are based on data collected from 2,200 children. Scores. They use this chart to turn a raw score (the number of questions answered correctly) into a scaled score from 120-180. The mean grade on the Government test was 72% with a standard deviation of 5%. The math test scores were: 50, 65, 70, 72, 72, 78, 80, 82, 84, 84, 85, 86, 88,. Math: 65%. Mean ____ Median ____ Mode ______ Range ____. To get the lowest possible score, x x, in the next test in order to get See full answer below. 9535, which mean 95. 10/22/2018 My Medical Score gives you straight answers to a variety of medical test scores. Shortcut keys: Type "A" to increase by one and "W" to decrease by one. 33% from a test, which corresponds to B grade. My Score = 95 Math Test Mean = 80 History 2 5 90 80 z Math. 2 and standard deviation 4. Which statement about Christopher’s performance is correct? It is just basic math. Using the data above, construct a scatter chart using EXCEL for midterm versus final exam grades and add a linear trend line. Home. 88 What value goes in the fourth row of this frequency table? a. 6) A student receives test scores of 62, 83, and 91. Students can solve the complex multiplication tables using the tables given here. Name. ) Grade C or better in Math 105, 108, or 116; Score of 5 or higher on the IB Mathematical Studies-SL; (If taken May 2020 or prior) Score … Example: If Jon scores a 92 on a test with a mean of 83 and a standard deviation of 6, what is his z-score. Notice the inequality points to the left. 61%. 100 + or – 2 x 16), and 99. a. Just follow the formula above (2VE + AR + MK) to calculate your AFQT score. If your grade needed is over 100 you may need update your desired grade. ALEKS scores of 30 or higher reflect adequate preparation for college-level math. Q. BP 100/70. 574 548 340 350 589 63 67 68 75 78 85 87 90 94 95 28)The normal annual precipitation (in inches) 4. 1 Answer Leland Adriano Alejandro May 13, 2016 The student must get #=79# Explanation: Let #x# be the unknown score … Solved: Your math test scores are 68, 78, 90, and 91. 91% is the percentile. What is the minimum score he must get on the last test in order to achieve that average? Statistics. The math test scores were: 50, 65, 70, 72, 72, 78, 80, 82, 84, 84, 85, 86, 88, 88, 90, 94, 96, 98, 98, 99. 10/28/2008 20. What score on the 4th test would bring Lenny's average up to exactly 90? Math 7. 
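This fragment invokes the empirical (68-95-99.7) rule. Applied to the test quoted repeatedly on this page with mean 78 and standard deviation 7, it says roughly 68% of scores fall between 71 and 85, 95% between 64 and 92, and 99.7% between 57 and 99. A two-line sketch:

```python
mean, sd = 78, 7   # the math test quoted repeatedly on this page

for k, pct in [(1, "68%"), (2, "95%"), (3, "99.7%")]:
    print(f"about {pct} of scores between {mean - k * sd} and {mean + k * sd}")
# about 68% of scores between 71 and 85
# about 95% of scores between 64 and 92
# about 99.7% of scores between 57 and 99
```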
Adimas found the mean of her 11 math test scores for the first semester. This cumulative score is the average of all of their subtest scores. He scores an 85% on his Biology test and an 80% on his Math test. And the MCAT, the mean score is 25. For example, if a student got a 91% in math, 85% in grammar, 82% in reading comprehension, and a 88% in vocabulary, their cumulative score would be 86. Underneath you'll find a full grading scale table. BP 90 Tables 1 to 100. 27 9 . BP 110/70. 25%. What is the average score of the top six students? 90 68+78+90+91+x=85 Then solve for x. The range of possible scores. Mean score = 70, SD = 8. Here is the formula for AFQT score calculation: AFQT = Arithmetic Reasoning (AR) + Math Knowledge (MK) + Verbal Composite (VE) x 2. -2 +2 -4 = -4. Score lateral. z = (78 – 82)/ 4 = -4/4 = -1. a) Z-Table: Measures the area to the left of a value. 5 Your score report indicates: Your score and whether you passed. The table below shows the scores on a math test. fall. It is impossible to calculate your AFQT score using a practice test because each Mathematics Knowledge and Arithmetic Reasoning question is worth 1-3 points  Jane has the following grades on her first four math tests. In order to have a mean of 90, what does Jane need to make on her fifth math test? A 90. 68. 410-415 = 70th percentile. 73. Mathematics Practice Test Page 3 Question 7 The perimeter of the shape is A: 47cm B: 72cm C: 69cm D: 94cm E: Not enough information to find perimeter Question 8 If the length of the shorter arc AB is 22cm and C is the centre of the circle then the circumference of the circle is: Scores usually range from 78% to 90%. Your ELA (English Language Arts) score represents your overall performance on the English, reading, 90% 82% 58% 52% 34% 46% 75% 68% 91% 84% 13. Round to the nearest tenth when necessary. Answer by tinbar (133) ( Show Source ): You can put this solution on YOUR website! (68+78+90+91+lowestgrade)/5=85. Scores on a recent national statistics exam … Redesigned Math: 27th percentile. 8200. If you have taken the same test or other Praxis tests over the last 10 years, your score report lists the highest score you earned on each test taken. In 2009, the average US national high school GPA was a 3. Each test is worth 20% of the final grade, the final exam is 25% of the final grade, and the homework grade is 15% of the final grade. So, the z value for 78 12/29/2021 Math 2311 – Test 2 Review 1 Math 2311 Test 2 Review Know all definitions! 1. Pupils need to have a raw score of 3 marks to be awarded the minimum  2011/08/07 Example 1: A student received the following grades on quizzes in a to make on the next exam for your overall mean to be at least 90? Question 336168: Your math tests scores are 68 78 90 and 91. MATH SCORES. Exemplary scores generally indicate a very high level of overall academic preparedness necessary to support learning of health sciences-related content. . The range of the middle 50% of scores on that test. Score lateral. 3, 7, 9, 13, 18, 18, 24. Learn vocabulary, terms, and more with flashcards, games, and other study tools. Your math test scores are 68,78,90, and 91. Anything within 10 points from 100 is considered Average. Then, find the mean, median, and mode of the data in the table and Kevin's score… How many points will you score? Test your mathematical skills by taking this blam quiz! 1/17 What is 11 x 11? 6 8 3/17 What is 99 minus 35? 64 66 62 68 ADVERTISMENT Continue quiz . The second goes from (0. 
The test scores go from 1 to 99 such that the first score is 1/100, the second is 2/100, and so on until the last score is 99/100. Some people mistakenly spell it ninty, dropping the “e,” but this spelling is incorrect. Draw the normal distribution with the proper labels. 35% of the area under the curve is to the left of 1. 25 or better d) 98 or better . All four tests count the same (25% each) in determining the class grade. In the problem above, 18 is the mode. 1/27/2022 Final exam scores of twelve randomly selected male statistics students and thirteen randomly selected female statistics students are shown below. --,-­ 99 132 71 +2. 3% of the total), seventh grade grades for core math score by 1. 56 %. Those test scores are normally distributed with a mean of 0 and a standard deviation of 1. You can interpret a raw score only in terms of a particular set of test questions. Note that the Math … A charter flight from Milan to Amritsar had 125 of 160 adults onboard test positive for coronavirus upon arrival. 74 +2. The last +10 Scaled Score points bumps you up by a mere +2%. This tells you you are down a total of 4 points below 90, and to make that up on the fourth test you need to score 4 more than 90 so you need to score 90 + 4 or 94 points. 85-99 – Any child that is awarded a scaled score … A z-score measures exactly how many standard deviations above or below the mean a data point is. 68. Step I: 12 32 55 65 78 90 34 44 61 77 19. Mean score = 66, SD = 6. Vor 22 StundenRobert Falco. 87 . v. Given the significance of the Grade 4 Pass, we suspect that the student will be required to demonstrate over 50% of the marks for the level 4 questions in the Foundation Tier as well as achieving 68%+ in the Foundation Tier examination. 10/1/2015 10/13/2021 Understanding Your Child’s Test Scores Assume you attend an IEP meeting for your child with a learning disability. 5 C)26. On a mathematics test, the mean score was 78 with a standard deviation of 7. --­ >99 136 . What grade did you earn if your z-score was -2. Make sure you always get your answers right in … Preview this quiz on Quizizz. Multiplication tables 1 to 100 will include all the multiples of numbers from 1 to 100. This will most likely be attributable to a 68 – 85% in the Foundation Tier examinations or 23%+ in the Higher Tier examinations. e. 66. Since you know the values of the first three exams, and you know what your final value needs to be, just set up the problem like you would any time you're averaging something. S. Let x = score of fourth test then (87+81+88+x)/4 = 87. How is this possible? Featured Reviews Featured Insights Every so often we see a story of an incredible number of people on a The table below shows the scores of a group of students on a 10-point quiz. This problem really asked us to find the mode of a set of 7 numbers. Colleges know this, and they get score ranges along with scores so they can consider scores in context. The trials were most notable for the prosecution of prominent members of the political, military, judicial, and economic leadership of Nazi Germany The number 90 is spelled ninety. Your GPA Scale and You. Suppose that after the first exam you compute the z-score that corresponds to your exam score on Exam 1. Find the indicated z score separate upper 20. 00 9 . 140 -+2. 31, 42, 46, 47, 51, 51, 68. 70 75 80 85 90 95 100 Test Score (Percent) Question: A mathematics professor created a test that was Enter your final exam weight. s. 
To figure out your percentile, find your score in the Overall Score (Composite) column and then check the number in the Composite Percentile Rank. 6 GPA, you can see how that will fall short of your 3. A student's raw score on the SAT Critical Reading section, and SAT Math  It was only in the 90ls that the findings about the importance of skill formation during early childhood have reshaped the Brazilian daycare system towards more  2012. 8944. 137 -+2. To see all grading options enter number of questions = 100 and see the grade options table below. Let the lowest score needed to achieve an average of 85 be x. He scored 172 on the LSAT and 37 on the MCAT. 68% of the people have a test score between 95% of the people have a test score between % and % and On a standardized test, a score of 91 … 2012/12/21 MY, 68. This means 89. 4, 1) to (0. Some … 2018/07/20 Privacy: Your email address will only be used for sending these notifications. Lee's class is 79%. Find the range. To get his sum from 264 to 360, Lenny needs to score … The formula for the mean of a population is μ = ∑x N μ = ∑ x N The formula for the mean of a sample is ¯x = ∑x n x ¯ = ∑ x n Both of these formulas use the same mathematical process: find the sum of the data values and divide by the total. X is normally distributed with mean = 75, and standard deviation = \frac{15}{\sqrt{10 b) with the outlier 0, the scores are: 0, 90, 88, 96, 92, 88 and 95 mean = 78. H 1: parameter > value. 845*500=422. 35 B) …. b. Also, you can use “Wrong” button add false answers. 7% of the population would score between 52 and 148. 8. A student receives test scores of 62, 83, and 91. 5 x=95. As I recall the largest change I saw for any particular student was 4 points on Question #6. 2)The students in Hugh Logan's math class took the Scholastic Aptitude Test. For example, if your score is 100, your percentile rank would be 48 for RN Confidence Interval Calculator. and Math) score represents your overall performance on the science and math tests. 53 9 >99 . 8, 2) in the xy plane. 39 48 55 63 66 68 8/31/2021 Play this game to review Mathematics. H 1: parameter not equal So here, number 2. This number indicates that 75% of students scored at or below 1200, while 25% of students scored above 1200. To convert each pupil’s raw score to a scaled score, look up the raw score and read across to the appropriate scaled score. 6 X 25 = you get 89. These solutions can help the students to understand the concepts covered in a more effective way. 99 x 75 + . Semester 1: 78, 91, 88, 83, 94. Second, the military uses the following formula to calculate your AFQT score: 2VE + AR + MK. However, if the final grade is rounded, the new average we have to achieve is actually an 84. general usage by … Battery Composite Score: This score is a total of all the subtest scores (i. To determine the lowest score you can earn on the next test and still achieve an  Question 336168: Your math tests scores are 68 78 90 and 91. 24. The tables show each of thepossible raw scores on the 2018 key stage 2 tests. Working out the calculation below. 48) D. 3 scores: Sum of first 4 scores = (3) (88) = 264. Solution to Example 6 mean. Exemplary: This is the highest score category, typically 91% and above. Find the mean score… SATs scores for KS1. The test scores of 19 students are listed below. b) Find the first, the second and the third quartiles of the data. a) If the minimum average for B+ is 87, did you get B+ at the end of the semester? 
b) What if the quiz mean was not given, but the quiz scores are given as 10, 12, 8, 2, 9, 7 (out of 12 points each), and only best 4 quizzes count towards your grade? ACT scores are based on a student's percentile relative to other students taking the same test. There is one more test during the semester. r­ 98 130 70 +2. 65, 80, 78, 88, 70, 90, 72, 84, 94, 76,  Section: Math Section 1. Percentage Order. com provides you Free PDF download of NCERT Exemplar of Class 9 Maths Chapter 14 Statistics and Probability solved by expert teachers as per NCERT (CBSE) Book guidelines. A 180 is nearly a 100th percentile score. High To Low Low To High. •Fall EOC test administrations included Biology, English 2, NC Math 1, White 84,058 77,084 91. 9 . Compare Math: mean = 70 x = 62 s = 6 English: mean = 80 x = 72 s = 3 1. Complete the form, and hand it over to the test … The lowest score you can earn on the next test  Your math test scores are 68, 78, 90, and 91. σ2 = Christopher looked at his quiz scores shown below for the first and second semester of his Algebra class. In your class, you have scores of 73,85,76, and 92 on the first four of five tests. BP 130/80. 5% F. In other words, a 100-point improvement—which is very manageable with some smart studying—could transform your score from poor to good. (-1 SD) of the Mean represents 68… Algebra math test scores are 68 ,78,90and 91 what is the lowest score you can earn on the next test and still achieve an average of at least 85 👍 👎 👁 ℹ️ 🚩 Adiba Aug 8, 2014 the total points are now 68+78+90+91 = 327 To average 85 on 5 tests, you need 5*85 = 425 So, how many points short is that? 👍 👎 🚩 Steve Aug 8, 2014 An average test score is the sum of all the scores on an assessment divided by the number of test-takers. 68% of the people have a test score between 95% of the people have a test score between % and % and On a standardized test, a score of 91 falls exactly 1. ex What SAT math score is at the 90th percentile? (Mean math score is 518; standard deviation 115. well or better than 91 percent of people in the. Read more about the ACCUPLACER math scoring scale. Sarah's scores on tests were 79, 75, 82, 90, 73, 82, 78,. 15. Since the number of questions on your test doesn't translate evenly into those 61 possible scores, the test makers use what's called a Conversion Chart. 5. 5%. 3. . Lee's class is 79%. The range of the middle 50% of scores on that test. In a normal distribution 68% of the scores fall between 72  The exam consists of three PSAT sections: Critical Reading, Math and Writing. a. Gary has taken an aptitude test 8 times and his scores are 96, 98, 98, 105, 36, 87, 95, and 93. 5. What is the lowest score you can earn on the next test and still achieve an average of at least 85? for an A grade? Round your answer to the nearest whole number. 99 . Tips and Tricks. So the lowest score is 98. The increase to the average of Exam II after the adjustment was 2 points. Suppose a student gets a 84 on the statistics test and a 96 on the calculus test. As a result, you will get final grade results in Percentage (%), Letter, and in Fraction format. What is the regression model and R-square? If a student scores 85 on the midterm , What would you estimate her grade on the final exam to be? The scores of the top quartile of students in a math class were 95, 86, 87, 91, 94, and 87 on the last test. 900 seconds. To determine what you need to get on your final exam in order to get a 90% in the class, let's do some math using the formula above. 
The grades on a statistics midterm for a high school are normally distributed with a mean of 81 and a standard deviation of 6. 8. A student gets a test back with a score of 78 on it. In the History class the mean score was an 82 with a standard deviation of 5. Two Tailed Test. 7. 9 . Now add up the three differences:. 78. 31, 42, 46, 47, 51, 51, 68
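Two calculations recur throughout the worked examples above: finding the lowest score needed on the next test to reach a target average, and converting a raw test score into a z-score. Below is a minimal Python sketch of both; the function names are just illustrative, and the sample numbers are taken from the examples quoted above (scores 68, 78, 90, 91 with a target average of 85, and z = (78 − 82)/4).

```python
# Minimal sketch (not from any of the quoted sources) of the two calculations
# that recur above: the lowest next-test score for a target average, and a z-score.

def lowest_next_score(scores, target_average):
    """Smallest score on the next test that still yields the target average."""
    return target_average * (len(scores) + 1) - sum(scores)

def z_score(x, mean, std_dev):
    """How many standard deviations x lies above (+) or below (-) the mean."""
    return (x - mean) / std_dev

print(lowest_next_score([68, 78, 90, 91], 85))  # 98, matching the worked example
print(z_score(78, mean=82, std_dev=4))          # -1.0, matching the worked example
```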
2022-06-28 01:03:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4853192865848541, "perplexity": 786.4727555383249}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103344783.24/warc/CC-MAIN-20220627225823-20220628015823-00154.warc.gz"}
https://civicrm.stackexchange.com/questions/19839/how-do-i-test-online-contribution-receipts
How do I test Online Contribution Receipts?

How do I test the edits I make to online contribution receipts? Is making a test contribution using the online form the only way, or the recommended way?

I would recommend you only TEST LIVE; just delete the contribution that you have created afterwards. A $1 LIVE transaction that you can follow all the way through into your Payment Processor will guarantee you that things are working as expected.

Added: You'll have tokens for total_amount etc., right? This is how you test it. You can also plug in TEST credentials for your payment processor in the LIVE payment processor config. Then you can test LIVE CiviCRM pathways with fake VISA cards.

• Totally agree. A real $1 test proves way more. – petednz - fuzion Aug 8 '17 at 6:29
2019-10-16 15:52:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2064676731824875, "perplexity": 3022.5420616169195}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986668994.39/warc/CC-MAIN-20191016135759-20191016163259-00085.warc.gz"}
https://www.dsprelated.com/freebooks/filters/Complex_Resonator.html
### Complex Resonator

Normally when we need a resonator, we think immediately of the two-pole resonator. However, there is also a complex one-pole resonator having the transfer function

$H(z) = \dfrac{g}{1 - p z^{-1}}$   (B.6)

where $p$ is the single complex pole, and $g$ is a scale factor. In the time domain, the complex one-pole resonator is implemented as

$y(n) = g\, x(n) + p\, y(n-1).$

Since $p$ is complex, the output $y(n)$ is generally complex even when the input $x(n)$ is real. Since the impulse response is the inverse z transform of the transfer function, we can write down the impulse response of the complex one-pole resonator by recognizing Eq.(B.6) as the closed-form sum of an infinite geometric series, yielding

$h(n) = g\, p^{n}\, u(n),$

where, as always, $u(n)$ denotes the unit step function: $u(n) = 1$ for $n \ge 0$, and $u(n) = 0$ for $n < 0$.

Thus, the impulse response is simply a scale factor $g$ times the geometric sequence $p^{n}$ with the pole $p$ as its "term ratio". In general, $h(n)$ is a sampled, exponentially decaying sinusoid at the radian frequency $\omega_c$ given by the pole angle ($\angle p = \omega_c T$, with $T$ the sampling interval). By setting $p$ somewhere on the unit circle to get

$p = e^{j \omega_c T},$

we obtain a complex sinusoidal oscillator at radian frequency $\omega_c$ rad/sec. If we like, we can extract the real and imaginary parts separately to create both a sine-wave and a cosine-wave output: for real $g$,

$\mathrm{re}\{h(n)\} = g \cos(\omega_c n T), \qquad \mathrm{im}\{h(n)\} = g \sin(\omega_c n T).$

These may be called phase-quadrature sinusoids, since their phases differ by 90 degrees. The phase quadrature relationship for two sinusoids means that they can be regarded as the real and imaginary parts of a complex sinusoid. By allowing $g$ to be complex, $g = A e^{j\phi}$, we can arbitrarily set both the amplitude and phase of this phase-quadrature oscillator:

$\mathrm{re}\{h(n)\} = A \cos(\omega_c n T + \phi), \qquad \mathrm{im}\{h(n)\} = A \sin(\omega_c n T + \phi).$

The frequency response of the complex one-pole resonator differs from that of the two-pole real resonator in that the resonance occurs only for one positive or negative frequency $\omega_c$, but not both. As a result, the resonance frequency is also the frequency where the peak-gain occurs; this is only true in general for the complex one-pole resonator. In particular, the peak gain of a real two-pole filter does not occur exactly at resonance, except when the pole angle is $0$, $\pi/2$, or $\pi$ (i.e., at dc, one-fourth the sampling rate, or half the sampling rate). See §B.6 for more on peak-gain versus resonance-gain (and how to normalize them in practice).

#### Two-Pole Partial Fraction Expansion

Note that every real two-pole resonator can be broken up into a sum of two complex one-pole resonators:

$\dfrac{g}{(1 - p z^{-1})(1 - \bar{p} z^{-1})} = \dfrac{r_1}{1 - p z^{-1}} + \dfrac{r_2}{1 - \bar{p} z^{-1}}$   (B.7)

where $r_1$ and $r_2$ are constants (generally complex). In this "parallel one-pole" form, it can be seen that the peak gain is no longer equal to the resonance gain, since each one-pole frequency response is "tilted" near resonance by being summed with the "skirt" of the other one-pole resonator, as illustrated in Fig.B.9. This interaction between the positive- and negative-frequency poles is minimized by making the resonance sharper ($|p| \to 1$), and by separating the pole frequencies $\angle p$ and $-\angle p$. The greatest separation occurs when the resonance frequency is at one-fourth the sampling rate ($\angle p = \pi/2$). However, low-frequency resonances, which are by far the most common in audio work, suffer from significant overlapping of the positive- and negative-frequency poles.

To show Eq.(B.7) is always true, let's solve in general for $r_1$ and $r_2$ given $g$ and $p$. Recombining the right-hand side over a common denominator and equating numerators gives

$g = r_1 (1 - \bar{p} z^{-1}) + r_2 (1 - p z^{-1}),$

which implies

$r_1 + r_2 = g \qquad \text{and} \qquad r_1 \bar{p} + r_2 p = 0.$

The solution is easily found to be

$r_1 = \dfrac{g\, p}{p - \bar{p}}, \qquad r_2 = \dfrac{-g\, \bar{p}}{p - \bar{p}} = \bar{r}_1,$

where we have assumed $\mathrm{im}\{p\} \neq 0$, as necessary to have a resonator in the first place. Breaking up the two-pole real resonator into a parallel sum of two complex one-pole resonators is a simple example of a partial fraction expansion (PFE) (discussed more fully in §6.8). Note that the inverse z transform of a sum of one-pole transfer functions can be easily written down by inspection.
In particular, the impulse response of the PFE of the two-pole resonator (see Eq.(B.7)) is clearly

$h(n) = \left(r_1 p^{n} + r_2 \bar{p}^{\,n}\right) u(n).$

Since $h(n)$ is real, we must have $r_2 = \bar{r}_1$, as we found above without assuming it. If $|p| = 1$, then $h(n)$ is a real sinusoid created by the sum of two complex sinusoids spinning in opposite directions on the unit circle.
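As a quick numerical check on the relations above, here is a small NumPy sketch (not part of the original text): it runs the one-pole recursion $y(n) = g x(n) + p y(n-1)$, extracts the phase-quadrature outputs when the pole sits on the unit circle, and verifies that the residues $r_1$ and $r_2 = \bar{r}_1$ sum to a real two-pole impulse response. The sampling rate, resonance frequency, and pole radius are arbitrary illustrative choices, and the function name is made up for this sketch.

```python
import numpy as np

def complex_one_pole(x, p, g=1.0):
    """Run the complex one-pole resonator y(n) = g*x(n) + p*y(n-1)."""
    y = np.zeros(len(x), dtype=complex)
    state = 0j
    for n, xn in enumerate(x):
        state = g * xn + p * state
        y[n] = state
    return y

fs = 8000.0                    # sampling rate in Hz (arbitrary choice)
wc = 2 * np.pi * 440.0         # resonance frequency in rad/sec (arbitrary choice)
R = 0.99                       # pole radius; R -> 1 sharpens the resonance
p = R * np.exp(1j * wc / fs)   # complex pole p = R e^{j wc T}, with T = 1/fs
g = 1.0                        # real scale factor

# Impulse response of the complex one-pole resonator is h(n) = g * p**n.
impulse = np.zeros(64)
impulse[0] = 1.0
h1 = complex_one_pole(impulse, p, g)
assert np.allclose(h1, g * p ** np.arange(64))

# With |p| = 1 the resonator becomes a phase-quadrature oscillator.
osc = complex_one_pole(impulse, np.exp(1j * wc / fs), g)
cosine, sine = osc.real, osc.imag   # two outputs 90 degrees apart

# Partial-fraction residues of the real two-pole resonator (Eq. B.7):
r1 = g * p / (p - np.conj(p))
r2 = -g * np.conj(p) / (p - np.conj(p))   # equals conj(r1) for real g
h2 = complex_one_pole(impulse, p, r1) + complex_one_pole(impulse, np.conj(p), r2)
assert np.allclose(h2.imag, 0.0, atol=1e-12)   # the summed impulse response is real
```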
2022-12-03 20:20:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9149311184883118, "perplexity": 944.1118326941622}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710936.10/warc/CC-MAIN-20221203175958-20221203205958-00755.warc.gz"}
http://piratesandrevolutionaries.blogspot.com/2016/03/the-axioms-of-axiomatic-set-theory-from.html
## 27 Mar 2016

### The Axioms of Axiomatic Set Theory, from Schuller’s Course, “Geometric Anatomy of Theoretical Physics”

[Unless otherwise noted, the following is paraphrase and quotation from Schuller, with occasional time codes in brackets referencing the relevant places in the video. Please consult the lectures themselves, since my own presentation is not to be trusted. (Proofreading is incomplete, and I am not trained in this field. So there are many mistakes.) I present it here merely because I will reference parts in future posts. My commentary is in brackets.]

Notes from Frederic P. Schuller
The Axioms of Axiomatic Set Theory
from Classes 1 and 2 of
“Geometric Anatomy of Theoretical Physics / Geometrische Anatomie der Theoretischen Physik”
Institute for Quantum Gravity
University of Erlangen-Nürnberg

Brief summary: We can ground all modern mathematics (and thus all advanced theoretical physics) without making any conceptual assumptions except for the inclusion relation (the epsilon relation or ∈–Relation) in axiomatic set theory. To this concept we add nine axioms which allow us to build up the machinery for the advanced mathematics that theoretical physics uses. To formulate these axioms we need propositional and predicate logic to provide the symbolic language we use. The nine axioms can be abbreviated EE PURP IC F [This overview tries to follow Schuller as closely as possible]:

Basic Existence Axioms

E1: The Axiom on ∈–Relation
∀x : ∀y : (x∈y) ⊻ ¬(x∈y)
[x∈y is a proposition if and only if x and y are both sets.]

E2: Axiom of Existence of an Empty Set
∃x : ∀y : y∉x
There exists a set x such that for all y it is true that y is not an element of x. That is, there exists an empty set, a set that contains no elements.

Construction Axioms

P1: Axiom on Pair Sets
∀x : ∀y : ∃m : ∀u : (u∈m ↔ u=x ∨ u=y)
[Let x and y be sets. Then there exists a set that contains as its elements precisely the sets x and y.]
[Or: For all x and for all y, there exists a set m, such that for all u, u is an element of m if and only if u is x or u is y.]

U: Axiom on Union Sets
Let x be a set. Then there exists a set u whose elements are precisely the elements of the elements of x.
∀x : ∃y : ∀c : (c∈y ↔ ∃z (c∈z ∧ z∈x))

R: Axiom of Replacement
∀a : ∃x : ∀y∈a : (∃z A(y, z) → ∃z∈x A(y, z))
Put another way. Let R be a functional relation. Let m be a set. Then the image of m under the functional relation R is again a set.

P2: Axiom on Existence of Power Sets
∀x : ∃p : ∀y : (y∈p ↔ ∀z (z∈y → z∈x))
[Put another way. Let m be a set. There exists a set denoted P(m) whose elements are precisely the subsets of m.]

Further Existence/Construction Axioms

I: Axiom of Infinity
∃m : (∅∈m ∧ ∀x∈m ((x∪{x}) ∈ m))
There exists a set that contains the empty set and, with every one of its elements y, it also contains the set with the element y (that is, {y}) as an element.

C: Axiom of Choice
∀X : ((∀x∈X : ∀y∈X : (x=y ↔ x∩y ≠ ∅)) → ∃z : ∀x∈X : ∃!y : y ∈ x∩z)
If you have a collection X of pairwise disjoint non-empty sets, then you get a set z which contains one element from each set in the collection.
[Schuller, using different formulae: Let x be a set whose elements are non-empty and mutually disjoint; then there exists a set y which contains exactly one element of each element of x.]

Non-Existence Axiom

F: Axiom of Foundation
∀x : (x≠∅ → ∃y (y∈x ∧ y∩x = ∅))
Every nonempty set is disjoint from one of its elements.
[Schuller: Every non-empty set x contains an element y that has none of its elements in common with x.] Summary Class 1: “Logic of Propositions and Predicates” For the sake of providing the mathematical foundation for various fields in theoretical physics, Schuller in this class will work toward an account of differential geometry from the most basic concepts possible. This field of mathematics works with space, and space is made of sets of points. So to study these things, we first will need to look at what sets are. But set theory itself cannot be the absolute foundation of this project. For, were we to try to define a set, for example as a collection of elements, we would need already some other concepts (which presumably we cannot assume at this foundational level); for example, how do we define a “collection” and an “element”? The way we will get around this is by constructing set theory on the basis of axioms rather than on such definitions. But in order to formulate these axioms, we first need a language, in this case, of propositional and predicate logic.  [Note, while it may seem that Schuller will define the terms or concepts in propositional and predicate logic, he in fact uses a logical operator, the biconditional, which is understood merely in terms of its truth functionality, and what is being biconditionally combined are most basically just mechanically operable symbols.] What could the first definition be? For a definition you need notions that you already have in order to define a new notion. But if you do not have any notion yet, how would you start? The trick is to start axiomatically, and so we will have to write out axiomatic set theory. But that then raises the question, in what language would you possibly do that? So, actually before we come to set theory, we need another building block down here, and that would be logic. And we will deal with propositional and predicate logic first. That will define our language. Then we will be able to write down the axioms of set theory. (quoting Schuller, 0.08.30 - 0.09.15) Chapter 1: Axiomatic Set Theory 1.1 Propositional Logic Def. A proposition p is a variable that can take the values “true” or “false”. No others. [19.30] We can build new propositions from given ones. We do so by means of logical operators. They come in different forms. The simplest are: a) unary operators, of which there are four.  It takes one proposition p as given, and makes from it a new proposition. This one proposition can have values true or false. a1) The first is not, ¬. It makes a true value false and a false value true. b) The second is identity, which we will call ID p. This keeps the same values as p. 3) The third is the tautology operator, ⊤p. It is always true independent of p’s original values. 4) The fourth is the contradiction, ⊥p. Regardless of p’s original value, ⊥p is always false. [23.00] b) The binary operators. We start with two propositions that we assume as given. b1) “And,” pq. It is a new proposition. In total it is one proposition. It is true only when p and q are true. There are in fact 16 total such possibilities for binary operators. b2) “Or,” ∨. [True if at least one disjunct is true] b3) Exclusive or. [True if either disjunct is true, but false if both or neither.] b4) Implication arrow →. It takes a proposition and another, and makes a new one out of them. It is false only when the consequent is false and the antecedent true. This means that if the antecedent is false, then the whole implication is true. 
b5) Equivalence ↔ [True when both terms are either both true or both false]. [Note, more on the binary operators can be found in many sources.] Schuller then gives some remarks: 1) We agree on decreasing binding strength in the sequence in this order:  ¬, ∧, ∨, →, ↔ . 2) All higher order operators can be built from one single binary operator, namely the NAND operator (negated ‘and’). It is true only when both conjuncts are false. [0.35.20] 1.2 Predicate Logic [0.37.00] Def.: A predicate is a proposition-valued function of some variable(s). Take one example: P(x) Its truth value depends on what x is. Take another example: P(x, y) Its truth value depends on what the combination of x and y is. We can construct new predicates from given ones. a) We can combine a single predicate with a double to get a predicate for three variables. Q(x,y,z)↔P(x)∧R(y,z) b) We can convert the predicate P of one variable into a proposition, namely: x: P(x) The proposition is that formula above. It was constructed form a predicate of one variable: P(x). And we may read it, “For all x, p of x is true”. It is defined to be true if P(x) is true independently of x. [Written on the board: defined to be true, if P(x) ↔ true, independently of x.] Feel good example: P(x) is defined as “x is a human being” implies “x has been created.” [Written on the board as: P(x) ↔ (“x is a human being” → “x has been created.”)] Then ∀x: P(x) is true. b2) Existence quantification. It takes a proposition of one variable, and it reads, “there exists an x such that P(x)”. x: P(x) It is defined in the following way: x: P(x) ↔ ¬(∀x: ¬P(x)) Corollary: For all x, something is not true is equivalent to it is not true that there exists an x for which it is true. x: ¬P(x) ↔ ∃x: P(x) If something is not true for any x, then certainly it is not true that there exists one where it is true. [49.30] Remark: Quantification for more predicates of more than one variable: Q(y) ↔ ∀x: P(x,y) In the above, the x is the “bound variable”. The y, which survived the quantification and which appears again, we call the “free variable”. We can then quantify again for y. Remark 2: The order of quantification matters. So for all x, there exists a y such that P(x,y) [or ∀x: ∃x: P(x,y)] is generically different proposition than there exists a y such that for all x, P (x,y) [or ∃x: ∀x: P(x,y).] 1.3 Axiomatic Systems & Theory of Proofs We need the valid rules for writing a proof. Def. An axiomatic system is a finite sequence of propositions a1, a2, ..., aN, which are called axioms. (Note that at this point, we have not defined sets, and therefore we do not really have the numbers that were used as subscripts above. However, instead of these numbers we can use “pre-mathematical scripts”, like hash marks.) Def. A proof of a proposition p within an axiomatic system a1, a2, ..., aN, is a finite sequence of propositions q1, q2, ..., qM, such that the final qM proposition is the one you want to prove [Note, there is a cut in the recording here. Perhaps there is also some small measure of conceptual continuity that is lost too, but I am not sure. (1.02.30)] Remarks: A proof is a finite sequence of propositions q1 to qM, such that for any j that lies between 1 and the final step, 1 ≤ j M, that is, for any step of the proof, either of the following condition is true: (A) qj is a proposition from the list of axioms. This means that an arbitrary step in the proof, we may pick one of the axioms and put it there. 
[(A) stands for ‘axiomatic’] or (T)  The jth step of the proof, qj, is a tautology, which is a proposition/statement that is always true independent of the elementary propositions making it up, e.g., p∨¬p. This is always true (you can check a truth table to see this): (p∨¬p)↔⊤ . [(T) stands for ‘tautology’] or (M) for the jth step of the proof, qj , there exists for the proof m or n coming before qj, such that the proposition qm and qn implies the jth step, such that this (the prior formulation) is true. ∃ 1 ≤ m,n j: (qm qn qj) is true. [(M) stands for ‘modus ponens’] So we could have steps coming before in the proof, and their conjunction implies the jth step is true. Later Schuller will give one example of this sort of proof scheme, namely, the proof, the uniqueness of the empty set. The other proofs will be abbreviated. Remark: If p can be proven from an axiomatic system a1, ..., aN, we often write a1, ..., aN p, and this symbol means “proves”. Remark: This definition of proof allows to easily recognize a proof. An altogether different matter is to find a proof for a certain proposition. Remark: Obviously, any tautology, should it occur in the axioms, can be removed from the list of axioms without impairing the power of the axiomatic system. An extreme case of this is: the axiomatic system for propositional logic, which is the empty sequence. For, in propositional logic, all we can prove are tautologies. Thus we have no axioms. Def. An axiomatic system is consistent if there exists a proposition q which cannot be proven from the axiomatic system: ¬(a1, ..., aN q) . The idea behind this definition: Consider an axiomatic system containing contradictory propositions. For instance we have a series of propositions, then we have s, and later on we have not s. ..., s, ...., ¬s, .... We can conjoin them and imply q. s∧¬sq. Then by the deduction rule (M) clearly: This is a tautology, because s and not s is always false, and if the assumption (antecedent) is always false, then the whole implication is always true. This means that from any contradiction we can prove any arbitrary proposition. [So this is a bad situation that would indicate that the proof is inconsistent. Now, recall the definition from before: “An axiomatic system is consistent if there exists a proposition q which cannot be proven from the axiomatic system: ¬ (a1, ..., aN q)” The idea seems to be the following. If it is inconsistent, you can prove all propositions. But if there is at least one that cannot be proven, then it is not inconsistent, and is therefore consistent.] “So the problem is that any statement can be proven if you have contradictory assumptions, contradictory axioms. Now, it is a sign of not having inconsistency, of not having contradictory axioms, if it is simply not true that every statement can be proven.” [1.22.10] Theorem: Propositional logic is consistent. Proof: It suffices to show that there exists a proposition that cannot be proven propositional logic. Propositional logic has an empty sequence of axioms. So only (T) and (M) must carry any proof. So we can only add tautologies, and modus ponens only can operate on tautologies. So the only thing we can prove are tautologies. But that means q and ¬q cannot be proven, because it is not a tautology. Thus propositional logic is consistent. But with other axiomatic systems, proving consistency can be very difficult. Gödel even shows that under certain circumstances, this is impossible to prove. Schuller will give a rough outline of one of Gödel’s theorems. 
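Since the (T) rule only ever requires recognizing propositional tautologies, such checks can be carried out mechanically by enumerating the truth table. A small illustrative Python sketch (not from the lecture; the function names are made up) confirming the two tautologies used above, p∨¬p and (s∧¬s)→q:

```python
from itertools import product

def is_tautology(formula, num_vars):
    """Brute-force truth table: True iff the formula holds under every assignment."""
    return all(formula(*values) for values in product([True, False], repeat=num_vars))

def implies(a, b):
    """Material implication: false only when a is true and b is false."""
    return (not a) or b

print(is_tautology(lambda p: p or not p, 1))                  # True:  p or not-p
print(is_tautology(lambda s, q: implies(s and not s, q), 2))  # True:  (s and not-s) -> q
print(is_tautology(lambda p, q: implies(p, q), 2))            # False: p -> q on its own
```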
Theorem: (Gödel) Any axiomatic system that is powerful enough to encode the elementary arithmetic of natural numbers is either inconsistent or contains a proposition that can neither be proven nor disproven. So it in that case would contain true statements that cannot be proven. This sent shockwaves through mathematics, because the notion of truth seemed clear. Something is true if it can be proven, based on a system of assumptions. This is the pure truth of mathematics. Proof: It is complicated. Basic idea of the proof: (1) Assign to each (meta-) mathematical statement a number, now called Gödel-number. For example, consider a mathematical statement. Every symbol in the equation gets a number. [Then metamathematically you somehow combine the coding numbers to get one large number for each mathematical statement, I think. See (1.32.00)] Then you have a number for everything, which requires elementary arithmetic computation. (2) Use a “The barber shaves all people in his village who do not shave themselves”-type argument to identify a proposition that is neither provable nor disprovable. You use some reflexive construction. [Schuller then seems to mention diagonalization (1.34.30). See this entry. Schuller seems to be saying that we use a diagonalizing method to produce a number not in a previous set, then we give that number a Gödel code. “Then you produce a statement that uses the Gödel number of the very same statement. And then you arise at such a contradiction.” I do not follow so well, but I guess the idea is that you begin with a liar-like paradox of self-reference, then you encode it so that it can mathematically refer to its own self in a self-contradictory way. But since this self contradiction is happening all on the level of mathematics, that is problematic, because mathematics is not supposed to do that.] As we will see next class, everything in set theory is based on the element symbol, which is a predicate of two variables. There is no other predicate of two variables in set theory. Even equality is based on the element symbol. But we never get an explanation of what it means for x to be an element of y. And in fact, given this basic foundation, there is no a priori reason you cannot ask if a set is included in itself. We also learn that the only set you need to construct is the empty set, and all others will be based on it. We will learn the 9 or so axioms that reproduce the set theory upon which all modern mathematics is built. Class 2: “Axioms of Set Theory” 1.4 The ∈-Relation Set theory is built on the postulate that there is a fundamental relation (i.e., a predicate of two variables) called ∈ (epsilon). But we cannot have any preconceived notions about ∈. There will be no definitions of what ∈ is (apart from it being some predicate of two variables) or of what a set is. We will not make definitions saying something like “a set is ...”. Rather, we will use 9 axioms that speak of ∈ and sets. These axioms will teach us how to use ∈ and what constitutes a set. So it will only be the interplay between the ∈ and what we call ‘sets’ which defines what a set is and which defines what the ∈ means. Such an approach is necessary if we want to write down mathematics from scratch, without any prior notions or terms, on the basis of which we could define the element relation or set. Overview of the axioms: Schuller memorizes them as EE PURP IC F E, E: Basic existence axioms. PURP: Construction axioms, which instruct us how to build new sets, once we have given sets. 
IC: Further existence axioms, but they can also be classified as construction axioms. F: (Axiom of foundation) A non-existence axiom that says, “don’t use sets of the following kind.” [0.06.30] We will introduce some new relations. Using the element relation, ∈-relation, we can immediately define further relations. One is x is not an element of y, xy We see that a relation is a symbol that eats two variables. ∉(x,y) =: xy [Note, I do not know yet what the symbol =: means. Wolfram says this about a similar looking symbol, in their entry for the colon: As a part of the symbol A : = sometimes used to mean " A is defined as B." ] But it is tradition to put the symbol between the two variables, because it saves the brackets and comma. We define this to be equivalent to be not x in y [8.35]: ∉(x,y) =: xy : ↔ ¬(xy) And we define a subset relation, xy which is defined to be the case if for all a, it is true that a in x implies a in y. xy :↔ ∀a: (axay) And we define x equals y (and so we see the equality sign) is actually defined in terms of the epsilon relation. x equals  y means that x is a subset of y and y is a subset of x. x=y :↔ xy yx 1.5 Zermelo-Fraenkel Axioms of Set Theory Axiom E1: Axiom on the ∈-relation The first axiom describes the relation between ∈ and sets. x element y (xy)  is a proposition (that means it is either true or false) if and only if x and y are both sets. [11.30] [Later he gives a formal rendition of this axiom: For all x and for all y, x element y is either/or (exclusive or) true or it is false. ∀x : ∀y : (x∈y) ⊻ ¬(x∈y) [25.45] (Note, I am not familiar with his sign for exclusive disjunction, so I typed it with ⊻. It would mean in this case that either x is a member of y or it is not, and it cannot be neither or both. I am not sure how this formulation captures what he says about xy being a proposition if and only if x and y are both sets. This new formulation seems to just establish that y is a set.] We recall from first order predicate logic that we leave it entirely open the nature of the variables that enter such relations as xy, which is a predicate of two variables. This first axiom clarifies that xy is a proposition if it acts on what we are going to call ‘sets’. This is the first thing we know about sets. We now give a counter-example showing what is not a set. Counter-example: Assume there is an object u, some u, that contains all sets that do not contain themselves as an element . To be precise: we assume there exists a unique u such that for all z, we have z in u if and only if z is not in z; that is, all those sets in u that do not contain themselves. u : ∀z : (zuzz) [0.14.00] We assume z is a set, and we assume there is such a u. Question: is u a set? Now, consider this part of the formulation: zz. We might be bothered, because we normally distinguish a set from the elements of a set. And it may bother us that here we have a set either being an element of itself or not being so. Later we encounter another axiom that will say to ignore such cases. But for right now, in accordance with an above axiom that if we have the ∈-relation xy, if x and y are sets, then xy is a proposition. Therefore, zz is a proposition, because z is a set. Returning to the question, (is u a set?), we see that it is not. Consider if u were a set. Then one must be able to decide whether u in u (uu) is true or false. Why? If uu is a proposition, then it must be either true or false. 
But 1) assume that uu is true 2) assume that uu is false It must be that one of the two options is the case, because u is defined. [Before we continue with the reasoning, let us look again at the formulation at hand: u : ∀z : (zuzz) The part that reads zu would mean that u is a set, because something is included in it. So u would be a set with members z. But the next part of the formulation zz says that these members of u do not include themselves. So u is a set of other sets that do not include themselves. We are then wondering, is u itself one of the z that is included in itself.] Let us evaluate the two assumptions. 1) So assume uu is true. This means that it is true that u is an element of itself [that is, it fulfills the first part of the formulation zu, because u would be a z included in u. So uu makes the first side of the biconditional true. But, by making the first part of the biconditional true, that implies the second part is true, which reads zz. We are saying that u is such a z, and here that this z (or u) is not included in itself.] So, it immediately follows from the definition of u that u is not in u. This is a contradiction. If we assume u is in u, it implies u is not in u. So our assumption is false. 2) assume uu is false. This is equivalent to saying that u is not in u (or uu). [So recall that we are asking if u is an x in the formula u : ∀z : (zuzz) Since u is not included in u, then it fulfills the second side of the biconditional zz. But by affirming the truth of one side of the biconditional we have done the same to the other side, which says that z (which is now taken to be u) is in fact a member of u (or zu and thus uu). So it follows from the definition of u that u is an element of u, which contradicts our original assumption that it is not. Thus this assumption is also false. Conclusion: u is not a set. [That is, the set of sets which do not include themselves is not itself a set, because defining it as such leads to a contradiction.] [0.18.30] This of course is Russell’s paradox. The ZF axioms help avoid these problems. Axiom E2: Axiom of Existence of an Empty Set. There exists a set that contains no elements. There exists a set x such that for all y it is true that y is not an element of x. x : ∀y : yx Theorem: There is only one empty set. And because there is only 1, we can give it a name. Call it, ∅. [00.24.40] There are two proofs. Proof: (standard textbook style) Assume x and x′ are both empty sets. (We will show that any two empty sets are equal and therefore there is only one.) If that is true, then we can write something like: yx. But this is false, because x has no members. But since it is false, that means we can make an implication where any consequent is false. So we can write: (yx) → (yx′) But if this is true, that means it is true independently of what y is. But that means we can use the all quantor [I am not sure I get that step. I guess the idea is that it is true regardless of what y is, and therefore it is true for all y.] y : (yx) → (yx′) But that just means by definition that x is a subset of x′. [Recall our definition of subset: xy :↔ ∀a: (axay). This means that if we go with our formulation above, x is a subset of x′, because members of x are members of x′.] Conversely: we can write, (yx′)  → (yx) [yx′ is false, so we can make it the antecedent to a conditional with an arbitrary consequent. The whole implication will still be true.] And since it is true independently of y, we can write: y : (yx′)  → (yx) But this means, x′ is a subset of x. 
[See the above reasoning.] x′⊆x [Now, recall our definition for x=y: x=y :↔ xy yx ] In summary: x=x So if you take any two empty sets, they must be the same. So there is only one empty set. [0.28.30] Proof: (formal version) For the formal version, we need to encode our assumptions as axioms (notated a1, a2, etc.). The assumptions are that x and x′ are both empty sets. a1 ↔ ∀y : y a2 ↔ ∀y : yx′ Now we need a string of further propositions (notated q1, q2, etc.). We eventually want to arrive at the statement we want to prove, which is the statement that x=x′. a1 ↔ ∀y : y a2 ↔ ∀y : yx′ q1 q2 ... ... qM x=x We will use the three deduction rules (A), (T), and (M) to justify each step. We first write a tautology: if for all y, y is not an element of x, that implies that for all y, y is an element of x implies y is not an element of x′. a1 ↔ ∀y : y a2 ↔ ∀y : yx′ q1 ↔ ∀y : yx → ∀y : (yx yx′) ... [T] q2 ... ... qM x=x Why is this true? We know from the assumptions that this is true: (yx yx′). [For, both sides of the implication are false.] Now, since it is true (and since it is the consequent of the larger implication), we know that ∀y : yx → ∀y : (yx yx′) must be true. [It is a tautology, because no truth assignment can make it false. Let as make these substitutions: p=yx, ~p=yx, q=yx′, ~p=yx′. Here is a truth table for what might be the structure of the formulation: (made with Michael Rieppel’s online truth table generator) As we can see, there is no truth assignment that makes this proposition false.] Now we pull the first axiom and we write, q2 is the schema/proposition that for all y, y is not in x. a1 ↔ ∀y : y a2 ↔ ∀y : yx′ q1 ↔ ∀y : yx → ∀y : (yx yx′) ... [T] q2 ↔ ∀y : y... [A] ... ... qM x=x Then for q3, we use modus ponens. We take two prior steps of the proof q1 and q2, and we put an ‘and’ between them. We can then say that this conjunction implies q3 if it is a tautology. [0.34.20] [So we take q1, and we use it to affirm the antecedent in q2, which allows us to affirm the consequent, namely, ∀y : (yx yx′), which we then place as q3.] a1 ↔ ∀y : y a2 ↔ ∀y : yx′ q1 ↔ ∀y : yx → ∀y : (yx yx′) ... [T] q2 ↔ ∀y : y... [A] q3 ↔ ∀y : (yx yx′)   ... [M] ... qM x=x But we know from our definitions before that ∀y : (yx yx′) is another way to say that x is a subset of x′. [0.36.50] For q4, we write a tautology. For all y, y not an element of x′ implies for all y, y in x′ implies y in x. a1 ↔ ∀y : y a2 ↔ ∀y : yx′ q1 ↔ ∀y : yx → ∀y : (yx yx′) ... [T] q2 ↔ ∀y : y... [A] q3 ↔ ∀y : (yx yx′)   ... [M] q4 ↔ ∀y : yx′ → ∀y : (yx′→ yx)  ... [T] ... qM x=x [Using the same substitutions as before, we would get the following truth table, which shows it is a tautology. (made with Michael Rieppel’s online truth table generator) ] For q5, we pull axiom 2, which was that for all y, y is not in x′. a1 ↔ ∀y : y a2 ↔ ∀y : yx′ q1 ↔ ∀y : yx → ∀y : (yx yx′) ... [T] q2 ↔ ∀y : y... [A] q3 ↔ ∀y : (yx yx′)   ... [M] q4 ↔ ∀y : yx′ → ∀y : (yx′→ yx)  ... [T] q5 ↔ ∀y : yx′  ... [A] ... qM x=x Then for q6 we can now conclude q6 is a valid step of the proof, if it has the form for all y, y in x′ implies y in x. a1 ↔ ∀y : y a2 ↔ ∀y : yx′ q1 ↔ ∀y : yx → ∀y : (yx yx′) ... [T] q2 ↔ ∀y : y... [A] q3 ↔ ∀y : (yx yx′)   ... [M] q4 ↔ ∀y : yx′ → ∀y : (yx′→ yx)  ... [T] q5 ↔ ∀y : yx′  ... [A] q6 ↔ ∀y : (yx′ → yx) ... [M] qM x=x [For this step, q6, we take q4, or ∀y : yx′ → ∀y : (yx′→ yx), and we affirm its antecedent with q5, and we thus conclude by modus ponens that ∀y : (yx′ → yx).] Now, we recall that this is another way to write: x ′⊆x. 
[He then says we use modus ponens on q3 and q6 to get the conclusion. But I do not see how either can affirm an antecedent in the other. I also do not see a way to just affirm the antecedent in either one using other steps. The only reasoning I can find for how we get to the conclusion is the definition for the equality of sets. x=y :↔ xy yx. But for the proof to work, we need modus ponens. I failed to understand this proof, and I invite corrections. At around 0.38.00 he seems to be saying that q6 implying q3 is a tautology, and thus we can write the conclusion (he points to x′⊆x then to xx′ then to the x′ of x=x′. But I do not know what he means there, because I would not know how to make a truth table for all those parts.] [0.38.45] Axiom P1: Axiom on Pair Sets Let x and y be sets. Then there exists a set that contains as its elements precisely the sets x and y. What is important is that their combination of sets makes another set. For all x and for all y, there exists a set m, such that for all u, u is an element of m if and only if u is x or u is y. x : ∀y : ∃m : ∀u : (um u=x u=y) Restated: For any sets, there exists a set such that its elements are precisely those that are either x or y. [This seems to be saying that if you have sets x and y, then you have the set m which is their combination.] Notation: denote this set m by {x, y}. [0.43.25] What the axiom assures us of, is that if x is a set and y is a set, then this whole thing counts as a set. Worry: Is {x, y} the same as {y, x}? Yes it is. No worries. This is because, if a comes from {x, y}, then a comes from {y, x}. And if a comes from {y, x}, that means a comes from {x, y}. This means the set {x, y} is a subset of {y, x} and vice versa. So in total they are the same. [0.45.00] Remark/Def.: The pair set axiom also guarantees the existence of sets of one element. If we write set x: {x} Then we define the set x as the pair set of x with x. {x} := {x, x} [0.46.15] Axiom U: Axiom on Union Sets Let x be a set. Then there exists a set u whose elements are precisely the elements of the elements of x. [Schuller says that he will no longer render the axioms into first order predicate logic statements of the sort we have been making. For that reason, I cannot trust the renditions I try to make. But we should attempt something, and I will base them on other sources. The wiki page for this axiom shows it as. They translate this as: “Given any set A, there is a set B such that, for any element c, c is a member of B if and only if there is a set D such that c is a member of D and D is a member of A.” Or more simply, “For any set A, there is a set UA which consists of just the elements of that set.” (Wiki) The formula is very complicated. We apparently want to speak of two sets, A and another set with exactly the same members as A. I do not know why we have two other sets mentioned, B and D, rather than just one. At any rate, we learn that members c are in all three sets. The members are in set B if and only if they are in D, which is a set that is in A. To make this formulation look more like the other Schuller has given, perhaps we could write it as: x : ∃y : ∀c : (c y ↔ ∃z (c z z x)) ] We might be concerned with the definition as Schuller gave it, because we are speaking of union, but we mentioned only one set, x. [Recall he said: “Let x be a set. Then there exists a set u whose elements are precisely the elements of the elements of x.”] He explains that the set x is already the collection of all the sets that will be put into the union. 
Notation: The set u is written with a big cup u. u = ∪x Ex. Let a, b be sets. [First recall the pair set axiom: “Let x and y be sets. Then there exists a set that contains as its elements precisely the sets x and y.” In the following, he seems to be saying that from set a we can derive {a}, which I guess is the set containing set a, and we do this on the basis of the pair set. Perhaps this is because we consider a to be both members of the pair, but I am not sure.] Take the set with the element {a}, and take the set {b}, and we can put them together into a set: {{a}, {b}}, and we call it x. This is guaranteed by the pair set axiom. We then ask, what is the union of x? ∪x = ? As we said, the union of x is the set that contains precisely the elements of the elements of x. The elements of x are the set with the element a and the set with the element b [that is, the two internal bracketed sets of a and b], so the union of x is the set {a, b}. x : ∃y : ∀c : (c y ↔ ∃z (c z z x)) And let us stick with our above a, b example. So the c would be in the example the a and the b, the elements of the sets (these elements are of course themselves sets). The y set I think could be the final union, which was {a, b} in the example. The set z I think would be the internal set containing the set of either a or b. So in {{a}, {b}}, which was made with the axiom of pairing, the {a} and the {b} would be sets z in the formulation. That would make{{a}, {b}} then be set x in the formulation, since a (or c) is contained in set {a} (or set z), and {a} (or set z) is contained in x {or set {{a},{b}}. With that inclusion being true, {a} (or set c) is included in {a, b} (or set y).] x = {{a}, {b, c}} Then we know by the union set axiom that ∪x is a set. And it deserves a name. So we define ∪x as {a, b, c}, and we define it from two element sets. This second example immediately generalizes to a definition. Def: Let a1, a2, ..., aN be sets. Then define recursively for all N≥3, {a1, a2, ..., aN } := ∪ {{a1, ..., aN-1 }, {aN }} Note, Russell’s paradoxical set cannot be such a union. So we can only build a union of as many sets as can fit into a set. If we take all the sets that cannot contain themselves, we cannot collect them in a union. Axiom R: Axiom of Replacement Let R be a functional relation. Let m be a set. Then the image of m under the functional relation R is again a set. [Note, Schuller will speak of there existing precisely an x. And he uses this symbol: ∃! . But he assigns it for homework to figure out how to understand it.  The following I copy from James Aspnes webpage. ##### 3.4.1. Uniqueness An occasionally useful abbreviation is ∃!x P(x), which stands for “there exists a unique x such that P(x).” This is short for (∃x P(x)) ∧ (∀xy P(x) ∧ P(y) ⇒ x = y). An example is ∃!x x+1 = 12. To prove this we’d have to show not only that there is some x for which x+1 = 12 (11 comes to mind), but that if we have any two values x and y such that x+1 = 12 and y+1 = 12, then x=y (this is not hard to do). So the exclamation point encodes quite a bit of extra work, which is why we usually hope that ∃x x+1 = 12 is good enough. (James Aspnes) ] Def.1 (for functional relation): Relation R is called functional if for every x there exists precisely one y such that R of x and y. Or written more symbolically: Relation R is called functional if ∀x ∃!y : R (x, y) This is a function. To every x there is precisely one y. For another x, there is another y. Note that at this state there is no talk about x or y having to be a set. 
Def.2 (for image): The image of a set m under a functional relation R consists of all those y for which there is an x ∈ m such that R(x, y). [I am not sure from this definition, but the image of set m seems to be all those terms that result when we apply a function to some set of terms.]
We will normally need to invoke a principle based on this axiom called the principle of restricted comprehension. [1.06.30] The axiom of replacement implies, but is not implied by, the principle of restricted comprehension.
Principle of restricted comprehension: Let P be a predicate of one variable. And let m be a set. Then, those elements y ∈ m for which P(y) holds (is true) constitute a set.
Notation: This set is usually denoted as the set that contains those y in m for which P of y holds:
{y ∈ m | P(y)}
We do this all the time. We say, select those elements for which a condition applies. But be careful: the Principle of Restricted Comprehension (PRC) is not to be confused with the INCONSISTENT Principle of Universal Comprehension ("P"UC) that Russell had problems with, which is stated: the collection of all y for which P of y applies is a set. But this is not true [Schuller writes (sic)].
{y | P(y)} is a set (sic)
But this is too much, because P of y could be "y is not an element of y". The difference is that we need to say from the start that we name a set m from which they are selected. We already assumed the m, and then the P(y) cannot make it bigger. The m is the restricting part. Schuller then proves it from the axiom of replacement. [I omit this part, but it can be found at 1.11.45 – 1.17.00.]
We will use restricted comprehension to define the complement.
Def: Let u be a subset of m; then m without u is the collection of all those elements x of m for which x does not lie in u. More symbolically:
Let u ⊆ m. Then m\u := {x ∈ m | x ∉ u}
We know that {x ∈ m | x ∉ u} is a set, due to the PRC, which is ultimately due to the axiom of replacement. [Schuller leaves establishing intersection for homework.] [1.19.15]
[At this point we might want a formula for the axiom of replacement. Wolfram says the following: Axiom of Replacement. One of the Zermelo-Fraenkel axioms which asserts the existence for any set a of a set x such that, for any y of a, if there exists a z satisfying A(y, z), then such z exists in x:
∀a ∃x ∀y ∈ a (∃z A(y, z) → ∃z ∈ x A(y, z)).
It seems like this can be understood as saying that we have a set a which includes all the terms we will be dealing with. Within this set we have a set x, which contains those terms z that result from the functional relation A(y, z). But probably I misread it.]
Axiom P2: Axiom of Existence of Power Sets
The power set is the collection of all subsets of a set. Historically, in naïve set theory, the PUC was thought to be needed in order to define for any set m, this object, the power set, marked with the curly P [typed here in boldface], P(m), and traditionally and inconsistently, using the PUC to collect all the subsets of m.
P(m) := {y | y ⊆ m}
But we lack the restriction, so we are not going to do it this way. There is no other way but to postulate that the power set exists. [1.26.10]
Axiom on Existence of Power Sets: Let m be a set. There exists a set denoted P(m) whose elements are precisely the subsets of m. [1.27.00]
Ex: Let m be the set with elements a, b, which themselves are sets. Then the set, the power set of m, is the collection of all subsets. The empty set is an element. Then there are the one-element subsets. Then there is the entire set. These are all the subsets of m.
And they are all collected in the power set of m. More symbolically:
m = {a, b}
P(m) = {∅, {a}, {b}, {a, b}}
[1.28.00] Is the result a set? To say it is, we need an extra axiom dedicated to the existence of power sets [for various reasons the others do not work. See 1.28.10].
[We will try to make a formula for the axiom on the existence of power sets. Wikipedia writes this: In the formal language of the Zermelo-Fraenkel axioms, the axiom reads:
∀A ∃P ∀B [B ∈ P ↔ ∀C (C ∈ B → C ∈ A)]
where P stands for the power set of A, P(A). In English, this says: Given any set A, there is a set P such that, given any set B, B is a member of P if and only if every element of B is also an element of A. More succinctly: for every set x, there is a set P(x) consisting precisely of the subsets of x. (Wiki)
So using Schuller's sort of notation, this might be something like:
∀x ∃p ∀y : (y ∈ p ↔ ∀z (z ∈ y → z ∈ x))
It seems here that the original set whose power set we are finding is set x. And set x has members z. When we turn members z into the power set members, by making each one a set in itself and by combining these members z into grouped sets, they then make sets y, which are the members of the power set p. Let us use his example again:
m = {a, b}
P(m) = {∅, {a}, {b}, {a, b}}
And let us compare it to the formulation. Perhaps it is like the following. Here, set m is like set x in the formulation. It is the original set whose power set we are finding. Then, its members a, b are like the z items in the formula. They themselves as individuals can be made into sets, and they can be grouped into other sets. All these possible settings are the terms ∅, {a}, {b}, {a, b}, which are the y terms in the formula. Then, the set of all these y terms is the power set p, which in this example is {∅, {a}, {b}, {a, b}}.]
Axiom I: Axiom of Infinity
There exists a set that contains the empty set and with each of its elements y it also contains the set with the element y (that is, {y}) as an element. [1.30.40]
Remark: One such set is, informally speaking, the set with the elements: the empty set, the set with the element empty set, and with that, the set that contains as an element the set with the empty set as an element, and so on.
∅, {∅}, {{∅}}, {{{∅}}}, ...
We then assign the natural numbers to each of these in the series. There are other sets that are guaranteed by the axiom of infinity, but this is one of those sets.
Corollary: The set of non-negative integers, ℕ, is a set, according to axiomatic set theory.
Remark: As a set, the real numbers, ℝ, can be understood as the power set of the natural numbers. As a set, ℝ = P(ℕ). If you take the power set of the reals, you get a bigger set, and so on until getting to the universe of sets. [1.34.10] So the only set we explicitly demanded was the empty set, and on its basis we arrived at the reals.
[Let us try to formulate it. This is from the wiki page: In the formal language of the Zermelo–Fraenkel axioms, the axiom reads:
∃I (∅ ∈ I ∧ ∀x ∈ I ((x ∪ {x}) ∈ I)).
In words, there is a set I (the set which is postulated to be infinite), such that the empty set is in I and such that whenever any x is a member of I, the set formed by taking the union of x with its singleton {x} is also a member of I. Such a set is sometimes called an inductive set. (wiki)
Let us rewrite it more in line with Schuller's notations.
∃m (∅ ∈ m ∧ ∀x ∈ m ((x ∪ {x}) ∈ m)).
So here we have an infinite set m. It contains the empty set. And, for any member of m (perhaps the empty set included), the union of that member with the set containing that member is also in m.
But this then becomes a member x, for which the same procedure applies, and this creates another member, which is a set containing a set containing a set, and so on infinitely.]
In total, it will turn out, once all the axioms are there, that essentially every set that we will ever consider in standard mathematics, at least in mathematics built on these axioms, will be ultimately, basically constructed from only the empty set by these construction axioms we wrote down. So set theory is very simple: it is just the empty set. (Schuller 1.35.30)
[Although it is one video lecture, the class itself ends there and the video cuts to the next class, where the axiom discussion resumes.]
Axiom C: Axiom of Choice
Let x be a set whose elements are non-empty and mutually disjoint (that means if you take the intersection of any two elements of the set x, then this intersection will be the empty set); then there exists a set y which contains exactly one element of each element of x. [1.38.20]
So intuitively speaking, you look at the elements of x, which are all sets, and from each of these sets you pick one element, and you collect them all into the set y. And the axiom of choice guarantees you that there exists such a set that contains exactly one element of each element of x.
Remark: Sometimes people call set y a dark set. Given any set x, there is no clear prescription how you pick an element of each of the elements of x.
Intuition. If you said x consists of pairs of shoes...
x = {{left shoe 1, right shoe 1}, {left shoe 2, right shoe 2}, ... }
You can have an algorithm: always pick the left shoe.
y = {left shoe 1, left shoe 2, ...}
And for this, you do not need the axiom of choice. [1.40.30] But if instead you were to consider a set of socks:
x = {{sock 1 (left), sock 1 (right)}, {sock 2 (left), sock 2 (right)}, ... }
Note that left and right only tell us they are in a pair. They are indistinguishable as to which is which. So we cannot apply an algorithm like before. We need instead to appeal to the axiom of choice to build set y.
y = {sock 1, sock 2, ...}
[1.41.50] Remark: The axiom of choice is independent of the other 8 axioms. That means we could have set theory with or without the axiom of choice. But standard mathematics uses it.
Remark: There are a number of theorems that can only be proven with the axiom of choice; for example, the proof that every vector space has a basis needs the Axiom of Choice, and the proof that there exists a complete system of representatives of an equivalence relation also requires this axiom. [1.44.50]
[At this point we should try to create a formulation for the axiom of choice. This is from William Weiss' An Introduction to Set Theory: The Axiom of Choice
∀X ((∀x ∈ X ∀y ∈ X (x = y ∨ x ∩ y = ∅) ∧ x ≠ ∅) → ∃z (∀x ∈ X ∃!y : y ∈ x ∩ z))
In human language, the Axiom of Choice says that if you have a collection X of pairwise disjoint non-empty sets, then you get a set z which contains one element from each set in the collection. (Weiss 28)
So let us look at this formulation. We primarily have an implication. We begin by thinking of a set X. It has members x, y, and z. Let us skip to the end. Here we are speaking of a set z. In the intersection of these sets x and z is a unique y. So the intersection of sets x and z is a set y with just one member. Or perhaps we should say, their intersections are just sets y. In other words, whenever you intersect a set x with a set z, you get a set y, but there can be many such intersections and many such y's. So let us think of the socks example.
X = {{sock 1 (left), sock 1 (right)}, {sock 2 (left), sock 2 (right)}, ... }
The axiom of choice allowed us to produce from this the following set:
z = {sock 1, sock 2, ...}
So as we can see, the X set is the one that contains a number of other sets, each having sock pairings. I think that the sets x would be the sets {sock 1 (left), sock 1 (right)}, {sock 2 (left), sock 2 (right)}, and so on, and they are contained in the larger set X, which is why there are also the extra braces around all these sets of pairs. The unique y's, then, would be the members of set z, like sock 1, sock 2, and so on. So what is the intersection of sets x and set z? Let us compare them:
x are {sock 1 (left), sock 1 (right)}, {sock 2 (left), sock 2 (right)}, and so on.
z is {sock 1, sock 2, ...}
When we intersect them, we just have the members of z. And we see that each member of this intersection is a unique member y from each x set pairing. The other side of the implication is harder for me to interpret.
∀X ((∀x ∈ X ∀y ∈ X (x = y ∨ x ∩ y = ∅) ∧ x ≠ ∅) → ∃z (∀x ∈ X ∃!y : y ∈ x ∩ z))
Now I am confused, even with Weiss's explanation. I do not understand why we are considering x and y being equal. I can imagine z and y being equal. But x I would think has more contents than y. Weiss says that "if you have a collection X of pairwise disjoint non-empty sets ...". I cannot discern how the antecedent of the formulation can be interpreted as saying this. Probably it is because my interpretation of the consequent is wrong, and probably I am mistaken for other reasons too. But apparently what he says is how we are really supposed to read it. It could also be that the x and y are like the two members of each pairing mentioned above. But then I do not understand this part from the consequent, ∃!y : y ∈ x ∩ z. If x and y were the two members of each pairing, then if you intersect the x parts of the pairings with the final selection set of z, I would think that you would not have any y's in it (assuming the x's and the y's are thought of as different). So I have failed to interpret this formula, and I invite assistance. The intuitive explanation Schuller gives, however, is obvious.]
Axiom F: Axiom of Foundation
Every non-empty set x contains an element y that has none of its elements in common with x. It is a non-existence axiom. It excludes certain situations. Immediate implication: There is no set that contains itself as an element: x ∈ x for no set x. [1.48.45] If you do not have this, then every set needs to be built from the empty set (see axiom of infinity).
[Let us find a formula for the axiom of foundation. Wolfram has it as:
∀x (x ≠ ∅ → ∃y (y ∈ x ∧ y ∩ x = ∅))
More descriptively, "every nonempty set is disjoint from one of its elements." (Wolfram) So in the formula, we begin by considering set x. If set x is a nonempty set, then other things follow. What follows is that it will have a member y whose intersection with x is the empty set. In other words, set x will have at least one element that has no members in common with x. It seems this means that the set x does not include itself as an element. I am not sure, but perhaps this also implies the following. Say we have the set specified as {1, 2}. Perhaps according to this axiom, the set cannot also at the same time be specified both as {1, 2} and {1, 2, {1, 2}}, but I am not sure.]
Images, Ideas, Quotes from:
Frederic P. Schuller. Class Video Lectures 1 and 2 [Taken apparently from actual classes 1, 2, and 3]. "Logic of Propositions and Predicates" and "Axioms of Set Theory." From his course, "Geometric Anatomy of Theoretical Physics / Geometrische Anatomie der Theoretischen Physik" at the Institute for Quantum Gravity of the University of Erlangen-Nürnberg.
Available on YouTube at:
Class 1: "Logic of Propositions and Predicates" https://youtu.be/aEJpQ3d1ltA
Class 2: "Axioms of Set Theory" https://youtu.be/Cw5GkdgLgPo
And Schuller's page: https://www.gravity.physik.fau.de/members/people/schuller.shtml
Or if otherwise indicated:
Weiss, William A. R. An Introduction to Set Theory. [Class notes soon to be published as a book] <http://www.math.toronto.edu/weiss/set_theory.pdf>
Michael Rieppel's online truth table generator: http://mrieppel.net/prog/truthtable.html
James Aspnes's mathematical logic wiki: http://www.cs.yale.edu/homes/aspnes/pinewiki/MathematicalLogic.html
Wiki pages:
https://en.wikipedia.org/wiki/Axiom_of_power_set
https://en.wikipedia.org/wiki/Axiom_of_infinity
https://en.wikipedia.org/wiki/Axiom_of_pairing
Wolfram pages:
http://mathworld.wolfram.com/Zermelo-FraenkelAxioms.html
http://mathworld.wolfram.com/AxiomofChoice.html
2018-07-16 22:19:15
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8311270475387573, "perplexity": 725.7097938771861}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589470.9/warc/CC-MAIN-20180716213101-20180716233101-00067.warc.gz"}
https://scicomp.stackexchange.com/tags/stiffness/hot
# Tag Info 21 So there is a ton to say about this, and we will actually be putting a paper out that tries to summarize it a bit, but let me narrow it down to something that can be put into a quick StackOverflow post. I will make one statement really early and keep repeating it: you cannot untangle the efficiency of a method from the efficiency of a software. The details ... 18 The property follows from the property of the corresponding (weak form of the) partial differential equation; this is one of the advantages of finite element methods compared to, e.g., finite difference methods. To see that, first recall that the finite element method starts from the weak form of the Poisson equation (I'm assuming Dirichlet boundary ... 7 You will want to read up on operator splitting methods. In essence, in every "macro time step" you would treat fast processes by doing many "micro time steps" in one half of the algorithm, and then do a single macro time step for the slow processes in the other half. For higher order, you will want to use what's known as "Strang splitting". 6 There are so many Runge Kutta methods, including Dormand-Prince 45 Cash-Karp 54 Fehlberge (sic) 78 Is there any comparison between them? Well, sure. Here are some traits to compare: Is the method implicit or explicit? (All of your examples are explicit RK methods.) What is the order of convergence? Are there any embedded error ... 6 The term that you want to search for is multiple timestepping (see, for instance, [1-3]). [1] http://www.cs.unc.edu/Research/nbody/pubs/external/Berne/tuckerman-berne-rossi91.pdf [2] http://www3.nd.edu/~izaguirr/papers/newM3paper.pdf [3] http://arxiv.org/abs/1307.1167 6 You are correct: If you satisfy the CFL condition, then all that guarantees is that your scheme is stable, i.e., the numerical solution does not go to infinity. But the CFL condition says nothing about how accurate the numerical solution is. For that, indeed $\Delta z$ and $\Delta t$ must also be small enough compared to the features of the exact solution. ... 6 The only documentation I know about for the implementation of ode23t is in the paper which documents the implementation of ode23tb, the TRBDF2 method in MATLAB. As usually implemented, the trapezoidal rule is not strictly a one-step method because the truncation error estimate makes use of two previously computed solution values. The method is not efficient ... 5 There is no solid definition for stiff equations. I like Shampine's working definition the best: a differential equation is stiff if explicit methods are less computationally efficient than implicit methods. All of the other definitions about eigenvalues are purposefully vague because they cannot capture the fact that the "eigenvalues which matter" is ... 5 I have two extra points I would like to add to Wolfgang's answer. A formulation of the CFL condition that I find more useful than the classic formula is this: A necessary condition for the stability of a numerical scheme is that the numerical domain of dependence bounds the physical domain of dependence. This is exactly what good old $$\dfrac{\Delta ... 5 ode15s is designed to handle stiff systems of ODEs so I doubt if the problem you are encountering is that your "equations are too stiff" It is more likely that your spatial discretization has an error for some reason or your have some other MATLAB programming error. I suggest the following approach to debug this: Set the final integration time for ode15s to ... 4 The zero eigenvalue is not particularly harmful. 
Think of the linear system$$ \dot x = A x, x(0) = x_0 $$then the components of x(t) decay in proportion to the (magnitude of the) eigenvalues of A (assuming the eigenvalues are all non-negative), whereas the components of x(t) in direction of eigenvectors that belong to zero eigenvalues simply stay ... 4 You can see the formulas for explicit RK4 at Mathworld. Given the simplicity of the expressions it is very reasonable to code one for yourself which is why you may not have found a module. However, if as you say you are solving a stiff IVP then an explicit Runge-Kutta method is not appropriate (the regions of absolute stability are bounded). Perhaps you ... 4 There are a couple of questions implicit in your post: How does one deal with non-uniqueness of the algebraic equations generated by any implicit numerical method? Typically you have a very good initial guess -- the previous time step solution. This will usually ensure that e.g. Newton's method converges to the correct root. For extremely large values ... 4 The approach of using two equations for the real and imaginary parts of an equation is often used. It may lead to more cumbersome formulas, but it is definitely possible and common. An example where this is done in deal.II (a library that I maintain) is here: https://www.dealii.org/developer/doxygen/deal.II/step_29.html 4 In BDF schemes for \dot y = f, one uses$$ f(t_n)=\dot y(t_n) and tries to approximate \dot y(t_n)\approx \sum_{j=0}^k\alpha_k y_{n-j} by the current value y_n (that is to be computed) and the k previously computed approximations. In the presented approach, in (5), y is approximated as a polynomial p in t fitted to y_{n-j}, so that the ... 4 This is not an answer to your question, but more of an observation: More often than not, an ODE is "stiff" or a linear system is "nearly singular" because of a mistake either in deriving the equation to be solved, or in implementing it. Trying to find a way to solve what you have is then just a way to paper over the problem. If you were to find a way to ... 3 Can we expect radau5 to cope with discontinuities, or should we integrate the two trajectories separately? Differential equation methods cannot easily handle discontinuities like this. If you step over a discontinuity, you cannot prove that you will not have order loss. In fact, you normally will have loss of accuracy. Because of this, you want to make sure ... 3 First of all, I cannot judge the quality of the DotNumerics implementation of Radau5, I can only assume it is a direct port of the Fortran original from Hairer and Wanner. A full description of their Radau5 implementation is in their book on solving stiff differential equations. The test you refer to is a check to see if the Jacobian should be recomputed. ... 3 The number of rows and columns in the final global sparse stiffness matrix is equal to the number of nodes in your mesh (for linear elements). For example if your mesh looked like: then each local stiffness matrix would be 3-by-3. Once all 4 local stiffness matrices are assembled into the global matrix we would have a 6-by-6 global matrix. For example the ... 2 I assume you're trying to solve an equation that looks like: \begin{align} -\nabla \cdot (a(x)\nabla{u}) = f, \end{align} for x in some domain \Omega, although the same approach would be fine (for a residual evaluation, anyway) if a were also a function of u. The stiffness matrix will take the form \begin{align} A_{ij} = \int_{\Omega}a(x)\nabla\... 
2 If stiffness of element is not positive, then the system is not stable. So the model is most likely not correct. Look at the most basic equation of harmonic oscillatorm x''(t) + k x(t) = f(t)The solution is unstable if k is negative (look at the roots of the characteristic equation). It means the solution will blow up. The stiffness has to be a ... 2 Odespy needs from you a 1st order system of ODEs, namely something of the form \begin{align} \frac{d\mathbf{y}}{dt} = \mathbf{f}(\mathbf{y},t), \tag{1} \end{align} where, for your particular problem, y_i are single-variate complex-valued functions that depend solely on time (that means you need to arrive at the semi-discrete form first by discretizing the ... 2 I don't know of a better way to handle this with SciPy since I don't think it has event handling. But if you're willing to venture beyond SciPy, the following software have the capability, either documented as event handling or rootfinding: Sundials' CVODE MATLAB's ode23 and ode15s Julia's DifferentialEquations.jl Sundials wrapper and Rosenbrock methods ... 2 Rosenbrock methods utilize embedded lower order methods in order to calculate errors for adaptive time stepping. In addition, Rosenbrock methods do not have to solve an implicit system (just a linear system). There is no form of iteration then that takes place in them (unless you're using a Krylov linear solver). Maximum number of steps for stiff solver can ... 2 If you relax the independence between \epsilon and \Delta t to be "approximate", i.e. use a method which has a really large stable region even if it's not A-stable, then there are plenty of methods which can get good performance. A standard set of methods for this are the Backward Differentiation Formulas (BDF) or the Numerical Differentiation Formulas (... 2 Due to your formulation, I call X(z) = \begin{bmatrix} x(z) \\ p_{x}(z) \end{bmatrix} so your ODE is written in matrix form:X^{'}(z) = C(z) X(z)$$Where: C(z) = \begin{bmatrix} 0 & A(z) \\ B(z) & 0 \end{bmatrix}. Your general formula by using backward Euler method is:$$\frac{X(z+\Delta z) - X(z)}{\Delta z} = C(z+\Delta z) X(z+\Delta z)$$... 2 See comments for the original discussion. The lack of L-stability of the trapezoidal rule in the first step is the source of your problem. A simpler and famous toy problem that shows the point is the Curtiss-Hirschfelder equation$$y'=-2000(y-\cos(t)) \\y(0)=0 Integrating this with Backward Euler and the Trapezoidal rule gives the following result (I wrote ... 1 The term "stiff" has many definitions, but a commonly understood one would be "there are two or more timescales associated with the process being modeled, and these timescales are very different". In your case, you have a single ODE whose solution is a single function $y(t)$. As a consequence, it has only a single timescale, and so one would not consider ... 1 If you want to stick to the Python scientific family, you could opt for Assimulo. Assimulo is a wrapper around a lot of ODE integrators, providing a common interface. If you happen to be running on Windows, you can find wheels at Christoph Gohlke's page. Assimulo allows to handle state events to allow for discontinuities in the RHS function of your ODE but ... 1 Mathematically speaking, stiffness is meaningless for a single differential equation, and is rather attributed to a set of differential equations that have different time-scales (e.g. when trying to solve two coupled equations with time-scales of 1 second and 1 day, respectively). 
However, a single equation can also be referred to as stiff if certain ...
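As a concrete illustration of the Curtiss-Hirschfelder discussion a few answers above, here is a minimal backward Euler sketch of my own (not taken from any of the quoted answers); the step size h = 0.01 is an arbitrary choice, far larger than the 1/2000 timescale of the fast transient, yet the implicit step stays stable:

import numpy as np

def backward_euler_curtiss_hirschfelder(h=0.01, t_end=1.5):
    # Backward Euler for y' = -2000*(y - cos(t)), y(0) = 0.
    # The right-hand side is linear in y, so each implicit step has a closed form:
    #   y_{n+1} = (y_n + 2000*h*cos(t_{n+1})) / (1 + 2000*h)
    n = int(round(t_end / h))
    t = np.linspace(0.0, n * h, n + 1)
    y = np.zeros(n + 1)
    for k in range(n):
        y[k + 1] = (y[k] + 2000.0 * h * np.cos(t[k + 1])) / (1.0 + 2000.0 * h)
    return t, y

t, y = backward_euler_curtiss_hirschfelder()
# After the fast initial transient the numerical solution should track cos(t) closely,
# with no oscillation, even though h is much larger than the fast timescale 1/2000.
print(y[-1], np.cos(t[-1]))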
2021-06-22 06:41:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8556665778160095, "perplexity": 538.2596922215841}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488512243.88/warc/CC-MAIN-20210622063335-20210622093335-00462.warc.gz"}
https://www.physicsforums.com/threads/curl-or-maxwell-equations-in-higher-dimensions.211215/
Curl or Maxwell equations in higher dimensions
1. Jan 26, 2008
snowstorm69
Anyone know what topic, branch of math, book, or subject I should look up in order to find a formulation for Maxwell's equations in higher spatial dimensions? I don't mean having time as a 4th dimension. I mean a 4th (and more) spatial dimension. This would require the Maxwell equations involving curl to be represented in higher dimensions, which would require that the curl itself be represented in higher dimensions. Does the curl (and do the 2 Maxwell's equations involving curl) only apply to 3-D, or is it extendable to higher dimensions? Where can I read about this? Thanks!
2. Jan 26, 2008
Vid
3. Jan 26, 2008
snowstorm69
Dear Vid, Thank you so much! Can't wait to read this, and very cool that he was your high school physics teacher! Meanwhile I hope this doesn't discourage others from posting a response but... this response does look excellent! My heartfelt thanks for your help.
4. Jan 27, 2008
haushofer
The book of Zwiebach on string theory covers this topic quite early, somewhere in the first chapters. Maybe interesting for you.
5. Jan 27, 2008
snowstorm69
Thanks, haushofer, Looks like a great book; I think I'm going to get it. Also... what about the Helmholtz equation that you get from the Maxwell equations; any idea where to read about that in higher spatial dimensions? I may post that as a separate question, but am very interested in an answer whether from this thread or another. Thanks guys.
6. Feb 21, 2008
Phrak
To do it 'right' you first might want to redefine the 4-vector potential as a 5-vector. But it doesn't partition as nicely as it does in spacetime. In the usual spacetime you get the electric and magnetic fields that both appear to be vectors (but don't really transform as vectors with a change in inertial frame). In 5 dimensions you get something you might call an 'electric field' that has 4 components--one that can be associated with each spatial dimension. But the other part that generalizes the magnetic field has 6 elements. (Each element is associated with two of your spatial dimensions rather than one-on-one.) On top of all that, you get magnetic monopoles popping out of it, after taking another derivative. This all has to do with the way the generalization of the cross product that utilizes the completely antisymmetric field tensor behaves in higher dimensions. There's nothing wrong with investigating this, of course, it just might not turn out to be as you expect.
-deCraig
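For a concrete count behind the 4 + 6 split described in the last reply (an addition of mine, not from the thread): with an antisymmetric field strength F_MN = ∂_M A_N - ∂_N A_M in D spacetime dimensions,
$$\#F=\binom{D}{2}=\frac{D(D-1)}{2},\qquad \#E = D-1 \ (\text{the } F_{0i}),\qquad \#B=\binom{D-1}{2} \ (\text{the } F_{ij}),$$
so D = 4 gives 3 + 3 (both pieces look like 3-vectors), while D = 5 gives 4 + 6, matching the description above.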
2017-12-12 10:43:43
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8076769113540649, "perplexity": 761.9142257357374}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948515313.13/warc/CC-MAIN-20171212095356-20171212115356-00346.warc.gz"}
https://notstatschat.rbind.io/2017/07/26/tail-bounds-under-sparse-correlation/
# Tail bounds under sparse correlation Attention Conservation Notice: Very long and involves a proof that hasn’t been published, though the paper was rejected for unrelated reasons. Basically everything in statistics is a sum, and the basic useful fact about sums is the Law of Large Numbers: the sum is close to its expected value. Sometimes you need more, and there are lots of uses for a good bound on the probability of medium to large deviations from the expected value. One of the nice ones is Bernstein’s Inequality, which applies to bounded variables. If the variables have mean zero, are bounded by $$\pm K$$, and the variance of the sum is $$\sigma^2$$, then $\Pr\left(\left|\sum_i X_i\right|>t\right)\leq 2e^{-\frac{1}{2}\frac{t^2}{\sigma^2+Kt}}.$ The bound is exponential for large $$t$$ and looks like a Normal distribution for small $$t$$.  You don’t actually need the boundedness; you just need the moment bounds it implies: for all $$r>2$$, $$EX_i^r\leq K^{r-2}r!E[X_i^2]/2$$. That looks like the Taylor series for the exponential, and indeed it is. These inequalities tend to only hold for sums of independent variables, or ones that can be rewritten as independent, or nearly independent. My one, which this post is about, is for what I call sparse correlation.  Suppose you’re trying to see how accurate radiologists are (or at least, how consistent they are). You line up a lot of radiologists and a lot of x-ray images, and get multiple ratings.  Any two ratings of the same image will be correlated; any two ratings by the same radiologist will be correlated; but ‘most’ pairs of ratings will be independent. You might have the nice tidy situation where every radiologist looks at every image, in which case you could probably use $$U$$-statistics to prove things about the analysis. More likely, though, you’d divide the images up somehow. For rating $$X_i$$, I’ll write $${\cal S}_i$$ as the set of ratings that aren’t independent of $$X_i$$, and call it the neighbourhood of $$X_i$$.  You could imagine a graph where each observation has an edge to each other observation in its neighbourhood, and this graph will be important later. I’ll write $$M$$ for the size of the largest neighbourhood and $$m$$ for the size of the largest set of independent observations.  If you had 10 radiologists each reading 20 images, $$M$$ would be $$10+20-1$$ and $$m$$ would be $$10$$.  I’ll call data sparsely correlated if $$Mm$$ isn’t too big. If I was doing asymptotics I’d say $$Mm=O(n)$$ I actually need to make the stronger assumption that any two sets of observations that aren’t connected by any edges in the graph are independent: pairwise independence isn’t enough. For the radiology example that’s still fine: if set A and set B of ratings don’t involve any of the same images or any of the same radiologists they’re independent. A simple case of sparsely correlated data that’s easy to think about (if pointless in the real world) is identical replicates.  If we have $$m$$ independent observations and take $$M$$ copies of each one, we know what happens to the tail probabilities: you need to replace $$t$$ by $$Mt$$ and $$\sigma^2$$ by $$M\sigma^2$$ (ie, $$M^2$$ times the sum of the $$m$$ independent variances). We can’t hope to do better; it turns out we can do as well. The way we get enough independence to use the Bernstein’s-inequality argument is to make up $$M-1$$ imaginary sets of data.  Each set has the same joint distribution as the original variables, but the sets are independent of each other.  
Actually, what we need is not $$M$$ copies but $$\chi$$ copies, where $$\chi$$ is the number of colours needed for every variable in the dependency graph to have a different colour from all its neighbours. $$M$$ is always enough, but you can sometimes get away with fewer. Here’s the picture, for a popular graph The original variables are at the top. We needed three colours, so we have the original variables and two independent copies. Now look at the points numbered ‘1′.  Within a graph these are never neighbours because they are the same colour, and obviously between graphs they can’t be neighbours. So, all the variables labelled `1′ are independent (even though they have horribly complicated relationships with variables of different labels). There’s a version of every variable labelled ‘1′, another version labelled ‘2′, and another  labelled ‘3′. The proof has five steps.  First, we work with the exponential of the sum in order to later use Markov’s inequality to get exponential tail bounds. Second, we observe that adding all these extra copies makes the problem worse: a bound for the sum of all $$M$$ copies will bound the original sum. Third, we use the independence within each label to partially factorise an expectation of $$e^{\textrm{sum}}$$ into a product of expectations. We use the original Taylor-series argument based on the moment bounds to get a bound for an exponential moment. And, finally, we use Markov’s inequality to turn that into an exponential tail bound.  The first, and last two, steps are standard, the second and third are new. Theorem: Suppose we have $$X_i$$, $$i=1,2,\ldots,n$$ mean zero with neighbourhood size $$M$$. Suppose that for each $$X_i$$ $EX_i^r\leq K^{r-2}r!\sigma_i^2/2$ and let $$\sigma^2\geq\sum_{i=1}^n\sigma_i^2$$ Then $\Pr\left(\left|\sum_i X_i\right|>t\right)\leq2e^{-\frac{1}{2}\frac{t^2}{M\sigma^2+MKt}}.$ Proof: The $$M$$ copies of the variables are written $$\tilde X_{ij}$$ with $$i=1,\dots,n$$ and $$j=1,\dots,M$$, and the labelled versions as $$X_{i(j)}$$ for the copy of $$X_i$$ labelled $$j$$ By independence of the copies from each other $\begin{eqnarray*} E\left[\exp(\frac{1}{M}\sum_{j=1}^M\sum_{i=1}^n\tilde X_{ij}) \right]&=&\prod_{j=1}^ME\left[\exp(\frac{1}{M}\sum_{i=1}^n\tilde X_{ij}) \right]\\ &=&E\left[\exp(\frac{1}{M}\sum_{i=1}^nX_{i}) \right]^M\\\ &\geq&E\left[\exp(\frac{1}{M}\sum_{i=1}^nX_{i}) \right] \end{eqnarray*}$ so introducing the extra copies makes things worse. Now we use the labels. We can factor the expectation into a product over $$i$$, since $$X_{i(j)}$$ with the same $$j$$ and different $$i$$ are independent. 
$E\left[\exp(\frac{1}{M}\sum_{j=1}^M\sum_{i=1}^n\tilde X_{i(j)}) \right]=\prod_i E\left[\exp\left(\frac{1}{M}\sum_{j=1}^M \tilde X_{i(j)} \right)\right]$ Now we use the moment assumptions to get moment bounds for the sum $E\left[\left(\frac{1}{M}\sum_{j=1}^M\sum_{i=1}^n\tilde X_{i(j)}\right)^r\;\right]\leq M^{r-1}E\left[\left(\sum_{i=1}^n \tilde X_{i1}\right)^r\,\right]\leq M^{r-1}r!K^{r-2}\sigma^2$ Writing $$S_n$$ for $$\sum_{i=1}^n X_i$$, and $$\tilde S_n$$ for $$\sum_{i,j} \tilde X_{ij}$$ we have (for a value $$c$$ to be chosen later) $\begin{eqnarray*} E e^{cS_n} &\leq& Ee^{c\tilde S_n}\\ &=& 1+\frac{1}{2}\sigma^2c^2\sum_{r=2}^\infty \frac{c^{r-2}E\tilde S_n^r}{r!\sigma^2/2}\\ &<&\exp\left[\frac{1}{2}\sigma^2c^2\sum_{r=2}^\infty \frac{c^{r-2}E\tilde S_n^r}{r!\sigma^2/2}\right]\\ &\leq&\exp\left[\frac{1}{2}\sigma^2c^2\sum_{r=2}^\infty \frac{c^{r-2}M^{r-1}r!K^{r-2}\sigma^2/2}{r!\sigma^2/2}\right]\\ &<&\exp\left[\frac{1}{2}M\sigma^2c^2\sum_{r=2}^\infty(cMK)^{r-2} \right]\\ &=&\exp\left[\frac{M\sigma^2c^2}{2(1-cMK)}\right] \end{eqnarray*}$ Write $$\tilde K$$ for $$MK$$ and $$\tilde \sigma^2$$ for $$M\sigma^2$$ to get $E e^{cS_n}<\exp\left[\frac{\tilde\sigma^2c^2}{2(1-c\tilde K)}\right]$ Markov's inequality now says $P[S_n\geq t\tilde\sigma]\leq \frac{Ee^{cS_n}}{e^{ct\tilde\sigma}}<\exp\left[\frac{\tilde\sigma^2c^2}{2(1-c\tilde K)}-ct\tilde\sigma\right],$ We're basically done: we just need to find a good choice of $$c$$. The calculations are the same as in Bennett's 1962 proof of Bernstein's inequality, where he shows that $$c=t/(\tilde Kt+\tilde \sigma)$$ gives $P[S_n\geq t]<\exp\left[-\frac{1}{2}\frac{t^2}{\tilde\sigma^2+\tilde Kt}\right]$ That's an upper bound, and adding the same lower bound at most doubles the tail probability. So we are done.
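A quick numerical sanity check of my own (not part of the post): simulate the crossed 10-radiologist by 20-image design mentioned earlier, with each rating the sum of a bounded rater effect and a bounded image effect, and compare the empirical tail of the centred sum with the bound in the theorem. With M = 29 the bound is deliberately conservative here, but the empirical tail should always sit below it.

import numpy as np

rng = np.random.default_rng(0)
n_raters, n_images = 10, 20
M = n_raters + n_images - 1          # largest neighbourhood in the fully crossed design
K = 1.0                              # each rating is bounded by 1, so the moment condition holds with K = 1
sigma2 = n_raters * n_images / 6.0   # sum of per-rating variances; each rating has variance 1/12 + 1/12 = 1/6

def dataset_sum():
    # rating(i, j) = rater effect + image effect, both mean zero and bounded by 1/2
    rater = rng.uniform(-0.5, 0.5, n_raters)
    image = rng.uniform(-0.5, 0.5, n_images)
    return (rater[:, None] + image[None, :]).sum()

sums = np.array([dataset_sum() for _ in range(20000)])
for t in (40.0, 80.0, 120.0):
    empirical = np.mean(np.abs(sums) > t)
    bound = 2.0 * np.exp(-0.5 * t**2 / (M * sigma2 + M * K * t))
    print(f"t={t:6.1f}  empirical={empirical:.5f}  bound={bound:.3f}")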
2019-05-22 15:44:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8976057767868042, "perplexity": 348.6714304016279}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256858.44/warc/CC-MAIN-20190522143218-20190522165218-00284.warc.gz"}
http://tex.stackexchange.com/questions/134432/how-to-write-2-parallel-arrows-in-the-xymatrix-environment
# How to write 2 parallel arrows in the xymatrix environment?
I want to draw 2 parallel arrows in the xymatrix environment. Basically, instead of the standard arrow (\ar[r]), I want to have something like \begin{smallmatrix}\longrightarrow\\\longrightarrow\end{smallmatrix}. I tried leaving the arrow blank and just writing $$\xymatrix{\begin{smallmatrix}\longrightarrow\\\longrightarrow\end{smallmatrix}}$$, but it's not working.
Edit: I need it to write two simplicial objects and arrows between them.
-
$\xymatrix{ A \ar@<-.5ex>[r] \ar@<.5ex>[r] & B }$
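For completeness (my own addition, not part of the original answer): the same @<...> offset syntax also accepts labels, so a labelled variant for two simplicial levels could look like the following, where the face-map names d_0 and d_1 are just placeholders.
$\xymatrix{ X_1 \ar@<.5ex>[r]^{d_0} \ar@<-.5ex>[r]_{d_1} & X_0 }$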
2016-02-12 05:51:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9925527572631836, "perplexity": 1178.8137984231707}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701163438.83/warc/CC-MAIN-20160205193923-00070-ip-10-236-182-209.ec2.internal.warc.gz"}
https://tex.stackexchange.com/questions/463918/setting-conda-environment-for-pythontex-in-texstudio
# Setting conda environment for pythontex in TexStudio
I am using Texstudio with a build configuration based on this question in order to run PythonTex and embed python code into my documents. This worked and I was able to use \begin{pycode} and \end{pycode} successfully. However, the libraries I want to use are in a dedicated conda environment and not in the main system python. I added the --interpreter argument to my build in Texstudio such that it uses the interpreter from that environment. My build command looks like this in Texstudio:
txs:///compile | pythontex %.tex --interpreter "C:\ProgramData\Anaconda2\envs\wps_env36\python.exe" | txs:///compile | txs:///view
Simple commands like print('hello') will work. However, as soon as I try to import libraries which I know exist in this environment, it returns an error. This indicates to me that although the interpreter is correctly set, other parameters necessary for the functioning of the conda environment are not. How do I activate a conda environment such that it becomes the one pythontex is using within TexStudio?
You can activate the conda environment (activate.bat) and chain it (&&) with running pythontex using:
"C:\ProgramData\Anaconda2\Scripts\activate.bat" "C:\ProgramData\Anaconda2\envs\wps_env36\python.exe" && pythontex %.tex
By the way, it is generally not recommended to pollute the build chain with raw commands like you did. It's better to save this as a user command, then call it during your Build & View call. To show that pythontex is in the correct environment (on a bare test environment with pygments installed):
\documentclass{article}
\usepackage{pythontex}
\begin{document}
\begin{pycode}
import sys
print(sys.version)
\end{pycode}
hello world
\end{document}
gives the conda environment's Python version string in the compiled output (screenshot omitted here), whereas my base python environment has version 3.6.7.
2020-11-27 02:19:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8153839111328125, "perplexity": 3626.613215425817}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141189038.24/warc/CC-MAIN-20201127015426-20201127045426-00214.warc.gz"}
https://forum.allaboutcircuits.com/threads/sinewave-with-c.21526/
# Sinewave with C#
#### FBorges22
Joined Sep 11, 2008
109
Greetings, I am trying to write a program that generates an array to be used in Excel to represent a sinewave signal. The signal should have the following characteristics:
Sampling Rate: 10000
Number of Samples: 2500
Amplitude: 170
Frequency: 60Hz
The image attached to this post shows how the program should work (image not reproduced here). And here is the code that is not working as it should... What is wrong with it?
using System;

namespace Sinewave
{
    class Program
    {
        static double[] data = new double[2500];

        static void Main(string[] args)
        {
            for (int i = 0; i < 2500; i++)
            {
                data[i] = 170 * Math.Sin(2 * Math.PI * 60 * i * 0.00004);
                Console.WriteLine(data[i].ToString());
            }
        }
    }
}
Thanks, FBorges22
#### peajay
Joined Dec 10, 2005
67
Frequency generation works like this:
for (sample_number = 0; sample_number < number_of_samples; sample_number++)
{
    time_in_seconds = sample_number / sample_rate;
    sample = amplitude * sin ( 2 * PI * frequency_in_hz * time_in_seconds );
}
In your code, where you multiply by 0.00004, that's effectively using a sample rate of 25000, which isn't the sample rate that you want.
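For reference, a small Python sketch of my own (not from the thread) of peajay's recipe with the intended 10 kHz sample rate; 2500 samples at 10000 Hz cover 0.25 s, i.e. exactly 15 cycles of the 60 Hz wave:

import math

sample_rate = 10_000     # Hz, as specified in the question
num_samples = 2_500
amplitude = 170.0
frequency = 60.0         # Hz

# time step is 1/sample_rate = 0.0001 s (not the 0.00004 s used in the original loop)
data = [amplitude * math.sin(2 * math.pi * frequency * n / sample_rate)
        for n in range(num_samples)]

print(len(data), data[:3])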
2019-10-22 06:27:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31230008602142334, "perplexity": 5745.572281940537}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987803441.95/warc/CC-MAIN-20191022053647-20191022081147-00047.warc.gz"}
http://mathoverflow.net/revisions/59585/list
Yes, the complex orientation can be factored through these truncations of BP. Either classical methods (the Baas-Sullivan theory of manifolds with singularity - see Baas' "On bordism theory of manifolds with singularities") or more modern methods (see e.g. Strickland's "Products on MU-modules") produce truncated Brown-Peterson $BP\langle n\rangle$ as a tower of "quotients" $$MU \to \cdots \to BP \to \cdots \to BP\langle 2\rangle \to \ell \to H\mathbb{Z} \to H\mathbb{Z}/p$$ and this produces a sequence of compatible complex orientations on these, provided of course that you've produced compatible multiplicative structures on all of the $BP\langle n\rangle$. The problem doesn't really change if you use $ku$. Also, note that $ku$ and $\ell$ have nicer and more natural multiplicative structures and orientations than any version of $BP\langle n\rangle$ is known to in general.
2013-06-19 05:08:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9109638929367065, "perplexity": 896.8039256211789}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707773051/warc/CC-MAIN-20130516123613-00079-ip-10-60-113-184.ec2.internal.warc.gz"}
https://datascience.stackexchange.com/questions/82670/how-to-interprete-percentile-information-from-the-describe-function-in-pandas
# How to interpret percentile information from the describe function in Pandas?
I am a bit stumped on how to interpret the percentile information you see when you call the describe function on dataframes in Pandas. I believe I have a basic understanding of what a percentile means. For example, if in a test someone scores 40%, which ranks at the 75th percentile, this means that the score is higher than 75% of the total scores. But I don't know how to translate this knowledge to interpret what I see from the describe function. To illustrate, given the following:
test = pd.DataFrame([1,2,3,4,5,1,1,1,1,9])
test.describe()
This prints out something similar to this:
| count | 10.000000 |
|-------|-----------|
| mean  | 2.800000  |
| std   | 2.616189  |
| min   | 1.000000  |
| 25%   | 1.000000  |
| 50%   | 1.500000  |
| 75%   | 3.750000  |
| max   | 9.000000  |
Now I do not know how to interpret the values assigned to 25%, 50% and 75%. For example, 5 out of the 10 values are set to 1, but the 50% row has a value of 1.500000; clearly it is not saying 1.5 has a value of 50%, because there is not even a 1.5 in the data set. Also, why is 25% set to 1.000000 and 75% set to 3.750000? I know I am interpreting this wrong, hence this question! Would appreciate it if someone can help me understand this.
• After testing on my own dataset, the 50% row is the same as if I print df.median(), so to me 50% corresponds to your median, 25% is the 1st quartile and 75% is the 3rd quartile – BeamsAdept Oct 7 '20 at 9:59
• But why is 25% set to 1.000000 and 75% set to 3.750000? – Finlay Weber Oct 7 '20 at 10:22
Pandas' describe function internally uses the quantile function. The interpolation parameter of the quantile function determines how the quantile is estimated. The output below shows how you can get 3.75 or 3.5 as the 0.75 quantile based on the interpolation used. linear is the default setting. Please take a look at Pandas' quantile function source code here.
test = pd.Series([1,2,3,4,5,1,1,1,1,9])
test_series = test[0]
quantile_linear = test.quantile(0.75, interpolation='linear')
print(f'quantile based on linear interpolation: {quantile_linear}')
quantile based on linear interpolation: 3.75
quantile_midpoint = test.quantile(0.75, interpolation='midpoint')
print(f'quantile based on midpoint interpolation: {quantile_midpoint}')
quantile based on midpoint interpolation: 3.5
Percentiles indicate the percentage of scores that fall below a particular value. They tell you where a score stands relative to other scores. For example: a person with a height of 215 cm is at the 91st percentile, which indicates that his height is greater than 91 percent of the other heights. Percentiles are a great tool to use when you need to know the position of a value/score with respect to the population/data distribution you're considering. Where does a value fall within a distribution of values? While the concept behind percentiles is straightforward, there are different mathematical methods for calculating them. In your example, 50% corresponds to the median of the ordered values distribution. In this case the median is calculated between two values, 1 and 2 (because the number of values is even, the median has to be calculated between the fifth and sixth ordered values), as the mean between them: 1.5.
• ok, why then is 75% 3.750000 and 25% 1.000000 ?
– Finlay Weber Oct 7 '20 at 10:20
Since you have 10 elements (which is even), there is a little tricky thing: if you want the 50% (= the median), you have to take the mean between the 5th and 6th elements (starting at 1), so you have 5 elements on each side:
E E E E E1 | E2 E E E E
1 1 1 1 1 | 2 3 4 5 9
In your case, E1 = 1 and E2 = 2 (since it's sorted, because you want the median and quartiles), so this results in Median = 1.5.
25% is easily understandable: the first 5 values of your sorted df are "1", so if you make a cut in the first quarter, you'll find a 1.
I still have an issue with the 75%... To me, if you cut it right, the value taken by the 75% is E3:
E E E E E E | E3 | E E E
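To make the default 'linear' interpolation rule from the accepted answer concrete, here is a small sketch of my own (not part of the thread): with the 10 sorted values, the 0.75 quantile sits at fractional position (n-1)*0.75 = 6.75, i.e. three quarters of the way from the 7th to the 8th sorted value.

import numpy as np

data = sorted([1, 2, 3, 4, 5, 1, 1, 1, 1, 9])   # -> [1, 1, 1, 1, 1, 2, 3, 4, 5, 9]

def linear_quantile(values, q):
    # Reproduce the 'linear' interpolation rule by hand.
    pos = (len(values) - 1) * q        # fractional index into the sorted data
    lo = int(pos)                      # index just below
    frac = pos - lo                    # how far towards the next value
    if lo + 1 < len(values):
        return values[lo] + frac * (values[lo + 1] - values[lo])
    return values[lo]

for q in (0.25, 0.50, 0.75):
    print(q, linear_quantile(data, q), np.quantile(data, q))
# 0.25 -> 1.0, 0.50 -> 1.5, 0.75 -> 3.75, matching the describe() output above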
2021-06-24 05:57:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6050243973731995, "perplexity": 850.4101138082335}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488551052.94/warc/CC-MAIN-20210624045834-20210624075834-00463.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/algebra-2-1st-edition/chapter-5-polynomials-and-polynomial-functions-5-7-apply-the-fundamental-theorem-of-algebra-5-7-exercises-skill-practice-page-384/35
## Algebra 2 (1st Edition)
In the given polynomial of degree $3$, there are $3$ zeros. Here, we have $g(x)=-x^3+5x^2+12$. The coefficient signs are $-,+,+$, giving one sign change, so by Descartes' Rule we have $1$ positive real root. Now, $g(-x)=-(-x)^3+5(-x)^2+12=x^3+5x^2+12$. This polynomial shows no sign change, so by Descartes' Rule we have no negative real roots. Thus, the other two roots must be imaginary. Hence: positive real roots: 1; negative real roots: 0; imaginary roots: 2.
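As a quick numerical cross-check (my own addition, not part of the textbook solution), the roots of $-x^3+5x^2+12$ can be computed directly; the output should show one positive real root (near $x \approx 5.4$) and a complex-conjugate pair, consistent with the count above.

import numpy as np

# Coefficients of g(x) = -x^3 + 5x^2 + 0x + 12, highest degree first
roots = np.roots([-1, 5, 0, 12])
print(roots)
print("real roots:", [r.real for r in roots if abs(r.imag) < 1e-9])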
2019-10-21 07:55:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8695498108863831, "perplexity": 549.1472562050723}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987763641.74/warc/CC-MAIN-20191021070341-20191021093841-00261.warc.gz"}
https://www.nag.com/numeric/mb/nagdoc_mb/manual_25_1/html/g01/g01gbf.html
# NAG Toolbox: nag_stat_prob_students_t_noncentral (g01gb)

## Purpose

nag_stat_prob_students_t_noncentral (g01gb) returns the lower tail probability for the noncentral Student's $t$-distribution.

## Syntax

[result, ifail] = g01gb(t, df, delta, 'tol', tol, 'maxit', maxit)
[result, ifail] = nag_stat_prob_students_t_noncentral(t, df, delta, 'tol', tol, 'maxit', maxit)

Note: the interface to this routine has changed since earlier releases of the toolbox: At Mark 23: tol was made optional (default 0)

## Description

The lower tail probability of the noncentral Student's $t$-distribution with $\nu$ degrees of freedom and noncentrality parameter $\delta$, $P\left(T\le t:\nu ;\delta \right)$, is defined by
$P\left(T\le t:\nu ;\delta \right)=C_{\nu }\int_{0}^{\infty }\left(\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{\alpha u-\delta }e^{-x^{2}/2}\,dx\right)u^{\nu -1}e^{-u^{2}/2}\,du,\quad \nu >0.0$
with
$C_{\nu }=\frac{1}{\Gamma \left(\tfrac{1}{2}\nu \right)\,2^{\left(\nu -2\right)/2}},\qquad \alpha =\frac{t}{\sqrt{\nu }}.$
The probability is computed in one of two ways.
(i) When $t=0.0$, the relationship to the normal is used:
$P\left(T\le t:\nu ;\delta \right)=\frac{1}{\sqrt{2\pi }}\int_{\delta }^{\infty }e^{-u^{2}/2}\,du.$
(ii) Otherwise the series expansion described in Equation 9 of Amos (1964) is used. This involves the sums of confluent hypergeometric functions, the terms of which are computed using recurrence relationships.

## References

Amos D E (1964) Representations of the central and non-central $t$-distributions Biometrika 51 451–458

## Parameters

### Compulsory Input Parameters

1: $\mathrm{t}$ – double scalar
$t$, the deviate from the Student's $t$-distribution with $\nu$ degrees of freedom.
2: $\mathrm{df}$ – double scalar
$\nu$, the degrees of freedom of the Student's $t$-distribution. Constraint: ${\mathbf{df}}\ge 1.0$.
3: $\mathrm{delta}$ – double scalar
$\delta$, the noncentrality argument of the Student's $t$-distribution.

### Optional Input Parameters

1: $\mathrm{tol}$ – double scalar
Default: $0.0$
The absolute accuracy required by you in the results. If nag_stat_prob_students_t_noncentral (g01gb) is entered with tol greater than or equal to $1.0$ or less than  (see nag_machine_precision (x02aj)), then the value of  is used instead.
2: $\mathrm{maxit}$ – int64/int32/nag_int scalar
Default: $100$. See Further Comments for further comments.
The maximum number of terms that are used in each of the summations. Constraint: ${\mathbf{maxit}}\ge 1$.

### Output Parameters

1: $\mathrm{result}$ – double scalar
The result of the function.
2: $\mathrm{ifail}$ – int64/int32/nag_int scalar
${\mathbf{ifail}}={\mathbf{0}}$ unless the function detects an error (see Error Indicators and Warnings).

## Error Indicators and Warnings

Errors or warnings detected by the function: If on exit ${\mathbf{ifail}}\ne {\mathbf{0}}$, then nag_stat_prob_students_t_noncentral (g01gb) returns $0.0$.
${\mathbf{ifail}}=1$ On entry, ${\mathbf{df}}<1.0$.
${\mathbf{ifail}}=2$ On entry, ${\mathbf{maxit}}<1$.
${\mathbf{ifail}}=3$ One of the series has failed to converge. Reconsider the requested tolerance and/or maximum number of iterations.
${\mathbf{ifail}}=4$ The probability is too small to calculate accurately.
${\mathbf{ifail}}=-99$
${\mathbf{ifail}}=-399$ Your licence key may have expired or may not have been installed correctly.
${\mathbf{ifail}}=-999$ Dynamic memory allocation failed.

## Accuracy

The series described in Amos (1964) are summed until an estimated upper bound on the contribution of future terms to the probability is less than tol. There may also be some loss of accuracy due to calculation of gamma functions.
The rate of convergence of the series depends, in part, on the quantity ${t}^{2}/\left({t}^{2}+\nu \right)$. The smaller this quantity the faster the convergence. Thus for large $t$ and small $\nu$ the convergence may be slow. If $\nu$ is an integer then one of the series to be summed is of finite length. If two tail probabilities are required then the relationship of the $t$-distribution to the $F$-distribution can be used: $F={T}^{2},\quad \lambda ={\delta }^{2},\quad {\nu }_{1}=1\ \text{and}\ {\nu }_{2}=\nu ,$ and a call made to nag_stat_prob_f_noncentral (g01gd). Note that nag_stat_prob_students_t_noncentral (g01gb) only allows degrees of freedom greater than or equal to $1$ although values between $0$ and $1$ are theoretically possible.

## Example

This example reads deviate values, degrees of freedom and noncentrality arguments for noncentral Student's $t$-distributions, calculates the lower tail probabilities and prints all these values until the end of data is reached.

```
function g01gb_example

fprintf('g01gb example results\n\n');

t     = [ -1.528 -0.188 1.138];
df    = [ 20 7.5 45 ];
delta = [ 2 1 0 ];
p     = t;

fprintf(' t df delta p\n');
for j = 1:numel(t)
  [p(j), ifail] = g01gb( ...
      t(j), df(j), delta(j));
end
fprintf('%8.3f%8.3f%8.3f%8.4f\n', [t; df; delta; p]);
```

```
g01gb example results

       t      df   delta       p
  -1.528  20.000   2.000  0.0003
  -0.188   7.500   1.000  0.1189
   1.138  45.000   0.000  0.8694
```
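For readers without the NAG Toolbox, the lower tail probabilities in the example can be cross-checked against SciPy's noncentral t distribution (an alternative added here for illustration; it is not part of the NAG documentation):

```python
from scipy.stats import nct

# (t, df, delta) triples from the example above; nct.cdf(t, df, nc) is P(T <= t)
for t, df, delta in [(-1.528, 20.0, 2.0), (-0.188, 7.5, 1.0), (1.138, 45.0, 0.0)]:
    print(f'{t:8.3f}{df:8.3f}{delta:8.3f}{nct.cdf(t, df, delta):8.4f}')
```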
2022-11-26 16:48:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 52, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9859994649887085, "perplexity": 3888.2452038504907}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446708010.98/warc/CC-MAIN-20221126144448-20221126174448-00644.warc.gz"}
http://math.stackexchange.com/questions/30731/will-two-convex-hulls-overlap
# Will two convex hulls overlap? I ran into the following problem while working on neural nets. Given natural numbers $b$ and $r$, uniformly randomly choose $b+r$ points within a unit square. Call the $b$ points the blue points and the $r$ points the red points. What is the probability $p(b,r)$ that the convex hull of the blue points, $H_b$, overlaps with $H_r$? Partial answer: I can't immediately think of anything but a multi-fold brute-force integral for this. Intuitively, it seems to me (I could be incorrect) that $p(b,r)$ has to satisfy $\lim_{b\rightarrow \infty} p(b,r) = 1$ and likewise $p(b,r) \rightarrow 0$ as $b$ or $r$ approaches 0. Also, given $b$ random points, we can compute the expected size of their convex hull according to this paper, though it's not clear to me how to use this. I don't know how to connect these disparate hints at a solution and would like suggestions. - If you can find the expected area of the convex hull of $b$ blue points (call it $E[A]$), then the rest is easy. The probability that one red point, chosen uniformly and independently of the blue points, will lie in the convex hull is then again $E[A]$ (to see this, condition on the location of the blue points). So the probability that the convex hulls overlap is just the probability that some red point lies inside the convex hull of the blue points, which by independence is $1-(1-E[A])^r$.
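A small Monte Carlo sketch (added here; not part of the original thread) that estimates $p(b,r)$ by drawing the points and testing the two hulls for intersection directly. It assumes the shapely package is available, and the function name is mine:

```python
import numpy as np
from shapely.geometry import MultiPoint

def overlap_probability(b, r, trials=20000, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        blue_hull = MultiPoint(rng.random((b, 2))).convex_hull
        red_hull = MultiPoint(rng.random((r, 2))).convex_hull
        hits += blue_hull.intersects(red_hull)   # True if the hulls overlap at all
    return hits / trials

print(overlap_probability(5, 5))   # Monte Carlo estimate of p(5, 5)
```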
2014-11-27 01:38:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9274365305900574, "perplexity": 132.65688931112945}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931007715.70/warc/CC-MAIN-20141125155647-00111-ip-10-235-23-156.ec2.internal.warc.gz"}
https://blender.stackexchange.com/questions/69228/using-mathutils-geometry-box-fit-2d
# Using mathutils.geometry.box_fit_2d

How does one use the angle returned from mathutils.geometry.box_fit_2d? For example, in the image above the face is selected & the following is run in the console. (There is no transform on the mesh object, so the vert coordinates are global coords)

>>> bm = bmesh.from_edit_mesh(C.object.data)
>>> points = [v.co.xy for v in bm.verts if v.select]
>>> points
[Vector((39.5, 125.0)), Vector((40.25, 107.0)), Vector((107.0, 163.0)), Vector((80.5, 159.5))]
>>> a = box_fit_2d(points)
>>> degrees(a)
50.00498629110601

The plane (the square) is rotated 50.005 degrees. I've found a couple of example usages, generally for UVs: http://www.programcreek.com/python/example/56195/mathutils.Vector.

The angle returned by box_fit_2d is the angle by which the given points have to be rotated to best fit them into a rectangle aligned with the axes. The example script linked in your question – currently there is only one script with a call to box_fit_2d – does indeed use the angle to calculate the coordinates of the rotated points:

angle = mathutils.geometry.box_fit_2d(cos_2d)
mat = mathutils.Matrix.Rotation(angle, 2)
cos_2d = [(mat * co) for co in cos_2d]
xs = [co.x for co in cos_2d]
ys = [co.y for co in cos_2d]

But if you instead want to fit the original points into a rotated rectangle, then you have to rotate the rectangle by the negative of that angle. The image below shows the face from your question together with a plane rotated by −50.005 degrees instead. If you're also interested in the optimal size of the rectangle, I guess it's still best to calculate the rotated points and then determine the smallest and largest coordinates like it's been done in the example script.

width = max(xs) - min(xs)
height = max(ys) - min(ys)
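To make the geometry concrete outside of Blender, here is a numpy-only sketch (added here; the point coordinates and angle come from the question, everything else is illustrative). Rotating the vertices by +angle axis-aligns them, so the bounding box of the rotated points gives the size of the best-fit rectangle; rotating that rectangle by -angle overlays it on the original face.

```python
import numpy as np

points = np.array([(39.5, 125.0), (40.25, 107.0), (107.0, 163.0), (80.5, 159.5)])
angle = np.radians(50.005)                  # value returned by box_fit_2d above

# 2D counterclockwise rotation, the numpy analogue of mathutils.Matrix.Rotation(angle, 2)
rot = np.array([[np.cos(angle), -np.sin(angle)],
                [np.sin(angle),  np.cos(angle)]])

aligned = points @ rot.T                    # rotate every point by +angle
width, height = aligned.max(axis=0) - aligned.min(axis=0)
print(width, height)                        # dimensions of the best-fit rectangle
```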
2020-07-07 15:22:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5157735347747803, "perplexity": 1960.3425470258203}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655893487.8/warc/CC-MAIN-20200707142557-20200707172557-00217.warc.gz"}
https://www.physicsforums.com/threads/powers-expressions.75444/
# Powers Expressions

1. May 12, 2005 ### Raza Evaluate, leaving the answer as a fraction: a) $$\left(\frac{4}{9}\right)^\frac{3}{2}$$ b) $$(-27)^{-\frac{1}{3}}$$ Thanks

2. May 12, 2005 ### Imo Some useful hints: $$a^\frac{b}{c} = (a^b)^\frac{1}{c} = (a^\frac{1}{c})^b$$ $$a^{-b} = \left(\frac{1}{a}\right)^b$$ $$a^\frac{1}{b} = \sqrt[b]{a}$$ hope that helps

3. May 12, 2005 ### Jameson And to add on to Imo's hints, remember that an exponent is written in the form of $$\frac{\mbox{power}}{\mbox{root}}$$ For instance, $$9^{\frac{3}{2}} = (\sqrt{9})^3$$

4. May 12, 2005 ### Raza Thank you, I finally got it. I lost my math notes and my exam is coming pretty soon so I needed to review.

5. May 12, 2005 ### whozum $$a^{-b} = \frac{1}{a^b} = \left(\frac{1}{a}\right)^b$$
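Putting the hints together, a worked evaluation (added here for completeness; it was not posted in the original thread):

$$\left(\frac{4}{9}\right)^{\frac{3}{2}} = \left(\sqrt{\frac{4}{9}}\right)^{3} = \left(\frac{2}{3}\right)^{3} = \frac{8}{27}, \qquad (-27)^{-\frac{1}{3}} = \frac{1}{\sqrt[3]{-27}} = \frac{1}{-3} = -\frac{1}{3}$$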
2017-04-30 22:40:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5003889799118042, "perplexity": 6792.223620247369}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917125881.93/warc/CC-MAIN-20170423031205-00457-ip-10-145-167-34.ec2.internal.warc.gz"}
https://embed.planetcalc.com/8312/
# Geometric sequence calculator and problems solver

This online calculator solves common geometric sequence problems. Currently, it can help you with the two common types of problems:

1. Find the n-th term of a geometric sequence given the m-th term and the common ratio. Example problem: A geometric sequence has a common ratio equal to -1, and its 1st term equals 10. Find its 8th term.
2. Find the n-th term of a geometric sequence given the i-th term and j-th term. Example problem: A geometric sequence has its 3rd term equal to 1/2 and its 5th term equal to 8. Find its 8th term.

The detailed description of the solutions is shown through geometric sequence theory underneath the calculator, as always.

#### Geometric sequence calculator and problems solver

[Calculator inputs: first term of the geometric sequence, common ratio, n-th term of the sequence formula; output: the unknown term.]

### Geometric sequence

To recall, a geometric sequence or geometric progression is a sequence of numbers where each term after the first is found by multiplying the previous one by a fixed, non-zero number called the common ratio. Thus, the formula for the n-th term is $a_n=a_1r^{n-1}$ where r is the common ratio. You can solve the first type of problem listed above by calculating the first term a1, using the formula $a_1=\frac{a_n}{r^{n-1}}$ and then using the geometric sequence formula for the unknown term. For the second type of problem, you first need to find the common ratio using the following formula, derived by dividing the equation for one known term by the equation for another known term: $\frac{a_n}{a_m}=\frac{a_1r^{n-1}}{a_1r^{m-1}} \implies \frac{a_n}{a_m}=\frac{r^{n-1}}{r^{m-1}} \implies \frac{a_n}{a_m}=r^{n-m} \implies r=\sqrt[n-m]{\frac{a_n}{a_m}}$ After that, it becomes the first type of problem. For convenience, the calculator above also calculates the first term and the general formula for the n-th term of a geometric sequence.
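A minimal Python sketch of the two solution recipes above (added here as an illustration; the function names are mine, and only the positive real ratio is taken when extracting an even-order root):

```python
def nth_term_from_term_and_ratio(a_m, m, r, n):
    """Type 1: from the m-th term and the common ratio, recover a_1 and return a_n."""
    a_1 = a_m / r ** (m - 1)
    return a_1 * r ** (n - 1)

def nth_term_from_two_terms(a_i, i, a_j, j, n):
    """Type 2: recover the common ratio from two known terms, then reduce to type 1."""
    r = (a_j / a_i) ** (1.0 / (j - i))       # positive real root only
    return nth_term_from_term_and_ratio(a_i, i, r, n)

print(nth_term_from_term_and_ratio(10, 1, -1, 8))   # example problem 1: -10
print(nth_term_from_two_terms(0.5, 3, 8, 5, 8))     # example problem 2: 512.0
```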
2022-01-29 12:56:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 3, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7265018224716187, "perplexity": 759.6917493732013}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320306181.43/warc/CC-MAIN-20220129122405-20220129152405-00553.warc.gz"}
https://halfpriceprototypes.com/qa/quick-answer-what-is-the-equation-for-sample-mean.html
# Quick Answer: What Is The Equation For Sample Mean?

## How do you find the sample mean difference?

The expected value of the difference between all possible sample means is equal to the difference between population means. Thus, E(x̄1 – x̄2) = μd = μ1 – μ2.

## What is the probability of the sample mean?

The statistic used to estimate the mean of a population, μ, is the sample mean, x̄. So the probability that the sample mean will be >22 is the probability that Z is > 1.6. We use the Z table to determine this: … P(x̄ > 22) = P(Z > 1.6) = 0.0548.

## How do you find the expected sample mean?

The standard error (SE) of the sample sum is the square root of the sample size, times the standard deviation (SD) of the numbers in the box. The expected value of the sample mean is the population mean, and the SE of the sample mean is the SD of the population, divided by the square root of the sample size.

## How do you find the sample mean and sample standard deviation?

Here's how to calculate the sample standard deviation: Step 1: Calculate the mean of the data; this is x̄ in the formula. Step 2: Subtract the mean from each data point. … Step 3: Square each deviation to make it positive. Step 4: Add the squared deviations together.

## How do I calculate the sample mean?

How to calculate the sample mean: Add up the sample items. Divide the sum by the number of samples. The result is the mean. Use the mean to find the variance. Use the variance to find the standard deviation.

## Is the sample mean the same as the mean?

Differences. “Mean” usually refers to the population mean. This is the mean of the entire population of a set. … The mean of the sample group is called the sample mean.

## How do you find the mean and mode?

To find the mean, add together all of your values and divide by the number of addends. The median is the middle number of your data set when in order from least to greatest. The mode is the number that occurred the most often.

## Does sample size affect the mean?

Center: The center is not affected by sample size. The mean of the sample means is always approximately the same as the population mean µ = 3,500. Spread: The spread is smaller for larger samples, so the standard deviation of the sample means decreases as sample size increases.

## What happens to the mean as the sample size increases?

The population mean of the distribution of sample means is the same as the population mean of the distribution being sampled from. … Thus as the sample size increases, the standard deviation of the means decreases; and as the sample size decreases, the standard deviation of the sample means increases.
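A short numpy illustration of the quantities above (added here; the data values are just an example): the sample mean, the sample standard deviation (dividing the summed squared deviations by n - 1), and the standard error of the sample mean.

```python
import numpy as np

sample = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

mean = sample.mean()               # add up the items, divide by the count
sd = sample.std(ddof=1)            # sample standard deviation (divides by n - 1)
se = sd / np.sqrt(sample.size)     # standard error of the sample mean
print(mean, sd, se)
```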
2020-10-20 08:07:04
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8744579553604126, "perplexity": 289.88734652676976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107871231.19/warc/CC-MAIN-20201020080044-20201020110044-00187.warc.gz"}
http://astronomy.stackexchange.com/questions/447/future-of-cmb-observations-how-will-our-knowledge-of-the-early-universe-change/832
Future of CMB observations: How will our knowledge of the early universe change? The Planck satellite has long been presented and awaited as the ultimate experiment for measuring temperature fluctuations in the cosmic microwave background (CMB) over the full sky. One of the big questions that still needs an answer, and that Planck might help clarify, is about the dynamics and driving mechanisms in the first phases of the universe, in particular in the period called inflation. Thankfully there is room for improvements at small scales, i.e. small pieces of sky observed with extremely high resolution, and more importantly for experiments measuring the polarisation of the CMB. I know that for the next years a number of polarisation experiments, mostly from the ground and from balloons, are planned (I'm not sure about satellites). For sure some of these results will rule out some of the possible inflationary scenarios, but to which level? Will we ever be able to say: "inflation happened this way"? - I'm not prepared to write a full post on the topic at this moment, but one of the big things researchers are interested in measuring is a very special parameter labeled f_nl. This parameter has to do with what's known as primordial non-Gaussianity, which essentially introduces the idea that the power-spectrum of the universe is not scale-free. –  astromax Oct 2 '13 at 16:38 right. I forgot about non-Gaussianity. –  Francesco Montesano Oct 3 '13 at 12:08 @astromax I would be interested in an answer here too, if you find time for it. –  Dilaton Oct 5 '13 at 15:21 –  called2voyage Oct 15 '13 at 18:44 The other thing people are looking at is the idea of primordial non-Gaussianity, which concerns second-order corrections to the Gaussian fluctuations present in the CMB (review article; early Planck results). Measuring a parameter called $f_{nl}$ (the deviation from Gaussianity) is a fairly crucial part of current and future studies and will also help rule out various inflationary models. This $f_{nl}$ parameter is defined as follows: in this case the multipole coefficients $a_{lm}$ of the CMB temperature map can be written as $$a_{lm} = a_{lm}^{(G)} + f_{nl} a_{lm}^{(NG)}$$ where $a_{lm}^{(G)}$ is the Gaussian contribution and $a_{lm}^{(NG)}$ is the non-Gaussian contribution.
2014-08-27 14:52:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5939221978187561, "perplexity": 742.3332873449275}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500829421.59/warc/CC-MAIN-20140820021349-00347-ip-10-180-136-8.ec2.internal.warc.gz"}
https://physics.unm.edu/SQuInT/2019/program.php?session=all
## Program

#### SESSION 1: Ions Chair: (Murray Holland)

8:30am - 9:15am John Bollinger, National Institute of Standards and Technology, Boulder (invited) Quantum control and simulation with large trapped-ion crystals Abstract. I will describe efforts to improve the control of large, single-plane crystals of several hundred ions in Penning traps and employ these crystals for quantum sensing and quantum simulation. We isolate and control two internal levels, or a “spin” degree of freedom, in each ion with standard techniques. Long-range interactions between the ions are generated through the application of spin-dependent optical-dipole forces that couple the spin and motional degrees of freedom of the ions. When this coupling is tuned to produce a coupling with a single motional mode (typically the center-of-mass mode), this system is described by the iconic Dicke model. Long-range Ising interactions and single-axis twisting are produced through this spin-motion coupling. To benchmark dynamics, we measure out-of-time-order correlations (OTOCs) that quantify the build-up of correlations and the spread of quantum information. We also employ spin-dependent optical dipole forces to sense center-of-mass motion that is small compared to the ground state zero-point fluctuations. This enables the detection of weak electric fields and may provide an opportunity to place limits on dark matter couplings due to particles such as axions and hidden photons that couple to ordinary matter through weak electric fields.

9:15am - 9:45am Megan Ivory, University of Washington Novel trap for 2D ion crystal experiments Abstract. Quantum computation has thus far been limited by the number of available qubits. In trapped ions, most computation has been performed in linear Paul traps to avoid micromotion, which is thought to lead to low gate fidelities. Recent theoretical work by the Duan group shows that micromotion can be compensated with the use of segmented laser pulses, allowing for fidelities >99.99% in two-dimensional ion crystals of >100 ions. Here, we seek to experimentally demonstrate high-fidelity quantum gates in Ba+ ions in a planar crystal. To do so, we have developed a novel trap system specifically for producing Ba+ crystals. The trap geometry is based on simulations we developed for modeling trapped ion dynamics and equilibrium positions. The electrodes consist of a segmented ring, which allows us to dynamically tune the transverse trap frequencies to produce both planar and linear traps. We present progress towards the demonstration of large ion crystals of varying trap frequency anisotropies. In addition to high-fidelity quantum gates in planar ion crystals, the system in development can also be used for quantum chemistry simulations and the study of crystalline order, defects, and phase transitions.

9:45am - 10:15am Crystal Noel, University of California Berkeley Electric-field noise from thermally-activated fluctuators in a surface ion trap Abstract. Electric field noise is a major limiting factor in the performance of ion traps and other quantum devices. Despite intensive research over the past decade, the nature and cause of electric field noise near surfaces is not very well understood. We probe electric-field noise near the surface of an ion trap chip in a previously unexplored high-temperature regime. A saturation of the noise amplitude occurs around 500 K, which, together with a small change in the frequency scaling, points to thermally activated fluctuators as the origin of the noise.
The data can be explained by a broad distribution of activation energies around 0.5 eV. These energies suggest atomic displacements as a relevant microscopic mechanism, likely taking place at the metal surface.

#### SESSION 2: Computer science Chair: (Elizabeth Crosson)

10:45am - 11:30am Anne Broadbent, University of Ottawa (invited) Uncloneable encryption Abstract. In 2002, Gottesman answered this question in the positive, proposing a quantum encryption scheme for classical messages, with a decryption process that detects any attempt to copy the ciphertext. Clearly, classical information alone does not allow such a functionality, since it is always possible to perfectly copy a classical ciphertext while avoiding detection. However, Gottesman left open the question of restricting the knowledge that two recipients could simultaneously have on a plaintext, after an attack on a single ciphertext. Here, we address this open question by showing that Wiesner's conjugate coding can be used to achieve this type of uncloneable encryption for classical messages. Our approach is a prepare-and-measure scheme and the analysis is done in the quantum random oracle model, using techniques from the analysis of monogamy-of-entanglement games.

11:30am - 12:00pm Yigit Subasi, Los Alamos National Laboratory Quantum algorithms for systems of linear equations inspired by adiabatic quantum computing Abstract. We present two quantum algorithms based on evolution randomization, a simple variant of adiabatic quantum computing, to prepare a quantum state $$|x\rangle$$ that is proportional to the solution of the system of linear equations $$A \vec{x}=\vec{b}$$. The time complexities of our algorithms are $$O(\kappa^2 \log(\kappa)/\epsilon)$$ and $$O(\kappa \log(\kappa)/\epsilon)$$ where $$\kappa$$ is the condition number of $$A$$ and $$\epsilon$$ is the precision. Both algorithms are constructed using families of Hamiltonians that are linear combinations of products of $$A$$, the projector onto the initial state $$|b\rangle$$, and single-qubit Pauli operators. The algorithms are conceptually simple and easy to implement. They are not obtained from equivalences between the gate model and adiabatic quantum computing, and do not use phase estimation or variable-time amplitude amplification. We describe a gate-based implementation via Hamiltonian simulation and prove that our second algorithm is almost optimal in terms of $$\kappa$$. Like previous methods, our techniques yield an exponential quantum speed-up under some assumptions. Our results emphasize the role of Hamiltonian-based models of quantum computing for the discovery of important algorithms.

#### SESSION 3: Measurement-based quantum computation Chair: (Rafael Alexander)

1:30pm - 2:15pm Mercedes Gimeno-Segovia, PsiQuantum Corp. (invited) Photonic quantum computing Abstract. Photons make great qubits: they are cheap to produce, resilient to noise, and the only known option for quantum communication. The two main traditional arguments against a fully linear-optical quantum computing architecture have been the lack of deterministic photonic entangling gates and the predisposition of photons to loss. However, a number of theoretical breakthroughs have made these arguments lose strength, while implementations in silicon photonics have opened the door to manufacturability at large scale.
In this talk, I will describe an architecture for fault tolerant quantum computing based on linear optics, in the process I will explain how measurement-induced non-linearity can overcome the challenge of creating entanglement and how loss can be tackled with well-known error correcting codes. 2:15pm - 2:45pmRobert Raussendorf, University of British Columbia A computationally universal phase of quantum matter Abstract. We provide the first example of a symmetry protected quantum phase that has universal computational power. Throughout this phase, which lives in spatial dimension two, the ground state is a universal resource for measurement based quantum computation. #### SESSION 4: Quantum optics Chair: (Alberto Marino) 3:15pm - 3:45pmMichael G. Raymer , University of Oregon High-efficiency demultiplexing of quantum information in temporal modes Abstract. Information can be encoded in single photons using temporal modes (sets of field-orthogonal wave-packet shapes). Temporal modes span a high-dimensional quantum state space and integrate into existing single-mode fiber communication networks, thus creating a new framework for quantum information science. A major challenge to achieving full control of temporal-mode states is their multiplexing and demultiplexing with zero crosstalk. Such add/drop functionality can be achieved by frequency conversion (FC) via nonlinear wave mixing, which can exchange the quantum states between two narrow spectral bands in a temporal-mode-selective manner. By tailoring the shape of the pump laser pulse and the phase-matching conditions of a second-order nonlinear optical medium, one can achieve moderate selectivity for different temporally orthogonal wave packets. To exceed this limit, we demonstrate a two-stage “Ramsey” interferometric FC scheme [1], predicted by theory to reach near-perfect (100%) selectivity. Using the two-stage scheme, we demonstrate a large increase over the single-stage selectivity limit, for the first three natural (“Schmidt”) modes of the FC process. This result paves the way for implementing arbitrary single-photon unitary operations, and thus various protocols such as QKD, in the temporal-mode basis. 1. “High-selectivity quantum pulse gating of photonic temporal modes using all-optical Ramsey interferometry,” D. V. Reddy and M. G. Raymer, Optica, 5, 423 (2018) 3:45pm - 4:15pmSteve Young, Sandia National Laboratories General modeling framework for quantum photodetectors Abstract. Photodetection plays a key role in basic science and technology, with exquisite performance having been achieved down to the single photon level. Further improvements in photodetectors would open new possibilities across a broad range of scientific disciplines, and enable new types of applications. However, it is still unclear what is possible in terms of ultimate performance, and what properties are needed for a photodetector to achieve such performance. Here we present a general modeling framework for single- and few- photon detectors wherein the entire detection process - including the photon field, environmental coupling, and measurement output - is treated holistically and quantum mechanically. The formalism naturally handles field states with single or multiple photons as well as arbitrary detector configurations. It is explicitly constructed to provide performance characteristics and naturally furnishes a mathematical definition of ideal photodetector performance. 
The framework reveals how specific photodetector architectures and physical realizations introduce limitations and tradeoffs for various performance metrics, providing guidance for optimization and design. 4:15pm - 4:45pmSofiane Merkouche, University of Oregon Entangled-state measurements based on mode-resolved two-photon sum-frequency generation Abstract. Projective measurements onto entangled quantum states (commonly referred to as "entangled measurements") are an essential tool for many quantum information processing applications, for example quantum repeaters and quantum-state teleportation. The most well-studied such measurement is the Bell-state measurement for two qubits. Here, we introduce a two-photon multi-mode entangled-state measurement scheme based upon sum-frequency generation (SFG) followed by mode-resolved single-photon detection. We show that the mode-resolved detection of the output of a two-photon SFG process acts as a projective measurement onto the two-photon entangled state produced by the time-reversed parametric downconversion process in the perturbative limit. We analyze the applicability of such a measurement both for temporal- and spatial-mode entanglement, and show how this can be exploited for high-dimensional quantum teleportation, entanglement swapping, and quantum illumination. #### SESSION 6: AMO physics Chair: (Grant Biedermann) 8:30am - 9:15amManuel Endres, California Institute of Technology (invited) Quantum simulation with alkali and alkaline-earth Rydberg-arrays Abstract. Recently, cold alkali atoms in optical tweezer arrays have emerged as a versatile platform for quantum simulation. I will review these developments and give an update about ongoing experiments with alkaline-earth atoms: 1) I will introduce atom-by-atom assembly as a fast and simple method to generate defect-free atomic arrays. 2) I will review how such arrays can be used as a quantum simulator for specific types of transverse- & longitudinal-field Ising-models with 1/R^6 interaction. 3) I will outline how we are currently extending this work to alkaline-earth atoms using Strontium-88; particularly, I will illustrate how this new direction could overcome current coherence limits and enable scalability to larger tweezer arrays. 9:15am - 9:45amAlex Burgers, California Institute of Technology Engineering atom-light interactions in photonic crystal waveguides Abstract. Integrating cold atoms with nanophotonics enables the exploration of new paradigms in quantum optics and many body physics. Advanced fabrication capabilities for low-loss dielectric materials provide powerful tools to engineer band structure and light-matter couplings between photons and atoms. The current system at Caltech to explore such phenomena consists of a quasi-one-dimensional photonic crystal waveguide (PCW) whose band structure arises from periodic modulation of the dielectric structure. The waveguide design gives rise to stable trap sites for atoms at each unit cell of the crystal (150 sites for the 1D waveguide). Atoms localized in these traps will interact with one another via guided modes of the waveguide creating a versatile system that can be utilized for both quantum memories and quantum simulation. We have performed extensive trajectory simulations of atoms delivered by an optical lattice to the PCWs. 
The good correspondence between simulation and data enables us to understand the microscopic dynamics of atoms near the waveguide and introduce auxiliary GMs that perturb the atoms and reveal how they can be delivered to these GM trap regions. I will present recent efforts to achieve high fractional filling of trap sites within the PCW using the optical lattice delivery system and discuss future research goals. 9:45am - 10:15amMurray Holland, University of Colorado JILA New frontiers in laser cooling of neutral atoms and trapped ions Abstract. We theoretically analyze the novel physics in two recent demonstrations of laser cooling [1,2]. First, we describe laser cooling by Sawtooth Wave Adiabatic Passage (SWAP) of neutral atoms in free space that possess narrow linewidth transitions [3]. SWAP cooling exploits the extreme coherence on offer using near-resonant laser fields whose frequencies are time dependent. With a reduced reliance on spontaneous emission compared to Doppler cooling, SWAP cooling shows promise for cooling systems lacking closed cycling transitions, such as molecules. Second, we numerically investigate the efficiency of near ground-state cooling of large 2D ion crystals in Penning traps using electromagnetically induced transparency (EIT) [4]. We show that, in spite of the challenges posed by these rotating multi-ion crystals, the large bandwidth of drumhead modes (hundreds of kilohertz) can be rapidly cooled to near ground-state occupations. We predict a surprising enhancement of the cooling rate of the center-of-mass mode with increasing number of ions. We will highlight relevant experimental results in support of our theories. [1] M. Norcia et al. New J. Phys. 20 (2018) [2] E. Jordan et al. arXiv:1809.06346 (Sep. 2018) [3] J.P. Bartolotta et al. Phys. Rev. A 98 (2018) [4] A. Shankar et al. arXiv:1809.05492 (2018) #### SESSION 7: Open-system dynamics and simulation Chair: (Todd Brun) 10:45am - 11:30amHoward Carmichael, University of Auckland (invited) Monitored quantum jumps: The view from quantum trajectory theory Abstract. Quantum jumps are emblematic of all things quantum. Certainly that is so in the popular mind…and more than just an echo from the past, the term “quantum jump” still holds a prominent position within the lexicon of modern physics. What, however, is the character of the jump on close inspection? Is it discontinuous and discrete, as in Bohr’s original conception? Or is it some form of continuous Schrödinger evolution that might be monitored and reconstructed, even interrupted and turned around? I consider the jumps of single trapped ions observed in the mid-1980s [1], where an understanding drawn from quantum trajectory theory favours the latter option. I present that understanding and its connection to the modern view of continuous quantum measurement, and support this view from the theory side with experimental results [2], which recover the continuous and deterministic path of quantum jumps in a superconducting circuit using conditional quantum state tomography. [1] W. Nagourney et al., Phys. Rev. Lett. 56, 2797 (1986); T. Sauter et al., Phys. Rev. Lett. 57, 1696 (1986); J. C. Bergquist et al., Phys. Rev. Lett. 57, 1699 (1986). [2] Z. K. Minev, S. O. Mundhada, S. Shankar, P. Rheinhold, R. Gutiérrez-Jáuregui, R. J. Schoelkopf, M. Mirrahimi, H. J. Carmichael, and M. H. Devoret, arXiv:1803.00545 (2018). 11:30am - 12:00pmKathleen Hamilton, Oak Ridge National Laboratory Noisy circuit training of generative models with superconducting qubits Abstract. 
Many NISQ devices with < 20 qubits are becoming available for public use, but the lack of detailed noise models makes it difficult to use simulation to predict performance on hardware. We have used a recently introduced class of generative models [1] to quantify the performance of noisy superconducting qubits. Many sources of error and noise can be identified (e.g. decoherence, gate fidelities and measurement errors), and while the use of noise-robust stochastic optimizers can train circuits to a reasonable degree of accuracy, the cohesive incorporation of error mitigation into circuit training remains an open question. Our work focuses on how the performance of generative models on noisy qubits can be improved without error mitigation: by minimizing the number of noisy gates in a circuit and using sampling rates to improve the dynamics of gradient-based circuit training. [1] Liu, Jin-Guo, and Lei Wang. "Differentiable learning of quantum circuit Born machine." arXiv:1804.04168 (2018). This work was supported as part of the ASCR Testbed Pathfinder Program at Oak Ridge National Laboratory under FWP #ERKJ332

#### SESSION 8: Quantum communication Chair: (F. Elohim Becerra Chavez)

1:30pm - 2:15pm Erika Andersson, Herriot-Watt University (invited) Things you can do with your quantum key distribution setup: signatures and oblivious transfer Abstract. Modern cryptography is more than encryption, and quantum cryptography is more than quantum key distribution. “Quantum digital signatures” were first proposed by Gottesman and Chuang in 2001, inspired by public-key signature schemes. Broadly speaking, a signature guarantees that a message cannot be altered or forged. There can be more than one possible recipient, and messages can be forwarded from one recipient to another. The first quantum signature scheme developed into something more practical, essentially using the same experimental components as quantum key distribution. This led to realisations of measurement-device-independent quantum signatures at Toshiba, Cambridge, and by J-W Pan’s group in China. Oblivious transfer is another functionality different from encryption. In 1-out-of-2 oblivious transfer, a receiver obtains only one of two bits sent by a sender. The sender does not know which of the two bits the receiver obtains, and the receiver does not know the other bit. Such a “poor communication channel” is, perhaps surprisingly, an important primitive for secure multiparty computation. Quantum oblivious transfer is possible, but with some limitations. In the second half of this talk, I will describe a scheme for quantum oblivious transfer that works at least as well as any of the previous ones, and which only needs the same components as standard quantum key distribution.

2:15pm - 2:45pm Konrad Banaszek, University of Warsaw Quantum optical fingerprinting without a shared phase reference Abstract. Quantum fingerprinting allows two remote parties to determine whether their datasets are identical or different by transmitting exponentially less information compared to the classical protocol with equivalent performance. Standard optical implementations of quantum fingerprinting based on coherent states of light require phase stability between the sending parties. Here we present a quantum fingerprinting protocol which exploits higher-order optical interference between optical signals with a random global phase. Its performance has been verified in a proof-of-principle experiment discriminating between binary visibility hypotheses.
Actual demonstration of quantum advantage over the known bound on the performance of classical fingerprinting protocols should be possible using currently available technology.

2:45pm - 3:15pm Joseph Chapman, University of Illinois at Urbana-Champaign Time-bin and polarization superdense teleportation for space applications Abstract. To build a global quantum communication network, low-transmission, fiber-based communication channels can be supplemented by using a free-space channel between a satellite and a ground station on Earth. We have constructed a system that generates hyperentangled photonic “ququarts” and measures them to execute multiple quantum communication protocols of interest. We have successfully executed and characterized superdense teleportation: our measurements show an average fidelity of 0.94±0.02, with a phase resolution under 7° allowing reliable transmission of >10^5 distinguishable quantum states. Additionally, we have demonstrated the ability to compensate for the Doppler shift, which would otherwise prevent sending time-bin encoded states from a rapidly moving satellite, thus allowing the low-error execution of phase-sensitive protocols during an orbital pass.

#### SESSION 9a: Simulations in the NISQ era (Alvarado D) Chair: (Christopher Jackson)

3:45pm - 4:15pm Nathan Lysne, University of Arizona What a small-scale, highly-accurate quantum processor can teach us about analog quantum simulation Abstract. Quantum systems that offer reasonably accurate control over tens of qubits have now been realized in several contexts. It is thought that such noisy intermediate-scale quantum (NISQ) devices may be capable of classically hard tasks such as analog quantum simulation (AQS). Yet, it remains unclear if a quantum processor without error correction and fault tolerance can compute meaningful results when subject to realistic imperfections. To probe this question we have developed a universal, highly accurate analog quantum processor operating in the 16D Hilbert space composed of the total atomic spin of individual Cs atoms in the electronic ground state. Advances in optimal control enable us to drive arbitrary unitary transformations with very high fidelity (>99%), which we can use to perform simulations of any quantum system that fits in this Hilbert space. In particular, we have studied the feasibility of simulating several model Hamiltonians that exhibit features of interest to AQS, such as chaos and hypersensitivity (the quantum kicked top), and quantum phase transitions (the Lipkin-Meshkov-Glick and transverse Ising models). Experimentally, we demonstrate AQS of each of these models, with high fidelity at the quantum state level and accurate tracking of dynamical features. With this small-scale, highly accurate quantum simulator, we can now reintroduce errors in a controlled fashion and study how they impact AQS of complex dynamics, in the laboratory as well as in numerical modeling.

4:15pm - 4:45pm Gopikrishnan Muraleedharan, University of New Mexico CQuIC Quantum computational supremacy in the sampling of Bosonic random walkers on a one-dimensional lattice Abstract. A quantum device that performs a computational task more efficiently than a current state-of-the-art classical computer is said to demonstrate quantum computational supremacy (QCS). One path to achieving QCS in the short term is via sampling complexity; random samples are drawn from a probability distribution by measuring a complex quantum state in a defined basis.
Surprisingly, a gas of identical noninteracting bosons can yield sampling complexity due solely to quantum statistics, as shown by Aaronson and Arkhipov, and dubbed boson sampling in the context of identical photons scattering from a linear optical network. We generalize this to noninteracting bosonic quantum random walkers on a 1D lattice, and study the complexity of the resulting probability distribution obtained in static and time-dependent lattices. We consider physical realizations based on controlled transport of ultra-cold atoms in a spinor optical lattice as well as a quantum gas microscope using optical tweezers. We quantify analytically and numerically how a sequence of random Hamiltonian evolutions approaches a Haar-random SU($$d$$) unitary. This, together with identical-particle interference, can yield QCS. We also study how much pseudorandomness is necessary to demonstrate QCS in terms of closeness to a t-design.

4:45pm - 5:15pm Noah Davis, University of Texas, Austin Simulating and evaluating the coherent Ising machine Abstract. Physical annealing techniques present methods for taking advantage of qubits without the need for universal quantum computers. Particularly, annealing systems may offer calculation speed-ups for certain NP-hard optimization problems such as the Max-Cut problem and the Sherrington-Kirkpatrick model. Among promising annealing systems, the coherent Ising machine (CIM) has demonstrated particular potential for solving dense examples of these problems. A CIM uses classical measurement and feedback to couple the degenerate optical parametric oscillators which make up its logical qubits. We use the master equations governing this measurement-feedback system to simulate an idealized (but still classically controlled) CIM on a high performance computing cluster. We present an analysis of this simulation and compare it to experimental instances of CIMs along with other popular annealing methods.

5:15pm - 5:45pm Lucas Kocia, National Institute of Standards and Technology, Maryland Stationary phase method in discrete Wigner functions and classical simulation of quantum circuits Abstract. We apply the periodized stationary phase method to discrete Wigner functions of systems with odd prime dimension using results from $$p$$-adic number theory. We derive the Wigner-Weyl-Moyal (WWM) formalism with higher order $$\hbar$$ corrections representing contextual corrections to non-contextual Clifford operations. We apply this formalism to a subset of unitaries that include diagonal gates such as the $${\pi}/{8}$$ gates. We characterize the stationary phase critical points as a quantum resource injecting contextuality and show that this resource allows for the replacement of the $$p^{2t}$$ points that represent $$t$$ magic state Wigner functions on $$p$$-dimensional qudits by $$\le p^{t}$$ points. We find that the $${\pi}/{8}$$ gate introduces the smallest higher order $$\hbar$$ correction possible, requiring the lowest number of additional critical points compared to the Clifford gates. We then establish a relationship between the stabilizer rank of states and the number of critical points and exploit the stabilizer rank decomposition of two qutrit $${\pi}/{8}$$ gates to develop a classical strong simulation of a single qutrit marginal on $$t$$ qutrit $${\pi}/{8}$$ gates that are followed by Clifford evolution, and show that this only requires calculating $$3^{\frac{t}{2}+1}$$ critical points corresponding to Gauss sums.
This outperforms the best alternative qutrit algorithm for any number of $${\pi}/{8}$$ gates to full precision.

5:45pm - 6:15pm Lukasz Cincio, Los Alamos National Laboratory Learning short- and constant-depth algorithms: application to state overlap and entanglement spectroscopy Abstract. Short-depth algorithms are crucial for reducing computational error on near-term quantum computers, for which decoherence and gate infidelity remain important issues. Here we present a machine-learning approach for discovering such algorithms. We apply our method to a ubiquitous primitive: computing the overlap ${\rm Tr}(\rho\sigma)$ between two quantum states $\rho$ and $\sigma$. The standard algorithm for this task, known as the Swap Test, is used in many applications such as quantum support vector machines. Here, our machine-learning approach finds algorithms that have shorter depths than the Swap Test, including one that has a constant depth (independent of problem size). Taking this as inspiration, we also present a novel constant-depth algorithm for computing the integer Rényi entropies, ${\rm Tr}(\rho^n)$, where our circuit depth is independent of both the number of qubits in $\rho$ and the exponent $n$. These integer Rényi entropies are useful, e.g., for computing the entanglement spectrum for condensed matter applications. Finally, we demonstrate that both our state overlap algorithm and our Rényi entropy algorithm have increased robustness to noise relative to their state-of-the-art counterparts in the literature.

#### SESSION 9b: Error mitigation and correction (Alvarado E) Chair: (Jim Harrington)

3:45pm - 4:15pm Brandon Ruzic, Sandia National Laboratories Characterizing errors in entangled-atom interferometry Abstract. Recent progress in generating entanglement between neutral atoms provides opportunities to advance quantum sensing technology. In particular, entanglement can enhance the performance of accelerometers and gravimeters based on light-pulse atom interferometry. We study the effects of error sources that may limit the sensitivity of such devices, including errors in the preparation of the initial entangled state, spread of the initial atomic wave packet, and imperfections in the laser pulses. Based on the performed analysis, entanglement-enhanced atom interferometry appears to be feasible with existing experimental capabilities.

4:15pm - 4:45pm Bibek Pokharel, University of Southern California Demonstration of fidelity improvement using dynamical decoupling with superconducting qubits Abstract. Quantum computers must be able to function in the presence of decoherence. The simplest strategy for decoherence reduction is dynamical decoupling (DD), which requires no encoding overhead and works by converting quantum gates into decoupling pulses. Here, using the IBM and Rigetti platforms, we demonstrate that the DD method is suitable for implementation in today’s relatively noisy and small-scale cloud-based quantum computers. Using DD, we achieve substantial fidelity gains relative to unprotected, free evolution of individual superconducting transmon qubits. To a lesser degree, DD is also capable of protecting entangled two-qubit states. We show that dephasing and spontaneous emission errors are dominant in these systems, and that different DD sequences are capable of mitigating both effects.
Unlike previous work demonstrating the use of quantum error correcting codes on the same platforms, we make no use of postselection and hence report unconditional fidelity improvements against natural decoherence. Quantum simulation of fermions: geometric locality and error mitigation Abstract. We consider mappings from fermionic systems to spin systems that preserve geometric locality in more than one spatial dimension. They are useful to simulating lattice fermionic systems on a quantum computer, e.g., the Hubbard model. Locality-preserving mappings avoid the large overhead associated with the nonlocal parity terms in conventional mappings, such as the Jordan-Wigner transformation. As a result, they often provide solutions with much lower circuit depths. Here, we construct locality-preserving mappings that can also detect/correct single-qubit errors without introducing extra physical qubits beyond those required by the original mappings. We discuss error mitigation strategies based on these encodings for quantum algorithms such as the variational quantum eigensolver. 5:15pm - 5:45pmVictor V. Albert, California Institute of Technology Characterizing and developing bosonic error-correcting codes Abstract. Continuous-variable or bosonic quantum information processing is a field concerned with using one or more harmonic oscillators to protect, manipulate, and transport quantum information. The large oscillator Hilbert space provides alternative encodings that are currently outperforming encodings into registers of many qubits: break-even error correction has been achieved with the bosonic cat codes but not yet with a many-qubit system. However, an analysis of theoretical capabilities of bosonic codes is missing. We have undertaken a program identifying (1) Which codes are able to protect against dominant noise in realistic bosonic systems? (2) Why those codes perform so well? and (3) How to extend codes to multiple modes advantageously? We provide answers to all these questions. First, we calculate the error-correction conditions of single-mode codes, showing that Gottesman-Kitaev-Preskill (GKP) codes offer the best performance. Second, we prove that GKP codes achieve the quantum capacity (up to a constant offset) of the thermal loss channel. Third, we present a multimode extension of the cat codes that increases both experimental feasibility and theoretically achievable performance. 5:45pm - 6:15pmSepehr Nezami, Stanford University Continuous symmetries and approximate quantum error correction Abstract. Quantum error correction and symmetries are relevant to many areas of physics, including many-body systems, holographic quantum gravity, and reference-frame error-correction. Here, we determine that any code is fundamentally limited in its ability to approximately error-correct against erasures at known locations if it is covariant with respect to a continuous local symmetry. Our bound vanishes either in the limit of large individual subsystems, or in the limit of a large number of subsystems. In either case, we provide examples of codes that approximately achieve the scaling of our bound: an infinite-dimensional rotor extension of the three-qutrit secret-sharing code, an infinite-dimensional five-rotor perfect code, and a many-body Dicke-state code. Furthermore, we prove an approximate version of the Eastin-Knill theorem that puts a severe quantitative limit on a code’s ability to correct erasure errors if it admits a universal set of transversal logical gates. 
This bound goes to zero only inversely in the logarithm of the local physical subsystem dimension. We provide examples of codes circumventing the Eastin-Knill theorem: random unitary covariant codes, many-body generalized W-state code, and families of codes whose transversal gates form a general group G. In the context of the AdS/CFT correspondence, our approach provides insight into how time evolution in the bulk corresponds to time evolution on the boundary without violating the Eastin-Knill theorem. #### SESSION 9c: Quantum information theory and algorithms (Alvarado FGH) Chair: (Yigit Subasi) 3:45pm - 4:15pmMark Wilde, Louisiana State University Exact entanglement cost of quantum states and channels under PPT-preserving operations Abstract. This paper establishes single-letter formulas for the exact entanglement cost of generating bipartite quantum states and simulating quantum channels under free quantum operations that completely preserve positivity of the partial transpose (PPT). First, we establish that the exact entanglement cost of any bipartite quantum state under PPT-preserving operations is given by a single-letter formula, here called the κ-entanglement of a quantum state. This formula is calculable by a semidefinite program, thus allowing for an efficiently computable solution for general quantum states. Notably, this is the first time that an entanglement measure for general bipartite states has been proven not only to possess a direct operational meaning but also to be efficiently computable, thus solving a question that has remained open since the inception of entanglement theory over two decades ago. Next, we introduce and solve the exact entanglement cost for simulating quantum channels in both the parallel and sequential settings, along with the assistance of free PPT-preserving operations. The entanglement cost in both cases is given by the same single-letter formula and is equal to the largest κ-entanglement that can be shared by the sender and receiver of the channel. It is also efficiently computable by a semidefinite program. 4:15pm - 4:45pmFelix Leditzky, University of Colorado JILA Dephrasure channel and superadditivity of coherent information Abstract. The quantum capacity of a quantum channel captures its capability for noiseless quantum communication. It lies at the heart of quantum information theory. Unfortunately, our poor understanding of nonadditivity of coherent information makes it hard to understand the quantum capacity of all but very special channels. In this paper, we consider the dephrasure channel, which is the concatenation of a dephasing channel and an erasure channel. This very simple channel displays remarkably rich and exotic properties: we find nonadditivity of coherent information at the two-letter level, a big gap between single-letter coherent and private informations, and positive quantum capacity for all complementary channels. Its clean form simplifies the evaluation of coherent information substantially and, as such, we hope that the dephrasure channel will provide a much-needed laboratory for the testing of new ideas about nonadditivity. 4:45pm - 5:15pmAniruddha Bapat, University of Maryland Joint Quantum Institute Bang-bang control as a design principle for classical and quantum optimization algorithms Abstract. Physically motivated classical heuristic optimization algorithms such as simulated annealing (SA) treat the objective function as an energy landscape, and allow walkers to escape local minima. 
It has been speculated that quantum properties such as tunneling may give quantum algorithms the upper hand in finding ground states of vast, rugged cost landscapes. Indeed, the Quantum Adiabatic Algorithm (QAO) and the recent Quantum Approximate Optimization Algorithm (QAOA) have shown promising results on various problem instances that are considered classically hard. Here, we argue that the type of \emph{control} strategy used by the optimization algorithm may be crucial to its success, both classically and quantumly. Along with SA, QAO and QAOA, we define a new, bang-bang version of simulated annealing, BBSA, and study the performance of these algorithms on two well-studied problem instances from the literature. Rather than a quantum advantage, we find evidence for a design advantage. Both classically and quantumly, the successful control strategy is found to be bang-bang, exponentially outperforming the annealing analogues on the same instances. Lastly, we construct O(1) time QAOA protocols for a large class of symmetric cost functions, and provide an accompanying physical picture. 5:15pm - 5:45pm Arjendu Pattanayak, Carleton College Unusual entanglement dynamics in the quantum kicked top Abstract. We study the quantum kicked top in the experimentally accessible regime of a few qubits $$N \in \{2, 8\}$$. We focus on the entanglement dynamics $$|\psi(t)>$$ of initial spin coherent states on the $$(J_x,J_y,J_z)$$ sphere. We demonstrate that the quantum behavior at a given location can correlate with, or anti-correlate with, or be decorrelated with the limiting $$N \to \infty$$ classical phase-space behavior. Globally, quantum spectra and eigenfunctions visualized via expansion coefficients in the Hilbert space of the $$J_z$$ operator are shown to be periodic in $$K$$ whence the quantum dynamics are (quasi-)periodic in time $$T$$ and nonlinear kick strength $$K$$, unlike the classical dynamics, although decoherence distinguishes between different $$K$$ regimes. Further, there are patterns in the quantum dynamics that repeat as a function of $$N$$. We explore novel oscillations where |$$\psi>$$ moves between two maximally entangled (GHZ-like) configurations $$(|\chi_+>, |\chi_->)$$ which occur for $$N=4,8$$ in our system. We show that linear combinations of the $$\chi$$ states relax to different final entangled states for a decoherent Kraus map of a weighted sum of Floquet operators. Thus quantum entanglement for a classically chaotic system can depend on initial conditions (but not as for the classical system) and can yield final high entanglement even for states 'thermalized' under decoherence. We connect to the classical phase-space dynamics via the Husimi projections of these $$\chi$$ states. 5:45pm - 6:15pm Alexander Meill, University of California San Diego Entanglement properties of quantum random walks Abstract. We examine the entanglement assumptions used to derive dynamics in highly symmetric quantum random walks. #### SESSION 10: Quantum metrology Chair: (Carlton M. Caves) 8:30am - 9:15am Lee McCuller, Massachusetts Institute of Technology (invited) Improving the sensitivity of Advanced LIGO with squeezed light Abstract. The upcoming observing run 3 of Advanced LIGO will include quantum squeezed light at both observatories to improve sensitivity. This talk will detail the ongoing commissioning, as well as the technical requirements driving the design and control of audio-band squeezed light sources for gravitational wave optical interferometers.
In addition, R&D is ongoing to demonstrate frequency-dependent squeezed vacuum at 50 Hz using optical filter cavities. Such an implementation will allow LIGO to reduce quantum radiation pressure noise below the standard quantum limit to achieve improvements spanning its observation band. 9:15am - 9:45am Baochen Wu, University of Colorado JILA Towards spin-squeezed matter-wave interferometry Abstract. Quantum entanglement permits the creation of spin-squeezed states where the fundamental quantum noise of one atom can be partially cancelled by another atom. Spin-squeezed states are particularly promising for enhancing precision measurements beyond the standard quantum limit for unentangled atoms. We have previously demonstrated 18 dB of squeezing (Cox et al, PRL, 116, 093602) with cavity-assisted non-demolition measurements in 87Rb atoms. Here we present our recent efforts towards building an intracavity, guided matter-wave interferometer in which spin-squeezing will be mapped to momentum states. This could help pave the way for better determinations of fundamental constants, more precise inertial sensors, and enhanced searches for dark matter. 9:45am - 10:15am Katherine McCormick, University of Colorado Sensing near the Heisenberg limit with a trapped-ion mechanical oscillator Abstract. Developing tools for precisely controlling and measuring the motion of a trapped ion could contribute to several possible applications, such as improving fidelities of quantum computations, opening up new avenues for quantum simulations and using ions as quantum-mechanical sensors in searches for new physics. I will discuss recent work aimed at characterizing and improving the level of motional coherence in trapped-ion systems, and present potential extensions for precision measurement applications. First, I will present results on the generation of oscillator number states up to n = 100 and superpositions of the form |0⟩+|n⟩, with n up to 18. These superposition states are used to measure the motional frequency with a sensitivity that ideally follows the 1/n Heisenberg scaling. Second, we investigate the spectrum of motional frequency noise using a series of coherent displacements of the motion of the ion, with features similar to Ramsey and dynamical decoupling sequences. These techniques, while demonstrated in a trapped-ion system, should be widely adaptable to other quantum-mechanical harmonic oscillators. #### SESSION 11: Quantum chaos Chair: (Justin Dressel) 10:45am - 11:30am Shohini Ghose, Wilfrid Laurier University (invited) Chaos, stability and quantum-classical correspondence in spin systems Abstract. Classical chaos is characterized by extreme sensitivity of a system's dynamics to small perturbations in initial conditions. At the quantum level, a similar characterization of chaos remains a challenge due to the uncertainty principle and the apparent linearity of quantum evolution. We have explored the question of quantum chaos in spin systems in both theory and experiments. Various signatures of chaos and classical bifurcations can be observed in a deeply quantum regime as well as the semiclassical regime. Chaos can affect quantum phenomena such as entanglement and fidelity decay that play important roles in quantum information processing. We present a method to quantify the Bohr correspondence principle in chaotic systems, and explain previous conflicting results regarding the connection between chaos and entanglement.
11:30am - 12:00pmPablo Poggi, University of New Mexico CQuIC Feedback-based simulation of quantum nonlinear dynamics: The Quantum Kicked Top in an ensemble of two-level atoms Abstract. We study the implementation of a measurement-based feedback scheme to realize quantum nonlinear dynamics. We specifically study the Quantum Kicked Top (QKT), a standard paradigm of quantum chaos, in the context of an ensemble of spins. The scheme uses a sequence of (not-so) weak measurements of a collective spin variable and global rotations conditioned on the measurement outcome. We show that the resulting dynamics, ensemble averaged over many realizations, is governed by a combination of the QKT Hamiltonian and a non-unitary channel which vanishes in the semiclassical limit, recovering the Classical Kicked Top. We also analyze individual quantum trajectories, where we explore the emergence of chaotic behaviour and revisit the role of the measurement process in the quantum-to-classical transition. #### SESSION 12: Semiconductor qubits Chair: (Kai-Mei Fu) 1:30pm - 2:15pmJason Petta, Princeton University (invited) Towards microwave assisted spin-spin entanglement Abstract. Electron spins are excellent candidates for solid state quantum computing due to their exceptionally long quantum coherence times, which is a result of weak coupling to environmental degrees of freedom. However, this isolation comes with a cost, as it is difficult to coherently couple two spins in the solid state, especially when they are separated by a large distance. Here we combine a large electric-dipole interaction with spin-orbit coupling to achieve spin-photon coupling. Vacuum Rabi splitting is observed in the cavity transmission as the Zeeman splitting of a single spin is tuned into resonance with the cavity photon. We achieve a spin-photon coupling rate as large as $$g _s/2 {\pi}$$ = 10 MHz, which exceeds both the cavity decay rate $${\kappa}/2{\pi}$$ = 1.8 MHz and spin dephasing rate $${\gamma}/2{\pi}$$ = 2.4 MHz, firmly anchoring our system in the strong-coupling regime. Moreover, the spin-photon coupling mechanism can be turned off by localizing the spin in one side of the double quantum dot. Recent progress towards microwave assisted spin-spin entanglement will be presented. 2:15pm - 2:45pmTyler Keating, HRL Laboratories Spin-blockade spectroscopy of Si/SiGe quantum dots Abstract. Many exchange-based platforms for spin qubits rely on spin-to-charge conversion for initialization and readout. The robustness of spin-to-charge conversion depends on the singlet-triplet energy splitting of two electrons occupying one dot, which is set by the energy of the dot's lowest-lying excited state. We demonstrate a model-independent technique to measure this energy, using repeated single-shot measurement across the spin-to-charge window. In our Si/SiGe triple dot device, we find that excitation energies vary smoothly with nearby gate bias, suggesting that the lowest-lying states are orbital in character. We also consider other, model-specific parameters that could be extracted from this type of measurement. #### SESSION 13: Quantum characterization and tomography Chair: (Robin Blume-Kohout) 3:15pm - 3:45pmScott Aaronson, University of Texas, Austin Gentle measurement of quantum states and differential privacy Abstract. In differential privacy (DP), we want to query a database about n users, in a way that "leaks at most $${\epsilon}$$ about any individual user," conditioned on any outcome of the query. 
Meanwhile, in gentle measurement, we want to measure n quantum states, in a way that "damages the states by at most $${\alpha}$$," conditioned on any outcome of the measurement. In both cases, we can achieve the goal by techniques like deliberately adding noise to the outcome before returning it. We prove a new and general connection between the two subjects. Specifically, on products of n quantum states, any measurement that is $${\alpha}$$-gentle for small $${\alpha}$$ is also O($${\alpha}$$)-DP, and any product measurement that is $${\epsilon}$$-DP is also O($${\epsilon}\sqrt{n}$$)-gentle. Illustrating the power of this connection, we apply it to the recently studied problem of shadow tomography. Given an unknown d-dimensional quantum state $${\rho}$$, as well as known two-outcome measurements $$E_1$$,...,$$E_m$$, shadow tomography asks us to estimate Pr[$$E_i$$ accepts $${\rho}$$], for every i $${\in}$$ [m], by measuring few copies of $${\rho}$$. Using our connection theorem, together with a quantum analog of the so-called private multiplicative weights algorithm of Hardt and Rothblum, we give a protocol to solve this problem using $$O\bigl((\log m)^2 (\log d)^2\bigr)$$ copies of $${\rho}$$, compared to Aaronson's previous bound of $$\tilde{O}\bigl((\log m)^4 \log d\bigr)$$. Our protocol has the advantages of being online (that is, the $$E_i$$'s are processed one at a time), gentle, and conceptually simple. 3:45pm - 4:15pm Kristine Boone, University of Waterloo Randomized benchmarking under different gatesets Abstract. We provide a comprehensive analysis of the differences between two important standards for randomized benchmarking (RB): the Clifford-group RB protocol proposed originally in Emerson et al (2005) and Dankert et al (2006), and a variant of that RB protocol proposed later by the NIST group in Knill et al, PRA (2008). While these two protocols are frequently conflated or presumed equivalent, we prove that they produce distinct exponential fidelity decays leading to differences of up to a factor of 3 in the estimated error rates under experimentally realistic conditions. These differences arise because the NIST RB protocol does not satisfy the unitary two-design condition for the twirl in the Clifford-group protocol and thus the decay rate depends on non-invariant features of the error model. Our analysis provides an important first step towards developing definitive standards for benchmarking quantum gates and a more rigorous theoretical underpinning for the NIST protocol and other RB protocols lacking a group-structure. We conclude by discussing the potential impact of these differences for estimating fault-tolerant overheads. 4:15pm - 4:45pm John Gamble, Microsoft Research Operational, gauge-free quantum tomography Abstract. As quantum processors become increasingly refined, benchmarking them in useful ways becomes a critical topic. Traditional approaches to quantum tomography, such as state tomography, suffer from self-consistency problems, requiring either perfectly pre-calibrated operations or measurements. This problem has recently been tackled by explicitly self-consistent protocols such as randomized benchmarking, robust phase estimation, and gate set tomography (GST). An undesired side-effect of self-consistency is the presence of gauge degrees of freedom, arising from the lack of fiducial reference frames, and leading to large families of gauge-equivalent descriptions of a quantum gate set which are difficult to interpret.
We solve this problem through introducing a gauge-free representation of a quantum gate set inspired by linear inversion GST. This allows for the efficient computation of any experimental frequency without a gauge fixing procedure. We use this approach to implement a Bayesian version of GST using the particle filter approach, which was previously not possible due to the gauge. Within Bayesian GST, the prior information allows for inference on tomographically incomplete data sets, such as Ramsey experiments, without giving up self-consistency. We demonstrate simulated examples of this approach for a variety of experimentally-relevant situations, showing the stability and generality of both our gauge-free representation and Bayesian GST. 4:45pm - 5:15pm Timothy Proctor, Sandia National Laboratories Randomized benchmarking of many-qubit devices Abstract. Quantum information processors incorporating 5 - 10s of qubits are now commonplace, but the standard method for benchmarking quantum gates - Clifford randomized benchmarking - is infeasible to implement on more than a few qubits in any near-term devices. In this talk, we present a series of modifications to Clifford randomized benchmarking that enable truly holistic benchmarking of entire devices. Importantly, these new techniques are adaptable based on experimental goals. They can be made highly robust or more scalable as needed, and they can be used to estimate, e.g., two-qubit gate error rates or the magnitude of crosstalk errors. Moreover, our methods allow for the benchmarking of universal gates, and continuously parameterized gates. We demonstrate our techniques on current systems, with experimental results on up to 16 qubits. Sandia National Labs is managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a subsidiary of Honeywell International, Inc., for the U.S. Dept. of Energy’s National Nuclear Security Administration under contract DE-NA0003525. This research was funded by IARPA. The views expressed in the article do not necessarily represent the views of the DOE, IARPA, the ODNI, or the U.S. Government. #### SESSION 14: New intersections with high energy physics Chair: (Ivan Deutsch) 5:30pm - 6:15pm Patrick Hayden, Stanford University (invited) Quantum error correction in quantum gravity Abstract. TBA
2022-12-09 11:56:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5307444334030151, "perplexity": 1489.7909406926506}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711396.19/warc/CC-MAIN-20221209112528-20221209142528-00365.warc.gz"}
http://by.tc/tag/math/
#### math Pardon the mess. I wrote this with a different markup tool and it looks like it doesn't want to play nice with Ghost. I started writing this a few days ago, and have since ironed out some things which I hope to come back and flesh out. I leave the rest of the post as-is for now, half for my Dear Reader, and half as a note to self. TL;DR: I'm virtually positive I'm right about most of this, and have come up with several different ways to theoretically exploit continued fractions to generate primes of arbitrary size. Unfortunately, and naturally, PCFs are just as deeply nuanced as most prime-connected stuff, meaning same ol' story, different tune. If it were possible to cheaply generate large PCFs, it would be a very different story, and I haven't completely ruled out the possibility yet. In fact, it would be sufficient merely to develop an algorithm which could spit out the length of the period of the PCF for a given radical, assuming it ran in reasonable time. What I did find in the literature about it suggests that it's an open problem, but there also wasn't very much literature compared to some of your more mainstream areas of number theory. If for whatever reason this doesn't load or looks messed up, try reading here (or here!) Wanted to jot this down as food for thought before I forgot. And so I did. So we have factorials, denoted with a ! suffix, e.g. 4! = 1 \times 2 \times 3 \times 4 = 24, or more generally $n! := \prod_{k=1}^{n} k = 1 \cdot 2 \cdot 3 \cdot \ldots \cdot (n-1) \cdot n.$ Among many other things, n! represents the number of possible permutations of a set of n unique elements, that is, the number of different ways we can order a group of things. We've also got 2^x, the "power set" of x. If we have a set \mathbf{S} containing |\mathbf{S}| items, 2^{|\mathbf{S}|} is the total number of unique subsets it contains, including the null set. The notion of a power set plays a significant role in transfinite math. \aleph_0 is the "smallest" of the infinities, representing the cardinality of the countable numbers (e.g. the set of integers \mathbb{Z}). If we assume the Continuum Hypothesis/Axiom of Choice, the next smallest cardinality is the power set 2^{\aleph_0} = \aleph_1, which corresponds to the cardinality of the real set \mathbb{R}. Under \mathsf{CH}, there is believed to be no cardinality between consecutive power sets of aleph numbers. And there's your background. So, I wondered whether n! or 2^n grows faster; it does not take too much thought to realize it's the former. While 2^n is merely growing by a factor of 2 with each index, n! grows by an ever-increasing factor. In fact, it follows that even for an arbitrarily large constant C, you still end up with \lim_{n\rightarrow\infty} n! - C^n = \infty. (The limit also holds for division: \lim_{n\rightarrow\infty} \frac{n!}{2^n} = \infty.) But here is where my understanding falters. We've seen that n!, in the limit, is infinitely larger than 2^n; I would think it follows that it is therefore a higher cardinality. But when you look at 2^{\aleph_k} vs. \aleph_k !, some obscure paper I just found (and also Wolfram Alpha) would have me believe they're one and the same, and consequently both equal to \aleph_{k+1}. Unfortunately, I can't articulate exactly why this bothers me. If nothing else, it seems counter-intuitive that on the transfinite scale, permutations and subsets are effectively equivalent in some sense. ...but suddenly I realize I'm being dense.
One could make the same mathematical argument for 3^n as for n! insofar as growing faster, and in any case, all of these operations are blatantly bijective with the natural numbers and therefore countable. Aha. Well, if there was anything to any of this, it was that bit about permutations vs. subsets, which seems provocative. Well, next time, maybe I'll put forth my interpretation of \e as a definition of \mathbb{Z}. Whether it is the definition, or one of infinitely many differently-shaded definitions encodable in various reals (see \pi), well, I'm still mulling over that one... This is something I worked on a year ago, so I'll keep it (relatively) brief. ##### Inspiration There's a keypad on my apartment building which accepts 5-digit codes. One day on the way in, I started thinking about how long it would take to guess a working code by brute force. The most obvious answer is that with five digits, you have 10^5 = 100,000 possible codes; since each one of these is 5 digits long, you're looking at 500,000 button pushes to guarantee finding an arbitrary working code. ##### Realization But then I realized you could be a little smarter about it. If you assume the keypad only keeps track of the most recent five digits, you can do a rolling approach that cuts down your workload. For example, say I hit 12345 67890. If the keypad works as described, you're not just checking two codes there, you're checking 6: 12345, 23456, 34567, 45678, 56789, 67890. ##### Foundation The next natural question was how many button-pushes this could actually save. After doing some work, I satisfied myself that you could cut it down by a factor of ~D, where D is the number of digits in a code. So if you typed the right digits in the right order, instead of the 500,000 before, you're only looking at 100,004 (you need an extra 4 to wrap the last few codes up). ##### Experimentation The next natural question was: how do you actually come up with that string of digits? It has to be perfect, in that it contains every possible 5-digit combination without repeating any of them. As with almost any exploratory problem, the best approach is to simplify as much as possible. For instance, consider a binary code three digits long, which only has 2^3 = 8 different codes: {000, 001, 010, 011, 100, 101, 110, 111}. My formula suggested you should be able to hit all eight of those in a 2^3 + 3 - 1 = 10 digit string, and it's easy enough to put one together by a little trial and error: 0 0 0 1 1 1 0 1. I found it was easiest to treat these strings as cyclical, so the 0 1 at the end wrap around to give you 0 1 0 and 1 0 0. As a bonus, any rotation of this string will work just as well. ##### Perspiration As I scaled the problem up, however, more and more things became clear. First, above a certain point, you start getting multiple viable optimal strings that are not simple transformations of one another. Second, finding an elegant way to generate these strings was not turning out to be easy. I found one mechanical way of generating a valid string that worked, but I didn't love it. If you list all the combinations you have to cover, and then slot each combination into a buffer greedily, meaning the earliest spot where it can fit (potentially with overlap), it works out. E.g.: ##### Beautification At some point I realized the generative process could be viewed as a directed graph, the nodes representing an N-length code, its successors delineating alternatives for continuing the string. 
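For the curious, here is a minimal Python sketch of that generative idea — not the buffer-slotting trick described above, but the classic "prefer-largest" greedy construction (usually credited to Martin, 1934) — which builds such a string for any alphabet size k and code length n and then double-checks that every possible code appears:

```python
def greedy_code_string(k, n):
    """Prefer-largest greedy: start with n zeros, then repeatedly append the
    largest digit whose resulting n-digit window hasn't been seen yet."""
    seq = [0] * n
    seen = {tuple(seq)}
    while True:
        for d in range(k - 1, -1, -1):
            window = tuple(seq[len(seq) - n + 1:] + [d])
            if window not in seen:
                seen.add(window)
                seq.append(d)
                break
        else:
            return seq  # nothing can be appended: every window has been used

def covers_all(seq, k, n):
    """Every n-digit code should appear among the string's windows."""
    windows = {tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)}
    return len(windows) == k ** n

s = greedy_code_string(2, 3)
print(''.join(map(str, s)), covers_all(s, 2, 3))   # 0001110100 True  (2^3 + 3 - 1 = 10 digits)

s = greedy_code_string(10, 5)
print(len(s), covers_all(s, 10, 5))                # 100004 True
```

Any string this check accepts has length k^n + n - 1; for the 5-digit keypad that's the 100,004 button pushes from before, and the first eight digits of the binary output are exactly the 0 0 0 1 1 1 0 1 string above.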
After a few attempts, I got a pretty clear-looking one down (the node labels 0-7 are standing in for their binary counterparts): As it turns out, you can start on any node and if you trace a Hamiltonian cycle—touching each vertex only once and ending back at the start—the numbers you hit along the way form a valid optimal string. This approach also scales with the parameters of the problem, but requires a messier or multi-dimensional graph. ##### Confirmation Whenever I stumble into open-ended problems like this, I avoid Googling for the answer at first, because what's the fun in that? I was pretty sure this would be a solved problem, though, and after spending a while working this through myself, I looked for and found Wikipedia's page on De Bruijn sequences. As usual, I was beaten to the punch by somebody a hundred years earlier. Hilariously, however, in this case the results matched up better than expected. Check out the Wiki page, notably a) the graph on the right, and b) the first line of the section "Uses". ##### Notation If you want to see the wild scratch pad which brought me from point A to point B, by all means, enjoy. I've been trying to visualize the problem in many different ways, tables and graphs and geometry and anything else that seems plausible. Here's one I nailed down this morning that's concise and readily graspable for any coder. First, ignore the 2. It just gets in the way. And then we get going. Starting with 3, we'll build up a list of primes and I'll show how you can easily check for Goldbach numbers while you do. As you build a string of bits, on each new number, you reverse the bits you have so far and slide it over one. Much easier understood with a demonstration, starting with n=6 => n/2=3 and incrementing n=n+2, as it must be even. ``````n=6 3 prm? 1 rev 1 n=8 3 5 prm? 1 1 rev 1 1 n=10 3 5 7 prm? 1 1 1 rev 1 1 1 n=12 3 5 7 9 prm? 1 1 1 0 rev 0 1 1 1 n=14 3 5 7 9 11 prm? 1 1 1 0 1 rev 1 0 1 1 1 n=16 3 5 7 9 11 13 prm? 1 1 1 0 1 1 rev 1 1 0 1 1 1 n=18 3 5 7 9 11 13 15 prm? 1 1 1 0 1 1 0 rev 0 1 1 0 1 1 1 n=20 3 5 7 9 11 13 15 17 prm? 1 1 1 0 1 1 0 1 rev 1 0 1 1 0 1 1 1 `````` All we're doing with rev is reversing the order of the "prime?" bits. If you watch from one to the next, you can see that this is actually resulting in a kind of ongoing shift each time. Observe: ``````prm? 1 1 1 0 1 1 0 1 1 0 1 0 0 1 n 3 5 7 9 11 13 15 17 19 21 23 25 27 29 6 1 8 1 1 10 1 1 1 12 0 1 1 1 14 1 0 1 1 1 16 1 1 0 1 1 1 18 0 1 1 0 1 1 1 20 1 0 1 1 0 1 1 1 22 1 1 0 1 1 0 1 1 1 24 0 1 1 0 1 1 0 1 1 1 26 1 0 1 1 0 1 1 0 1 1 1 28 0 1 0 1 1 0 1 1 0 1 1 1 30 0 0 1 0 1 1 0 1 1 0 1 1 1 32 1 0 0 1 0 1 1 0 1 1 0 1 1 1 `````` Notice all we are doing, for each new line, is taking prm?(n), sticking it at the beginning of the new line, and shoving over all the rest. Same as above, but easier to see here. Note also the prime bits read down vertically in the regular order. ###### So? The thing is, you can check for any and all Goldbach numbers by simply bitwise ANDing two corresponding strings together. (This may actually be an efficient way to code this, should such a need arise.) Take n=28: `````` 3 5 7 9 11 13 15 17 19 21 23 25 prm? 1 1 1 0 1 1 0 1 1 0 1 0 rev 0 1 0 1 1 0 1 1 0 1 1 1 ----------------------------------------- & 0 1 0 0 1 0 0 1 0 0 0 1 `````` The resulting bits identify which numbers are "Goldbach numbers," primes which will sum to n. In this case we get two pairs: 5 + 23 = 28 11 + 17 = 28 This process will give you all valid Goldbach numbers for a given n. `````` prm? 
1 1 1 0 1 1 0 1 1 0 1 0 0 1 n 3 5 7 9 11 13 15 17 19 21 23 25 27 29 6 (1) 3+3=6 8 (1 1) 3+5=8 10 (1 (1) 1) 3+7=10 5+5=10 12 0 (1 1) 1 5+7=12 14 (1 0 (1) 1 1) 3+11=14 7+7=14 16 (1 (1 0 1 1) 1) 3+13=16 5+11=16 18 0 (1 (1 0 1) 1) 1 5+13=18 7+11=18 `````` ...and so on. I added the parentheses to emphasize the pairs, but it's just matching up by working from the outsides to the center. It did make me notice something I should have seen before, which is that if you don't want to bother with all the reversing and stuff, you accomplish the same thing by taking the prime bit string, counting in from both sides, and watching for matches that are both 1. Same exact thing. The astute observer may point out that I'm not actually doing anything in this write-up, which is true. Especially viewed this most recent way, all we're doing is literally taking the Goldbach Conjecture by its definition, and looking manually for pairs of primes that add up. If there is something to be gained from this approach, I suspect it lies in study of the "sliding" nature of the reversal version, and seeing how it conspires not to leave holes. This is almost certainly biased on my hunch that Goldbach's undoing lies in a thorough examination of the mechanism driving its avoidance of symmetry; to violate Goldbach, there would have to be an instance of a certain symmetry, which would in turn be representative of a periodic pattern to a degree verboten to primes. But hey, that's just me. This is a quick story about today's thing that I discovered that is already in Wiki. I must be up in the triple digits at this point. That said, this is one of the more obvious ones. I was reading a thread about which sorting algorithm could be considered "best," and someone mentioned a couple of algorithms which allegedly run in O(n log log n) time. This came as a shock, since I'd thought n log n was the brick wall for an honest sort, and I wondered if these other sorts were for real, whether that would imply the existence of a linear time sort. I tried to imagine what the lower bound would be, and figured there must be a minimum number of comparisons required for some number of elements. Didn't take long to get from there to realizing that n integers can be arranged in n! different permutations, which I reasoned meant that one must gather enough information (read: take comparisons) from the data to uniquely identify which of n! different arrangements the sort is in. That, in turn, screams out for a log, specifically log_2(n!). If the sort permutation is truly random, then on average, we should expect to be able to identify it from log_2(n!) bits (read: comparisons, again.) To be a little more precise, I guess it'd be more like $\frac{\lceil{n \log_2 (n!)}\rceil}{n} .$ I cheated a little here and plugged lim_{n->\infty} [log_2(n!)] into Wolfram Alpha, and it was clear the dominating factor is, surprise surprise, n log n. As for those mystery n log log n algorithms, they were tough to track down too, and there seemed to be a lack of final consensus on the issue. Due to the math described herein if nothing else, they must operate with some limitation or another on domain or input, assuming they do work. Later, seeing the bottom of the Wikipedia page on sorting algorithms, I saw all of this done similarly, with the weird surprise that once you get to n=12, the maximum number of comparisons required suddenly goes up by one inexplicably, requiring 30 comparisons where we predict 29. 
Sadly, the footnote links explaining those aberrations were in journals behind paywalls, but the gist seemed to be that it was a bit of an open (or at least highly non-trivial) question.
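For completeness, a quick numeric sketch of the counting argument above — a comparison sort has to distinguish n! possible orderings, so it needs at least ceil(log_2(n!)) comparisons in the worst case:

```python
from math import factorial, log2, ceil

# Information-theoretic lower bound for comparison sorting: ceil(log2(n!)).
# The dominant term grows like n*log2(n); note the n=12 row prints 29,
# one shy of the true minimum of 30 comparisons mentioned above.
for n in range(2, 16):
    print(n, ceil(log2(factorial(n))))
```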
2019-02-20 02:26:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6840329170227051, "perplexity": 419.2884885914334}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247494125.62/warc/CC-MAIN-20190220003821-20190220025821-00632.warc.gz"}
https://physics.stackexchange.com/questions/537414/normalization-in-perturbation-theory
# Normalization in perturbation theory When we have a system with Hamiltonian $$H = H_{0} + V$$, we can expand the ground state wavefunction $$\Psi_{0}$$ using the wavefunction of the non-interacting system $$\phi_{0}$$, which is an eigenfunction of $$H_{0}$$. In Rayleigh-Schrödinger perturbation theory, we choose the normalization of $$\Psi_{0}$$ such that $$\langle \phi_{0} | \Psi_{0} \rangle = 1$$. My question is: How do we know that these wave functions are not orthogonal? If they were, this product would be zero. I read in the book "A Guide to Feynman Diagrams in the Many-Body Problem" that, when the interacting system has a different symmetry from the non-interacting system, we will necessarily have $$\langle \Psi_{0} | \phi_{0} \rangle = 0$$, but I don't know what "have a different symmetry" means, and I would also like an explanation of when we can use this normalization. It’s built into the physics of the perturbation approach. Perturbation theory makes sense when the dominant term in the exact ground state is the unperturbed ground state, else it is not a perturbation. More to the point, one expects that the largest overlap $$\vert \langle \Psi_0\vert \phi_k\rangle\vert$$ occurs for $$k=0$$. If this is NOT the case, it’s no longer a perturbation of the system described by $$H_0$$. For "reasonable" $$H$$ and $$V$$ it can be shown that $$H+\lambda V$$ has an eigenfunction $$\Psi(\lambda)$$ that varies smoothly with $$\lambda$$ for "small" $$\lambda$$. Whether $$\langle \Psi(0) | \Psi(\lambda) \rangle \ne 0$$ for $$\lambda \in [0,1]$$ depends on properties of $$H$$ and $$V$$.
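A sketch of the standard bookkeeping behind this choice (intermediate normalization), assuming the exact ground state is dominated by the unperturbed one: write $$\Psi_{0} = \phi_{0} + \sum_{k \neq 0} c_{k}\, \phi_{k},$$ so that $$\langle \phi_{0} | \Psi_{0} \rangle = \langle \phi_{0} | \phi_{0} \rangle = 1$$ holds by construction, with $$c_{k} = \frac{\langle \phi_{k} | V | \phi_{0} \rangle}{E_{0}^{(0)} - E_{k}^{(0)}}$$ to first order. This rescaling is only possible when $$\langle \phi_{0} | \Psi_{0} \rangle \neq 0$$; if the exact ground state transforms under a different symmetry (irreducible representation) than $$\phi_{0}$$, that overlap vanishes identically, intermediate normalization cannot be imposed, and the Rayleigh-Schrödinger expansion built on $$\phi_{0}$$ breaks down.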
2020-12-04 11:57:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 20, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8464447259902954, "perplexity": 122.87021189061248}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141735600.89/warc/CC-MAIN-20201204101314-20201204131314-00714.warc.gz"}
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Analytical_Sciences_Digital_Library/Active_Learning/Contextual_Modules/Sample_Preparation/05_Sampling
# Sampling ## I. Headspace Sampling for Volatile and Semi-volatile Analytes Introduction. Imagine you are a member of the highway patrol, and observe a car veering across the center and side lines of the interstate at 2:30 am on a Sunday morning. Suspecting that the operator is driving under the influence of alcohol, you pull the car over, and get the driver out of the car to perform a field sobriety test. After these are failed, you ask the driver to submit to a breathalyzer, an electrochemical test for alcohol in the expelled breath of the driver, to determine if their blood alcohol level exceeds the legal limit in most states of 0.08% w/v. If the driver fails or refuses to submit to this test, it is common practice for you as the law enforcement officer to place the suspect under arrest, and transport the suspect to the nearest medical facility or police forensic laboratory for a blood test to determine the level of alcohol content (BAC). If the driver refuses this test, the law generally requires that the suspect be charged as if the maximum level has been exceeded. Figure 1. Field Sobriety Test1 Q1. You probably have seen enough crime shows to predict how the laboratory analysis for blood alcohol content will be done. Whether you know or not, you should be able to describe the physical characteristics of both the sample (blood) and the material to be analyzed (ethyl alcohol) that might have to be considered in the analysis. Describe if you can what part of the sample will be used, and what method of analysis might be employed. Blood is a complex matrix, containing hundreds of substances, including red and white blood cells along with large molecular weight proteins and enzymes. The primary component is water (92%), which we know boils at about 100 oC (depending on pressure), and has a vapor pressure of about 3 kPa near room temperature. Ethanol boils at a lower temperature than water (about 78 oC) and has a higher vapor pressure, about twice that of water (6 kPa at 20 oC). Recall that vapor pressure is defined as the pressure exerted by a gas in a closed container when at equilibrium with the corresponding liquid phase. This means that at a fixed temperature, the amount of substance in the gas phase relative to the liquid phase increases with vapor pressure. Substances with high values of vapor pressure are considered to be more volatile than those that do not. Considering that the larger molecular weight components of blood have much smaller vapor pressures than water and ethanol, one might consider the vapor phase above the liquid (blood) sample as a good place to look for ethanol in the absence of the very large number of possible interferents. The equilibrium is illustrated in Figure 2. A sealed vial contains, in this case, equal volumes of air space and liquid. The substance of interest, say ethanol, is dissolved in the complex matrix, say blood, at some level. Once equilibrium is reached between the gas and liquid phases, ethanol (and other volatile substances) will be found at concentrations described by the partition coefficient (K) for each analyte: $\mathrm{K = \dfrac{C_{liquid}}{C_{gas}}} \label{Eq. 1}$ Figure 2. Equilibrium between liquid and gas phases in sealed vial. In addition, each analyte will have a propensity to enter the gas phase that is dependent upon the relative volumes of the gas and liquid phases. This is described by the phase ratio, β, given by $\mathrm{β = \dfrac{V_{gas}}{V_{liquid}}} \label{Eq. 
2}$ Combining the equations for K and for β yields an expression2 that describes the concentration of analyte in the gas phase as $\mathrm{C_{gas} = \dfrac{C_{liquid}}{K + β}} \label{Eq. 3}$ Q2. Knowing that we are interested in analyzing the amount of alcohol in blood, you would rightly assume that the larger the concentration of the analyte in the gas phase, the easier it would be to evaluate. Using Le Chatelier’s principle as a guide, list and explain ways you might increase the concentration of ethanol in the gas phase over the liquid blood sample. Q3. What analytical method(s) do you know about that are amenable to volatile substances in the gas phase? Static headspace sampling with gas chromatographic analysis. The most common test performed by forensic laboratories is that for blood ethanol.3 Typically this involves the use of gas chromatography following headspace sampling. A gas-tight syringe is most often used to collect a sample from the headspace of a sealed vial, and that gas sample is injected into the gas chromatograph. In most cases, the vial is heated, but no purge gas is introduced into the sample vial during static headspace analysis (more on this later). Sampling and sample introduction can be done manually, but more often an autosampler is used. If the autosampler employs a syringe, it is generally heated to prevent the condensation of volatiles prior to their injection. Heated transfer lines that directly connect the sample vial and the injector on the chromatograph are also frequently employed. Q4. What advantages can you see for chromatographic analysis when sampling only from the volatile gas phase above a complex matrix? The headspace technique samples only the volatile substances at equilibrium in the gas phase, and eliminates the need for sample cleanup while preventing the injection of non-volatile materials that can often contaminate the chromatographic system. While this introduction has focused on blood as a sample matrix, headspace analysis is possible for solid or liquid samples. Improving the amount of volatile analyte in the gas phase. The two most important parameters affecting the distribution between two phases are temperature and solubility. The effect of temperature on the distribution coefficient, K, is given by $\mathrm{\ln K = \dfrac{A}{RT} – B} \label{Eq. 4}$ where A and B are thermodynamic constants, R is the ideal gas constant, and T is the temperature in Kelvin.2 Thus, increasing the temperature will decrease the value of K. Recall that K = Cliquid/Cgas, which means that Cgas increases with T. For ethanol in water, the values for K are 1355 at 40 oC and 328 at 80 oC.4 Increasing the temperature to increase Cgas works best for analytes that have a high solubility (high K value) in the sample matrix. For analytes with low solubility in the sample matrix (low K value), an increase in the volume of sample relative to the headspace volume (decreasing β) will be more effective at increasing Cgas.5 The addition of large concentrations of an inorganic salt like sodium chloride can also result in a reduction of matrix solubility (“salting out”) for many analytes, especially polar ones, leading to an enhanced analyte concentration in the headspace.4 Q5. To this point in our discussion of static headspace analysis, we have considered the GC analysis of a gaseous sample taken from above the sample matrix. The concentration of analyte in the gas sample is dependent upon temperature and solubility in the sample matrix. 
Instead of just sampling the gas directly, can you think of a way to physically enhance the amount of analyte you obtain from the gas sample prior to its introduction into the GC? (Hint: Brita® filter.) Dynamic headspace sampling with gas chromatographic analysis. The Brita® filter, and water purification devices like it, contain activated carbon, a form of carbon that has been treated to yield very high surface area-to-mass ratios which remove impurities by adsorption, and ion exchange resins, which remove impurities based on their charge. A cartoon6 (from Brita®) of the filter is shown below. The measure of impurities that can be removed is a function of the surface area in contact with the water sample being filtered, and the extent of charge interaction with the ion exchangers. Figure 4. Cutaway View of a Brita® Filter.6 In dynamic headspace sampling, the analyte is allowed to equilibrate into the headspace above the sample matrix as before, but then a flow of inert gas (usually GC carrier) is introduced into the headspace through a needle and allowed to flow out of the vial via a second needle. This gas flow is directed either onto a cold trap, or more commonly onto a trap containing a solid adsorbent. As analyte is removed from the headspace, it is replenished according to its value of K, the partition coefficient. Over time, the concentration in the sample matrix is reduced, allowing for an exhaustive, or near-exhaustive collection of the analyte on the trap. Dynamic headspace can be used for either solid or liquid samples. A schematic of the method is given in Figure 5. The concentration of analyte in the headspace is reduced over time according to $\mathrm{C_t = C_o e^{-(F\, t\, /\, V)}} \label{Eq. 5}$ where Ct is the concentration of analyte in the headspace after time t, Co is the initial equilibrium concentration in the headspace, F is the flow rate of purge gas, t is the purge time, and V is the headspace volume.7 Q6. Rearrange Eq. \ref{Eq. 5} to solve for the time, t, required to remove 99% of the analyte from a sample if the flow rate of helium is 40 mL/min and the volume of the headspace is 10 mL. Commonly used sorbent materials for dynamic headspace analysis include silica gel, activated carbon, and Tenax®, a porous polymer prepared from 2,6-dipheny-p-phenylene oxide.8 These can be used alone, or in series as a multi-layer trap, depending upon the complexity of the sample being collected. Tenax® has a low affinity for water, and works well for nonpolar volatiles. Silica gel is highly polar and retains water and other polar analytes well. Activated carbon is hydrophobic and is very good at trapping highly volatile compounds.5 Following extraction, the trap is heated and the sample thermally desorbed onto the column of a gas chromatograph. Reverse inert gas flow is directed through the trap so that the least volatile analytes, which were trapped first, are prevented from contacting the strongest of the adsorbents, which could lead to slow desorption.5 Purge and trap. If instead of directing the inert gas flow through the headspace above the sample, the gas is bubbled through an analyzed liquid sample, the technique is referred to as purge and trap. This is most often applied to volatile analytes in a water matrix, as for example in EPA 524.2, a method for the analysis of purgeable organics in drinking water.9 In general, a fritted sparging tube, either 5 or 25 mL in volume, as shown in Figure 6 (from Restek10), is used for purge and trap. 
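One possible working of Q6, using the values given in the question (for checking your own answer): rearranging Eq. \ref{Eq. 5} for t gives $\mathrm{t = \dfrac{V}{F}\, \ln\!\left(\dfrac{C_o}{C_t}\right)}$ and removing 99% of the analyte means Ct/Co = 0.01, so t = (10 mL)/(40 mL/min) × ln(100) ≈ 1.2 min of purging.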
The inert gas is introduced through a needle placed within the solution volume, while the sample rests on a glass frit. As in dynamic headspace analysis, the purge gas is directed onto a sorbent trap, where it is concentrated prior to desorption onto a gas chromatograph. Detection limits for many volatiles can be increased more than 1000x with the use of purge and trap compared with static headspace.2 Figure 6. Sparge Tube for Purge and Trap10 ## II. Solid Phase Microextraction (SPME) Introduction. Headspace GC has been demonstrated to be an effective technique for the analysis of volatile compounds in liquid or solid matrices. More of a challenge is encountered with semi-volatile or non-volatile compounds in these matrices. Consider atrazine, one of the most commonly used pesticides in the United States, and a major contaminant of ground and surface waters. Atrazine causes disruption of hormonal levels in humans, and the EPA considers a safe level in finished waters to be only 3 ppb for a yearly average.11 Figure 7. Structure of Atrazine Q7. Consider the structure of atrazine. Would you expect it be water soluble? Why? What other characteristics of the molecule might be important to consider when performing an analysis? Historically, the most common method for the analysis of atrazine in natural waters involved a liquid-liquid extraction (LLE), a sample volume of 1 L (adjusted to pH 7.0) and multiple volumes (100 mL minimum) of an organic solvent like dichloromethane. The organic extracts were combined and reduced in volume prior to analysis by GC or HPLC.12 Q8. What would you consider to be the drawbacks to the LLE method detailed above? Q9. If you were to design an “ideal” method for sampling and preconcentration of an environmental contaminant like atrazine in natural waters, what characteristics would you build in to the method? Much effort is being expended in the analytical community to develop methods for environmental contaminants that replace techniques like LLE that require large sample volumes, often unfriendly chlorinated solvents, and long prep times. One technique that has proven to be quite capable of addressing most, if not all, of these concerns is solid phase microextraction (SPME). Solid phase microextraction (SPME) utilizes a small length (1-2 cm) of fused silica fiber that is coated with a sorbent material allowing to selectively preconcentrate analyte from either the gas or liquid phase. The technique allows for easy, one-step sample collection without the need of solvents or long sample preparation time. It can be used for direct sample collection from air or water in the field, and has even been applied to in vivo analysis.13 After sample exposure, the fiber is introduced into the heated injector of a gas chromatograph where the analyte is desorbed. With an appropriate interface allowing for removal of the analyte into a solvent, the technique can also be applied to HPLC analysis.14 The SPME fiber, which is fragile, is housed inside of a syringe needle to protect it before and after sampling. In headspace or direct immersion sampling, the fiber is exposed to the sample only after the vial septum has been punctured. Following extraction, the fiber is retracted before insertion into the GC, and once inside the injection liner, the fiber is extended for desorption. The fiber apparatus is shown in Figure 8. Figure 8. SPME Fiber Assembly. At top, the fiber is retracted inside the syringe needle. 
Below, the inner sheath and the fiber have been extended out of the needle for sample extraction or desorption. Sorbent materials may be either liquid polymers, like polydimethylsiloxane (PDMS), polyacrylate (PA) and polyethyleneglycol (PEG), or solid particles, like CarboxenTM (porous carbon) and divinylbenzene (DVB), that are suspended within a liquid polymer. The mechanism for extraction is different for the two materials. The liquid polymers are referred to as absorbent coatings, into which analyte molecules diffuse, attracted primarily by the polarity of the polymer. Retention within the coating is a result of both these attractive forces and the thickness of the polymer, much like that seen for chromatographic stationary phases. Porous particles extract analyte molecules on the basis of adsorption, the result of $$\pi$$-$$\pi$$ or H-bonding, and van der Waals type interaction.12 Porous materials are generally characterized by pore size and extent of porosity, along with total surface area, with the strength of adsorption dependent upon analyte particle size. SPME fibers are available commercially from Suplelco/Sigma-Aldrich. In general, the choice of fiber is dictated by the properties of the analyte, notably polarity, molecular weight, volatility, and concentration range.14 PDMS coatings are available in a variety of thicknesses, dictated by sample volatility, and exhibit best sensitivities toward non-polar analytes. PA coated fibers are suited primarily for polar analytes in polar media. Mixed phase coatings are the best choice for smaller, volatile, and polar analytes.16 Detailed selection guides are available from the manufacturer, and in references 15 and 16. SPME fibers may be used in either the headspace above a sample matrix (HS-SPME), or directly immersed (DI-SPME) into a liquid. As for static headspace, the method relies on establishing equilibrium between two phases, and does not exhaustively remove the analyte from the sample, generally extracting between 2 – 20%.16 Figure 9. SPME fiber in vial headspace. Q10. Predict how the process of equilibrium between a SPME fiber in the gas phase above a sample and one immersed in a liquid sample might differ. Which of the two would you predict would occur faster? Q11. What complications would you expect from direct immersion (DI) SPME that you expect to be absent in HS-SPME? The distribution coefficient for an analyte in a sample matrix and the SPME fiber within that matrix is given by $\mathrm{K_{fs} = \dfrac{C_f}{C_s}} \label{Eq. 6}$ where Cf is the concentration of analyte within the fiber and Cs is the concentration of analyte in the sample matrix.16 Until an equilibrium condition is reached, the concentration within the fiber will increase over time. If the fiber is in solution, it is important the solution (or the fiber) is agitated during the concentration step as the analyte concentration in the solution layer close to the fiber will be depleted over time. In the gas phase above the sample matrix, the description of the distribution coefficient is more complicated, involving three phases instead of two. Additionally, as the analyte is desorbed from the headspace, more analyte will be pulled into the gas phase from the sample matrix. 
The equilibrium concentration of analyte into the fiber will be identical using extractions from either of the phases, as long as the volumes of the two are equivalent.15 Also, the amount of analyte extracted into the fiber is independent of the sample volume if KfsVf << Vs, where Vf is the volume of the fiber coating, and Vs is the volume of the sample.16 Advantages of HS-SPME over DI-SPME are similar to those described for HS-GC in the previous section. Sampling in the gas phase prevents fouling of the fiber with high molecular weight impurities, or those that might irreversibly adsorb to the fiber surface. In the absence of these complex matrix effects, DI extraction works best for analytes with lower volatility and polarity. Conversely, HS extraction is better suited for analytes with higher volatilities and polarity.5 Because of higher analyte diffusion rates in the gas phase relative to the liquid phase, equilibrium occurs faster for HS than for DI. Further, because of the relationship between large sample volume and amount of extracted analyte, SPME is ideal for field sampling, allowing direct analyte extraction in large volumes of air or water with no required sample pretreatment.13 Optimization of Conditions for SPME. In addition to sorbent type and thickness, other factors to consider in the optimization of analyte extraction by SPME include temperature, sample volume, pH and ionic strength, agitation conditions, and desorption parameters.5 Temperature. Increasing the temperature at which extraction is carried out increases the diffusion coefficients for the analyte, thus leading to a more rapid extraction. However, these kinetic effects are countered by a decrease in Kfs and consequently a lower amount of extracted analyte once equilibrium is achieved.15 If the analyte concentration is high in the sample, and sensitivity is not an issue, higher temperatures allow for shorter analysis times. Lower temperatures and longer extraction times may be required if sensitivity is a problem. Extraction efficiencies may also be increased in some instances by the use of a cold fiber technique, in which the fiber is cooled either by introduction of liquid CO2 via an inner capillary, or by Peltier cooling.15 Sample Volume. The equilibrium amount of analyte extracted from a sample is described by $\mathrm{n = \dfrac{K_{fs}\, V_f\, V_s\, C_o}{K_{fs}\, V_f + V_s}} \label{Eq. 7}$ where n is the number of moles extracted, Co is the analyte concentration in the sample, and other variables are as previously defined.16 In general, if KfsVf > Vs, then the amount of analyte extracted will increase with sample volume.15 In most laboratory settings, sample vials for use with SPME have volumes from 1.5 to 20 mL, with extraction amounts normally increasing with Vs over this volume range.15 As described previously, for very large values of Vs (direct air or water sampling, eg), the amount of extracted analyte will depend only upon Co for set values for Kfs and Vf. For low concentrations of volatile analytes (<50 ppb), the amount of extracted analyte may be observed to be independent of sample volume, while exponentially increasing calibration curves are often seen for large samples (>5 mL) containing high concentrations of analyte.13 pH and Ionic Strength. Commercially available SPME fibers employ sorbents that are neutral, and thus basic or acidic analytes must be converted to neutral species prior to extraction. 
Care must be exercised in pH adjustment of the sample matrix, as some fibers may be degraded at very high or very low pH. The addition of an inert salt, as described for HS-GC, will many times serve to enhance the extraction efficiencies for certain analytes.13 Agitation. Samples for SPME are typically agitated during extraction to decrease the time required to achieve equilibrium with the fiber, and to increase the amount of analyte extracted. There are many ways to achieve proper agitation, including magnetic stirring, sample vial movement, or movement of the fiber within the sample vial. The choice of agitation may be dictated by autosampler make/model, or selected according to specific method requirements for a particular analyte. Desorption. For GC analysis, samples are normally desorbed from the SPME fiber by insertion into the injection port of the chromatograph. Controllable parameters include inlet temperature, depth of fiber insertion, carrier gas flow at the inlet, and desorption time. Typically, a small-bore inlet liner (ca. 0.75 mm) is used without split flow during injection to increase the flow rate of carrier gas around the fiber and thereby the efficiency of extraction for adsorbed analytes.13 Inlet temperatures, fiber depth, and desorption times are optimized for rapid but complete extraction of analyte. The Supelco website (among others) provides a large number of SPME applications that provide a convenient starting point for methods development.17 Q12. Access the Supelco SPME applications website (reference 17), and obtain suggested experimental conditions for the analysis of atrazine (a triazine environmental pesticide) from water samples. Stir Bar Sorptive Extraction (SBSE). SBSE, a variation of the SPME method, involves coating the sorbent material directly onto the glass surrounding a magnetic stir bar with a length of 10-20 mm. The coating, PDMS or a PDMS/ethylene glycol (EG) copolymer, has a greater thickness than those available for SPME fibers (0.5-1.0 mm compared to 30-100 μm for SPME), allowing much larger sorptive capacities and up to 1000 times more sensitivity than SPME.18 The stir bar can be used either in the headspace above a sample or, more commonly, within a liquid matrix with magnetic stirring.14 Vial size is 10-20 mL. Once extraction has been accomplished, the stir bar is transferred to a thermal desorption unit, where it is desorbed and introduced into the inlet of a gas chromatograph. Figure 10 (from Gerstel18) shows a stir bar inside a sample vial (left) and inside a desorption tube which is part of the thermal desorption unit (right). A library of SBSE applications can be found on the Gerstel website.19 Figure 10. Stir Bar Sorptive Extraction (SBSE).18 ## References 1. Anderson, Scott. News Tribune (La Salle, IL) [Online], July 9, 2014. newstrib.com/main.asp?SectionID=2&SubSectionID=29&ArticleID=37704 (accessed May 22, 2017). 2. Penton, Z. E. Headspace Gas Chromatography. In Handbook of Sample Preparation; Pawliszyn, J., Lord, H., Eds.; John Wiley & Sons: Hoboken, NJ, 2010, pp. 25-37. 3. Santoro, S. G.M. High-Throughput Blood Alcohol Analysis Determination Using Headspace Gas Chromatography, Forensic Magazine [Online], 02/10/2012. http://www.forensicmag.com/articles/2012/02/high-throughput-blood-alcohol-analysis-determination-using-headspace-gas-chromatography (accessed May 22, 2017). 4. Restek Corporation, 2000. A Technical Guide for Headspace Analysis Using GC. www.restek.com/pdfs/59895B.pdf (accessed May 22, 2017). 5. Slack, G.
C., Snow, N. H., and Kou, D. Extraction of Volatile Organic Compounds From Solids and Liquids. In Sample Preparation Techniques in Analytical Chemistry; Mitra, S., Ed.; John Wiley & Sons: Hoboken, NJ, 2003, pp. 183-225. 6. Brita, LP. https://www.brita.com/why-brita/what-we-filter/ (accessed May 22, 2017). 8. Sigma-Aldrich Co. http://www.sigmaaldrich.com/analytical-chromatography/analytical-products.html?TablePage=14540726 (accessed May 22, 2017). 9. Environmental Protection Agency. Measurement of Purgeable Organic Compounds in Water by Capillary Gas Chromatography/Mass Spectrometry. EPA Method 524.2, 1992, www.epa.gov/homeland-security-research/epa-method-5242-measurement-purgeable-organic-compounds-water-capillary (accessed May 22, 2017). 10. Thomas, J. Restek Corporation. A 12-Minute Purge and Trap GC/MS Analysis for Volatiles. http://www.restek.com/Technical-Resources/Technical-Library/Environmental/env_A001 (accessed May 22, 2017). 11. Environmental Protection Agency. Atrazine Updates. http://www.epa.gov/pesticides/reregistration/atrazine/atrazine_update.htm (accessed May 22, 2017) 12. Rodríguez, J. A.; Aguilar-Arteaga, K.; Díez, C.; Barrado, E. Recent Advances in the Extraction of Triazines from Water Samples, Herbicides. In Advances in Research. Price, A. http://www.intechopen.com/books/herbicides-advances-in-research/recent-advances-in-the-extraction-of-triazines-from-water-samples (accessed May 22, 2017). 13. Supleco/Sigma-Aldrich Co. Solid Phase Microextraction: Theory and Optimization of Conditions. Bulletin 923, 1998. https://www.sigmaaldrich.com/content/dam/sigma-aldrich/docs/Supelco/Bulletin/4547.pdf (accessed May 22, 2017). 14. Shirey, R. E. SPME Commercial Devices and Fiber Coatings. In Handbook of Solid Phase Microextraction; Pawliszyn, J., Ed.; Chemical Industry Press: Beijing, 2009, pp. 86-115. 15. Risticevic, S., Vuckovic, D., and Pawliszyn, J. Solid Phase Microextraction. In Handbook of Sample Preparation; Pawliszyn, J., and Lord, H., Eds.; John Wiley & Sons: Hoboken, NJ, 2010, pp. 81-101. 16. Wells, M. J. M. Principles of Extraction and the Extraction of Semivolatile Organics from Liquids. In Sample Preparation Techniques in Analytical Chemistry; Mitra, S., Ed.; John Wiley & Sons: Hoboken, NJ, 2003, pp. 37-138. 17. Supelco/Sigma-Aldrich Co. SPME Applications Guide. 3rd Edition, 2001, https://www.sigmaaldrich.com/content/dam/sigma-aldrich/docs/Supelco/Bulletin/8652.pdf (accessed May 22, 2017). 18. Gerstel, Inc. Twister / Stir Bar Sorptive Extraction SBSE. http://www.gerstel.com/en/twister-stir-bar-sorptive-extraction.htm (accessed May 22, 2017). 19. Gerstel, Inc. Applications by Technology: Twister/Stir Bar Sorptive Extraction. http://www.gerstel.com/en/apps-twister-sbse.htm (accessed May 22, 2017). This page titled Sampling is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Contributor.
2022-09-30 10:01:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5190464854240417, "perplexity": 2680.81073941557}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335448.34/warc/CC-MAIN-20220930082656-20220930112656-00558.warc.gz"}
https://www.lazymaths.com/smart-math/algebra-problem-15/
# [Smart Math] Algebra Problem 15
Here's an example of a SMART MATH problem for ALGEBRA.
### Problem
If $\frac{5}{3}+5+\frac{3}{5}+x=7$, what is x?
1. $\frac{4}{15}$
2. $-\frac{4}{5}$
3. $\frac{3}{5}$
4. $-\frac{4}{15}$
5. $\frac{2}{3}$
### The Usual Method
$\frac{5}{3}+5+\frac{3}{5}+x=7$ $\therefore x=7-\frac{5}{3}-5-\frac{3}{5}$ $\therefore x=\frac{105-25-75-9}{15}$ $\therefore x=-\frac{4}{15}$ (Ans: 4) Estimated Time to arrive at the answer = 30 seconds.
### Using Technique
Observe that in the equation, $\frac{5}{3}+5+\frac{3}{5}>7$. Since the left-hand side already exceeds 7 before x is added, x must be negative, which rules out options 1, 3 and 5. Moreover, $\frac{5}{3}+\frac{3}{5}=\frac{34}{15}=2+\frac{4}{15}$, so the sum overshoots 7 by exactly $\frac{4}{15}$, giving $x=-\frac{4}{15}$ (Ans: 4).
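If you want to double-check the arithmetic exactly, a short script using Python's Fraction type (a quick verification, separate from the mental-math trick itself) gives the same result:

```python
from fractions import Fraction

# Solve 5/3 + 5 + 3/5 + x = 7 exactly
x = 7 - (Fraction(5, 3) + 5 + Fraction(3, 5))
print(x)  # -4/15
```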
2022-01-17 10:09:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 11, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9405586123466492, "perplexity": 4500.340507705014}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300533.72/warc/CC-MAIN-20220117091246-20220117121246-00401.warc.gz"}
https://www.gkaonlineacademy.com/blog/?userid=5&blogpage=13
## User blog: Meguid El Nahas Anyone in the world Every time I attend an international mega congress or conference, I can't help reflecting on the true purpose of these huge agglomerations of doctors, specialists and, in our case, Nephrologists. The true educational value is at best modest; mostly senior speakers repeating well rehearsed mantras.... Mostly unpublishable free communications and posters....in fact previous studies showed that less than 25% of these free communications or posters ever make it to print due to poor, unpublishable quality; I suspect at some meetings the percentage is even lower... So perhaps we should forget CME and look for other benefits... They are a great opportunity to meet and see some of the good and great of our profession! They are a great opportunity to meet friends and colleagues and network. They are a great opportunity to discuss issues and research as well as plan collaborative initiatives. They are a great opportunity to familiarise oneself with the workings of influence within medical societies and their impact on our practice and leadership. Some spend more time in committee rooms than in lecture halls. They are a great opportunity to witness the best and worst of Big Pharma. They are great tourist opportunities, as they are always held in distractingly beautiful cities where there is no competition between staying indoors in a badly lit conference hall or going out in the glorious sunshine....surely not the best recipe to keep delegates attending lectures and talks...!!! PERHAPS, WE SHOULD NOT LOOK FOR EDUCATIONAL VALUE IN THE CONVENTIONAL SENSE BUT LOOK FOR PARALLEL AND ALTERNATE EDUCATIONAL VALUES OF INTERNATIONAL CONGRESSES! So let's share on OLA the true value of the ERA EDTA congress to those who attended; tell us what you gained from the Congress. [ Modified: Thursday, 1 January 1970, 1:00 AM ] Anyone in the world Attending the 50th Congress of the ERA EDTA in Istanbul, I appreciated how beautiful this city is. I also appreciated, or once more realised, how important the pharmaceutical industry (Pharma) is to medical conferences, congresses and nephrological education. Pharma supports heavily, even in times of recession, conferences and their infrastructure. Pharma also brings planeloads of delegates to attend these meetings. Pharma organises symposia and workshops. Pharma is everywhere....great in the first instance, with sponsorships of events, delegates, speakers and social entertainment. Then I listen to some speakers, attend some sessions and reflect on how Pharma has permeated the medical psyche...slowly...insidiously but surely.... Speakers are often those who have been generously sponsored by Pharma and, as Key Opinion Leaders (KOLs), financially rewarded by Pharma; this has dented their independence, their scientific integrity and their portrayal of Pharma-sponsored research: TREAT, EVOLVE, RITUXIVAS, etc....all negative and inconclusive trials....become beacons of hope....for subgroups...if only the poor delegate understood it correctly....that really they were not negative trials, using potentially dangerous products...but instead hopeful endeavours to improve the lives and health of small subgroups and sub-sub-groups, worthy beneficiaries of these wonderful and expensive drugs...
I stand up and ask an RCT expert (Prof. David Jayne) what's the point of RCTs if negative results are discarded and we fall back on anecdotes, and the answer is...in Lupus nephritis there have only been two drugs supported by RCTs....so my thought was why bother with RCTs...just dish out Rituximab to everybody and wait to stop more trials such as BELONG due to side effects, or design them in such a way that they are underpowered, like LUNAR, so that they continue to be used with the excuse....that the study was underpowered and the sample size too small....somebody could have told a Pharma company investing hundreds of millions of dollars in an RCT that it was underpowered...perhaps even one of their eminent clinical advisors generously paid to advise....on RCT design???!!!! Elegant and elaborate lectures are given to mask lack of evidence; elegance and powerpoint animation replacing and even covering for soft data, lack of evidence and lack of integrity.... Perhaps after all that is the way of medical life and research. Perhaps Nephrologists don't know better... Perhaps Pharma is smart and nephrologists greedy.... after all we are Nephrologists but also human beings.... BUT A WORD OF WARNING FOR THOSE WHO ATTEND THESE CONFERENCES: DON'T LOSE, AS AN AUDIENCE, THE POWER OF SKEPTICISM AND CRITICAL THINKING. It is not because this or that company paid your air fare and put you up in a 5-star hotel that you should leave your integrity as a critical mind at home....!!!! BE CRITICAL, CHALLENGE AND BE SKEPTICAL....this way at least you attempt to find some of the truths amongst the Hype! PS: I declare a conflict of interest, as I have also been sponsored by Pharma over the years and have as a KOL often been generously paid for my participation in pharma advisory boards....!!! [ Modified: Thursday, 1 January 1970, 1:00 AM ] Anyone in the world FDA Limits Duration, Usage of Tolvaptan Due to Possible Liver Injury ROCKVILLE, Md -- April 30, 2013 -- The US Food and Drug Administration (FDA) has determined that the drug tolvaptan (Samsca) should not be used for longer than 30 days and should not be used in patients with underlying liver disease because it can cause liver injury, potentially requiring liver transplant or death. An increased risk of liver injury was observed in recent large clinical trials evaluating tolvaptan for a new use in patients with autosomal dominant polycystic kidney disease (ADPKD). The FDA has worked with the manufacturer to revise the tolvaptan drug label to include these new limitations. The tolvaptan drug label has been updated to include the following information: • Limitation of the duration of tolvaptan treatment to 30 days (Dosage and Administration and Warnings and Precautions sections) • Removal of the indication for use in patients with cirrhosis. Use of tolvaptan in patients with underlying liver disease, including cirrhosis, should be avoided because the ability to recover from liver injury may be impaired (Indications and Usage and Use in Specific Populations sections). • Description of liver injuries seen in clinical trials of patients with ADPKD. • Recommendation to discontinue tolvaptan in patients with symptoms of liver injury. Data Summary Tolvaptan was approved in May 2009 for the treatment of clinically significant euvolemic and hypervolemic hyponatremia. Patients should be in a hospital for initiation and re-initiation of therapy to evaluate the therapeutic response before subsequently receiving tolvaptan in the outpatient setting.
Tolvaptan is being studied for another indication: delay in progression of renal disease in adult patients with ADPKD. Three cases of serious liver injury attributed to tolvaptan were observed in a placebo-controlled trial in ADPKD and its open-label extension study, indicating the potential for the drug to cause liver injury that could progress to liver failure. In addition, tolvaptan was associated with an increased incidence of ALT elevations greater than 3 times the upper limit of normal: 42 of 958 (4.4%) patients in the tolvaptan group versus 5 of 484 (1.0%) patients in the placebo group. The serious liver injury cases were consistent with Hy’s law. Analysis of safety information in the clinical trials that supported the hyponatremia indication (and in other populations such as those with heart failure) did not demonstrate hepatotoxicity. However, the controlled hyponatremia trials were of short duration --about 30 days. Although the FDA has received spontaneous post-marketing reports of elevated liver enzymes and other liver events in patients taking tolvaptan, these reports are difficult to interpret because many of the patients had underlying disease that can be associated with elevated liver enzymes or liver injury (cirrhosis, heart failure or cancer). Based on the cases of liver injury in patients participating in the ADPKD trials, the FDA worked with the manufacturer to revise the tolvaptan drug label to include the above information, to reduce the potential for serious liver injury. Comment: Another drug bites the dust...and reminds us all that post-marketing surveillance is key to ascertain the safety of new medications. The TEMPO study of Tolvaptan in ADPKD showed a marginal benefit on CKD progression. Enthusiasts and those prompted by the sponsor/Pharma claimed a breakthrough in ADPKD management, in spite of the fact that the benefit on CKD progression was minimal (see OLA Blog TEMPO TEMPERED). So now it seems that whilst the BENEFIT is marginal,  the RISK, side effects, is potentially high. Back to the drawing board with the management of ADPKD; mTOR antagonists also have a bad/high RISK v BENEFIT profile! For alternative therapeutic interventions to slow cysts progression and CKD in ADPKD, see the excellent review by Chang & Ong in Nephron: 3 hopeful strategies for ADPKD: 1. reduce intracellular cAMP manipulations, 2. inhibit cell proliferation, 3. reduce tubular fluid secretion. http://www.ncbi.nlm.nih.gov/pubmed/22205396 Also more mundane interventios such as BP control and RAAS inhibition: HALT_PKD trial: The HALT-PKD study (underway) may tell us all we need to know that to slow the progression of ADPKD we need to optimise BP control...but that combination ACEi + ARB may not be such a good idea after all....this trial may have been overtaken by the negative outcomes of trials of maximum/combined RAAS inhibition (such as ONTARGET and ALTITUDE). Is it still a viable option???? The ADPKD trials story also reminds us of the value of surrogate markers; intervention such as mTOR antagonists and VAPTANS (TOLVAPTAN) reduce cyst size and their expansion...BUT...hardly affect kidney function....!!!! surrogates...surrogates...surrogates....are NO substitutes for HARD ENDPOINTS. Finally, manipulating key intracellular mediators such as cAMP (Tolvaptan), mTOR (Sirolimus and everolimus) or even key mediators such as the RAA system may do more harm than good! 
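For perspective, the size of the ALT signal quoted in the FDA summary above follows directly from the raw counts; the quick calculation below is purely illustrative, and the risk ratio is my arithmetic rather than a figure taken from the FDA text.

```python
# ALT elevations >3x the upper limit of normal in the ADPKD trials (counts quoted above)
tolvaptan = 42 / 958   # about 4.4% of tolvaptan-treated patients
placebo = 5 / 484      # about 1.0% of placebo patients

print(f"tolvaptan: {100*tolvaptan:.1f}%, placebo: {100*placebo:.1f}%, "
      f"crude risk ratio ~{tolvaptan/placebo:.1f}")  # roughly a four-fold excess
```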
So far the lessons of clinical translation in ADPKD: What works in rats and mice doesn't always translate safely and effectively into humans...!!! [ Modified: Thursday, 1 January 1970, 1:00 AM ] Anyone in the world That eGFR is NOT a measure of KIDNEY FUNCTION but instead a CALCULATION/DERIVATION reflecting serum creatinine levels!? In this month's KI (April 2013): Turin and colleagues from Alberta in Canada show data suggesting that changes in eGFR over time, both declining and increasing, were independently associated with mortality. http://www.ncbi.nlm.nih.gov/pubmed/23344477 Abstract: Using a community-based cohort we studied the association between changes in the estimated glomerular filtration rate (eGFR) over time and the risk of all-cause mortality. We identified 529,312 adults who had at least three outpatient eGFR measurements over a 4-year period from a provincial laboratory repository in Alberta, Canada. Two indices of change in eGFR were evaluated: the absolute annual rate of change (in ml/min per 1.73 m(2) per year) and the annual percentage change (percent/year). The adjusted mortality risk associated with each category of change in eGFR was assessed, using stable eGFR (no change) as the reference. Over a median follow-up of 2.5 years there were 32,372 deaths. Compared to the reference participants, those with the greatest absolute annual decline (5 ml/min per 1.73 m(2) per year or more) had significantly increased mortality (hazard ratio of 1.52) adjusted for covariates and kidney function at baseline (last eGFR measurement). Participants with the greatest increase in eGFR of 5 ml/min per 1.73 m(2) per year or more also had significantly increased mortality (adjusted hazard ratio of 2.20). A similar pattern was found when change in eGFR was quantified as an annual percentage change. Thus, both declining and increasing eGFR were independently associated with mortality and underscore the importance of identifying change in eGFR over time to improve mortality risk prediction. OLA BLOG COMMENT: Throughout the discussion the authors refer to changes in "kidney function" relying on changes with time in estimated GFR (eGFR)!!!!! When will nephrologists remember that changes in eGFR DO NOT equate solely to changes in RENAL FUNCTION....??? Instead, they equate to changes in serum creatinine levels. When will Nephrologists remember that changes in serum CREATININE levels can be due to a number of non-renal factors, including dietary intake, muscle mass and creatinine metabolism??? When will Nephrologists realise that a fall in creatinine, especially in the elderly, is a reflection of sarcopenia, itself associated with increased risk of death! So rather than argue that a rising creatinine is bad news, they should have argued, or at least mentioned..., that a falling serum creatinine is bad news; WASTING and SARCOPENIA in the elderly increase the risk of MORTALITY! So the observation under discussion reminds me of the REVERSE EPIDEMIOLOGY observed in ESRD patients: low serum potassium, low creatinine and low phosphorus are all predictors of higher mortality....due to MALNUTRITION and WASTING, and so is low serum albumin! The authors have merely extended these observations to an observed cohort showing that falling/declining serum creatinine levels are equally associated with higher mortality in people with CKD, mostly elderly. Similar misrepresentations of eGFR are contaminating our literature.
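To make the creatinine dependence concrete, here is a minimal sketch using the widely quoted 4-variable MDRD estimate (IDMS-traceable form); the patient values are hypothetical, and the point is simply that a creatinine falling through muscle wasting inflates the eGFR without any true change in filtration.

```python
# Illustration only: 4-variable MDRD estimate (IDMS-traceable coefficient of 175).
# eGFR = 175 * Scr^-1.154 * age^-0.203 * 0.742 (if female) * 1.212 (if black)

def egfr_mdrd4(scr_mg_dl, age_years, female=False, black=False):
    egfr = 175.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr  # mL/min per 1.73 m^2

# Hypothetical 80-year-old woman whose creatinine falls from 1.0 to 0.7 mg/dL
# because of muscle wasting, not because her kidneys filter any better.
for scr in (1.0, 0.7):
    print(f"Scr = {scr:.1f} mg/dL -> eGFR ~ {egfr_mdrd4(scr, 80, female=True):.0f} mL/min/1.73 m2")
```

With these made-up numbers the reported eGFR jumps from roughly 53 to roughly 80 mL/min/1.73 m2 purely because the creatinine fell.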
One such example: eGFR and progression of Alzheimer's disease (AD); nothing to do with eGFR but instead all to do with falling LEAN BODY MASS (LBM)! "....Individuals with AD demonstrated a paradoxical finding in which lower baseline MDRD eGFR (HIGHER SERUM CREATININE) was associated with less cognitive decline and brain atrophy, a phenomenon not observed in non-AD controls. Those with lower eGFR had HIGHER LBM; in other words, less wasted and sarcopenic...." http://www.ncbi.nlm.nih.gov/pubmed/21098656 Therefore accounting for LEAN BODY MASS would significantly mitigate the misinterpretation of such observations, including that relating to mortality with higher eGFR. It is high time that Nephrologists remember and don't forget that eGFR DOES NOT EQUATE TO MEASURED GFR, and that eGFR is an unsuitable tool to measure renal function in those with declining muscle mass! [ Modified: Thursday, 1 January 1970, 1:00 AM ] Anyone in the world A debate has been raging for years between those who advocate health through public and governmental policies and those who feel the public should be informed and left to choose what is best for its health. A number of reminders have recently highlighted such dilemmas. In a March issue of the NEJM, Kotchen and colleagues remind the readers of the issue of Dietary Salt in health and disease: http://www.ncbi.nlm.nih.gov/pubmed/23534562 They put forward an extremely balanced review of the arguments for dietary salt restrictions and the worldwide recommendations to reduce dietary salt intake (sodium chloride intake to around 5g/day). They also put forward the reservations some have about translating science into public policies as well as the potential risks associated with overzealous implementation; a J-shaped curve may for instance characterize the relationship between salt consumption and cardiovascular morbidity and mortality. http://www.ncbi.nlm.nih.gov/pubmed/22639013 http://www.ncbi.nlm.nih.gov/pubmed/22068711 It is also notable that in spite of worldwide salt restriction recommendations and public policies, the public has not followed, as shown by data from the US showing little change over the last decade in salt consumption, remaining around 8.5g of sodium chloride/day!? This is to a large extent a reflection of the very high salt content of processed and packaged food as well as the quality of fast food provided by restaurants and fast food chains...cheap food is salty! http://www.ncbi.nlm.nih.gov/pubmed/20577156 This debate and the limitations of public health policies brought into focus the current difficulties the mayor of New York City (Michael Bloomberg) is facing with his Soda Ban; the ban on serving sugary beverages in NY City in containers larger than 16 ounces (475 ml). This initiative was successfully challenged in court as a judge ruled it to be..."arbitrary and capricious...". Clearly, Big Soda (industry) won the first round! Mayor Bloomberg is appealing... Big Soda, like the Big Tobacco industry before it, is fighting back. Big Tobacco (industry) has fought bans on smoking in public places for decades, only to lose such battles in recent years when a growing number of countries implemented no-smoking policies in public places. Whilst such bans remain subject to scrutiny and their impact on public health will be difficult to evaluate for years, initial analyses suggest major health benefits in the short and long term: http://www.ncbi.nlm.nih.gov/pubmed/21976052.
In the meanwhile, Big Tobacco (industry) has moved on to promote smoking in emerging economies where it is rapidly rising, along with the risk of cardiovascular disease! The questions I really ponder are: 1. Is science directly translatable to public health policies? 2. Are public policies the answer to public health issues? 3. Are public health policies effective? 4. Why is the public so reluctant to implement them? Ultimately, should the Public be Forced, Coerced or Taxed to a better Lifestyle, or Informed and Educated to choose for itself...? [ Modified: Thursday, 1 January 1970, 1:00 AM ] Anyone in the world Intensive glucose control improves kidney outcomes in patients with type 2 diabetes. ### Source The George Institute for Global Health, University of Sydney, Sydney, New South Wales, Australia. ### Abstract The effect of intensive glucose control on major kidney outcomes in type 2 diabetes remains unclear. To study this, the ADVANCE trial randomly assigned 11,140 participants to an intensive glucose-lowering strategy (hemoglobin A1c target 6.5% or less) or standard glucose control. Treatment effects on end-stage renal disease (ESRD; requirement for dialysis or renal transplantation), total kidney events, renal death, doubling of creatinine to above 200 μmol/l, new-onset macroalbuminuria or microalbuminuria, and progression or regression of albuminuria, were then assessed. After a median of 5 years, the mean hemoglobin A1c level was 6.5% in the intensive group, and 7.3% in the standard group. Intensive glucose control significantly reduced the risk of ESRD by 65% (20 compared to 7 events), microalbuminuria by 9% (1298 compared to 1410 patients), and macroalbuminuria by 30% (162 compared to 231 patients). The progression of albuminuria was significantly reduced by 10% and its regression significantly increased by 15%. The results were almost identical in analyses taking account of potential competing risks. The number of participants needed to treat over 5 years to prevent one ESRD event ranged from 410 in the overall study to 41 participants with macroalbuminuria at baseline. Thus, improved glucose control will improve major kidney outcomes in patients with type 2 diabetes. # Commentary Kidney International (2013) 83, 346–348; doi:10.1038/ki.2012.431 ## Intensive glycemic control in type 2 diabetics at high cardiovascular risk: do the benefits justify the risks? Sabin Shurraw and Marcello Tonelli, Department of Medicine, University of Alberta, Edmonton, Canada, and Alberta Kidney Disease Network, Canada. Correspondence: Marcello Tonelli, Department of Medicine, University of Alberta, Alberta Kidney Disease Network, 7-129 Clinical Science Building, Edmonton, Alberta T6B 2G3, Canada. E-mail: [email protected] ### ABSTRACT Perkovic et al. use novel data from the ADVANCE study to report on the potential renal benefits of intensive glycemic control, compared with standard glycemic control (mean hemoglobin A1c 6.5 and 7.3%, respectively). Intensive glycemic control reduced the risk of new-onset microalbuminuria, new-onset macroalbuminuria, and progression of albuminuria. The risk of end-stage renal disease was also reduced in patients treated with intensive glycemic control, although the number of events was small. Most guidelines, including those from the Kidney Disease Outcomes Quality Initiative (K/DOQI) (National Kidney Foundation), suggest that glycemic control is an important clinical objective for all diabetic patients with and without chronic kidney disease (CKD).
These guidelines recommend a target hemoglobin A1c of approximately 7.0% ‘to prevent or delay complications of diabetes, including diabetic kidney disease,’ noting that more intensive treatment improves albuminuria, but evidence for any effect on loss of glomerular filtration rate (GFR) is sparse.1 Perkovic et al.2 (this issue) explore the potential benefits of intensive glycemic control for renal outcomes, using a post hoc analysis of the ADVANCE trial. ADVANCE3 randomly assigned 11,140 patients to standard glycemic control following local guidelines versus intensive glycemic control (target A1c ≤6.5%). Included patients had type 2 diabetes (average duration 8 years) and were more than 55 years old (average age 66 years). Only patients at high risk were included, based on a history of major macrovascular disease, microvascular disease (overt nephropathy or retinopathy), or one major cardiovascular risk factor. After a median duration of 5 years, mean A1c was 7.3 vs. 6.5%, respectively, in the two groups. There was no difference in the risk of macrovascular events between groups (hazard ratio (HR) 0.94, 95% confidence interval (CI) 0.84–1.06, P=0.32). However, patients who were in the intensive glycemic control group had fewer microvascular events (HR 0.86, 95% CI 0.77–0.97, P=0.01), primarily due to a 21% reduction in ‘new or worsening nephropathy’ (HR 0.79, 95% CI 0.66–0.93, P=0.006); there was no effect on retinopathy. The new analysis by Perkovic et al.2 begins by providing us with more insight into the prevalence of preexisting renal disease in the 11,140 ADVANCE participants. At baseline, approximately 27% of patients had microalbuminuria (an inclusion criterion) but only 3.6% had macroalbuminuria. Most patients had no CKD (55%), while CKD stages 2 and 3 were present in 15% and 19% of patients, respectively. Advanced CKD (stages 4 and 5) was present in 0.5% of all patients. As expected based on the main finding of ADVANCE, outcomes related to proteinuria appeared more favorable with intensive glycemic control: new-onset microalbuminuria (33.5 vs. 36.3%), new-onset macroalbuminuria (3.0 vs. 4.3%), and progression of albuminuria by ≥1 stage (23.3 vs. 25.3%) (all P<0.012). Patients treated with intensive glycemic control had more regression of albuminuria by ≥1 stage (61.2 vs. 56.3%) and more regression to normoalbuminuria (56.3 vs. 50.2%) (all P≤0.002). The authors acknowledge that albuminuria is of questionable reliability as a surrogate marker for renal outcomes and appropriately focus on the risk of end-stage renal disease (ESRD). In these new analyses, the risk of ESRD was significantly reduced in patients treated with intensive glycemic control (vs. standard treatment) (HR 0.35, 95% CI 0.15–0.83, P=0.017). Furthermore, patients with preexisting renal disease seemed to derive more benefit from intensive glycemic control as reflected by a lower number needed to treat (NNT): the NNT was 152 for any albuminuria, 147 for estimated GFR <60 ml/min, 85 for estimated GFR <60 ml/min per 1.73 m2 with any albuminuria, and 41 for macroalbuminuria (irrespective of GFR). These findings were consistent in various subgroups, including participants with baseline A1c above or below median (7.2%), with or without retinopathy, in both assigned blood pressure treatment groups (ADVANCE was a two-by-two factorial trial of glycemic and blood pressure control), both men and women, and with age above or below median.
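As a reminder, the NNT figures quoted here are simply reciprocals of the absolute risk difference between the treatment arms; the sketch below illustrates the arithmetic with made-up event rates rather than the actual ADVANCE data.

```python
# NNT = 1 / absolute risk reduction (illustrative rates only, not trial data)
def number_needed_to_treat(risk_standard, risk_intensive):
    return 1.0 / (risk_standard - risk_intensive)

# e.g. a hypothetical outcome falling from 1.0% to 0.35% over 5 years:
print(round(number_needed_to_treat(0.010, 0.0035)))  # about 154 patients treated for 5 years per event avoided
```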
On the basis of these data, the authors suggest that intensive glycemic control (presumably A1c <6.5% as targeted in ADVANCE) may be a useful strategy to prevent the development of ESRD in patients with type 2 diabetes. Before widespread adoption of such a strategy, the reader should consider some important aspects of this post hoc analysis. First, despite an apparent reduction in the risk of ESRD with intensive glycemic control, there was no significant effect on serum creatinine over time—only a non-significant trend toward more frequent doubling of serum creatinine (to >200μmol/l) in intensively treated patients (HR 1.15, P=0.42). The authors propose that doubling of serum creatinine may be an imprecise ‘surrogate’ for progression of diabetic nephropathy to ESRD, as it may capture patients suffering acute kidney injury due to sepsis, shock, and so on. In support of this hypothesis, there was a non-significant trend toward lower risk of sustained doubling of serum creatinine with intensive glycemic control (HR 0.83,P=0.38). Nonetheless, the possibility remains that other factors besides intensive glycemic control per se contributed to the apparent reduction of the risk of ESRD in the treatment group. For example, patients in the (unblinded) intensive treatment arm might have been observed more closely by their treating physician—which in turn could have reduced the risk of acute kidney injury or its consequences. Second, the number of patients who developed ESRD during ADVANCE was exceedingly low (27 events in 11,140 patients=0.24%). This contributed to the high NNT (445 patients to prevent one case of ESRD over 5 years)—although the NNT was lower in patients with more advanced CKD at baseline, likely because of the higher absolute risk in this group. Despite the high quality of the analyses, the small number of events may reduce confidence in the findings. The results of ADVANCE and the current analysis by Perkovic et al.2 must be interpreted in the context of other randomized controlled trials that have assessed the impact of intensive glycemic control on clinically relevant kidney outcomes. ACCORD was similar to ADVANCE and enrolled 10,251 patients with type 2 diabetes and high cardiovascular risk (>40 years old with known cardiovascular disease, or >55 years old with anatomical evidence of significant atherosclerosis, albuminuria, left ventricular hypertrophy, or two cardiovascular risk factors), with randomization to conventional glycemic control (A1c 7.0–7.9%) versus an even more intensive regimen targeting A1c <6%.4 This trial was terminated early (after 3.5 years) because of increased mortality in intensively treated patients. However, a subsequent analysis reported on renal end points at trial’s end.5 Intensive glycemic control resulted in lower A1c at one year (median 6.4 vs. 7.5%) and, similarly to ADVANCE, resulted in a 20–30% reduction in the risk of new-onset micro- and macroalbuminuria, but no reduction in the risk of doublings in serum creatinine (in fact, a significant increase: HR 1.07, P=0.016)—and, in contrast to the results reported by Perkovic et al.,2 no decrease in ESRD (HR 0.95, 95% CI 0.73–1.24, P=0.71). 
The most recent randomized trial of intensified glycemic control, VADT, was published in 2009 and enrolled 1791 military veterans with long-standing type 2 diabetes (mean duration 11.5 years), 40% of whom had known cardiovascular disease.6 Patients were randomized to standard therapy versus intensified glycemic control to decrease A1c by 1.5%; A1c between groups was 8.4 vs. 6.9%. After a median of 5.6 years, there was no difference in the risk of mortality or microvascular end points, other than a reduced risk of progression of albuminuria; the risks of doubling of serum creatinine and stage 5 CKD were similar between groups (P=0.99 and P=0.35, respectively). What can we conclude about the effect of glycemic control on diabetic nephropathy in type 2 diabetes, and, more broadly, on patient survival and cardiovascular events? We believe that it is reasonable and generally safe to target an A1c of 7%. The UK Prospective Diabetes Study (UKPDS) showed that early, more intensive glycemic control (A1c of 7.0 vs. 7.9%) in patients with newly diagnosed type 2 diabetes safely reduced microalbuminuria and doubling of serum creatinine (as well as retinopathy).7 So, should clinicians routinely target an A1c of 6–7% or lower? As discussed, high-quality data from ACCORD, VADT, and ADVANCE all demonstrate that this strategy will improve proteinuria-based surrogate outcomes, but only the post hoc analysis of ADVANCE by Perkovic et al.2 suggests that such an intensive strategy may reduce the clinically relevant outcome of ESRD. However, adopting an intensive glycemic control strategy may also pose risks to patients. A1c targets below 6.5% led to increased mortality (largely due to myocardial infarction) in ACCORD and had no significant effect on cardiovascular events or mortality in ADVANCE. One may speculate that patients enrolled in the latter two trials, most of whom had established type 2 diabetes and major cardiovascular risk factors, were more susceptible to the adverse consequences of hypoglycemia, as opposed to the younger, healthier UKPDS participants. Similarly, observational data suggest that achieved A1c <6.5% is associated with excess mortality in patients with diabetes and established CKD.8 Thus, intensive glycemic control appears to have both risks and benefits—and despite the important findings of Perkovic et al.,2 this strategy cannot be broadly recommended at present. Current data do not allow clinicians to confidently identify patients in whom the risk-to-benefit ratio of tighter glycemic control is especially favorable. Until such data are available, we suggest that an A1c target <6.5% for type 2 diabetes should be used cautiously, if at all—perhaps only in well-informed patients who are younger, at lower risk for hypoglycemia, and free of symptomatic cardiovascular disease. OLA COMMENTARY: This is an excellent and balanced review of the recent publication on a post hoc analysis of ADVANCE, putting it in the context of other intensive v conventional glycemia control studies in T2DM. It highlights the facts that in high risk T2DM patients: 1. Intensive glycemia control with HbA1c <7% is either associated with no CVD benefit or increased mortality (ACCORD) 2. Intensive glycemia control has NO effect on renal HARD ENDPOINTS such as decline of GFR or incidence of ESRD 3. Observational studies also suggest increased mortality with HbA1c <6.5% The study commented upon by Perkovic et al. also highlights a number of issues: 1.
The concern about endless mining of data to find positive results in post hoc analyses. These are at best hypothesis-generating (difficult when considerable data suggest the opposite...) or at worst futile and misleading exercises. 2. The distinction between statistical analysis and the true clinical value of such observations; a p value <0.05 but a number needed to treat of 445 patients over 5 years to prevent 1 ESRD event!!!!! 3. The use of serum creatinine as a marker of progressive CKD/DN in elderly patients with CVD and a tendency to sarcopenia; thus further dissociating changes in sCr (= eGFR) from true measured GFR and true progression of Diabetic nephropathy. Intensive glycemia control with its induced side effects and increased morbidity may be associated with a fall in sCr, hence the apparently stable sCr in the Perkovic study in spite of possible worsening of true GFR/kidney function....?! 4. The use of ESRD, in the absence of measured GFR, as a hard endpoint; this can be misleading and observer biased, as the decision to start RRT often has subjective elements to it; a good example of such dissociation was seen in the REIN studies where, in patients with reduced GFR (<45), Ramipril had NO effect on measured GFR decline but decreased the number of those reaching ESRD....!! http://www.ncbi.nlm.nih.gov/pubmed/10437863 5. Glycemia control may improve/lower sCr through improvement of the impaired tubular secretion of Cr observed in and associated with DM. http://www.ncbi.nlm.nih.gov/pubmed/15882297 http://www.ncbi.nlm.nih.gov/pubmed/12753302 6. The disconnect between reduction of albuminuria and progression of diabetic nephropathy/CKD; most of the studies show that intensive glycemia control benefits albuminuria but NOT the progression of kidney function decline. This may reflect that glycemia control can affect albuminuria in many ways unconnected to slowing the decline in GFR. 7. Lowering glycemia can improve urinary albumin excretion in the following ways: a. Increasing CVD morbidity and decreasing protein intake due to poor health; this would in turn reduce albuminuria, which is often proportional to the protein intake. b. Affecting glycation and charge of albumin, which in turn decrease its filtration and reabsorption rates. http://www.ncbi.nlm.nih.gov/pubmed/9187409 http://www.ncbi.nlm.nih.gov/pubmed/8477883 c. Improving peritubular circulation and improving proximal tubular reabsorption of albumin; many believe that microalbuminuria in DM is a reflection of vascular damage and a fall in peritubular capillary perfusion impacting/decreasing proximal tubular reabsorption of albumin. http://www.ncbi.nlm.nih.gov/pubmed/22734110 http://www.ncbi.nlm.nih.gov/pubmed/21401356 ALTOGETHER, IT IS MISLEADING TO CLAIM PROTECTION FROM PROGRESSIVE DIABETIC NEPHROPATHY THROUGH THE REDUCTION IN ALBUMINURIA, AN UNRELATED BIOMARKER. IT IS MISLEADING TO EQUATE CHANGES IN SCR INDEPENDENTLY OF THE HARD ENDPOINT OF MEASURING GFR. START OF RRT/ESRD IS DIFFICULT TO INTERPRET IN THE ABSENCE OF HARD DATA RELATING TO THE RATE OF DECLINE OF KIDNEY FUNCTION. [ Modified: Thursday, 1 January 1970, 1:00 AM ] Anyone in the world Prevalence of Diagnosed Cancer According to Duration of Diagnosed Diabetes and Current Insulin Use Among U.S. Adults With Diagnosed Diabetes: Findings from the 2009 Behavioral Risk Factor Surveillance System. Authors: CHAOYANG LI, GUIXIANG ZHAO, XIAO-JUN WEN, EARL S. FORD, LINA S.
BALLUZ. Diabetes Care, Publish Ahead of Print, published online January 8, 2013. This article, recently published in Diabetes Care, raises suspicions about long-term insulin use in T2DM and cancer. The aim of the study was to determine whether longer duration of diagnosed diabetes and current insulin use are associated with increased prevalence of cancer among adults with diagnosed diabetes. The authors analyzed a large population-based sample from the 2009 Behavioral Risk Factor Surveillance System (BRFSS) in the U.S. The BRFSS is a standardized telephone survey that assesses key behavioral risk factors, lifestyle habits, and chronic illnesses and conditions among adults aged ≥18 years in all U.S. RESULTS: There were a total of 34,424 adults with diagnosed diabetes participating in the survey with the diabetes module. Of them, 8,460 had missing data on diabetes age, insulin use, and selected covariates. Among adults with diagnosed diabetes and with complete data on cancer and diabetes-related covariates (n = 25,964), there were 11,165 men (weighted percentage, 52.8%), 18,673 NH whites (65.3%), 3,575 NH blacks (16.0%), 2,348 Hispanics (13.1%), and 1,368 participants with NH other race/ethnicity (5.6%). Approximately 4.7% of adults with diagnosed diabetes were estimated to have type 1 diabetes (n = 491 men and 721 women), 70.5% were type 2 diabetic without current insulin use (n = 7,820 men and 10,475 women), and 24.8% were type 2 diabetic with current insulin use (n = 2,854 men and 3,603 women). The mean age was 58.6 years (median 59.0 years). The mean age at diabetes diagnosis was 47.6 years (49.0 years). The unadjusted prevalence for cancers of all sites among men with type 2 diabetes and current insulin use was higher than those with either type 1 diabetes (P < 0.001) or those with type 2 diabetes and no current insulin use (P < 0.001) among both men and women. After adjustment for age, the difference in the prevalence estimates for cancers of all sites remained between adults with type 2 diabetes with current insulin use and those with type 2 diabetes with no current insulin use among men (P < 0.001) and women (P < 0.001). Among both men and women with type 2 diabetes, the prevalence estimates for cancers of all sites were significantly higher among those who had diabetes >15 years than among those who had diabetes <15 years after adjustment for all selected covariates. Specifically, the prevalence was estimated to be significantly higher among adults who had diabetes ≥15 years for colon cancer, melanoma, nonmelanoma skin cancer, and cancer of the urinary tract among men, and cancers of the breast, female reproductive tract, and skin among women, than those who had diabetes <15 years. Among both men and women with type 2 diabetes, the prevalence estimate for cancers of all sites was ~1.5 times higher among those who used insulin than those who did not use insulin after adjustment for demographic characteristics and selected health risk factors. Current insulin use remained significantly associated with increased prevalence of cancers of all sites among both men and women, and increased prevalence of skin cancer (both melanoma and nonmelanoma) among men and cancer of the reproductive tract. 1. The relation between insulin use and cancer needs more attention. Further research may be warranted.
2. The major strength of this study was the use of a large population-based sample, which enabled investigators to provide stable estimates of cancer prevalence among adults with diabetes in the general population. 3. There were also several limitations: the study is a cross-sectional study in which persons who self-reported diagnosed cancer were cancer survivors and included those who were newly diagnosed and those who had a preexisting condition. Persons who died of cancer were excluded in this self-reported cross-sectional survey. Therefore, these results based on the prevalence of diagnosed cancer suggest cross-sectional associations and preclude causal associations between duration of diagnosed diabetes or current insulin use and cancer. 4. Age at diagnosis of diabetes or cancer, current insulin use, and cancer types were self-reported by survey participants; thus, recall bias may be possible. 5. Duration of diagnosed diabetes may not represent actual duration of exposure to diabetes because people may be asymptomatic for many years before medical diagnosis. However, and again, this study raises a red flag about a possible association between insulin and cancer, as was raised before for Lantus. Professor Alaa Sabry [ Modified: Thursday, 1 January 1970, 1:00 AM ] Anyone in the world American Society for Critical Care Medicine (Puerto Rico): January 22, 2013 GFRs Overestimated in ICU Patients with AKI Glomerular filtration rates (GFRs) of critically ill patients with acute kidney injury (AKI) are routinely overestimated, data presented at the Society for Critical Care Medicine's 2013 annual meeting suggest. Investigators believe urine output should be used instead of creatinine-based equations to assess kidney function in oligoanuric ICU patients. The average baseline serum creatinine level was 0.9 mg/dL, and 10% of subjects had a documented history of chronic kidney disease. On each of the first four days of AKI, patients were between 1.8 and 3.7 liters fluid positive. Ten percent of the patients were prescribed trimethoprim. The researchers assumed that the patients had a true GFR of less than 15 mL/min/1.73 m2. They compared this to the patients' estimated GFRs (eGFRs) calculated from six existing equations. The equations were the Cockcroft-Gault using actual body weight (CG-ABW), Cockcroft-Gault using ideal body weight (CG-IBW), Jelliffe, Modified Jelliffe, the four-variable Modification of Diet in Renal Disease (MDRD-4) study formula, and the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equations. Results of all six equations significantly overestimated GFR, even after the researchers adjusted for patients' daily variation in creatinine clearance. The closest approximation of the true GFR was given by the CG-IBW, which yielded a day-adjusted eGFR of 32 ml/min/1.73m2. The next-most accurate was the CG-ABW, with a day-adjusted eGFR of 51 ml/min/1.73m2. The least accurate was the Jelliffe equation, with a day-adjusted eGFR of 65 ml/min/1.73m2. Statistically and clinically significant overestimation of true GFR persisted out to the fourth day of AKI. The findings echo those of previous studies. For example, a multicenter observational study published in 2010 showed the CG-ABW, MDRD and Jelliffe equations overestimated urinary creatinine clearance by 80%, 33% and 10%, respectively (Nephrol Dial Transplant 2010;25:102-107). Clearly, this doesn't fully appreciate that: 1. eGFR (Regardless of the Cr-based formula used) is NOT applicable to AKI! 2.
eGFR is NOT applicable to non-steady state situations! 3. eGFR is NOT applicable to sick patients with malnutrition and sarcopenia! 4. Serum Creatinine is an UNRELIABLE marker of true GFR/Kidney Function in AKI! Other Biomarkers are not much better, and a circular argument goes that they rise before serum Cr goes up....but serum creatinine is an unreliable marker of AKI...so in the absence of a gold-standard biomarker for AKI, clinical judgement is key to the Diagnosis and Management of AKI! http://ndt.oxfordjournals.org/content/early/2012/10/31/ndt.gfs380.full.pdf+html [ Modified: Thursday, 1 January 1970, 1:00 AM ] Anyone in the world Mohan and colleagues in the December issue of JASN report on the prognostic value of pre-transplant DSA (Donor Specific Antibodies) measured by solid phase assays (SPA) in relation to renal allograft outcomes. Mohan et al. JASN 2012;23:2061-2071. They undertook a systematic review of cohort studies comprising a total of 1119 patients including 145 with isolated DSA-SPA. They noted that in the presence of a negative complement-dependent cytotoxicity (CDC) crossmatch, a positive DSA-SPA (such as Luminex) doubles the risk of antibody-mediated rejection (AMR) and increases the risk of long-term graft failure. This suggests that recipients should be checked for DSA regardless of a negative crossmatch. Of interest, the negative impact of a positive DSA-SPA test at transplantation on outcomes was noted regardless of the SPA titre (mean fluorescence index: MFI, low 1000). This observation is in agreement with previous publications: For instance, Lefaucheur and colleagues in 2010 showed that patients with MFI >6000 had >100-fold higher risk for AMR compared to those with MFI <465. The presence of HLA-DSA did not affect patient survival. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2938596/ Gary Hill and colleagues in Paris showed that DSA+ recipients have a threefold increased incidence of renal arteriosclerosis. Such accelerated arteriosclerosis was noted early within the course of the allograft (3-12 months after transplantation). http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3083319/ 1. Interesting observation consistent with the literature on the topic. 2. How would routine DSA-SPA screening affect outcomes? This may stratify patients into higher risk warranting stronger initial/induction immunosuppression with ATG/Alemtuzumab. It may also trigger closer monitoring of DSAs after transplantation and lead to related maintenance immunosuppression strategies. Alternatively, DSA screening would be undertaken if and when AMR is suspected; onset of albuminuria for instance! 3. Is routine DSA-SPA of all transplant recipients cost effective? 4. Are all DSAs harmful? 5. Does detection of DSA-SPA antibodies of unknown significance create unnecessary investigations, excessive testing and unnecessary patient anxiety? Is this a luxury few renal transplantation centres can afford or is it a game changer in renal transplantation? [ Modified: Thursday, 1 January 1970, 1:00 AM ] Anyone in the world Professor Richard Glassock wrote: Thank you, Dr Khwaja, for your passionate plea for declaring that access to high-quality healthcare (including life-extending procedures such as dialysis and transplantation for those who suffer from ESRD) is a human right, not a privilege. Such a position was codified 65 years ago in the Universal Declaration of Human Rights adopted by the UN General Assembly (Article 27)—mentioning adequate but not high-quality health care and avoiding the issue of cost.
In the face of limited resources, difficult choices must be made on how to extend this right of access to adequate health care to the maximum extent possible across the broad spectrum of health problems in a given society (the burden of disease). Surely the socio-economic status of individuals suffering from the consequences of ill-health should never be a criterion for making such difficult choices in a civilized society. Nevertheless, the winners in the lottery of life (the affluent) will always have a privileged status with regard to their access to the health care game. What you have written about is the dilemma of what to do for those who did not win the lottery of life, by virtue of birth or circumstances. Societies and the governments they form must make these difficult choices for the populations they are entrusted to serve (or oppress as the case may be). We all recognize that chronic kidney disease (CKD) and its end-stages can be a debilitating and devastating development for individuals, but on a population basis it ranks rather low compared to other common non-communicable health issues, and it tends to disproportionately affect the elderly. According to the Global Burden of Diseases (2010) study recently reported in Lancet (volume 380, December 15, 22, 29, 2012) in a landmark series of papers, CKD ranked 39th for years lived with disability among 289 diseases and injuries (average of 58 years lived in disability per 100,000 population--low back pain and major depression ranked 1st and 2nd). In 2010 CKD ranked 24th in a list of 235 causes of death (up from 32nd in 1990) in terms of global years of life lost, but 7th among non-communicable diseases (up from 10th in 1990). Not surprisingly, ischemic heart disease ranked 1st in this category. Among the top 10 ranked disorders in terms of global years of life lost, 6 were communicable, 3 were non-communicable and 1 was related to injury. The global ranking of CKD (including ESRD), in terms of years of life lost, ranged from 6th (in Central Latin America) to 36th (in Central sub-Saharan Africa). While these details do not truly reflect the degree of human suffering brought about by CKD or any other disease, they do provide a useful perspective in the challenging arena of choice-making from the societal standpoint. Resource-rich regions such as North America and Western Europe have adopted a variety of strategies to deal with the burden of disease in their unique regions. The United Kingdom adopted a strategy of universal access and “free” at the point-of-care for all of its citizens after WW II (The National Health Service; NHS); whereas the United States more recently adopted a non-universal, capitalistic (free-market) framework, focusing on the elderly, the disabled and the poor. The Affordable Care Act (“Obamacare”) is extending this reach into a broader range of its citizens, but it still does not approach the NHS in terms of universality of access, except in the arena of ESRD care. Many resource-poor countries have naturally focused on common health issues arising from communicable diseases, such as water potability, vaccination, and endemic infectious diseases (e.g. HIV and Malaria). The ever-present threats or realities of war have also had a bearing on allocation of scarce resources for health.
Many countries are now in transition from a pre-occupation with communicable diseases to the non-communicable ones, especially as their populations age, consequent to lower birth rates and better control of life-threatening infectious disease. Yes, the percentage of gross domestic product allocated to diagnosis and treatment of disease varies widely among the countries of the world. The large amounts of money spent in resource-rich countries do not always result in a uniformly high quality of life and excellent outcomes of care. Also, in a capitalistic society there is always the opportunity for fraud and abuse, and in socialist schemes the implicit risk of rationing by the queue. Universal care cannot be equated with "free" care - it is merely a formalized way of redistributing capital in the form of taxation policies. As we are learning in the USA, if we are to guarantee access to high-quality health care for everybody, either our taxes must increase or the cost of the care provided, in aggregate, must come down. The latter means fewer units of care and/or a lower cost per unit. Where does care for CKD or ESRD fit in this new equation, and how will other less affluent populations grapple with the disconnect between the burden of care, in its varied forms, and the ability of governments (or individuals) to sustain the funding of care, without the prospect of insolvency? A middle ground must be sought, but some form of rationing, implicit or explicit, seems inevitable. In the case of CKD and ESRD, like ischemic heart disease, stroke and COPD, an effort to prevent disease or slow its progression is certainly a wise choice, considering the alternatives. Improvements in the care of patients with ESRD already under treatment with dialysis or transplantation will improve the quality of life, but will also steadily increase the number of patients treated (like better survivorship with cancer chemotherapy), at least until some new balance is achieved between incidence rates and death rates among the treated population. Global screening of asymptomatic persons for the presence of CKD in the population as a whole does not seem to be a viable option at present, but efforts to detect and control obesity, diabetes and hypertension may be a cost-effective way of lowering the burden of CKD in vulnerable populations, and would have the added benefits of addressing issues in ischemic heart disease, stroke, blindness, amputations and congestive heart failure that contribute so much to the global burden of disease. Such an approach need not have CKD as its central theme. An organized, coherent, simple and universally agreed-upon system of classification, nosology and staging of CKD is a highly desirable goal - and much progress has been made by KDOQI in 2002 and KDIGO in 2012. However, this system must, in the final analysis and from the perspective of optimal allocation of resources in rich and poor countries alike, accurately identify those individuals most likely to benefit from interventions, and at the lowest achievable cost. There should be a low tolerance for both "false positives" and "false negatives", especially when disease labeling can have untoward consequences and when erroneous reassurance leads to damaging delays in appropriate treatment.
You make a plea that organized Nephrology couple their advocacy for logical classification of CKD (largely based on prognosis) and clinical guidelines with a strong message that care for patients with kidney disease be universally available, publicly supported ("free" at the point of care) and of the highest quality. Assuredly, you must recognize that such advocacy, on a global stage, creates the necessity for agonizingly difficult decisions involving prioritization among a list of equally or more pressing problems of health in an environment of limited or soon-to-be limited resources. In addition, other social issues such as education, poverty, and war and its prevention compete with health issues for resources. Surely access to adequate health care is a right, and not a privilege for the fortunate few, but the expression of this right by populations, through their governments, should be leavened by reason and by the ethical principle of providing the greatest good for the largest number, without consideration of the social worth of the individual. Physicians adhere to the traditional medical ethic of "rendering to each patient a full measure of service and devotion". Foregoing such a "full measure" can be easily justified when the treatment is useless or unnecessary. Similar decisions can be fraught with much difficulty (and risk) when such a "full measure" competes with broader social issues. It is the tension in this complex arena that you address in your poignant and passionate essay. Richard J. Glassock, MD, MACP
2020-08-14 08:01:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2935382127761841, "perplexity": 7167.33368186378}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439739182.35/warc/CC-MAIN-20200814070558-20200814100558-00577.warc.gz"}
https://cstheory.stackexchange.com/questions/36957/pair-of-vertex-disjoint-cycles-in-a-directed-graph
# Pair of vertex disjoint cycles in a directed graph

What is the fastest known deterministic algorithm that can recognize directed graphs with a pair of vertex-disjoint cycles? I know graphs with minimum out-degree three always have such a pair (Thomassen '83), but even so I cannot find an efficient algorithm in the general case. Does anyone know a reference for this?

• For undirected graphs, it is NP-complete to recognize graphs whose vertex set is partitionable into two equal-size vertex-disjoint cycles. – Mohammad Al-Turkistany Nov 11 '16 at 18:19
• The characterization for undirected graphs is also non-trivial, due to Lovasz, and can be found e.g. here: arxiv.org/abs/1601.03791. – domotorp Nov 11 '16 at 19:51

According to Grohe and Grüber, "Parameterized approximability of the disjoint cycle problem" (ICALP 2007), there is an algorithm for finding $k$ vertex-disjoint cycles in a digraph, in time $n^{f(k)}$ for some function $f$ (polynomial for fixed $k$ but not FPT), in Section 5 of Reed, Robertson, Seymour and Thomas, "Packing directed circuits" (Combinatorica 1996) (which in turn uses Theorem 3 of "The directed subgraph homeomorphism problem" of Fortune, Hopcroft, and Wyllie.)

For a strongly connected digraph $H$ and a general digraph $G$, there is an algorithm which runs in $|G|^{f(k+|H|)}$ and finds $k$ disjoint butterfly models of $H$ in $G$ if they exist. For finding two disjoint cycles we have $|H|=1, k=2$. This is a direct consequence of the algorithmic proof of Theorem 4.3 in
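For readers who just want to experiment with the decision problem on small digraphs, here is a brute-force sketch in Python using networkx. It is not the polynomial-time method referenced above; it simply enumerates all simple cycles (exponential in general) and checks pairs for disjointness. The example graph at the end is an illustrative assumption, not taken from the question.

```python
# Brute force: enumerate simple cycles and test pairs for vertex-disjointness.
# Only meant to illustrate the decision problem on small directed graphs.
import networkx as nx
from itertools import combinations

def has_two_vertex_disjoint_cycles(G: nx.DiGraph) -> bool:
    cycles = list(nx.simple_cycles(G))   # Johnson's algorithm; exponential output in general
    return any(set(c1).isdisjoint(c2) for c1, c2 in combinations(cycles, 2))

# Two disjoint 2-cycles: 1<->2 and 3<->4
G = nx.DiGraph([(1, 2), (2, 1), (3, 4), (4, 3)])
print(has_two_vertex_disjoint_cycles(G))   # True
```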
2020-07-09 11:53:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7487819194793701, "perplexity": 548.6190126388118}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655899931.31/warc/CC-MAIN-20200709100539-20200709130539-00187.warc.gz"}
https://en.wikipedia.org/wiki/Multilayer_perceptron
# Multilayer perceptron A multilayer perceptron (MLP) is a class of feedforward artificial neural network. An MLP consists of at least three layers of nodes. Except for the input nodes, each node is a neuron that uses a nonlinear activation function. MLP utilizes a supervised learning technique called backpropagation for training.[1][2] Its multiple layers and non-linear activation distinguish MLP from a linear perceptron. It can distinguish data that is not linearly separable.[3] Multilayer perceptrons are sometimes colloquially referred to as "vanilla" neural networks, especially when they have a single hidden layer.[4] ## Theory ### Activation function If a multilayer perceptron has a linear activation function in all neurons, that is, a linear function that maps the weighted inputs to the output of each neuron, then linear algebra shows that any number of layers can be reduced to a two-layer input-output model. In MLPs some neurons use a nonlinear activation function that was developed to model the frequency of action potentials, or firing, of biological neurons. The two common activation functions are both sigmoids, and are described by ${\displaystyle y(v_{i})=\tanh(v_{i})~~{\textrm {and}}~~y(v_{i})=(1+e^{-v_{i}})^{-1}}$. The first is a hyperbolic tangent that ranges from -1 to 1, while the other is the logistic function, which is similar in shape but ranges from 0 to 1. Here ${\displaystyle y_{i}}$ is the output of the ${\displaystyle i}$th node (neuron) and ${\displaystyle v_{i}}$ is the weighted sum of the input connections. Alternative activation functions have been proposed, including the rectifier and softplus functions. More specialized activation functions include radial basis functions (used in radial basis networks, another class of supervised neural network models). ### Layers The MLP consists of three or more layers (an input and an output layer with one or more hidden layers) of nonlinearly-activating nodes making it a deep neural network. Since MLPs are fully connected, each node in one layer connects with a certain weight ${\displaystyle w_{ij}}$ to every node in the following layer. ### Learning Learning occurs in the perceptron by changing connection weights after each piece of data is processed, based on the amount of error in the output compared to the expected result. This is an example of supervised learning, and is carried out through backpropagation, a generalization of the least mean squares algorithm in the linear perceptron. We represent the error in output node ${\displaystyle j}$ in the ${\displaystyle n}$th data point (training example) by ${\displaystyle e_{j}(n)=d_{j}(n)-y_{j}(n)}$, where ${\displaystyle d}$ is the target value and ${\displaystyle y}$ is the value produced by the perceptron. The node weights are adjusted based on corrections that minimize the error in the entire output, given by ${\displaystyle {\mathcal {E}}(n)={\frac {1}{2}}\sum _{j}e_{j}^{2}(n)}$. Using gradient descent, the change in each weight is ${\displaystyle \Delta w_{ji}(n)=-\eta {\frac {\partial {\mathcal {E}}(n)}{\partial v_{j}(n)}}y_{i}(n)}$ where ${\displaystyle y_{i}}$ is the output of the previous neuron and ${\displaystyle \eta }$ is the learning rate, which is selected to ensure that the weights quickly converge to a response, without oscillations. The derivative to be calculated depends on the induced local field ${\displaystyle v_{j}}$, which itself varies. 
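The node-level derivatives are worked out below; as a concrete companion to the error function and gradient-descent update just described, here is a minimal NumPy sketch of a one-hidden-layer MLP trained with the same squared-error loss. The layer sizes, learning rate, and XOR data are illustrative choices, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, which a single-layer perceptron cannot separate
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
d = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden tanh layer and a logistic output (both activations appear above)
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)
eta = 0.5   # learning rate

for _ in range(5000):
    # Forward pass: v = weighted sums ("induced local fields"), y = activations
    v1 = X @ W1 + b1
    y1 = np.tanh(v1)
    v2 = y1 @ W2 + b2
    y = 1.0 / (1.0 + np.exp(-v2))

    # Error e(n) = d(n) - y(n) and local gradients delta = -dE/dv
    e = d - y
    delta2 = e * y * (1.0 - y)                 # output node: e * phi'(v)
    delta1 = (delta2 @ W2.T) * (1.0 - y1**2)   # hidden node: backpropagated sum

    # Gradient-descent updates: delta_w = eta * delta * (output of previous neuron)
    W2 += eta * (y1.T @ delta2); b2 += eta * delta2.sum(axis=0)
    W1 += eta * (X.T @ delta1);  b1 += eta * delta1.sum(axis=0)

print(y.round(2).ravel())   # should approach [0, 1, 1, 0]
```

The two delta terms in this sketch correspond to the output-node and hidden-node derivatives that are derived next.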
It is easy to prove that for an output node this derivative can be simplified to ${\displaystyle -{\frac {\partial {\mathcal {E}}(n)}{\partial v_{j}(n)}}=e_{j}(n)\phi ^{\prime }(v_{j}(n))}$ where ${\displaystyle \phi ^{\prime }}$ is the derivative of the activation function described above, which itself does not vary. The analysis is more difficult for the change in weights to a hidden node, but it can be shown that the relevant derivative is ${\displaystyle -{\frac {\partial {\mathcal {E}}(n)}{\partial v_{j}(n)}}=\phi ^{\prime }(v_{j}(n))\sum _{k}-{\frac {\partial {\mathcal {E}}(n)}{\partial v_{k}(n)}}w_{kj}(n)}$. This depends on the change in weights of the ${\displaystyle k}$th nodes, which represent the output layer. So to change the hidden layer weights, the output layer weights change according to the derivative of the activation function, and so this algorithm represents a backpropagation of the activation function.[5]

## Terminology

The term "multilayer perceptron" does not refer to a single perceptron that has multiple layers. Rather, it contains many perceptrons that are organised into layers. An alternative name is "multilayer perceptron network". Moreover, MLP "perceptrons" are not perceptrons in the strictest possible sense. True perceptrons are formally a special case of artificial neurons that use a threshold activation function such as the Heaviside step function. MLP perceptrons can employ arbitrary activation functions. A true perceptron performs binary classification (either this or that), whereas an MLP neuron is free to perform either classification or regression, depending upon its activation function. The term "multilayer perceptron" was later applied without regard to the nature of the nodes/layers, which can be composed of arbitrarily defined artificial neurons, and not perceptrons specifically. This interpretation avoids the loosening of the definition of "perceptron" to mean an artificial neuron in general.

## Applications

MLPs are useful in research for their ability to solve problems stochastically, which often allows approximate solutions for extremely complex problems like fitness approximation. MLPs are universal function approximators, as shown by Cybenko's theorem,[3] so they can be used to create mathematical models by regression analysis. As classification is a particular case of regression when the response variable is categorical, MLPs make good classifier algorithms. MLPs were a popular machine learning solution in the 1980s, finding applications in diverse fields such as speech recognition, image recognition, and machine translation software,[6] but thereafter faced strong competition from much simpler (and related[7]) support vector machines. Interest in backpropagation networks returned due to the successes of deep learning.

## References

1. ^ Rosenblatt, Frank. Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms. Spartan Books, Washington DC, 1961.
2. ^ Rumelhart, David E., Geoffrey E. Hinton, and R. J. Williams. "Learning Internal Representations by Error Propagation". In David E. Rumelhart, James L. McClelland, and the PDP research group (editors), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 1: Foundations. MIT Press, 1986.
3. ^ a b Cybenko, G. 1989. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals, and Systems, 2(4), 303–314.
4. ^ Hastie, Trevor. Tibshirani, Robert. Friedman, Jerome.
The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, New York, NY, 2009. 5. ^ Haykin, Simon (1998). Neural Networks: A Comprehensive Foundation (2 ed.). Prentice Hall. ISBN 0-13-273350-1. 6. ^ Neural networks. II. What are they and why is everybody so interested in them now?; Wasserman, P.D.; Schwartz, T.; Page(s): 10-15; IEEE Expert, 1988, Volume 3, Issue 1 7. ^ R. Collobert and S. Bengio (2004). Links between Perceptrons, MLPs and SVMs. Proc. Int'l Conf. on Machine Learning (ICML).
2017-07-21 05:30:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 19, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7238291501998901, "perplexity": 1235.4189549261919}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423716.66/warc/CC-MAIN-20170721042214-20170721062214-00653.warc.gz"}
http://graphdb.ontotext.com/documentation/free/faq.html
# FAQ

Where does the name "OWLIM" (the former GraphDB name) come from?
The name originally came from the term "OWL In Memory" and was fitting for what later became OWLIM-Lite. However, OWLIM-SE used a transactional, index-based file-storage layer where "In Memory" was no longer appropriate. Nevertheless, the name stuck and it was rarely asked where it came from.

What kind of SPARQL compliance is supported?
All GraphDB editions support:

Is GraphDB Jena-compatible?
Yes, GraphDB is compatible with Jena 2.7.3 with a built-in adapter. For more information, see Using GraphDB with Jena.

What are the advantages of using solid-state drives as opposed to hard-disk drives?
We recommend using enterprise-grade SSDs whenever possible as they provide significantly faster database performance compared to hard-disk drives. Unlike relational databases, a semantic database needs to compute the inferred closure for inserted and deleted statements. This involves making highly unpredictable joins using statements anywhere in its indices. Despite utilising paging structures as best as possible, a large number of disk seeks can be expected, and SSDs perform far better than HDDs in such a task.

How to find out the exact version number of GraphDB?
The major/minor version and patch number are part of the GraphDB distribution .zip file name. They can also be seen at the bottom of the GraphDB Workbench home page, together with the RDF4J, Connectors, and Plugin API's versions. A second option is to run the graphdb -v startup script command if you are running GraphDB as a standalone server (without Workbench). It will also return the build number of the distribution. Another option is to run the following DESCRIBE query in the Workbench SPARQL editor:

DESCRIBE <http://www.ontotext.com/SYSINFO> FROM <http://www.ontotext.com/SYSINFO>

It returns pseudo-triples providing information on various GraphDB states, including the number of triples (total and explicit), storage space (used and free), commits (total and whether there are any active ones), the repository signature, and the build number of the software.

How to retrieve repository configurations?
To see what configuration data is stored in a GraphDB repository, go to Repositories and use the Download repository configuration as Turtle icon. Then you can open the resulting file, named repositoryname-config.ttl.

Why can't I use my custom rule file (.pie) - an exception occurred?
To use custom rule files, GraphDB must be running in a JVM that has access to the Java compiler. The easiest way to do this is to use the Java runtime from a Java Development Kit (JDK).

How to speed up a slow repository with security enabled, when each request includes HTTP basic authentication?
Every HTTP authentication request takes significant time because of the bcrypt algorithm, which hashes the clear-text password so it can be matched against the hash stored in $GDB_HOME/work/workbench/settings.js. The best solution is to do a one-time authentication and keep the JWT string; JWT has a much lower overhead.

1. Log in and generate the authorization token:

curl -X POST -I 'http://localhost:7200/rest/login/admin' -H 'X-GraphDB-Password: root'

2. Pass the returned JWT key:

curl -H 'Authorization: GDB eyJ1c2VybmFtZSI6ImFkbWluIiwiYXV0aGVudGljYXRlZEF0IjoxNTU3OTIzNTkxNDA0fQ==.OwSkajbUoHHsQGfwvaCxbb1f7bn0PJUeL4VbGEmNcWY=' http://localhost:7200/repositories/SYSTEM/size

The JWT token expires every 30 days.
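As a sketch of the same one-time login flow from a small Python client (the endpoint, header names, and GDB token format are taken from the curl examples above; the rest, including the assumption that the token is returned in the response's Authorization header, is illustrative and not official Ontotext client code):

```python
# Sketch of the one-time JWT login flow from Python.
import requests

BASE = 'http://localhost:7200'

# 1. Log in once; the token is assumed to come back in the response's
#    Authorization header (which is why the curl example above uses -I).
resp = requests.post(BASE + '/rest/login/admin',
                     headers={'X-GraphDB-Password': 'root'})
resp.raise_for_status()
token = resp.headers['Authorization']      # e.g. 'GDB eyJ1c2Vybm...'

# 2. Reuse the token instead of re-sending the password with every request.
size = requests.get(BASE + '/repositories/SYSTEM/size',
                    headers={'Authorization': token})
print(size.text)
```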
To change the encryption algorithm from bcrypt to SHA-256 supported by the older GDB version, update the password token in $GDB_HOME/work/workbench/settings.js with the encrypted value of: password{user} echo root{admin} | sha256sum { "grantedAuthorities": [
2020-01-22 08:34:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2545023560523987, "perplexity": 6247.802838783653}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606872.19/warc/CC-MAIN-20200122071919-20200122100919-00431.warc.gz"}
http://www.numdam.org/item/ASNSP_1997_4_25_3-4_419_0/
Poincaré inequality for some measures in Hilbert spaces and application to spectral gap for transition semigroups Annali della Scuola Normale Superiore di Pisa - Classe di Scienze, Serie 4, Volume 25 (1997) no. 3-4, p. 419-431

@article{ASNSP_1997_4_25_3-4_419_0,
  author = {Da Prato, Giuseppe},
  title = {Poincar\'e inequality for some measures in Hilbert spaces and application to spectral gap for transition semigroups},
  journal = {Annali della Scuola Normale Superiore di Pisa - Classe di Scienze},
  publisher = {Scuola normale superiore},
  volume = {Ser. 4, 25},
  number = {3-4},
  year = {1997},
  pages = {419-431},
  zbl = {1039.60053},
  mrnumber = {1655525},
  language = {en},
  url = {http://www.numdam.org/item/ASNSP_1997_4_25_3-4_419_0}
}

Da Prato, Giuseppe. Poincaré inequality for some measures in Hilbert spaces and application to spectral gap for transition semigroups. Annali della Scuola Normale Superiore di Pisa - Classe di Scienze, Serie 4, Volume 25 (1997) no. 3-4, pp. 419-431. http://www.numdam.org/item/ASNSP_1997_4_25_3-4_419_0/

[1] V.I. Bogachev - M. Röckner - B. Schmuland, Generalized Mehler semigroups and applications, Probability Theory and Related Fields 114 (1996), 193-225. | MR 1392452 | Zbl 0849.60066
[2] A. Chojnowska-Michalik - B. Goldys, On Ornstein-Uhlenbeck Generators, preprint S96-12 of the School of Mathematics, The University of New South Wales, 1996.
[3] G. Da Prato, Null controllability and strong Feller property of Markov transition semigroups, Nonlinear Analysis TMA 25 (1995), 9-10, 941-949. | MR 1350717 | Zbl 0838.60048
[4] G. Da Prato, Characterization of the domain of an elliptic operator of infinitely many variables in L2(μ) spaces, Rend. Acc. Naz. Lincei, to appear. | Zbl 0899.47035
[5] G. Da Prato - J. Zabczyk, "Ergodicity for Infinite Dimensional Systems", London Mathematical Society Lecture Notes, 1996. | MR 1417491 | Zbl 0849.60052
[6] M. Fuhrman, Analyticity of transition semigroups and closability of bilinear forms in Hilbert spaces, Studia Mathematica 115 (1995), 53-71. | MR 1347432 | Zbl 0830.47033
[7] Z.M. Ma - M. Röckner, "Introduction to the Theory of (Non Symmetric) Dirichlet Forms", Springer-Verlag, 1992. | MR 1214375 | Zbl 0826.31001
[8] D.W. Stroock - B. Zegarlinski, The logarithmic Sobolev inequality for discrete spin systems on a lattice, Commun. Math. Phys. 149 (1992), 175-193. | MR 1182416 | Zbl 0758.60070
[9] B. Zegarlinski, The strong exponential decay to equilibrium for the stochastic dynamics associated to the unbounded spin systems on a lattice, preprint. | Zbl 0844.46050
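The abstract itself is not reproduced on this page. For orientation only (a standard formulation, not a quotation from the paper, with notation assumed), the Poincaré inequality referred to in the title typically takes the following form for a probability measure $\mu$, and for symmetric transition semigroups it is equivalent to a spectral gap of the generator:

```latex
% Standard Poincaré inequality for a probability measure \mu (notation assumed, not from the paper)
\int \left(\varphi - \bar{\varphi}\right)^{2} \, d\mu \;\le\; C \int |D\varphi|^{2} \, d\mu ,
\qquad \bar{\varphi} := \int \varphi \, d\mu .
% For the associated symmetric Markov semigroup (P_t)_{t \ge 0} with generator L, this is
% equivalent to a spectral gap of size 1/C, i.e.
% \| P_t \varphi - \bar{\varphi} \|_{L^2(\mu)} \le e^{-t/C} \, \| \varphi - \bar{\varphi} \|_{L^2(\mu)} .
```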
2020-04-10 06:19:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.307950884103775, "perplexity": 2974.8563969440897}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371886991.92/warc/CC-MAIN-20200410043735-20200410074235-00346.warc.gz"}
https://tex.meta.stackexchange.com/questions/1263/rename-the-latex-general-tag-to-latex-project
# Rename the {latex-general} tag to {latex-project}? This is a follow-up discussion to What to do with the 'latex' tag?. New users keep trying to tag their normal LaTeX questions with and auto-completion of the tag input element will make that into . Maybe renaming it to like suggested somewhere already might help reducing this wrong tagging significantly. It points out that the tag is about LaTeX itself. • Hmm, can't seem to find a way to rename a tag. I guess one of the existing 'latex-general' items will need to be retaged as 'latex-project' to create the new tag first. Seems a bit odd. – Joseph Wright Apr 26 '11 at 14:23 • @Joseph: Simply merging to a non-existing tag doesn't work? – Martin Scharrer Apr 26 '11 at 14:24 • @Martin: Nope, I get a SQL error, complaining that 'latex-project' is not in the tags table. – Joseph Wright Apr 26 '11 at 14:25 • @Martin: Feel free to try yourself and see :-) – Joseph Wright Apr 26 '11 at 14:26 • An opportunity arose: job done :-) – Joseph Wright Apr 26 '11 at 14:49 • @Joseph: Make an answer out of it, so I can accept it in addition to the {status-completed} tag. – Martin Scharrer Apr 26 '11 at 14:58 The job is now done: use for general LaTeX questions. • Dang! I'm sure that a tag wiki excerpt for {latex-general} existed -- now it's gone. :-( – lockstep Apr 26 '11 at 15:37 • @lockstep: It was very short, something like 'This tag should be used for questions about LaTeX in general and the LaTeX Project: usually a more specific tag will be more appropriate'. – Joseph Wright Apr 26 '11 at 15:38 • I proposed a (rephrased) tag wiki excerpt. – lockstep Apr 26 '11 at 15:46 I think that this name change is unfortunate and should be changed again. It is very common for (especially new) users to use for any question related to any part of the tex system, whether a problem with a specific package or with an editor or general latex syntax. Almost invariably someone then does a tag edit to remove the tag. On this site this constant retagging is time wasting and confusing but using "latex project" as a name for "anything about TeX" leads to other misunderstandings. In particular we see people coming to http://latex-project.org reporting bugs in contributed packages or looking to obtain a TeX distribution such as miktex or texlive. http://latex-project.org is not for general tex questions or distribution: it is the project site for the core LaTeX code maintainers, so LaTeX2e base, tools, graphics (and amsmath, babel and psnfss) and also the LaTeX3 and expl3 codebases. The stackexchange tag should be reserved for subjects that would be appropriate on that site. EDITS DONE The general LaTeX tags are now for discussion of the latex base distribution, that is the classes and packages covered by https://www.ctan.org/tex-archive/macros/latex/base for discussion of the LaTeX project and its aims (that is the core development sources for LaTeX2e and LaTeX3, including expl3). That is subjects that are in scope for the project website at http://www.latex-project.org. for general LaTeX questions where no more suitable tag exits. this is also used for several legacy questions that were formally tagged with , just to ease the conversion. • Can't we just have latex which the site automatically drops, since .. well, the whole topic of this SE is about latex (and friends). – Johannes_B Nov 6 '15 at 10:20 • @Johannes_B The problem comes when you have a broad question with no other appropriate tag. For example, how would you handle the current top hit (the grandma question)? 
– Joseph Wright Nov 6 '15 at 10:27 • @JosephWright Honestly, the question is off-topic but fun. I would use meta-latex or fun. – Johannes_B Nov 6 '15 at 10:28 • @Johannes_B OK: I'd thought 'fun' or 'promotion'. Probably we do need something for questions about the team side of stuff. – Joseph Wright Nov 6 '15 at 10:30 • @JosephWright rep-generator :-) – David Carlisle Nov 6 '15 at 11:43 • @DavidCarlisle More seriously, probably it's useful to check over the current set of tagged questions an make sure they have a 'place to go'. – Joseph Wright Nov 6 '15 at 11:45 • @JosephWright could do that at the weekend. – David Carlisle Nov 6 '15 at 12:18 • Do you really need to distinguish latex-project from latex-general? Is it really bad (in the view of the L23 teams) that the grandma questions has this tag on it? – yo' Nov 6 '15 at 13:32 • @yo' yes, I think it is bad. – David Carlisle Nov 6 '15 at 14:07 • @yo' we distinguish the project from general use if people send bug reports about contributed packages to the latex bug database (and reject them). Having "latex-project" mean "about latex" on this site only helps to spread the confusion as to what exactly the project maintains (in particular not the binaries, not the distribution, and not the contrib packages) – David Carlisle Nov 6 '15 at 15:12 • As per a chat comment, perhaps latex-misc for cases where there isn't another good tag but where it's not about the project. – Joseph Wright Nov 6 '15 at 16:25 • I don't understand why latex as tag does not fit the bill. This would mean the question is not a plain tex/context one etc... sure, 95% of newcomers have a question about some package, but it is almost always a LaTeX package, hence latex tag is quite natural, and it is weird the site removes it automatically. It does convey some information. – user4686 Nov 6 '15 at 17:52 • @jfbu The issue is that it would apply to a large percentage of the questions on the site, so becomes meaningless (and uses up one tag slot). – Joseph Wright Nov 6 '15 at 18:32 • I, on the other hand, while relatively new to the site, find people trying to ask basic questions about latex tagging them as tex-core, which is utterly wrong. Given that there is already a latex-kernel tag (appropriately, for the kernel), is it possible to assign a tag called "latex-core" or "latex-base", meaning the kernel plus base classes and packages? – texnezio Nov 6 '15 at 18:47 • Hmm @JosephWright I suspect base is a better "public name" since "kernel" is a bit technical (and not really used anywhere outside of internal discussions more likely "format" ) also it could cover anything in base such as inputenc/docstrip, but if you want to leave that and use teh existing one , it's OK with me. – David Carlisle Nov 7 '15 at 9:21 The same thought had occurred to me, though I had the idea of the tag [latex-core] which is clearly not as good a name as yours. There seems to be no advantage at all to the "general" part of the tag, so I support the change. Look how many new users post replies to comments as an answer. Users are still used to general forums, where tagging a question latex is completely allright. Naturally, they want to tag their question here as well, of course with the latex tag. The system is smart and completes that to latex-project, which is in most cases wrong. Why not simply let the users tag the questions with latex and teach our smart system to automatically drop the tag. 
That of course does not mean, that the existense of a latex-general tag is impossible for questions that really deserve the tag. That would work if a user provides multiple tags, but as yo' points out: Many questions have just one single tag, which in that proposed case would mean the question would be untagged. So, not a solution. • No, dropping the tag is a bad idea, mostly because often it will be the only tag in the question. I think that it is a fate to have to re-tag latex-project questions from time to time. This tag is not the only one such, some others include errors, documentation, books etc. – yo' Nov 6 '15 at 13:31 • @yo' Ah, i forgot the possibility of only one tag being aplied. – Johannes_B Nov 6 '15 at 13:33 • +1 but why drop the tag ? I don't understand why having the latex tag on (probably) 95% (or more) of the questions, especially by newcommers, is considered a problem. – user4686 Nov 6 '15 at 19:57 • @jfbu Since the whole SE is about LaTeX, it seems to be a bit redundant. If we keep the tag and retag later (manually), we would be just in the situation we are currently. – Johannes_B Nov 6 '15 at 20:35 • are you sure SE is about LaTeX? that was not my understanding initially. I, naturally, very quickly concluded that LaTeX was the main framework for the overwhelming majority of questions, apart from the very big portion occupied by TikZ/pgf, which anyhow is almost always framed in a LaTeX context too. But there is some proportion of pure TeX questions. And even ConTeXT ones, although that seems to have almost vanished these days. – user4686 Nov 6 '15 at 20:46 • @jfbu Ok, i see your point. But LaTeX really is the most relevant thing, and plain TeX and ConTeXt questions will be retagged later, if needed. As later edited, my suggestion is flawed, so i think there isn't really any big concern. – Johannes_B Nov 6 '15 at 21:10 • @jfbu Definitely not just LaTeX but in percentage terms most questions are LaTeX-related. So tagging every LaTeX question as such would be somewhat overkill. – Joseph Wright Nov 6 '15 at 22:42 • @jfbu Every question is considered a LaTeX question unless tagged tex-core, plain-tex, context, papeeria, etc. – yo' Nov 9 '15 at 10:04
2019-11-21 23:50:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.766579270362854, "perplexity": 2665.730468614686}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671053.31/warc/CC-MAIN-20191121231600-20191122015600-00110.warc.gz"}
https://indico.cern.ch/event/192695/contributions/353544/
# TIPP 2014 - Third International Conference on Technology and Instrumentation in Particle Physics
2-6 June 2014 Beurs van Berlage Europe/Amsterdam timezone

## TORCH - a Cherenkov-based Time-of-Flight Detector

5 Jun 2014, 16:50 20m Berlagezaal (Beurs van Berlage) Oral Sensors: 1d) Photon Detectors

### Speaker
Euan Niall Cowie (University of Bristol (GB))

### Description
TORCH (Time Of internally Reflected CHerenkov radiation) is an innovative time-of-flight system designed to provide particle identification over large areas up to a momentum of 10 GeV/c. Cherenkov photons emitted within a 1 cm thick quartz radiator are propagated by internal reflection and imaged onto an array of Micro-Channel Plate photomultiplier tubes (MCPs). Performing 3$\sigma$ pion/kaon separation at the limits of this momentum regime requires a time-of-flight resolution per track of 10-15 ps over a ~10 m flight path. With ~30 detected photons per track, the required single-photon time resolution is ~70 ps. This presentation will discuss the development of the TORCH R&D program and present an outline for future work.

### Summary
TORCH (Time Of internally Reflected CHerenkov radiation) is a highly compact Time-of-Flight (ToF) system utilizing Cherenkov radiation to achieve particle identification up to 10 GeV/c. At the upper limit of this momentum, a 10-15 ps resolution per track is required to achieve a 3$\sigma$ ToF difference between pions and kaons. TORCH will consist of a 1 cm thick radiator plate equipped with light guides along the top and bottom of the plate, which focus the produced Cherenkov radiation onto a series of micro-channel plate photomultipliers (MCPs). Precise timing of the arrival of the photons and their association with a particle track is then used to determine the particle time-of-flight. Around 30 photons are expected to be detected per track, which results in a required time resolution per photon of around 70 ps. The time of propagation of each photon through the plate is governed by its wavelength, which affects both its speed of propagation and its Cherenkov emission angle; by measuring this angle to 1 mrad precision, TORCH will correct for chromatic dispersion. The performance of the system relies on the MCP combining fast timing and longevity in high-radiation environments with a high granularity to allow precise measurement of the Cherenkov angle. Development of a 53 mm x 53 mm active-area device with 8x128 effective pixel granularity, sub-50 ps time resolution and long lifetime is under way with an industrial partner as part of the TORCH development. A GEANT-4 simulation of the TORCH detector and its performance is currently being developed, taking account of the contributions to the overall TORCH resolution. This talk will focus on the requirements of the TORCH design and R&D developments, including progress toward a prototype and the development and laboratory tests of the MCP.

### Primary author
Euan Niall Cowie (University of Bristol (GB))

### Co-authors
Carmelo D'Ambrosio (CERN) Christoph Frei (CERN) David Cussans (University of Bristol (GB)) Didier Piedigrossi (CERN) Johan Maria Fopma (University of Oxford (GB)) Lucia Castillo Garcia (Ecole Polytechnique Federale de Lausanne (CH)) Prof. Neville Harnew (University of Oxford (GB)) Nicholas BROOK (BRISTOL) Roger Forty (CERN) Rui Gao (University of Oxford (GB)) Thierry Gys (CERN) Tibor Keri (University of Oxford (GB))

Slides
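As a rough cross-check of the quoted requirement (my own arithmetic, not part of the contribution): the pion/kaon time-of-flight difference at 10 GeV/c over a ~10 m path is a few tens of picoseconds, so a 10-15 ps per-track resolution does give roughly a 3$\sigma$ separation. A short calculation, using PDG masses and the momentum and path length quoted in the abstract:

```python
# Back-of-the-envelope pi/K time-of-flight difference at fixed momentum.
import math

c = 299792458.0                 # speed of light, m/s
m_pi, m_K = 0.13957, 0.49368    # GeV/c^2 (PDG values)
p, L = 10.0, 10.0               # momentum in GeV/c, flight path in metres

def tof(m):
    beta = p / math.sqrt(p**2 + m**2)
    return L / (beta * c)

dt_ps = (tof(m_K) - tof(m_pi)) * 1e12
print(f"pi/K time-of-flight difference: {dt_ps:.1f} ps")   # roughly 37 ps
```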
2019-07-22 18:18:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4677237570285797, "perplexity": 6576.294693871846}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195528208.76/warc/CC-MAIN-20190722180254-20190722202254-00159.warc.gz"}
https://www.trustudies.com/question/1972/9-a-5-m-60-cm-high-vertical-pole-cast/
# 9. A 5 m 60 cm high vertical pole casts a shadow 3 m 20 cm long. Find at the same time (i) the length of the shadow cast by another pole 10 m 50 cm high, (ii) the height of a pole which casts a shadow 5 m long.

Answer: At a given time, the height of a pole and the length of its shadow are in direct proportion. Here a pole of height 5 m 60 cm = 560 cm casts a shadow 3 m 20 cm = 320 cm long.

(i) Let x cm be the length of the shadow cast by a pole 10 m 50 cm = 1050 cm high. Then

$$\frac{560}{320}=\frac{1050}{x}$$

$$\Rightarrow x=\frac{1050\times 320}{560}=600\,cm$$

Hence the shadow is 6 m long.

(ii) Let y cm be the height of a pole which casts a shadow 5 m = 500 cm long. Then

$$\frac{560}{320}=\frac{y}{500}$$

$$\Rightarrow y=\frac{560\times 500}{320}=875\,cm$$

Hence the height of the pole is 8 m 75 cm.
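A quick numerical cross-check of the proportion above (not part of the original solution; values are in centimetres):

```python
# Direct proportion: height / shadow is the same for both poles at the same time.
ratio = 560 / 320        # 5 m 60 cm pole casting a 3 m 20 cm shadow
shadow = 1050 / ratio    # (i) shadow of the 10 m 50 cm pole
height = 500 * ratio     # (ii) height of the pole with a 5 m shadow
print(shadow, height)    # 600.0 875.0  ->  6 m and 8 m 75 cm
```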
2023-04-01 21:10:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8721094727516174, "perplexity": 1224.7412735589742}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950247.65/warc/CC-MAIN-20230401191131-20230401221131-00631.warc.gz"}
https://www.remotesensing.dev/docs/B02-Visualization
# Lab 2 - Image Processing ## 2.1 - Overview​ In this lab, we will search for and visualize imagery in Google Earth Engine. We will discuss the difference between radiance and reflectance, make true color and false color composites from different bands and visually identify land cover types based on characteristics from the imagery. We will also discuss atmospheric effect on data collection by looking at the different data products available. #### Learning Outcomes​ • Extract single scenes from collections of images • Create and visualize different composites according • Use the Inspector tab to assess pixel values • Understand the difference between radiance and reflectance through visualization ## 2.2 - Searching for Imagery​ The Landsat program is a joint program between NASA and the United States Geological Survey (USGS) that has launched a sequence of Earth observation satellites (Landsat 1-9). Originating in 1984, the Landsat program provides the longest continuous observation of the Earth's surface. Take the time to monitor some of the fascinating timelapses using Landsat to showcase things like urban development, glacial retreat and deforestation. Let's load a Landsat scene over our region of interest, inspect the units and plot the radiance. Specifically, use imagery from the Landsat 8, the most recent of the sequence of Landsat satellites (at the time of writing, Landsat 9 just launched and data is not yet available). To inspect a Landsat 8 image (also called a scene) in our region of interest (ROI), we can choose a point to center our map, filter the image collection to get a scene with few clouds, and display information about the image in the console. You can either scroll to the area on the map you're interested in and choose a point or use the search bar to find your location. Use the geometry tool to make a point in the country Niger (for these exercises we will include the point location in the script). We will specifically be using USGS Landsat 8 Collection 1 Tier 1 Raw Scenes - if you read the documentation, the values refer to scaled, calibrated at-sensor radiance. Tier 1 means it is ready for analysis and is the highest quality imagery. There's quite a bit to learn about how the Landsat data is processed - if you will be working with Landsat extensively, take the time to read the Data Users Handbook for more information. We will filter the ImageCollection by date (year 2014) and location (to the ROI, which for this exercise is in Niger), sort by a metadata property included in the imagery called CLOUD_COVER and get the first image out of this sorted collection. 
#!pip install geemap
import ee, geemap, pprint, folium

#ee.Authenticate()

def build_map(lat, lon, zoom, vizParams, image, name):
    map = geemap.Map(center = [lat, lon], zoom = zoom)
    map.addLayer(image, vizParams, name)
    return map

def add_ee_layer(self, ee_image_object, vis_params, name):
    map_id_dict = ee.Image(ee_image_object).getMapId(vis_params)
    folium.raster_layers.TileLayer(
        tiles=map_id_dict['tile_fetcher'].url_format,
        attr='Map Data &copy; <a href="https://earthengine.google.com/">Google Earth Engine</a>',
        name=name,
        overlay=True,
        control=True
    ).add_to(self)

# Initialize the Earth Engine module.
ee.Initialize()

// Code Chunk 01
var lat = 37.22;
var lon = -80.42;
var zoom = 11;
var image_collection_name = "LANDSAT/LC08/C01/T1_TOA";
var date_start = '2014-01-01';
var date_end = '2014-12-31';
var point = ee.Geometry.Point([lon, lat]);
var landsat = ee.ImageCollection("LANDSAT/LC08/C01/T1_TOA");

// Note that we need to cast the result of first() to Image.
var image = ee.Image(landsat
  // Filter to get only images in the specified range.
  .filterDate(date_start, date_end)
  // Filter to get only images at the location of the point.
  .filterBounds(point)
  // Sort the collection by a metadata property.
  .sort('CLOUD_COVER')
  // Get the first image out of this collection.
  .first());

// Print the information to the console
print('A Landsat scene:', image);

var vizParams = {
  bands: ['B4', 'B3', 'B2'],
  min: 0,
  max: 0.4
};

// Add the image to the map, using the visualization parameters.
Map.setCenter(lon, lat, zoom);
Map.addLayer(image, vizParams, 'true-color image');

The variable image now stores a reference to an object of type ee.Image. In other words, we have taken the image collection and reduced it down to a single image, which is now ready for visualization. Before we visualize the data, go to the console and click on the dropdown. Expand and explore the image by clicking the triangle next to the image name to see more information stored in that object. Specifically, expand properties and inspect the long list of metadata items stored as properties of the image. This is where the CLOUD_COVER property you just used is stored. There are band specific coefficients (RADIANCE_ADD_*, RADIANCE_MULT_* where * is a band name) in the metadata for converting from the digital number (DN) stored by the image into physical units of radiance. These coefficients will be useful in later exercises.

## 2.3 - Visualizing Landsat Imagery

Recall from the last lab that Landsat 8 measures radiance in multiple spectral bands. A common way to visualize images is to set the red band to display in red, the green band to display in green and the blue band to display in blue - just as you would create a normal photograph. This means trying to match the spectral response of the instrument to the spectral response of the photoreceptors in the human eye. It's not a perfect match but this is called a true-color image. When the display bands don't match human visual perception (as we will see later), the visualization is called a false-color composite.

#### 2.3.1 - True Color Composite

To build a true color image we are building a variable called trueColor that selects the red / green / blue bands in order and includes the min and max value to account for the appropriate radiometric resolution - this piece can be tricky, as it is unique for each dataset you work with.
You can find the band names and min-max values to use from the dataset documentation page, but a great starting point is to use the 'code example' snippet for each dataset, which will set up the visualization parameters for you.

// Code Chunk 02
var lat = 37.22;
var lon = -80.42;
var zoom = 11;
var image_collection_name = "LANDSAT/LC08/C01/T1_TOA";
var date_start = '2014-01-01';
var date_end = '2014-12-31';
var point = ee.Geometry.Point([lon, lat]);
var landsat = ee.ImageCollection("LANDSAT/LC08/C01/T1");

// Note that we need to cast the result of first() to Image.
var image = ee.Image(landsat
  // Filter to get only images in the specified range.
  .filterDate(date_start, date_end)
  // Filter to get only images at the location of the point.
  .filterBounds(point)
  // Sort the collection by a metadata property.
  .sort('CLOUD_COVER')
  // Get the first image out of this collection.
  .first());

// Print the information to the console
// Define visualization parameters in a JavaScript dictionary.
var trueColor = {
  bands: ['B4', 'B3', 'B2'],
  min: 4000,
  max: 18000
};

// Add the image to the map, using the visualization parameters.
Map.addLayer(image, trueColor, 'true-color image');

There is more than one way to discover the appropriate min and max values to display. Try going to the Inspector tab and clicking somewhere on the map. The value in each band, in the pixel where you clicked, is displayed as a list in the console. Try clicking on dark and bright objects to get a sense of the range of pixel values. Also, the layer manager in the upper right of the map display lets you automatically compute a linear stretch based on the pixels in the map display.

#### 2.3.2 - False Color Composite

Let's do the same thing, but this time we will build a false-color composite. This particular set of bands results in a color-IR composite because the near infra-red (NIR) band is set to red. As you inspect the map, look at the pixel values and try to find relationships between the NIR band and different land types. Using false color composites is a very common and powerful method of identifying land characteristics by leveraging the power of signals outside of the visible realm. Mining engineers commonly use hyperspectral data to pinpoint composites with unique signatures, and urban growth researchers commonly use the infrared band to pinpoint roads and urban areas.

// Code Chunk 03
var lat = 37.22;
var lon = -80.42;
var zoom = 11;
var image_collection_name = "LANDSAT/LC08/C01/T1_TOA";
var date_start = '2014-01-01';
var date_end = '2014-12-31';
var point = ee.Geometry.Point([lon, lat]);
var landsat = ee.ImageCollection("LANDSAT/LC08/C01/T1");

// Note that we need to cast the result of first() to Image.
var image = ee.Image(landsat
  // Filter to get only images in the specified range.
  .filterDate(date_start, date_end)
  // Filter to get only images at the location of the point.
  .filterBounds(point)
  // Sort the collection by a metadata property.
  .sort('CLOUD_COVER')
  // Get the first image out of this collection.
  .first());

// Print the information to the console
print('A Landsat scene:', image);

// Define visualization parameters in a JavaScript dictionary.
// Define false-color visualization parameters.
var falseColor = {
  bands: ['B5', 'B4', 'B3'],
  min: 4000,
  max: 13000
};

// Add the image to the map, using the visualization parameters.
Map.addLayer(image, falseColor, 'false-color composite');

Read through the Landsat data documentation and try playing with different band combinations, min and max values to build different visualizations.
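If you prefer to follow along in Python with the geemap helpers defined at the top of this lab rather than in the JavaScript Code Editor, a rough equivalent of Code Chunk 02 might look like the sketch below. The collection ID, dates, bands, and min/max values are taken from the chunk above; the build_map() call assumes the setup cell has already been run, and the exact display behaviour in your notebook may differ.

```python
# Python / geemap sketch of the true-color display (assumes the setup cell above has run).
import ee
ee.Initialize()

point = ee.Geometry.Point([-80.42, 37.22])

# Same filtering pattern as Code Chunk 02, using the Earth Engine Python API
image = ee.Image(
    ee.ImageCollection('LANDSAT/LC08/C01/T1')
      .filterDate('2014-01-01', '2014-12-31')
      .filterBounds(point)
      .sort('CLOUD_COVER')
      .first()
)

trueColor = {'bands': ['B4', 'B3', 'B2'], 'min': 4000, 'max': 18000}

# build_map() is the helper defined in the setup cell at the top of this lab
m = build_map(37.22, -80.42, 11, trueColor, image, 'true-color image')
m
```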
Unique Feature: You can include multiple visualization parameters in your script and toggle the layers on and off with the layer manager for easy comparison.

## 2.4 - At-Sensor Radiance

The image data you have used so far is stored as a digital number that measures the intensity within the bit range - if data is collected in an 8-bit system, 255 would be very high intensity and 0 would be no intensity. To convert each digital number into a physical unit (at-sensor radiance in Watts/m2/sr/𝝁m), we can use a linear equation:

$L_{\lambda} = a_{\lambda} \cdot DN_{\lambda} + b_{\lambda}$

Note that every term is indexed by lambda ($\lambda$, the symbol for wavelength) because the coefficients are different in each band. See Chander et al. (2009) for details on this linear transformation between DN and radiance. In this exercise, you will generate a radiance image and examine the differences in radiance from different targets. Earth Engine provides built-in functions for converting Landsat imagery to radiance in Watts/m2/sr/𝝁m. It will automatically reference the metadata values for each band and apply the equation for you, saving you the trouble of conducting numerous calculations. This code applies the transformation to a subset of bands (specified by a list of band names) obtained from the image using select(). That is to facilitate interpretation of the radiance spectrum by removing the panchromatic band ('B8'), an atmospheric absorption band ('B9') and the QA band ('BQA'). Note that the visualization parameters are different to account for the radiance units.

// Code Chunk 4
var lat = 37.22;
var lon = -80.42;
var zoom = 11;
var date_start = '2014-01-01';
var date_end = '2014-12-31';
var point = ee.Geometry.Point([lon, lat]);
var landsat = ee.ImageCollection("LANDSAT/LC08/C01/T1");

// Note that we need to cast the result of first() to Image.
var image = ee.Image(landsat
  // Filter to get only images in the specified range.
  .filterDate(date_start, date_end)
  // Filter to get only images at the location of the point.
  .filterBounds(point)
  // Sort the collection by a metadata property.
  .sort('CLOUD_COVER')
  // Get the first image out of this collection.
  .first());

// Use these bands.
var bands = ['B1', 'B2', 'B3', 'B4', 'B5', 'B6', 'B7', 'B10', 'B11'];

// Get an image that contains only the bands of interest.
var dnImage = image.select(bands);

// Apply the transformation.
var radiance = ee.Algorithms.Landsat.calibratedRadiance(dnImage);

// Display the result.
var radParams = {bands: ['B5', 'B4', 'B3'], min: 20, max: 110};
Map.addLayer(radiance, radParams, 'radiance');

Examine the radiance image by using Inspector and clicking different land cover types on the map near Blacksburg, VA. Click the chart icon in the console to get a bar chart of the different radiance values for each pixel. If the shape of the chart resembles Figure 1, that's because the radiance (in bands 1-7) is mostly reflected solar irradiance. The radiance detected in bands 10-11 is thermal, and is emitted (not reflected) from the surface.

## 2.5 - Top-of-Atmosphere (TOA) Reflectance

The Landsat sensor is in orbit approximately 700 kilometers above Earth. If we are focused on the imagery of remote sensing (as opposed to studying something like atmospheric conditions or ambient temperature), then we want to find insights about the surface of the Earth. To understand the way we calculate information, there are three main components.
Digital Number (DN) is a value that is associated with each pixel - it is generic (in that it is an intensity value dependent upon the bit range), and it allows you to visualize the image where all pixels are in context. In most cases, DN is appropriate for analysis, image processing, machine learning, etc.

Radiance is the radiation that is collected by a sensor - this includes radiation from the surface of Earth, radiation scattered by clouds, the position of the sun relative to the Earth and sensor, etc. In general, we want to correct radiance values and convert to reflectance.

Reflectance is the (unitless) ratio of the energy reflected off Earth's surface to the energy arriving from the sun. In fact, it's more complicated than this because radiance is a directional quantity, but this definition captures the basic idea. We can identify materials based on their reflectance spectra. Because this ratio is computed using whatever radiance the sensor measures (which may contain all sorts of atmospheric effects), it's called at-sensor or top-of-atmosphere (TOA) reflectance. Top-of-Atmosphere reflectance is the reflectance that includes the radiation from Earth's surface and radiation from Earth's atmosphere. Let's examine the spectra for TOA Landsat data. To get TOA data for Landsat, we can do the transformation using the built-in functions created by Earth Engine. We will be using the 'USGS Landsat 8 Collection 1 Tier 1 TOA Reflectance' ImageCollection.

// Code Chunk 5
var lat = 37.22;
var lon = -80.42;
var zoom = 11;
var date_start = '2014-01-01';
var date_end = '2014-12-31';
var point = ee.Geometry.Point([lon, lat]);
var landsat = ee.ImageCollection("LANDSAT/LC08/C01/T1_TOA");

// Note that we need to cast the result of first() to Image.
var image = ee.Image(landsat
  // Filter to get only images in the specified range.
  .filterDate(date_start, date_end)
  // Filter to get only images at the location of the point.
  .filterBounds(point)
  // Sort the collection by a metadata property.
  .sort('CLOUD_COVER')
  // Get the first image out of this collection.
  .first());

// Use these bands.
var bands = ['B1', 'B2', 'B3', 'B4', 'B5', 'B6', 'B7', 'B10', 'B11'];

// Define reflective bands as bands B1-B7. See the docs for slice().
var reflectiveBands = bands.slice(0, 7);

// See http://landsat.usgs.gov/band_designations_landsat_satellites.php
var wavelengths = [0.44, 0.48, 0.56, 0.65, 0.86, 1.61, 2.2];

// Select only the reflectance bands of interest.
var reflectanceImage = image.select(reflectiveBands);

// Define an object of customization parameters for the chart.
Map.addLayer(reflectanceImage, {bands: ['B4', 'B3', 'B2'], min: 0, max: 0.3}, 'toa');

var options = {
  title: 'Landsat 8 TOA spectrum in Blacksburg, VA',
  hAxis: {title: 'Wavelength (micrometers)'},
  vAxis: {title: 'Reflectance'},
  lineWidth: 1,
  pointSize: 4
};

// Make the chart, using a 30 meter pixel.
var chart = ui.Chart.image.regions(
    reflectanceImage, point, null, 30, null, wavelengths)
    .setOptions(options);

// Display the chart.
print(chart);

Since reflectance is a unitless ratio in [0, 1], change the visualization parameters to correctly display the TOA data. Using Inspector, click several locations on the map and examine the resultant spectra. It should be apparent, especially if you chart the spectra, that the scale of pixel values in different bands is drastically different. Specifically, bands 10-11 are not in [0, 1]. The reason is that these are thermal bands, and are converted to brightness temperature, in Kelvin, as part of the TOA conversion.
To make plots of reflectance, select the reflective bands from the TOA image and use the Earth Engine charting API. There are several new methods in this code. The slice() method gets entries in a list based on starting and ending indices. Search the docs (on the Docs tab) for 'slice' to find other places this method can be used. Construction of the chart is handled by an object of customization parameters (learn more about customizing charts) passed to Chart.image.regions(). Customizing charts within GEE can be difficult, so spend time modifying the parameters and observing how the chart changes.

Question 1: Upload the TOA reflectance plot you generated for Blacksburg, VA and briefly describe the relationship of the reflectance peaks and troughs in the chart to the electromagnetic spectrum.

## 2.6 - Surface Reflectance

The ratio of upward radiance at the Earth's surface to downward radiance at the Earth's surface is called surface reflectance. Unlike TOA reflectance, which is computed from the radiance measured at the sensor, surface reflectance describes conditions at the ground: both the inbound solar radiation and the outbound radiance are affected by their path through the atmosphere to the sensor. Unravelling those effects is called atmospheric correction ("compensation" is probably a more accurate term) and is beyond the scope of this lab. However, most satellite imagery providers complete this correction for their consumers. While you could use the raw scenes directly, if your goal is to conduct analysis quickly and effectively, using the corrected Surface Reflectance image collections is quite beneficial and will save you quite a bit of time. On the datasets page for Landsat 8, the data are broken up into the raw images, TOA reflectance, and Surface Reflectance.

Question 2: Use the Code Chunk 5 pattern above to build a true-color (red-green-blue) image using Surface Reflectance data from Landsat 8 and a plot with the same wavelengths and structure as you did with the TOA data. Upload the surface reflectance plot you generated and briefly describe its features. What differs or remains the same between the TOA plot and the surface reflectance plot? Note that the band names differ between the surface reflectance data and the TOA data.

Question 3: When you build the surface reflectance visualization, you will need to scale the imagery and change the visualization parameters. Why? Read the dataset description to find out. Hint: What is the scale factor for bands 1-9?

Question 4: In your code, set the value of a variable called azimuth to the solar azimuth of the image from Code Chunk 4. Do not hardcode the number. Use get(). Print the result and show how you set the value of azimuth.

Question 5: Add a layer to the map in which the image from Code Chunk 4 is displayed with band 7 set to red, band 5 set to green and band 3 set to blue. Upload a visual of the layer and show how you would display the layer name as falsecolor.

Question 6: What is the brightness temperature of the given Blacksburg, VA point? Show how you make a variable in your code called temperature and set it to the band 10 brightness temperature. Use this guide for help.

var point = ee.Geometry.Point([-80.42, 37.22]);
var temperature = toaImage.reduceRegion({
    #YOUR SOLUTION HERE#
  })
  .get( #YOUR SOLUTION HERE# );

Question 7: If you plot the Surface Reflectance data with the TOA visualization parameters, you'll notice that you get a blank image. To fix this issue, we have to apply a scale factor, which can be found in the documentation - note that all the optical bands (SR_B1-SR_B7) share one scale, while the other bands (those that start with ST) vary. Bring in the Landsat Surface Reflectance collection, filter it down to one specific image at the point listed above (Blacksburg, VA), and use the multiply method to scale the bands (there are examples in both the multiply documentation and the Landsat SR code snippet). Then, use the reduceRegion() method to find the reflectance value for band 5. Create a variable named reflectance to store this value and print it to the console. The value should fall in the range [0, 1].
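
The reduceRegion()/get() pattern that Questions 6 and 7 ask for follows the shape sketched below. To avoid giving the answers away, the sketch pulls the mean at-sensor radiance of band B4 from the Code Chunk 4 image rather than the brightness temperature or the scaled surface reflectance; the variable name b4Radiance and the choice of ee.Reducer.mean() are illustrative assumptions, not requirements of the lab.

```javascript
// Sketch: extract a single band value at a point with reduceRegion() and get().
// `radiance` and `point` are assumed to come from Code Chunk 4.
var b4Radiance = radiance.reduceRegion({
  reducer: ee.Reducer.mean(),  // over a single 30 m pixel, the mean is just that pixel's value
  geometry: point,
  scale: 30
}).get('B4');

print('Band 4 at-sensor radiance at the point:', b4Radiance);
```

For Question 7, the same pattern applies once the SR optical bands have been rescaled with multiply() using the factor given on the dataset page.
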
http://mathematica.stackexchange.com/questions/38891/formatting-excel-borders-with-net/38920
Formatting Excel Borders with .Net

I have various tables generated in a Mathematica application and need to export this to MS Excel formatted in a specific way. The Excel formatting has to be generated by Mathematica. I am not familiar with .Net (or .NetLink), but after searching, found this very useful code by Chris Degnan.

Needs["NETLink`"]
PutIntoExcel[data_List, cell_String, file_String] :=
 Module[{rows, cols, excel, workbook, worksheet, srcRange},
  {rows, cols} = Dimensions[data];
  NETBlock[
   InstallNET[];
   excel = CreateCOMObject["Excel.Application"];
   If[! NETObjectQ[excel], Return[$Failed],
    excel[Visible] = True;
    workbook = excel@Workbooks@Add[];
    worksheet = workbook@Worksheets@Item[1];
    srcRange = worksheet@Range[cell]@Resize[rows, cols];
    srcRange@Value = data;
    srcRange@Interior@Color = 13959039;
    (* OLE colours from http://www.endprod.com/colors/ *)
    worksheet@Range["E5:F5"]@Font@Bold = True;
    worksheet@Range["E5:F5"]@Interior@Color = 61166;
    worksheet@Range["E6:E9"]@Font@Color = 255;
    (* Reset the numeric values to get the correct type *)
    worksheet@Range["E6:E9"]@Value = Rest[data[[All, 1]]];
    workbook@SaveAs[file];
    workbook@Close[False];
    excel@Quit[];
    ]];
  LoadNETType["System.GC"];
  GCCollect[]];

data = {{"Year", "Cartoon"}, {1928, "Mickey Mouse"}, {1934, "Donald Duck"}, {1940, "Bugs Bunny"}, {1949, "Road Runner"}};
outputfile = "C:\\Temp\\demo.xlsx";
Quiet[DeleteFile[outputfile]];
PutIntoExcel[data, "E5", outputfile];
Print[Panel[TableForm[data, TableSpacing -> {2, 4}]]];

This explains in detail how to format colours, but I run into problems with borders. This code does work:

worksheet@Range["B3:C4"]@Borders@Color = 255;

However, specifying specific parts of the border does not:

worksheet@Range["B3:C4"]@Borders[xlDiagonalDown]@Color = 255;

and I get this error:

NET::nocomprop: No property named xlDiagonalDown exists for the given COM object.

Specifying the weight of the line like this:

worksheet@Range["B3:C4"]@Borders@Weight = xlThick;

gives a different error:

NET::methodargs: Improper arguments supplied for method named Weight.

Can anyone suggest what may be wrong? Then, after exporting a fancy formatted table to Excel, I need to export an Excel formula into the formatted cells, to enable the Excel user to modify and play with their own input data.

- I am getting the following error when running your code: "A .NET exception occurred: "System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.Runtime.InteropServices.COMException: Microsoft Excel cannot access the file 'C:\Temp\534E8A50'. There are several possible reasons:" – Liam Jan 15 at 17:37

1 Answer

Excel VBA enumeration values cannot be accessed symbolically through COM. We must use the corresponding numeric values found by consulting the Microsoft Excel object model enumeration reference. The relevant enumerations in this case are XlBordersIndex (xlDiagonalDown = 5) and XlBorderWeight (xlThick = 4). Once we know the enumeration values, the code is straightforward:

xlDiagonalDown = 5;
xlThick = 4;
borders = range@Borders@Item[xlDiagonalDown];
borders@Weight = xlThick;

Side Note: Complications

Take note of the use of Item in the Borders@Item[xlDiagonalDown] expression. If we wrote simply Borders[xlDiagonalDown], we would get an error message complaining that there is no such property. The reason is that Mathematica models COM properties using definitions that hold their arguments.
Borders is a property, so a direct argument of xlDiagonalDown remains unevaluated and is interpreted as a (non-existent) subproperty name. Borders@Item, on the other hand, is a method. Method arguments are not held, so xlDiagonalDown gets evaluated to its numeric value. It is possible to use the Borders property directly, albeit in ugly fashion:

With[{dd = xlDiagonalDown}, borders = range@Borders[dd]]
(* or *)
borders = range@Borders[#] &@ xlDiagonalDown
(* or *)
borders = range@Borders[5]

Complete Example

Here is a complete example, using Item:

Needs["NETLink`"];
InstallNET[];
LoadNETType["System.GC"];

$outputFile = "C:\\Temp\\demo.xlsx";
Quiet @ DeleteFile @ $outputFile;

NETBlock @ Module[{xl, book, sheet, range, borders, xlDiagonalDown, xlThick}
, xlDiagonalDown = 5
; xlThick = 4
; xl = CreateCOMObject["Excel.Application"]
; book = xl@Workbooks@Add[]
; sheet = book@Worksheets@Item[1]
; range = sheet@Range["B2:G6"]
; borders = range@Borders@Item[xlDiagonalDown]
; borders@Color = 255
; borders@Weight = xlThick
; book@SaveAs[$outputFile]
; book@Close[]
; xl@Quit[]
]
GCCollect[];

SystemOpen @ $outputFile
(* DeleteFile @ $outputFile *)

Formulas

Formulas can be written into spreadsheet cells using the Range.Formula property. Such formulas must be expressed in Excel syntax. Here is an example with a formula that uses relative cell references and computes the Fibonacci sequence:

NETBlock @ Module[{xl, book, sheet}
, xl = CreateCOMObject["Excel.Application"]
; book@SaveAs[$outputFile]
; book@Close[]
; xl@Quit[]
]
GCCollect[];

SystemOpen @ $outputFile

(+1) What does borders@Color = 255 mean? Removing or changing this command does not change anything in my case (I use Excel 2003), the borders are black. – Alexey Popkov Dec 17 '13 at 6:58
https://th.b-ok.org/book/508040/ccf4e6
# Thermo field dynamics and condensed states

Year: 1982
Edition: NH
Publisher: Elsevier Science Ltd
Language: english
Pages: 606
ISBN 10: 0444863613
ISBN 13: 9780444863614
File: DJVU, 3.44 MB

H. UMEZAWA, H. MATSUMOTO, M. TACHIKI NORTH-HOLLAND THERMO FIELD DYNAMICS AND CONDENSED STATES H. UMEZAWA and H. MATSUMOTO Theoretical Physics Institute, University of Alberta, Canada M. TACHIKI The Research Institute for Iron, Steel and Other Materials, University of Tohoku, Japan NORTH-HOLLAND PUBLISHING COMPANY AMSTERDAM • NEW YORK • OXFORD © NORTH-HOLLAND PUBLISHING COMPANY - 1982 All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the copyright owner. ISBN: 0 444 86361 3 Publishers: NORTH-HOLLAND PUBLISHING COMPANY AMSTERDAM - NEW YORK - OXFORD Sole distributors for the U.S.A. and Canada: ELSEVIER SCIENCE PUBLISHING COMPANY, INC. 52 VANDERBILT AVENUE NEW YORK, N.Y. 10017 Library of Congress Cataloging in Publication Data Umezawa, H. (Hiroomi), 1924- Thermo field dynamics and condensed states. Includes bibliographical references and index. 1. Quantum field theory. 2. Many-body problem. I. Matsumoto, H. II. Tachiki, M. (Masashi), 1931- . III. Title. QC174.45.U526 530.1'43 81-22491 ISBN 0-444-86361-3 AACR2 Printed in The Netherlands CONTENTS Preface xv Chapter 1 Introduction 1 1.1. Quantum theory for many body systems 1 The dual structure of the formulation 1 The theory for ordered states 2 Quantum field theory at finite temperature 3 Macroscopic objects in quantum many body systems 4 Topological objects 5 Disorder in ordered states 6 The interaction between macroscopic objects and quanta 6 Surface phenomena 6 Quantum field theory, quantum mechanics and classical theory 7 1.2. Plan of the book 8 1.3. Notation 9 Chapter 2 Quantum theory for many body systems 15 2.1. The number representation; creation and annihilation operators 15 Classification of single-particle states and wave packets 15 Classification of many-particle states and number series 16 Annihilation and creation operators for bosons 16 Annihilation and creation operators for fermions 17 2.2. Unitarily inequivalent representations 18 Non-countability of the set {|n1, n2,...)} 18 Unitarily inequivalent representations 19 2.3. The Fock space 19 The [0]-set and the vacuum 19 Countability of the [0]-set 20
The physical particle representation and the dynamical map 37 The Fock space of physical particles 37 Total energy and physical particle energy 37 The free Hamiltonian and momentum operator for physical particles 38 The Heisenberg fields and the Heisenberg equation 39 The Hamiltonian for the Heisenberg equation 40 The physical particle representation for Heisenberg fields 40 The self-consistent method 41 Some simple examples of self-consistent calculations 42 The dynamical map 44 The normal product 45 A summary 46 2.6. Free fields for physical particles 46 Free physical fields 46 Space-time variation of creation and annihilation operators 46 Requirements for the free physical fields 47 The free field equations for physical fields 47 The classification of free field equations 48 The divisor 49 The hermitization matrix 50 The Lagrangian for free physical fields 51 The inner product of wave functions 51 An orthonormalized complete set of solutions of the free field equation (2.6;7) 52 The structure of the physical field 54 Some examples of free field equations 54 The sum rule 58 The commutation relation and statistics 61 The canonical conjugate of <p° 63 Projection of creation and annihilation operators 64 64 Contents vii Chapter 3 The physical particle representation, the S- matrix and composite particles 67 3.1. The (N, 0)-model 67 Introduction 67 The (N, 0)-model 67 Self-consistent calculation 68 Canonical relations 73 Composite particle 77 The Hamiltonian 81 The asymptotic condition 83 The S-matrix 85 The Bethe-Salpeter equation 87 Compositeness and elementarily 92 3.2. The physical particle representation and perturbative calculation 93 Introduction 93 The adiabatic factor 94 The physical particle representation 95 Particle reactions 98 3.3. The physical particle representation and the variational oo principle 3.4. A general consideration on the physical particle representation 101 The dynamical map 101 The relation between H and Ho 102 The asymptotic limit and S-matrix 104 Stability of the vacuum 106 Stability of the single-particle states 106 Complex fields 107 3.5. The reduction formula and the Lehmann-Symanzik- Zimmermann formula 108 The reduction formula 108 The L-S-Z formula 111 3.6. The spectral representations of two-point functions 113 Many-point functions 113 The spectral representations of two-point functions 113 The causal two-point function of the physical field 117 References 118 Chapter 4 Quantum field theory at finite temperature 119 4.1. Thermo field dynamics 119 Vlll Contents Quantum excitations at finite temperature 119 The Hamiltonian and momentum including the thermal reservoir effects 121 The Bogoliubov transformation 121 The temperature-dependent vacuum 122 Determination of Ok 123 The tilde operation 125 The tilde operation and the Heisenberg equation 127 Examples of free fields and their two-point functions 129 Product rules for two-point functions 137 4.2. The dynamical map at finite temperature, the tilde substitution law and the Kubo-Martin-Schwinger relation 139 The Heisenberg equation at finite temperature 139 The dynamical map at finite temperature 140 The tilde substitution law 142 The Kubo-Martin-Schwinger relation 144 The extension of the L-S-Z formula at finite temperature 146 4.3. The spectral representation of the two-point functions 149 4.4. The Bethe-Salpeter equations at finite temperature 155 References 156 Chapter 5 Some examples of thermo field dynamical computations 157 5.1. 
The electron-phonon system 157 The model 157 The Bethe-Salpeter equation 159 The spectral representation of the proper self-energy 160 The physical electron energy 161 5.2. The contact interaction model for itinerant electron ferromagnetism 162 The model 162 The magnetization 165 The Bethe-Salpeter equation for the magnon 166 Calculation of D(k) and N"1^) 168 The magnon 169 5.3. The Heisenberg model for localized spin ferromagnetism 170 The Heisenberg model 170 The localized magnetic ions 170 The Hamiltonian and spin operator 171 The Heisenberg equation 172 The mean field approximation 172 The spin wave excitation (magnon) 175 The vertex function and the dynamical map of the spin operator 178 The dispersion equation of the magnon 181 Contents IX 5.4. Superconductivity 183 The Lagrangian 183 The Heisenberg equation 184 The B-S equation for <0, P\T[if/a, ^+]|0, /3> 185 Renormalization 186 The physical electron and its two-point function 187 The gap equation 188 The critical temperature 189 The current and charge density operator 190 The B-S equation for Dafiy(x, y; z) 191 The equation for Da(i (x - y) 192 Calculation of Qij(k) 194 Solutions of the B-S equations (5.4;66) 198 The generalized gap equation 199 Calculation of D3(/c) 200 Calculation of the two-point function of the charge density 202 The dynamical map of the current and charge density 204 The rearrangement of phase symmetry 205 The Coulomb effect 207 The boson characteristic function 211 The ground state energy 214 5.5. The plasma oscillations in normal metals 216 The current and charge density operators 217 The two-point function for j and p 218 The transverse and longitudinal plasmons 221 References 223 Chapter 6 Invariance and the Noether current 225 6.1. The Noether current and the Ward-Takahashi relations 225 The Noether current and the Lagrangian 225 The Noether current and generator 228 Derivation of the W-T relations 229 A compact expression of the W-T relations 230 Generalization of the W-T relations 232 The W-T relations for a local current 233 6.2. The Ward-Takahashi relations at finite temperature 234 The tilde conjugate 234 The W-T relations 234 6.3. A simple example of the Ward-Takahashi relations 235 6.4. An example of a loop expansion; itinerant electron ferromagnetism 239 Approximation methods and W-T relations 239 The spin symmetry 241 The W-T relations for spin symmetry 243 The existence of the magnon 245 The W-T relations for the vertices 248 The magnon two-point functions and the magnon energy 252 The Heisenberg equation 254 Derivation of the Dyson equations 255 The basic relations for the one-loop approximation 257 Successive improvement of the loop approximation 261 Practical computations and applications 261 References 265 Chapter 7 The dynamical rearrangement of symmetries and the dynamical map 267 7.1. General consideration 267 Symmetry rearrangement 267 The W-T relation for Q-symmetry 270 The Goldstone theorem 271 Elimination of e from the W-T relations 274 Symmetry rearrangement 276 Finite temperature 283 7.2. The phase symmetry 283 7.3. The dynamical rearrangement of spin symmetry 287 7.4. Symmetry rearrangement and low energy theorems 293 Low energy theorems 293 Low energy theorem for spin wave scattering 296 Low energy theorem for tt-tt scattering 297 The Goldberger-Treiman relation 298 Low energy theorems and low temperature behaviour 302 7.5. Symmetry rearrangement and the group contraction 303 7.6. The infrared effect of the Goldstone bosons and the order parameter 308 7.7. 
Crystals 311 Periodic condition 311 The W-T relation 312 Phonons 314 Point group and polarization vector 315 Phonon field 317 Equivalence between the coordinate representation and the phonon field representation 319 Dynamical rearrangement of translation symmetry 322 Dynamical map 323 The momentum, the stress tensor and the energy 324 References 328 Chapter 8 Quantum electrodynamics in solids I: rearrangement of gauge symmetry and the dynamical map 331 8.1. Superconductivity 331 The Lagrangian 331 The canonical commutation relations and the Heisenberg equations 333 The Goldstone boson and ghost fields 334 Determination of the gauge condition 338 The plasmon 340 The two-point function and the divisor for the plasmon 342 Preliminary studies of the dynamical maps 345 Dynamical rearrangement of gauge symmetry 347 General structure of the dynamical map 348 Photon self-energy and infinite conductivity 350 8.2. Quantum electrodynamics in normal metals 354 The Lagrangian 354 The Heisenberg equation, gauge condition and physical state condition 355 The plasmon 357 Rearrangement of gauge and phase symmetry, the dynamical maps and the renormalized charge 358 The photon self-energy 362 References 367 Chapter 9 Extended objects of quantum origin 369 9.1. A simple consideration 369 Extended objects in quantum field systems 369 The Heisenberg equation 370 The boson transformation 371 The physical meaning of the boson-transformed Heisenberg field if/f 372 A compact expression of the dynamical map of if/ 373 The self-consistent potential induced by extended objects 374 The tree approximation and the Euler equation 376 The translation modes and the quantum coordinate 379 The quantum coordinate and the dynamical map 384 The Hilbert space for the system with extended objects 386 The generalized coordinates 386 Quantum corrections and renormalization 388 Symmetrization of quantum coordinates 390 Construction of if/ 391 Classical and quantum mechanical extended objects 391 9.2. A simple example 392 The model 392 The energy renormalization 394 xii Contents The wave-function renormalization 395 The vertex renormalization 396 The vacuum-value renormalization 398 The renormalization constants 398 The static soliton 399 The interaction between a single quantum and an extended object 400 The dynamical map of $402 The one-loop correction of the classical field 402 9.3. A general consideration on the boson transformation 404 The boson transformation theorem 405 The self-consistent potential 407 The quantum coordinate 408 The quantum coordinate and internal symmetry 408 Energy of extended objects 409 Renormalizability and extended objects 410 Condition for static extended objects 411 9.4. The asymptotic condition and the asymptotic Hamiltonian 412 Introduction 412 The quantum coordinate 413 The asymptotic condition 413 The (x - xo - ^-combination 414 The translation operator, P 414 The angular momentum operator, J 416 Weak form of the Hamiltonian 419 Space-time symmetry and the weak Hamiltonian 421 9.5. The (c<7)-transmutation and the generalized coordinate 423 The generalized coordinate and the Lorentz transformation 424 Classical and quantum mechanical extended objects 433 The free-field equation and the (c<7)-transmutation 434 Free motions and forced motions of extended objects 441 Extended objects with trapped fermions 441 References 441 Chapter 10 Extended objects with topological singularities 443 10.1. 
General considerations 443 The boson transformation with topological singularities 443 The topological singularities and Goldstone bosons 444 The topological charge 446 The complete condition for a topological singularity 446 The topological line singularity 448 The topological surface singularity 451 The topological quantum number, vortex-flux quantization and the Burgers vector of crystal dislocations 453 Contents xiii 10.2. Extended objects in crystals 456 The phonon equation 456 The singular boson transformation 457 Isotropic, low-momentum and linear approximation 458 Dislocations 461 Grain boundaries 463 Point defects 469 10.3. Topological singularities associated with non-Abelian group symmetries 472 Introduction 472 The gauge transformation matrix 473 The integrability condition and topological singularities 475 The condition for single-valuedness of the order parameter 476 The mapping between coordinate space and symmetry space 476 Topological singularities 477 The topological quantum number and stability of extended objects 481 The complete condition for topological singularities in Q)d-N 482 The topological singularity domains of higher dimensions 482 SU(2)-triplet model and monopole 484 A comment on the instanton 490 The boson transformation method 490 Gauge field theory 491 References 495 Chapter 11 Quantum electrodynamics in solids II: macroscopic phenomena 497 11.1. Derivation of the classical Maxwell equation 497 Macroscopic phenomena 497 The dynamical maps 497 The boson transformation 499 Topological singularities 501 The linear approximation 503 11.2. The conductivity and the dielectric constant 503 The magnetic induction and electric field 503 The dielectric constant and conductivity 504 11.3. Superconductivity 508 The Maxwell-type equation 508 The Laplace equation 509 The energy of extended objects 509 The order parameter 511 Calculation of the proper photon self-energy 512 The boson characteristic function and the London penetration depth 515 XIV Contents The Coulomb gauge Vortices Impurity effects Type I, type 11/1 and type II/2 superconductors Electron states at the vortex center and the core energy The Gibbs free energy The Maki parameters Anisotropic superconductors 11.4. Magnetic superconductors Introduction History Ternary rare earth compounds The Maxwell-type equation The screening effect and the spin-periodic phase The order parameter References Chapter 12 Surface boundary phenomena 12.1. Introduction 12.2. Crystal surface sound waves Oscillating surface singularities Displacement fields 12.3. The Josephson phenomenon 12.4. The surface magnetic field in superconductors Semi-infinite non-magnetic superconductors Semi-infinite magnetic superconductors Thin films References Chapter 13 Concluding remarks Critical phenomena The Kondo effect The theory of the laser and the Raman effect Application of soliton theory Some fundamental questions Index PREFACE In this book quantum field theory at finite temperature is formulated so that it can deal with ordered states of many body systems in which many kinds of extended topological objects are created and interact with quanta. By providing a consistent scheme for macroscopic objects coexisting with quanta, this formalism presents a unified view of classical, quantum mechanical and quantum field theoretical objects. Nowadays, systems in which macroscopic and microscopic objects coexist are attracting much attention from scientists in many different areas of physics. 
The book is intended to present a clear perspective over these many areas. Therefore, it is expected to interest not only solid state physicists but also other physicists such as high energy particle physicists and cosmologists, though most of the examples are chosen from solid state physics. Since our point of view in this book and a brief account of the contents are summarized in Chapter 1, these will not be repeated here. Here, we wish to point out that, since this book is self-contained as a book for many body problems, it is expected to be suitable for a graduate course for students with an introductory knowledge of quantum field theory. This book is a by-product of the research collaboration of the three authors. For the past several years, every summer at Edmonton we have had a kind of work shop for collaboration on several problems in solid state physics and quantum field theory. This gathering included not only us but also many of our colleagues from various countries. The gathering has been an extremely stimulating and enjoyable occasion and we expect that it will continue to be so in the future. In the winter, collaboration continued through long distance communications. The social side of the activity of the summer gathering has been taken care of by Mrs. Tamae Umezawa whose amiable hospitality has made the gathering extremely enjoyable. We would like to thank her for her efforts by dedicating this book to her. The finances of the summer gathering have been taken care of by the Natural Sciences and Engineering Research Council of Canada, the xvi Preface Theoretical Physics Institute and the Faculty of Science at The University of Alberta, and the Ministry of Education, Science and Culture, Japan. We are grateful for this support. We would like to thank Mrs. Carole Voss who did most of the typing work for this book. She was also a very valuable person to the summer gathering. With her extremely efficient and organized effort the arrangement of the gathering has been very smooth. The early stages of the gathering and typing work for this book were taken care of by the capable hands of Mrs. Gabriele Braun, to whom we extend many thanks. We also thank Mr. H. Yokota for his beautiful drawing work. In the course of preparing this book, we have been helped and encouraged by many of our colleagues. In particular, Dr. G. Semenoff kindly read through the manuscripts and made valuable criticisms. Many parts of the book have benefited from our discussions with Dr. F. Mancini, Dr. N. Papastamatiou, Dr. Y. Takahashi, Dr. M. Umezawa and Mr. J. Whitehead. The valuable comments of these colleagues are very much appreciated. We would like to thank Dr. W. Montgomery of the North-Holland Publishing Company for his excellent way of handling the publication of this book and also for his patience in regard to our very slow writing. Edmonton, 1981 H. Matsumoto M. Tachiki H. Umezawa CHAPTER 1 INTRODUCTION 1.1. Quantum theory for many body systems At the outset, quantum field theory evolved as an analytical method in the physics of elementary particles. However, in the course of its own development, many features which are suitable for the analysis of phenomena in quantum many body systems were gradually manifest. This began with the theory of the Fock representation which made it clear that quantum field theory supplies a language which is suitable for the description of a quantum system whose states can be classified by a set of number series. 
It is obvious that a quantum many body system requires such a language. The dual structure of the formulation Quantum field theory came still closer to solid state physics, when the theory was formulated in terms of the free fields called incoming fields. This showed that, although the basic relations in quantum field theory are expressed in terms of the so-called Heisenberg fields, the theoretical results can be described in terms of incoming fields. The fact that these incoming fields are free fields reminds us of the so-called quasi-particles which are the quanta manifest in observable phenomena in solid state physics. In this book those free fields, in terms of which the theoretical results are expressed, are called the "physical particles" or "physical quanta". Phonons, magnons, plasmons, etc. are some examples of physical quanta. Thus, the language of quantum field theory has a dual structure; the basic relations are expressed in terms of the Heisenberg fields, while the observable results are described in terms of the "physical fields". To solve 2 Thermo field dynamics and condensed states a quantum field theoretical problem is to find a mapping between these two languages. This mapping is called the dynamical map. The theory for ordered states A large step forward occurred when the phenomenon of spontaneous symmetry breakdown was fully understood in the terminology of quantum field theory. Until 1955 we considered only those solutions in which all of the invariant transformations are unitarily implemented, although, since 1953, there had been a mathematical development indicating that a canonical transformation does not need to be unitarily implementable in a quantum theory for an infinite number of degrees of freedom. Guided by the Bardeen-Cooper-Schrieffer theory for superconductivity, the study of quantum field theory has been led naturally to problems of unitarily non-implementable transformations, because the phenomenon of spontaneous symmetry breakdown falls into this category. Since the appearance of ordered states in solid state physics is a result of the spontaneous breakdown of symmetry, this study led to a quantum field theoretical understanding of ordered states. The central theorem is the celebrated Goldstone theorem which in practical terms means that all ordered states are maintained by certain gapless-energy bosons (the Goldstone bosons); a fact which was widely known by solid state physicists through their studies of various ordered states (phonons in crystals, magnons in ferromagnets, etc.). More precisely, the collective mode, which acts as a medium of communication in the process of maintaining the order, is the Goldstone mode. Another way of stating the same thing is that the order is a manifestation of the condensation of Goldstone bosons. When a symmetry is spontaneously broken, its symmetry transformation regulates the condensation of the bosons. The fact that a huge number of condensed bosons participate in this regulation process, explains why this transformation is not unitarily implementable. The phenomenon of spontaneous breakdown of symmetry raises an interesting question. Since a solution of broken symmetry is also a solution of the original Heisenberg equation which is invariant under the symmetry transformations associated with the broken symmetry, the original invariance must be preserved by the solution in some way. The question is how is the invariance preserved through the course of the breakdown of symmetry? 
The dual structure of the language of quantum field theory mentioned previously is very useful in answering this ques- Introduction 3 tion. Symmetry is the manifestation of an invariance and the form of this manifestation can be changed through the mapping between the two languages. The observable form of a symmetry is expressed in the terminology of the physical quanta, and therefore, can be different from the expression in terms of the Heisenberg fields. In this way, the spontaneous breakdown of symmetries (i.e. the appearance of ordered states) is understood as a rearrangement of symmetry thorough the dynamical map. This situation is summarized as follows: creation of ordered states = spontaneous breakdown of symmetries, = dynamical rearrangement of symmetries. By supplying an excellent formalism for the study of ordered states, quantum field theory has become an even more powerful method in solid state physics. Quantum field theory at finite temperature It is obvious that, to be able to deal with problems in solid state physics, we should reformulate the theory so as to take into account thermal effects. Since 1955, studies of Green's functions at finite temperature have undergone extensive development. However, the time- and temperature-dependent formalism cannot fully appreciate the causal Green's function method, which is such a powerful technique in the usual quantum field theory as is witnessed by the Feynman formalism. On the other hand, the well-known Matsubara formalism can take advantage of Feynman diagram techniques, but it is hard for this formalism to deal with time-dependent phenomena. Furthermore, being formulated in terms of Green's functions, the temperature Green's function methods cannot easily utilize many kinds of operator transformations. Therefore, it is very desirable to reformulate the whole structure of quantum field theory by taking into account thermal effects. Since 1963 it has become common knowledge among axiomatic field theoreticists that the quantum theory of free fields at finite temperature can be consistently formulated when the number of degrees of freedom is doubled. This axiomatic formalism has made extensive progress and is now called rigorous statistical mechanics for quantum systems. However, it is another matter to formulate quantum field theory at finite temperature in a form which is ready to be used in practical computations in solid state physics. A theory 4 Thermo field dynamics and condensed states of this kind has been formulated and is now called thermo field dynamics. It has been shown that, not only the causal Green's functions but all of the techniques of usual quantum field theory can be utilized in thermo field dynamics. Macroscopic objects in quantum many body systems As soon as quantum field theory becomes involved in problems of ordered states, it immediately meets a new kind of challenge. Since quantum field theory was originally formulated for the study of high energy particle physics, it was a theory for a system of microscopic objects (quanta) only. In other areas, however, it is rare to find purely quantum ordered states without any extended objects. For example, it is hard to find crystals without dislocations, grain boundaries, or defects. In these crystals, there are many quantum excitations such as phonons and other quanta. Dislocations are classically behaving macroscopic objects which are created in a crystal and interact with phonons and other quanta. 
In one word, a crystal with dislocations presents an example of a system in which microscopic and macroscopic objects coexist and interact with each other. In solid state physics we frequently meet similar situations: e.g. vortices in superconductors, magnetic domains in ferromagnets, etc. The usual macroscopic currents observed in solids are also macroscopic objects. As a matter of fact, many of the observations in solid state physics are macroscopic behaviours of quantum many body systems. Thus, we are forced to deal with quantum many body systems in which certain macroscopic objects are self-consistently created. It is common to describe nature in terms of its strata structure. The crudest classification of this kind is given, for example, by cosmological objects, everyday objects, molecules, atoms, and so on. We then hastily point out that everyday objects consist of molecules, which consist of atoms, and so on, and that phenomena belonging to different strata are usually considered by different theories in physics. A particular distinction is made between the level of everyday objects and that of molecules. The objects which belong to the level of molecules or smaller are usually called microscopic; the other objects are called macroscopic. It has usually been stated that the dynamics of microscopic objects follow the laws of quantum theory, while those of macroscopic objects are ruled by the laws of classical physics. This has led to the question asking how quantum physics is related to classical physics. Since most measurement Introduction 5 instruments are classically behaving macroscopic objects, this question has opened the study of the measurement mechanism. This study has been pursued by many people over the past half century. However, nature is too complex to be grasped by the above mentioned "linear" viewpoint based on a simple strata structure. This becomes obvious upon glancing at many phenomena in ordered states where a variety of macroscopic objects are created and interact with quanta. In this way we are led to a fundamental question; how do macroscopic objects come out of microscopic systems? In solid state physics we frequently catch a glimpse of the relation between micro and macroscopic objects. An example is given by the calculation of the electric conductivity. This is a macroscopic quantity because it is the ratio of macroscopic current to macroscopic electric field. Linear response theory relates this macroscopic quantity to certain quantum fluctuation effects which are microscopic: the so-called fluctuation-dissipation theorem. However, to list known examples from solid state physics is not an adequate way of answering the above question. Rather, we wish to have a general mathematical formalism for the derivation of macroscopic results from a microscopic theory. Furthermore, we want such a formalism to be useful in the practical analysis of quantum many body systems with macroscopic objects. It is indeed possible to extend the formulation of quantum field theory to consider macroscopic objects created in quantum many body systems. This generalized formulation contains linear response theory and supplies us with systematic methods for the analysis of quantum many body systems with extended objects. According to this formulation, macroscopic objects in quantum many body systems are created by the condensation of certain bosons. Topological objects There are many kinds of macroscopic objects which carry topological singularities. 
For example, vortices in superconductors and crystal dislocations carry line singularities. As is shown in later chapters, grain boundaries and point defects in crystals and Josephson junctions in superconductors carry topological surface singularities. It can be proven that macroscopic objects associated with topological singularities can be created only by the condensation of gapless-energy bosons. (Here a gapless- energy quantum means a quantum whose minimum energy is zero.) This 6 Thermo field dynamics and condensed states explains why we find many kinds of macroscopic objects with topological singularities in any ordered state, because the ordered states are maintained by Goldstone bosons whose energies are gapless. Disorder in ordered states A remarkable feature of the macroscopic objects with topological singularities, mentioned above, is that these objects* manifest a certain disorder, because the order parameters vanish at the singularities. Therefore, although the creation of these macroscopic objects requires the presence of certain order, the result is a region of disorder created in an ordered state. An example is given by the following course: molecular system —» a crystal —» dislocations in the crystal —» an amorphous material. The interaction between macroscopic objects and quanta It is natural to expect that, as soon as macroscopic objects are created in a quantum system, the states of the quanta are influenced by the presence of the macroscopic objects. This effect of the macroscopic objects can be treated as a potential influencing the quanta. This potential is called the self-consistent potential. For example, electrons in the vicinity of a vortex center in a superconductor are influenced by the self-consistent potential induced by the vortex, and as a result, the electron energy is lowered. This is the origin of the core energy of vortices. Another example is the effect of a surface object on phonons in crystals; the result is the appearance of surface prjonons. Surface phenomena In reality all systems are of finite size. The boundary surface of a system, when it is not supported by an external force, is self-consistently maintained. The stability of a system implies that its boundary surface is maintained by a certain long range correlation (or collective mode). Although we can artificially vary the form of the boundary in a variety of ways, the collective mode tries to return the shape of the boundary to its most favorable shape as soon as the artificial effect is switched off. In Introduction 7 other words, the system selects the most preferable form of its boundary from infinitely many choices. This shows that even when we consider a stationary system of finite size, the system has an infinite number of degrees of freedom. Intuitively speaking, the boundary surface of a system in an ordered state is a macroscopic object (with a surface singularity) which is created by the condensation of Goldstone bosons. Certain types of oscillation of boundary surface singularities have rather stationary properties and create a kind of surface wave. The Rayleigh wave (a surface sound wave in a crystal) is an example of this kind. In this way many kinds of surface phenomena can be associated with surface singularities. Another example of a surface effect is the Josephson phenomenon. Although this phenomenon is usually described by the microscopic tunneling of Cooper pairs, we can also treat it as a macroscopic current naturally induced by the presence of a macroscopic surface. 
Quantum field theory, quantum mechanics and classical theory In the past, quantum field theory has been obtained by quantizing a classical theory. This historical process is reversed in the above consideration. There, the theory begins with the usual quantum field theory for a purely quantum system and then, by using a mathematical technique dealing with the boson condensation, creates macroscopic objects in the quantum system. The result is a theory which covers quantum field theoretic, quantum mechanical and classical objects. It is an interesting historical cycle that we are now deriving quantum mechanics and classical physics from quantum field theory. This development may illuminate the question of how the microscopic and macroscopic theories are related to each other. It is a remarkable feature of science at the present time that scientists in different areas are using similar intuitions and concepts. The study of order, symmetry breakdown and macroscopic objects are of central significance, not only in solid state physics, but also in high energy particle physics, astrophysics, chemistry and biochemistry. The study of solitons and quantum solitons in the mathematics of non-linear equations also falls into the same category of problems. Therefore, we have good reason to expect that quantum field theory will be useful in a wide area of science. 1.2. Plan of the book In this book we develop quantum field theory by following the steps described in the last section. The presentation of the general considerations will be supplemented by many practical examples from solid state physics. The theory of the Fock space for a quantum system whose states are classified by number series and the dual structure of the language (i.e. the Heisenberg fields and free physical quanta) of quantum field theory are explained in chapters 2 and 3. There, the so-called unitarily inequivalent representations will also be discussed. The general formalism of quantum field theory at finite temperature (thermo field dynamics) is presented in chapter 4. Following the construction of the theory, a detailed analysis of the spectral representations of the causal two-point functions (the causal correlation functions) and their products are discussed. These spectral representations and the rules for their products are the key elements for systematizing and simplifying many practical computations for systems at finite temperature. In chapter 5 thermo field dynamics is applied to many examples in solid state physics. The purpose of this chapter is two-sided; on the one hand, it introduces many typical problems in solid state physics, on the other hand, these examples illustrate many computational techniques in thermo field dynamics. When a reader finds some of these examples tedious, he may postpone them until he grasps the general idea of the book by going briefly through the general considerations. In the study of many-body problems there are many properties whose behaviours are controlled by a certain invariant nature of the dynamics of the system. When we introduce certain approximations in the study of these properties, particular care should be taken so that the approximation does not violate the invariance. Therefore, it is useful to have certain mathematical relations which summarize the results of the invariance. These relations do exist and they are called Ward-Takahashi (W-T) relations. Chapter 6 is devoted to these relations for a system at finite temperature. 
By using the W-T relations, we present in chapter 7 a general consideration of ordered states (i.e. the spontaneous breakdown of symmetry) and we discuss the Goldstone theorem and the dynamical rearrangement of symmetry. Several examples of ordered states are also discussed. In chapter 8 the quantum field theory developed in the previous chapters is applied to a particularly significant problem, i.e. quantum electrodynamics in solids, which includes both normal and superconduc- Introduction 9 ting metals. This consideration illustrates a method for the treatment of gauge invariance. Consideration of the creation of macroscopic objects in quantum many-body systems begins with chapter 9. This chapter presents the general features of a theory for quantum many-body systems with macroscopic objects. Particularly significant objects, i.e. those with topological singularities are discussed in chapter 10. This consideration is a natural extension of the quantum field theory presented in the first eight chapters, because the creation of macroscopic objects with topological singularities requires the presence of certain Goldstone bosons which maintain the ordered states; those topological objects are the results of condensations of the Goldstone bosons. Dislocations, grain boundaries and defects in crystals are also studied in chapter 10. In chapter 11 this formalism for macroscopic objects is applied to macroscopic phenomena in quantum electrodynamics in solids. There, macroscopic quantities such as the macroscopic current and field, conductivity, dielectric constants, etc. are studied. Linear response theory is derived as a part of the results. A detailed consideration for superconductivity is presented. The problem of the interplay between superconductivity and magnetism is also discussed. Chapter 12 is devoted to those phenomena which are caused by the presence of topological surface-singularities. The general consideration is followed by three examples: the first is the crystal surface sound wave, the second is the Josephson current, and the last one is the penetration of an external magnetic field into superconductors. It is partly due to the shortage of time on the authors' side and also due to the size limitation of the book that many important and interesting subjects have been omitted. Some of these subjects are mentioned briefly in the last chapter. 1.3. Notation In the following we summarize some of the notation used in this book. Whenever a different notation is used, it will be mentioned in the text. We frequently use the four dimensional notation x^ (/jl = 0,1, 2, 3) to express the space-coordinate x and the time t simultaneously: -={? (ji = i; space coordinate) (jjl = 0; time coordinate) . \ - > ) 10 Thermo field dynamics and condensed states The four dimensional wave-vector k^ indicates -fi j kt (v = i; wave vector) a) (jjl = 0; frequency). U .^A> The simple notation jc denotes a four dimensional vector jc^, while boldface symbols (for example, x) denote spatial vectors. Therefore, a function f(x) is a function of x and t. The metric tensors g**" and g^ are denned by g" = -g°°=l, gu = -goo=l, (1.3;3) g^ = g^ = 0 for^*. (1.3;4) Vectors with upper indices are given by *M = gM%, kfl = gflvkv. (1.3;5) Summation over repeated indices is understood. The scalar product of the three dimensional vectors k and x is denoted by k • x and the four dimensional scalar product k • x is denned by fc • x = fc^ = fc% = k • x - cot. 
(1.3;6) The derivative operator c^ is defined by w=-!L= [dldXi &= f") n vi\ ~dXfl \dldt (ji=o), K'^/} then {dldxi (a = i) -eiet U). ^8) The symbol V, is also used: Vi^d/dxt. (1.3;9) Then, V2 is t V2=XViVf. (1.3;10) A function of derivative operators F(d) is defined by its operation on the Fourier transform F(d) exp{ifc • jc} = F(ifc) exp{ifc • jc} . (1.3;11) For simplicity, we drop the i in F(ifc) in eq. (1.3; 11) and simply write F(k). Therefore the operation of F(d) on a function g(x), whose Fourier transform is g(x) = [ d4xG(k) exp{ifc • jc} , (1.3;12) is given by F(d)g(x) = [ d4kF(k)G(k) exp{ifc • jc} , (1.3;13) The three dimensional delta-function is denoted by 8(x): 8(x) ^ 8(x1)8(x2)8(x3), (1.3;14) while the four dimensional delta-function is denoted by 5(4)(jc): 8(4)(x) = 8(t)8(x). (1.3;15) We write fundamental constants such as h, c, etc. explicitly. The energy of a particle excitation E is expressed by a frequency co as E = h(o. (1.3;16) A field variable i//(x) and its canonical momentum tt(x) satisfy the canonical commutation relation [tt(x, 0, <A& 01± = -ih8(x-y). (1.3;17) The normalization of free fields are determined so as to satisfy the above relation. Since the frequency co is used to write energy as ho), the inverse temperature /3 defined by (l/kBT)E =/3(o (1.3; 18) 12 Thermo field dynamics and condensed states IS P = h/kBT. (1.3;19) When electromagnetism is considered, it is convenient to modify Xq in eq. (1.3;1), k0 in eq. (1.3;2) and d° in eq. (1.3;7) as x0=ct, k0 = -o>, d°^-^7- (1.3;20) c c at This modified notation will be used in chapters 8 and 11. Then the normalizations of the fields are modified by the factor c. To explain this let us consider the Lagrangian, ^(*)=-k*<^*, (1.3;21) = \[c-\dXldtf-{Vx)2]. (1.3;22) This gives tt(jc) = c~ld °x = c~2dXldt, (1.3;23) which leads to [*(*, 0, ^ (y, O] = ic2h8(x - y) . (1.3;24) The four dimensional vector potential Afl(x) is Heaviside-Lorentz rationalized units will be used in formal considerations and the results will be transformed into Gaussian units for practical applications. Then the current j^ in chapters 8 and 11 is defined so as to satisfy the Maxwell equation -^=i (1.3;26) with Introduction 13 FpV = dpAv - drAp . (1.3;27) Therefore /0 is the charge density p and the current j includes the c~l-iactor in the conventional definition. Physical fields are denoted by a superscript zero such as ^°, <p°,.. .,. CHAPTER 2 QUANTUM THEORY FOR MANY BODY SYSTEMS 2.1. The number representation; creation and annihilation operators Classification of single-particle states and wave packets Consider a system of particles whose single-particle states are classified by a discrete index i = 1, 2, 3,...,. To explain what the index i implies, let us recall that, according to wave mechanics, the state of a single particle is represented by a wave function i//(x) which is normalizable: [d3jc|(K*)|2<^. (2.1;1) Therefore i//(x) cannot be a plane wave; it is a wave packet. Throughout this book, we use the word "wave packet" for any spatially localized wave function. The wave functions of electrons bound to lattice points in crystal are also regarded as wave packets. Any normalizable function (/f(jc) can be expanded as *(*)= 2 <»(*)> (2-i;2) where the functions gt(x) form a countable set {g,(x); / = 1,2,...} of orthonormalized functions: Jd3xg;(x)g,(x) = V (2.1;3) A well-known example of an orthonormalized complete set {g,} is given by the harmonic oscillator wave functions which are classified by the principal quantum numbers. 
The expansion coefficients ct in eq. (2.1;2) can depend on time. When the particle has spin, we need another index which classifies the spin states. When this happens, our convention is to 16 Thermo field dynamics and condensed states let the index i classify both the orthonormalized wave functions and the spin states. In this way we can classify single particle states by the discrete index i. Classification of many-particle states and number series Let us now consider a state of a many body system. We denote by nt the number of particles occupying the ith state introduced above. Then, a state of the many-body system is identified when we specify n, for all i. This state is denoted by |ni, n2,...). Assembling all of these states, we construct the set {|ni, n2,...)}. When nt is permitted to take any non- negative integer number, the particles are called bosons. When nt = 0 or 1, they are called fermions. Note that these are not the only possible types of particle allowed within the framework of quantum field theory. Particles different from bosons or fermions are said to obey "parastatistics" and have received considerable attention. However, in this book, we consider only bosons and fermions. Annihilation and creation operators for bosons Let us first consider a system of bosons, and introduce the annihilation operator a, and the creation operator a\ by ajtti,..., n„ ...) = n}n\rii,..., nt — 1,...), (2.1;4) a[|tti,..., n„ ...) = (n, + l)1/2|fti,..., nt + 1,...), (2.1;5) respectively. We can move from any state to another state in the set {|tti, n2, • • •)} by means of repeated operations of the creation and annihilation operators. Eqs. (2.1;4) and (2.1;5) lead to AT,|tti,..., n„ ...) = nt\rii,..., nt...), (2.1 ;6) where N, is denned by N, = a]at (no summation over i). (2.1;7) This operator is called the ith number operator and Quantum theory for many body systems 17 N = 2>, (2.1;8) i is called the total number operator. Eqs. (2.1;4) and (2.1;5) lead to the relations: [ai9 a)]\nh n2,...) = dij\nh n2,...), (2.1;9) [ah aj]\nh n2,...} = 0 , (2.1 ;10) [a]9a)]\nun29...) = 0. (2.1;11) Annihilation and creation operators for fermions Let us now turn our attention to a system of fermions. The annihilation and creation operators are introduced by the following relations: ■ )=(0 for nt = 0 u '''9 " " *" \rj(nu ••-, Wi-i)|wi,..., n, — 1,...) for n, = 1, (2.1; 12) t. v f0 lr/(ni,..., nj_i)|ni,..., nt + 1,... for nt = 1 ) for nx; = 0 . (2.1;13) Our choice for the phase factor r/(ni,..., n,_i) is 7/(^,...,^)=(-1)2^. (2.1;14) This choice of the phase factor simplifies the operation of [a;, a)]+, which is defined by means of the notation [A,B]+ = AB + BA. (2.1;15) Indeed, choosing i >/ for definiteness, we obtain from eqs. (2.1; 12) and (2.1;13): _ f(—1)^^1,..., n7 + 1,..., n, — 1,...) for n7 = 0, n, = 1 lO otherwise , (2.1;16) 18 Thermo field dynamics and condensed states and = f(_VFWu..., n7 + 1,..., n, - 1,...) for n, = 0, nt = 1 ^ 1 10 otherwise , ' where M= 2 nt. (2.1;18) By considering the cases / < j and / = / in a similar manner, we can derive [ah a)]+\nh n2,...) = 8y\nl9 n2,...), (2.1;19) [ah aj]+\nh n2, ...)=0, (2.1;20) [a], aj]+|ni, n2, ...)=0, (2.1;21) for any choice of i and y. The operator N, = a[a, (no summation over i) (2.1;22) is called the ith number operator. Its operation is .)• (2.1;23) The total number operator is denned by N=*ZNt. (2.1 ;24) 2.2. 
Unitarily inequivalent representations Non-countability of the set {|tti,...)} A significant feature of the set {|ni, n2,...)} is its non-countability. This non-countability is easiest to see in the case of a fermion system, in which n, = 0 or 1. Using the binary number system, we consider the set of numbers {A = 0, n\n2... n,...} in which n, = 0 or 1. This set of numbers Quantum theory for many body systems 19 covers the interval (0,1) of the real line. This set of numbers has a one-to-one correspondence with the set {|m, n2,..., ty,...)}. We thus conclude that the set {|ni, n2,..., n„ ...)} is not countable. It is obvious that this conclusion also holds true in the case of a boson system. Unitarily inequivalent representations Since the set {|ni, n2,...)} is not countable, we cannot use this as the base of a separable* Hilbert space. To construct a separable Hilbert space by means of members of the set {|ni, n2,...)}, we should select a countable subset for the base of the Hilbert space. However, there are infinitely many ways of selecting countable subsets. When two different subsets can be used as base of representations for the operators (ah a J; i = 1, 2,...), these two representations are unitarily inequivalent to each other in the sense that a vector of one representation is not a superposition of base vectors of another representation. This creates a very deep difference between the situation in ordinary quantum mechanics and the one in quantum field theory. In quantum mechanics which is the quantum theory for a finite number of canonical variables, we do not have to worry about the choice of representations for canonical variables, because all possible representations are unitarily equivalent [1]. This situation disappears in the case of quantum field theory **. This has far reaching implications, as we shall see in later sections. 2.3. The Fock space The [0]-set and the vacuum At first glance it seems probable that the set {|ni, n2,...)} is unnecessarily large for the description of nature. After all, in all experiments, only a finite number of quanta is excited, although this number * A space ffl is said to be "separable", when it contains a countable basis {£„} such that any vector £ in$? can be approximated by a linear combination of £„ (i.e. 2 c„£„) to any accuracy. That is, for every £ in $? and any e >0 there exists a sequence {c„} such that |£ — 2„ Cninl < e for arbitrary e. ** Problems of unitarily inequivalent representations of canonical commutators appeared in the van-Hove model [2]. A detailed mathematical analysis of these problems was first made by Friedrichs [3]. Further analyses were made by Wightman and others [4]. 20 Thermo field dynamics and condensed states can be arbitrarily large. Therefore, we shall assume that the following subset is sufficient for the description of physical processes: [0]-set = | \nu n2,...); 2 n* = finite \. (2.3;1) This set contains the state of no particles (n, = 0 for all i). This state is called the vacuum and is denoted by |0): |0>=|0,0,...) . (2.3;2) On the other hand, it does not contain many states which were in the original set {|ni, n2,...)}. For example, the state with n, = 1 for all i is clearly not contained in the [0]-set. Countability of the \G\-set Let us prove that the [0]-set is a countable set. Since 2 n, = finite, being given any member of the [0]-set, we find an integer m such that n, = 0 for i>m and nw^0. We can also assign to every member of the [0]-set a second integer s through 2 nt = s. 
For each value of the integer ras, the corresponding members from the [0]-set are finite in number, and therefore can be ordered; a general member from the [0]-set can then be represented as & {a = 1,2,...), (2.3;3) where, as a increases, ms either remains unchanged or increases. We have thus proved the countability of the [0]-set. The [0]-set is now given by {&; a = 1,2,...}. Construction of the \ti\-set on the vacuum The vectors in the [0]-set are constructed by repeated application of the creation operators a] on the vacuum state. In the case of boson systems we write the vector \r%\, n2,...) as k, n2,...) = n (m!)-1/2(«!)"'|0> . (2.3;4) In the case of fermion systems k, «2,...>=IT «!l°>. (2-3;5) i where II' means that i covers only those numbers for which nx■ = 1. Furthermore, the operators a\ stand from left to right according to increasing order of i. Inner products Let us now introduce the conjugate vectors (ni, n2,.. .| and assume that the inner products of vectors and conjugate vectors are defined. The inner products are denoted by (n[, n2,.. \r%\, n2,...). The conjugate vector of the vacuum is denoted by (0|, and we assume that <0|0>=1. (2.3;6) We define the operation of a] and at in such a way that a, becomes the hermitian conjugate (or adjoint) of a}. Then eqs. (2.1;4) and (2.1;5) leads to (ni,..., ft,,.. \a\ = n}/2(nu ..., n, - 1,.. .| (2.3;7) and (ni,..., n„ .. .\at = (ni + l)1/2(ni,..., nt + 1,. . .| (2.3;8) for the boson systems. Then we have <m, n2,.. .| = <o| n (w,!)"ll\<*iYl • (2.3;9) i Note that eqs. (2.1;4) and (2.3;7) give 0:,10) = 0, (2.3; 10) <0|a! = 0. (2.3;11) Furthermore, eqs. (2.1;4) and (2.1;5) lead to 22 Thermo field dynamics and condensed states ]1 (1M, !)(a,)"-(a!)ni|0) = |0> . (2.3;12) i Use of eqs. (2.3;10-12) leads to <ni, n2,.. .|ni, n2,...) = n ^'«> (2.3; 13) i which implies that the [0]-set is an orthonormalized set. This can be proved also in the case of a fermion system, in which we have <ni, n2,.. .| = <0| IT «* • (2.3; 14) i Here the operators a, stand from right to left according to increasing order of i. When we use the notation £a in eq. (2.3 ;3), the orthonormalization relation eq. (2.3;13) reads simply as (&•&)= So*. (2.3;15) Now consider two vectors b — 2j Ca*s,a j b — 2^ ^<*ba • a a Then, the inner product (£ • £') is defined by (f-f)=22 c'ac'b(?a • &), (2.3;16) a 6 = 2 cki . (2.3;17) a The norm of a vector £ is defined by (£, £)1/2 and will be denoted by |£|. Fock space * The linear space denned by *[«] = f f = 2 c*&; 2 k«l2 = finite} (2-3;18) is separable [5]. This Hilbert space is called the Fock space [6]. Here f = S"=i cflffl is defined as the limit (N-»o°) of the vector series £N = Quantum theory for many body systems 23 E^=i cfl^, which is a Cauchy sequence when ££=1 \ca\2 is finite. In other words, |§v — £/v'|2 can be made smaller than arbitrarily small e by choosing suitably large N and N'. As is clear from eqs. (2.3;4) and (2.3;5), application of all possible polynomials in a] on the vacuum yields the set D = JS c<£a; N = finite integers} (2.3; 19) which is dense [5]* in X[a]. It is for this reason that the process followed in the construction of ffl[a] is referred to as "building the space by cyclic operations of creation operators on the vacuum". A detailed mathematical account of the construction of the Hilbert space is beyond the scope of this book. The interested reader is referred to the literature [4, 5]. 
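As a finite-dimensional sketch of the operators and states just constructed (the truncation N below is an arbitrary illustrative choice; a genuine boson mode needs the full infinite ladder, while a fermion mode is represented exactly), a single boson mode and a single fermion mode can be written as matrices and checked against eqs. (2.1;4), (2.1;5), (2.1;19) and the orthonormality (2.3;13) of the number states built by cyclic operation of the creation operator on the vacuum:

```python
import numpy as np

# One boson mode, truncated at N - 1 quanta (the truncation is the price of
# representing an infinite-dimensional ladder by finite matrices).
N = 12
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # a|n>  = sqrt(n)   |n-1>, eq. (2.1;4)
adag = a.T                                  # a+|n> = sqrt(n+1) |n+1>, eq. (2.1;5)
number_op = adag @ a                        # N_i = a_i+ a_i, eq. (2.1;7)

comm = a @ adag - adag @ a                  # [a, a+]; identity away from the cutoff
print("boson commutator:", np.allclose(comm[:-1, :-1], np.eye(N - 1)))

# Number states built by cyclic operation of a+ on the vacuum, eq. (2.3;4),
# and their orthonormality, eq. (2.3;13).
vac = np.zeros(N); vac[0] = 1.0
states = [vac]
for n in range(1, N):
    states.append(adag @ states[-1] / np.sqrt(n))
S = np.array(states)
print("orthonormal number states:", np.allclose(S @ S.T, np.eye(N)))

# One fermion mode: n = 0 or 1, so 2 x 2 matrices represent it exactly.
f = np.array([[0.0, 1.0],
              [0.0, 0.0]])                  # annihilation operator, eq. (2.1;12)
fdag = f.T
print("fermion anticommutator:", np.allclose(f @ fdag + fdag @ f, np.eye(2)))  # (2.1;19)
print("Pauli principle (a+)^2 = 0:", np.allclose(fdag @ fdag, 0.0))
```

The fermion algebra closes exactly in two dimensions, which is why the occupation numbers of section 2.2 can be read as binary digits.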
The vacuum in Fock space It is obvious that the vacuum |0) belongs to the Fock space ffl[a] and satisfies the condition 0:,10) = 0 for all/. (2.3 ;20) Furthermore, when a vector £ in ffl[a] satisfies the condition ag = 0 for all i, then £ is related to the vacuum by £ = c|0) where c is a c-number. The commutation relations Eqs. (2.1;9-11) and (2.1;19-21) lead to <a|[a,aj]±|ft)=^<a|ft), (2.3;21) (a\[ah aj\±\b) = (a\[a], a)]±\b) = 0 , (2.3;22) for any two vectors \a) and \b) which belong to the dense set D. Here use * A set D is said to be "dense" in a normed vector space #? when for every vector £ in #? and any given e > 0 there exists a vector £ in D such that |£ - £| < e. Intuitively D "almost fills" ^f. For example the set of rational numbers is dense in the set of real numbers. 24 Thermo field dynamics and condensed states was made of the notations: [A, B]± = AB ± BA . (2.3;23) We call [A, f?]_ and [A, f?]+ a commutator and an anticommutator, respectively. In eqs. (2.3;21) and (2.3;22), the commutators are for the case of a boson system while the anticommutators are for the case of a fermion system. These relations imply that ffl[a] is a representation of operators ah the algebraic properties of which are denned by Ka}]±=$, (2.3 ;24) [a„ a7]± = [a/, a)]± = 0 . (2.3;25) Creation and annihilation operators for a particle with an arbitrary wave packet We now recall that the existence of the discrete index i is due to the fact that the spatial distribution of a single particle state is represented by a wave packet; i.e. a square-integrable function in configuration space. As was shown in eq. (2.1;2) such wave packets can always be expressed in terms of a countable set of orthonormalized functions g,(x): (ft, ft) - \ d3xg:(x)g,(x) = Stj. (2.3;26) Then the inner product of such wave packets is (f, g) - j dV(x)g(x), {23-21) = S^i, (2.3;28) i when f(x) and g(x) are given by /(*) = 2 Qg,(x), <?.3;29) i S(*)=E«te(*). (2.3;30) Quantum theory for many body systems 25 Since the one particle state associated with the wave packet gt(x) is represented, in the Fock space ffl[a], by the vector a]\0), the state vector for the one particle state associated with the wave packet f(x) should be a superposition, i.e. 2 c,o[|0). We therefore define the creation operator for a single particle with spatial distribution f(x) by a}: a} = 2 cfiL\. (2.3;31) The hermitian conjugate of a) gives the annihilation operator «/ = 2cia«- (2.3 ;32) Then eqs. (2.3;24) and (2.3;25) lead to K«J]± = (/;*) (2.3;33) [a* af]± = [a/, aj]± = 0 . (2.3;34) The Fourier representation Since the wave packet functions are Fourier-transformable, we can formulate the theory of Fock space in the terminology of the Fourier representation. As will be seen in later chapters, such a formulation is extremely useful in practical computations. We write the Fourier amplitude of g,(x) by gt(k): giW=|l|g'Weib- (23;35) Then the orth©normalization relation (2.3 ;26) leads to (gi, gj) = \ (|^3 g'i(k)gj(k) = Sv. (2.3;36) Similarly, introducing the Fourier transforms of f(x) and g(x): d3fc (2tt)- f(k)e,k* (2.3 ;37) 26 Thermo field dynamics and condensed states *<«) = / tffe g(k)e**, (2.3;38) (2tt) we can write the inner product as if,g) = \^fik)g(k), (2.3,39) = X c]d, ■ (2.3;40) From the viewpoint of Fourier analysis, it is convenient to use plane waves exp(ifcx) as our "basis" in wave packet space. 
On the other hand, plane waves cannot represent the spatial distribution of single particle states, since they are not normalizable (they also do not form a countable set). The way to reconcile these two requirements is through the introduction of the ^-function normalization. This is done as follows (for simplicity, we ignore the spin degrees of freedom). Introduce operators a(k) through the condition [a(k\ a\l)]± = 8(k -1) (2.3;41) [a(k\ a(l)]± = [a\k\ a\l)]± = 0 . (2.3;42) Since the ^-function is denned only in the sense of distribution theory, these relations are to be understood in the same sense; /(^/(^p f(k)g«)[a(k), a\l)]± d3fc f(k)g(k) = (f, g), etc., (2.3;43) (2tt) where f(k) and g(l) are suitable test functions. This leads to the identification a} = (2tt)-3/2 J d3kf(k)a\k), (2.3;44) which gives [af, 4]± =(f,g), (2.3;45) [«/. «*]± = [«/, 4)± = 0 • (2.3;46) Quantum theory for many body systems 27 These relations agree with eqs. (2.3;33) and (2.3;34). We therefore regard a} as the creation operator for a particle with spatial distribution f(x). Then, comparing eqs. (2.3;31) with (2.3;44), we find that a\ = (2tt)-3/2 j d3kgi(k)a\k) . (2.3;47) This is consistent with the original definition of a] which states that a] is the creation operator for a particle with spatial distribution g,(x). It should be noted that a(k) and af(k) are not defined on vectors of the Fock space: when they act on Fock space vectors, they produce states of infinite norm. For example, consider the vector \k) = af(k)\0). This leads to (k\k)=8°\0) which is infinite. This expresses the 5-function normalization referred to above. The operators a(k) are therefore not realized in our Hilbert space; their usefulness consists of their connection with the Fourier representation of a\ which was given in eq. (2.3;47). A concluding remark In conclusion, let us point out that the construction of the Fock space in this section was purely empirical; we never had to talk about the specific dynamic behaviour of the system under consideration. The last three sections can be thought of as setting up a quantum theoretical language in terms of which the system can be described. This "language" i.e. the Fock space, can therefore be used in any situation where we deal with quantum systems whose states are specified by number series. This is why the general framework constructed above can be used in such diverse areas as solid state physics and high energy physics. 2.4. Some examples of unitarily inequivalent representations We have seen in previous sections that, although all possible |tti, tt2, • • .)-states form a non-countable set, we can use the [0]-set for the base of our separable Hilbert space. A Hilbert space of this kind is called a Fock space. This space was constructed by cyclic operation of a] on the vacuum. Therefore, the algebraic relations (2.3;24-25) by themselves cannot determine the representations of a, uniquely even up to a unitarily equivalence; we also need to specify the vacuum state. The problem of choice of a representation of the algebraic relations (2.3 ;24—25) is a very complicated one. 28 Thermo field dynamics and condensed states The Bogoliubov transformation of boson operators To see an explicit example of unitarily inequivalent representations, let us consider two sets of boson annihilation operators, a(k) and p(k). Following the consideration in the previous three sections, we construct the [0]-set and build the Fock space which is denoted by !%?(a, /3). 
The vacuum |0) satisfies a(k)\0) = 0 and j8(*)|0> = 0. (2.4;1) The algebraic relations for these operators are [a(k),a\l)] = 8(k-l)9 (2.4;2) [P(k),p\l)] = 8(k-l). (2.4;3) Other commutators vanish. Let us now introduce the operators, a(k) and b(k) through the following relations: a(k) = cka(k)-dkp\-k), (2.4;4) b(k) = ck/3(k)-dka\-k). (2.4;5) Here the c-number coefficients are real functions of k2 and satisfy the relation cl -d\=l. (2.4;6) This relation guarantees that a(k) and b(k) satisfy the same algebraic relations as the ones for a(k) and P(k): [a(k),a\l)] = 8(k-l), (2.4;7) [b(k),b\l)] = 8(k-l). (2.4;8) Other commutators vanish. In one word, the transformation defined by eqs. (2.4;4) and (2.4;5) is canonical. This transformation is called a Bogoliubov transformation [7]. Following the argument in the previous section we introduce the wave Quantum theory for many body systems 29 packet operators at and bt: a] = (2tt)-3/2 f d?kgt(k)a\k)9 etc. (2.4;9) The action of these operators on vectors in ffl[a, /3] is defined through the relations (2.4;4) and (2.4;5). Let us simplify the situation by assuming that ck is positive. Then eq. (2.4;6) implies that we can write ck = cosh Ok, dk = sinh 6k . (2.4; 10) Let us now introduce the operator G(0) = exp[A(0)] (2.4;11) with A{6)= jd3kOk[c*(k)P(-k)-p\-k)a\k)], (2.4;12) which gives [a(k\A(e)] = -W\-k), (2.4;13) [p\-k\A(0)] = -0ka(k). (2.4;14) Repeated use of these relations leads to G-\6)a(k)G(6) = a(k) cosh 0k - /3f(-k) sinh 0k (2.4;15) which together with eq. (2.4;4) shows a(k) = G-\e)a{k)G{6). (2.4;16) Similarly we can show that b(k) = G-\0)l3(k)G(0). (2.4;17) This might suggest that the Bogoliubov transformation, eqs. (2.4;4) and 30 Thermo field dynamics and condensed states (2.4;5), is unitary. To inspect this point more carefully, we calculate the matrix elements of G(0). Let us begin with the vacuum expectation of G-\ey U{6) = (Q\G-\e)\0). (2.4;18) To calculate this, we change the parameter 6{k) by 6(k)+ s8(k -1) with any given /. The change of /o(0) due to this change of parameter is denoted by 8f(6; I). Then the functional derivative is defined by ^r/o(0) = Km 18/(6,1). (2.4; 19) OUl e-»0 £ Now eq. (2.4; 11) gives S 80i also /o(0) = -(0\a(l)l3(-l)G-\e)\0), (2.4;20) = <O|G-1(0)^t(-/)at(/)|O). (2.4;21) Since G (6) = G(— 6), we can calculate as a(l)p(-l)G-\6) = 0^(0)0^(-^(0/3(-/)0(-0) = G_1(0)[a(O cosh 0, + 0\-l) sinh 0,] X \fi(-1) cosh 6, + a\l) sinh 0,], (2.4;22) where use was made of the relation (2.4; 15) with 6k being replaced by -0k. We thus obtain from eq. (2.4;20): 8 /o(0) = -5(3)(0) sinh 0, cosh 0/,,(0) 50, -sinh2 0,<O|G-\0)p\-l)a\l)\0). (2.4;23) Then, eq. (2.4;21) leads to 8 86, /o(0) = - 5(3)(0) tanh 0/o(0). (2.4;24) Quantum theory for many body systems 31 Here 5(3)(0) means 8°\0)=\im8(k). (2.4;25) Since 8dkl8di = 8(k - /), the solution of eq. (2.4;24) subject to /0(0) = 1 is /o(0) = exp(- 8(3\0) [ d3fc log cosh 0k \ . (2.4;26) Since A(6) in eq. (2.4; 12) contains the pair operators (a/3 or )8tat) only, we need to study only the following matrix elements: W; I) = {0\[a(!)p(-l)]nG-\e)\0). (2.4;27) This gives ^-/„(0; l)=-(0\[a(l)p(-l)Y[a(l)p(-l)-pX-l)aXl)]G-l(e)\0) V = -fn+1(6; I) + n2[5(3)(O)]2/n_!(0; /) . (2.4;28) The solution is /n(0; I) = n\[8(3\0)]n exp(-8O)(0) | d3fc log cosh 0k Vtanh 0,)n . (2.4;29) This result implies that 10))-0-^)10) (2.4;30) = /o(0) exp(s(3)(0) | dWflOjSV*) tanh ft)|0>, (2.4;31) where /o(0) was given in eq. (2.4;26). 
If we now use the fact that £(3)(0) is infinite, [and therefore that /o(0) = 0], we see from eq. (2.4;31) that, when we expand |0)) in terms of the base vectors of ffl[a, /3], every expansion coefficient vanishes, i.e. |0)) does not belong to ^C\a, /3]. In other words, G_1(0) does not map ^C\a, /3] on itself. 32 Thermo field dynamics and condensed states Note that eq. (2.4; 16) leads to a(*)|0» = 0 and fc(*)|0» = 0 , (2.4;32) implying that |0)) is the vacuum associated with a(k) and b(k). Suppose now that we construct- the [0]-set by regarding a] and bJ as creation operators and build a Fock space which is denoted by ffl[a,b]. In this Fock space, there is the vacuum state |0)) which satisfies eq. (2.4;32). The above consideration shows that this vacuum |0)) does not belong to !%?[a, /3]. Thus, ffl[a,b] and ffl[a, /3] are two unitarily inequivalent representations in the sense that there is a vector in ffl[a, b] which cannot be given by a superposition of base vectors of ffl[a, /3]. As a matter of fact, in the case under consideration, we can show that no vectors in ffl[a,b] can be linear superpositions of basic vectors of ffl[a, /3]. This situation is usually described by the intuitive expression that ffl[a, b] and ffl[a, /3] are orthogonal to each other. We can see the origin of this phenomenon by recalling the formula S(k) = (2tt)-3 [ d3x elkx , (2.4;33) so that intuitively 8(3\0) = (2tt)-3 x (volume of the system). (2.4;34) This might suggest that, in reality, the unitary inequivalence mentioned above may not happen because every system has a finite size. However, this point of view seems to be too optimistic. To consider a stationary system of finite size, w6 should seriously consider the effect of the boundary. As will be shown in later chapters, this boundary is maintained by some collective modes in the system and behaves as a macroscopic object with a surface singularity, which itself has an infinite number of degrees of freedom. In this sense, a stationary system with a natural (self-maintained) boundary is quite different from a system which is artificially confined in a box. As was pointed out in section 2.2, the origin of the appearance of many unitarily inequivalent representations lies in the fact that the set {|tti, n2,...)} is not countable. The above results do not mean that we cannot define the action of at and bt on the vectors in $?[«, /3]. Indeed, as was pointed out previously, Quantum theory for many body systems 33 the action of at and bt on the vectors in %£\a, p] is determined by the relations (2.4;4) and (2.4;5). What we have shown above is that the canonical transformation [eqs. (2.4;4) and (2.4;5)] is not unitarily imple- mentable. When our Hilbert space is the Fock space ffl[a, /3], at and bt cannot be called annihilation operators, because in$?[«, /3] there is no vacuum associated with at and bt. The BogoUubov transformation of fermion operators We can treat the case of fermions in a similar fashion. Consider two sets of fermion annihilation operators a(k) and P(k)\ [a(k),a\l)]+ = 8(k-l)9 (2.4;35) [fi(k),p\l)]+ = 8(k-l)9 (2.4;36) [«(*), P(l)]+ = [«(*),P\l)]+ = 0, etc. (2.4;37) Following the arguments of the previous sections, we build the Fock space %[a,/3]. The vacuum |0> satisfies a(*)|0> = 0 and £(*)|0> = 0. We then introduce the operators a(k) and b(k) through the relations: a{k) = a(k) cos 0k - p\-k) sin 0k, (2.4;38) b(k) = P(k) cos 0k + a\-k) sin 0k, (2.4;39) where 6k is a function of k2. This transformation is called the Bogoliubov transformation [7]. 
It is easy to see that a(k) and b(k) satisfy the same algebraic relations as the ones for a(k) and fi(k): [a(k)9 a\l)]+ = S(k - I), (2.4;40) [b(k)9 b\l)]+ = 8(k - I), (2.4;41) [a(k)9 b(l)]+ = [a(k)9 b\l)]+ = 0, etc. (2.4;42) We can show that a(k) = G-\0)a(k)G(0), (2.4;43) b(k)=G-\e)p(k)G(e)9 (2.4;44) 34 Thermo field dynamics and condensed states where G(0) = exp[-A(0)] (2.4;45) with A(0)= td?kek[a(k)p(rk)-p\-k)a\k)]. (2.4;46) Calculation shows that |0» - G-\0)\0) (2.4;47) = /o(0) exp(s(3)(0) J d3fc log{l + a^t^V*) tan 0fc})|O>. (2.4;48) Here /o(0) = exp(s(3)(0) [ d3fc log cos 0k) , (2.4;49) which vanishes because 5(3)(0) is infinite. Therefore, we see that in this case too the representations ffl[a, /3] and ffl[a,b] are unitarily in- equivalent to each other unless 6k = 0 for all k. Although eq. (2.4;47) leads to a(t)|0»=fc(t)|0» = 0, (2.4;50) the vector |0)) does not belong to !%[a, /3]. Therefore, a(k) and b(k) cannot be called the annihilation operators when our choice of Hilbert space is $?[«, /3]. However the action of at and bt on the vectors in ffl[a, /3] is determined by the relations (2.4;38) and (2.4;39). The boson field translation As a last example, let us consider a set of boson annihilation operators a(k) and introduce the operators a(k) by a(k) = a(k) + ck, (2.4;51) Quantum theory for many body systems 35 where the c-number ck is a function of k. This is also called the Bogoliubov transformation. Since the transformation (2.4;51) induces a translation of the boson operator by the c-number ck, it is frequently called a field translation. Since ck is a c-number function, the operators a(k) satisfy the boson commutation relations. Therefore, the field translation is canonical. It is easy to see that a(k) = G-\c)a(k)G(c), (2.4;52) where G(c) = exp(- [ d3k[c*ka(k) - cka\k)]\ . (2.4;53) Using the Baker-Hansdorff formula eAeB = exp{A + B+ |[A, B] + n[A, [A, B]] + • • •}, (2.4;54) we obtain G~\c) = exp(-|f d3fc|cfc|2) exp(- f d3kcka\k)\ exp( f d3fcc;«(ifc)) . (2.4;55) This leads to |0» - G-\c)\0) (2.4;56) = exp(-|| d3fc|cfc|2) exp(- J dVfW)|0). (2.4;57) If it happens that |d3fc|cfc|2=oo? (2.4;58) the representations ^C\a\ and ffl[a] are unitarily inequivalent to each other. Although the vector |0)) satisfies a(k)\0)) = 0, it does not belong to 5if[a]. Therefore, when our choice of the Fock space is ffl[a], a(k) cannot 36 Thermo field dynamics and condensed states be called the annihilation operator. Note that eq. (2.4;58) happens for instance, when Ck = cd(k). The boson condensation Let us consider the structure of |0)). Eq. (2.4;57) shows that, formally at least, it corresponds to a superposition of states of arbitrary many a-bosons [the statement is only formal if eq. (2.4;58) is satisfied]. We can intuitively think of |0)) as a state where a-bosons are condensed; this phenomenon is called boson condensation. We say that the field translation (2.4;51) induces the boson condensation, even when eq. (2.4;58) holds true. The quantity (0\a\k)a(k)\0) = \ck\2 (2.4;59) is called the number of condensed bosons with momentum k. We can intuitively see why ffl[a] is unitarily inequivalent to ffl[a] when c(k) = c8(k). In this case, the spatial distribution of the condensed bosons is uniform (i.e. k = 0), and the total number of condensed bosons is | d3fc|c(fc)|2 = c28o)(0). (2.4;60) Considering eq. (2.4;34), we find that the density of the condensed boson is dB = ^ [ d3fc|c(fc)|2 = (2tt)-3c2 , (V: the volume), (2.4;61) which is finite even when V tends to infinity. 
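The vanishing overlaps behind the unitary inequivalence discussed in this section can be made concrete with discrete modes (all parameter values and truncations below are arbitrary illustrative choices, and the continuum factor δ³(0) is replaced by a count of modes). For a single boson pair the overlap between the two vacua is 1/cosh θ, for a single fermion pair it is cos θ, and for a field translation by c it is exp(−|c|²/2); a product of such factors over many modes tends to zero, which is the finite-mode counterpart of eqs. (2.4;26), (2.4;49) and (2.4;57):

```python
import numpy as np
from scipy.linalg import expm

theta, c0 = 0.8, 0.7          # Bogoliubov angle and field translation (illustrative)

# --- boson pair (alpha_k, beta_-k), truncated at nmax quanta per mode
nmax = 25
a1 = np.diag(np.sqrt(np.arange(1, nmax)), 1)
I = np.eye(nmax)
A, B = np.kron(a1, I), np.kron(I, a1)
K = theta * (B.T @ A.T - A @ B)            # discrete analogue of -A(theta), eq. (2.4;12)
vac = np.zeros(nmax * nmax); vac[0] = 1.0
print("boson pair overlap  :", vac @ expm(K) @ vac, "  1/cosh(theta) =", 1/np.cosh(theta))

# --- fermion pair (Jordan-Wigner signs for two modes)
f = np.array([[0.0, 1.0], [0.0, 0.0]]); Z = np.diag([1.0, -1.0]); I2 = np.eye(2)
Af, Bf = np.kron(f, I2), np.kron(Z, f)
Kf = theta * (Bf.T @ Af.T - Af @ Bf)       # overlap between the two fermion vacua
vac4 = np.array([1.0, 0.0, 0.0, 0.0])
print("fermion pair overlap:", vac4 @ expm(Kf) @ vac4, "  cos(theta) =", np.cos(theta))

# --- field translation a -> a + c for a single mode, eqs. (2.4;51)-(2.4;53), real c
Ginv = expm(c0 * (a1 - a1.T))
vac1 = np.zeros(nmax); vac1[0] = 1.0
print("translated overlap  :", vac1 @ Ginv @ vac1, "  exp(-c^2/2) =", np.exp(-0.5 * c0**2))

# --- many modes: the single-mode overlaps multiply, so with M equal modes
for M in (10, 100, 1000):
    print(M, (1/np.cosh(theta))**M, np.cos(theta)**M, np.exp(-0.5 * c0**2)**M)
# All three products tend to zero as the number of modes grows, mirroring the role
# of the infinite factor delta^3(0) in the text.
```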
Intuitively speaking, in order for the boson condensation to create a locally observable effect, the density dB, of the condensed bosons cannot be zero. When the spatial distribution of the condensed bosons is uniform, the total number of condensed bosons is VdB which becomes infinite when V is infinite. Since vectors in the Fock space ffl[a] contain practically no states with an infinite number of a-bosons, we intuitively understand why |0)) cannot belong to X[a], It should be noted that action of a(k) on the vectors in ffl[a] is determined by the relation (2.4;51) even when eq. (2.4;58) holds true. Quantum theory for many body systems 37 We finally note also that the field translation of the form (2.4;51) induces the so-called coherent state [8]. 2.5. The physical particle representation and the dynamical map The Fock space of physical particles The existence of infinitely many representations which are unitarily inequivalent to each other leads us to the question of which representation is the right choice for our Fock space. Adopting the statement in quantum theory that the Hilbert space should contain all of the observable states, we require that the Hilbert space is the Fock space associated with the particles which appear in observations. These particles are called the physical particles or physical quanta. Therefore, we choose the Fock space which is built by cyclic operations of creation operators of the physical particles (or physical quanta) on the physical vacuum. In one words, we use the physical particle representation. Total energy and physical particle energy To see how the physical particles behave, we look a little closer at the measurements of energies in particle reactions. In any such reaction, there are a certain number of particles, say A and B, colliding and a certain number of particles Ci,.. ., cN leaving the region of collision (fig. 2.1). The total energy of the colliding system is determined by measuring the energies of the incoming particles A and B before they enter the collision region; the sum of the energies of A and B is the total energy. If we also measure the energy of the outgoing particles c\, c2,..., cN after the collision, we find that the sum of energies of c\, c2,.. ., cN is equal to the total energy defined above. If we want to regard this experimental 38 Thermo field dynamics and condensed states result as the law of energy conservation, we should define the energy as the sum of the energies of each physical particle in the state. This has the surprising implication that particle reactions occur without consuming interaction energy. In chapter 3, we shall explain how this is possible. In solid state physics, the energy spectrum of the physical quanta can, in principle, be determined by means of certain external stimuli. The total energy measured in this way appears to be equal to the sum of the energies of the number of quanta which are excited. Summarizing, we require that the energy of a quantum system is the sum of the energies of each of the physical particles. The free Hamiltonian and momentum operator for physical particles Let ha)(k) denote the energy of a single physical particle with momentum hk. Then, the above requirement means HffiL\kx)oL\k^ . . . a\kn)\0) = (h 2 coik^a'ikiWfa) . . . a\kn)\0) , (2.5;1) which leads to [Ho, a\kt)]a\kd .. . a\kn)\0) = huik^aXk,) .. . a\kn)\0). (2.5;2) Here H0 is the operator whose eigenvalues are the energy of the system. 
Since the relation (2.5;2) is true for any n, we see that [Ho, a\k)] = hco(k)af(k) . (2.5;3) Since the energy is real, H0 is hermitian. Then, hermitian conjugation of eq. (2.5;3) gives [Ho, a(k)] = -ha>(k)a(k) . (2.5;4) Eqs. (2.5;3), (2.5;4) and (2.5;1) lead to Ho = h [ d3k(o(k)a\k)a(k). (2.5;5) Quantum theory for many body systems 39 The operator H0 is called the free Hamiltonian of the physical particles. In terms of wave packet creation operators a} in eq. (2.3;44), we have [Ho, a}] = h(27r)-3/2 f d3kco(k)f(k)a\k). (2.5;6) The action of H0 on wave packet states is understood by the following relations: H0a}\0) = /K2tt)-3/2 J d3ka>(k)f(k)a\k)\0), (2.5;7) H0a}al\0) = /i(27r)-3 f d3fc f d3l[co(k) + a>(1)]g(k)f(l)a\k)a\l)\0) , (2.5;8) etc. In eq. (2.5;8), the linear nature of the energy is explicit. Since H0 does not have any interaction terms, the particles created by af(k) are said to be free. By similar arguments, we find the momentum operator P: P = h [ d3kka\k)a(k). (2.5;9) The above consideration shows that since all of the observable states of particles are wave packet states, there are no eigenstates of Ho and P in our Hilbert space. The observable energy of the particle in the state |ag) = c4|0) is given, not by any eigenvalue of H0, but by the expectation value (ag\Ho\ag). However, this does not create any difficulty in the use of the energy concept, because, at least in principle, we can make the expectation value as close to an eigenvalue of H0 as we want by preparing a wave packet which is very close to a plane wave. A similar argument can be applied to the concept of momentum. The Heisenberg fields and the Heisenberg equation Although the observable energies of the system are determined by the eigenvalues of the free Hamiltonian Ho, the real Hamiltonian of the system cannot be H0 when the particles perform certain reactions. We 40 Thermo field dynamics and condensed states therefore need certain basic entities which determine the dynamics of the system. Since we are considering a quantum system, these basic entities are certain operators which change in our space-time world. Let if/(x) stand for these operators. These operators are usually called the Heisenberg fields. The dynamics of the system is then described by the spatial and temporal variation of the Heisenberg fields. The variation of il/(x) is determined by a certain equation. This equation is called the Heisenberg equation. When we identify a system under study, all of the theoretical analysis starts with the knowledge of the Heisenberg fields and the Heisenberg equation. We require that any operator which appears in the analysis of the system should be a linear combination of products of the Heisenberg field. The Hamiltonian for the Heisenberg equation We require that all of the space-time transformations are generated by certain operators. In particular, this requires the existence of the Hamiltonian which generates the time translation. This Hamiltonian will be denoted by H. Then, the canonical equation, ih(d/dt)il/ = [if/, H], is the Heisenberg equation. Here, to calculate the commutator [¢, H], we need the assumption that the Heisenberg fields satisfy the equal-time canonical commutation relations. 
The physical particle representation for Heisenberg fields When this consideration is combined with the statement that our Hilbert space is the Fock space of the physical free particles, the Heisenberg equation should be solved in such a way that the Heisenberg fields are realized in the Fock space of the physical free particles. In other words, we solve the Heisenberg equation in such a way that the Heisenberg fields are expressed in terms of certain free particles so that all of the matrix elements, (a\\Jj{x)\b), for vectors \a) and \b) belonging to the Fock space of the free particles are determined; these free fields are then regarded as the physical free particles, and the Fock space is the physical particle representation for the Heisenberg fields. In this way, we can determine the Fock space ffl[a] by the requirement that the Heisenberg fields are expressed in terms of annihilation and creation operators of certain free particles, which act as the physical free particles. In the next Quantum theory for many body systems 41 chapter, we will see that this requirement leads to (a\H\b) = (a\H0\b) + W0(a\b), (2.5;10) where Wo is a c-number, and \a) and \b) stand for vectors in ffl[a] of the physical particle representation. Note that the condition (2.5; 10) is not as strong as H = H0 + Wo, because it requires H = Hq+ Wo only when H is realized in the particular representation which is the Fock space$?[«]. The relations among matrix elements associated with a specific representation are called weak relations. Therefore, eq. (2.5; 10) is a weak condition, which can be used as the criterion which determines ffl[a]. The self-consistent method As soon as we are given a Hamiltonian H we find ourselves in a dilemma; although the calculation of the matrix elements (a\H\b) requires knowledge of the Fock space ffl[a] to which \a) and \b) belong, we do not know anything about the physical particles (and therefore, about ffl[a]) until the whole problem is solved. The root of this dilemma is the existence of an infinite number of irreducible representations which are unitarily inequivalent to each other; we do not have such a dilemma for the quantum mechanics of a finite number of canonical variables. The above dilemma is the kind of problem which is usually resolved by a self-consistent consideration. In this case, we are concerned with the self-consistency between the Hamiltonian H and the choice of the Fock space of physical particles. Briefly speaking, the self-consistent approach proceeds as follows: we first prepare a set of candidates for the physical fields by appealing to various physical considerations. These candidates are classified by certain parameters. Then we construct ffl[a] and determine the unknown parameters by the condition (2.5; 10). As an example, let us assume a Hamiltonian for the nucleon Heisenberg field. We then choose, as the initial set of physical particles, an isodoublet free Dirac field which is regarded as the physical nucleon. Leaving the mass of the physical nucleon undetermined, we try to solve eq. (2.5; 10) by using the canonical equal-time commutation relations. We may then find that whatever mass of the physical nucleon is chosen, eq. (2.5; 10) is not satisfied. We then introduce a new member in the set of physical fields, and then, we may find that eq. (2.5; 10) is now satisfied. This new member may turn out to 42 Thermo field dynamics and condensed states be the physical deuteron which is regarded as a composite particle. 
In chapter 3, we demonstrate the full course of the self-consistent method by means of a solvable model called the N0-model. It will be shown in chapter 6 that use of the canonical commutation relations and the so-called Ward-Takahashi relations simplify considerably the self-consistent calculation in many cases. Some simple examples of self-consistent calculations To catch a glimpse of the self-consistent approach, we now study a Hamiltonian which has a simple structure. We shall take a Hamiltonian of the form H = h [ d3k[ek{a\k)a(k) + b\k)b{k)} + vk{a(k)b(-k)+b\-k)a\k)}], (2.5;11) where a(k) and b(k) are the Heisenberg operators satisfying the boson commutation relations, and ek and vk are positive definite functions of k2 such that sk > vk. The spin wave in anti-ferromagnets is an example of this case [9]. Since eq. (2.5; 11) does not have the form of a free Hamiltonian, a(k) and b(k) are not the annihilation operators of physical particles. The Bogoliubov transformation, eqs. (2.4;4) and (2.4;5), with ck = cosh 6k and dk = sinh 6k brings the Hamiltonian (2.5; 11) into the form H = H0+ Wo (2.5;12) with H0=h J d3ka>k[a\k)a(k) +/3\k)/3(k)], (2.5;13) when we choose 6k as cosh 20k = ,2 Sk 2V/2, sinh 20k = ( 2 V\; 2 1/2. (2.5;14) \pk~Vk) \£k-Vk) Therefore a(k) and /3(k) are the annihilation operators of physical Quantum theory for many body systems 43 particles, and our Hilbert space is the Fock space ffl[a, /3]. The energy, ha)k, of physical particles and the vacuum state energy Wo are found to be <»k = (el-vl)m9 (2.5;15) Wo = hd(3\0) J d3k(cok - ek). (2.5;16) Note that although Wo is infinite, the energy density is finite when the integration in (2.5; 16) is finite. Here eq. (2.4;34) is considered. When a(k) and b(k) in eq. (2.5; 11) satisfy the fermion anticommutation relations (2.4;40-42), then the Bogoliubov transformation, eqs. (2.4;38) and (2.4;39), with cos 26k = /2 , „2\i/2 j sm 20* = ~/02 , „2\i/2 (2.5; 17) brings H into the form (2.5; 12) with cok = (e2k + vir\ (2.5;18) Wo = -hd(3\0) | d3k(cok - ek). (2.5;19) Note that, in this fermion case, we do not need the condition ek > vk. As a last example, let us consider the van Hove model [2]. In this model, the Hamiltonian is given as H = h | d3k[eka\k)a(k) + vk{a(k) + a\k)}] (2.5;20) with boson operators a(k). The c-numbers, ek and vk, are real functions of k2. The Bogoliubov transformation, which brings this Hamiltonian into the form (2.5;12), is the field translation (2.4;51), i.e. a(k) = a(k) + ck, with ck = —v\Jek. We find that o)k = ek, (2.5;21) Wo = -hd°\0) ( d3k £. (2.5;22) J €k In these cases we obtained the strong relation H = H0 + Wo rather than the weak relation (2.5;10), i.e. (a\H\b) = (a\H0\b) + Wo(a\b). This is due to the oversimplified nature of the models; H in eq. (2.5; 11) contains bilinear terms only, and H in eq. (2.5;20) contains only bilinear and linear terms. The dynamical map Let us now turn our attention to the time development of physical particle operators and Heisenberg operators. Since H0 in eq. (2.5;5) is the Hamiltonian for the physical particles, the time development of the annihilation operators of physical particles is given by a(k, t) = e^'a(fc) e"1^', (2.5;23) = a(k) exp[-ia)(k)t], (2.5;24) hermitian conjugation of which gives a\k, t) = Q^a\k) e-^><, (2.5;25) = a\k) exp[ico(k)t]. (2.5;26) Here H0 = H0/h. 
On the other hand, the time-development of the Heisenberg operators a(k, t) is determined by the Heisenberg equation which is the canonical equation ihfta(k,t)=[a(k,t\H], (2.5;27) or a(kj) = e™a(k)z-™ (2.5;28) with H = H/h. In the case of the boson model with the Hamiltonian (2.5; 11), the Bogoliubov transformation, eqs. (2.4;4) and (2.4;5), determines the time development of the Heisenberg operators as follows: a(k, t) = cosh 6k e-ita,k'a(k) - sinh 0k ei{°ktp\-k), (2.5;29) b(k, t) = cosh 6k e-la,ktp(k)- sinh 0k ela>kta\-k). (2.5;30) In the case of the fermion model with H in eq. (2.5;11) we find that a(k, t) = cos 0k e-^a(t) - sin 0k e1""/3f(-k), (2.5;31) Z>(*, 0 = cos 0k e-la,ktp(k) + sin 0k ei(°kta\-k). (2.5;32) In the case of the van Hove model (2.5;20), we have a(k, t) = e-la,kta(k) + ck. (2.5;33) The relations (2.5;29-33) determine all of the matrix elements of the Heisenberg operators [i.e. (a\a(k, t)\b) and (a\b(k, t)\b)] among the vectors in the physical particle Fock space !%?[a, /3]. For example, eq. (2.5;33) reads as (a\a(k, t)\b)=e-'1{°kt(a\a(k)\b) + ck(a\b). (2.5;34) In other words, relations (2.5;29-33) show how the Heisenberg operators are realized in the physical particle representation. These relations express the solutions of the Heisenberg equation (2.5;27) in terms of physical particle operators, a(k) and /3(k). An expression of this kind is called the dynamical map. In the simple examples considered here, the dynamical maps are linear [with the additional c-number ck in the case of eq. (2.5;33)], and are given by the strong relations (2.5;29-33). In more complicated cases, however, the dynamical map involves higher order products of the physical creation and annihilation operators and it is defined only through matrix elements (weak relations). We shall return to this point in later sections. In the above examples, the solutions of the Heisenberg equations expressed in terms of free fields naturally rewrite H into H0 + W0. This illustrates the general situation in which solutions of the Heisenberg equations satisfy the condition for the physical particle representation (2.5;10). Note that, in general, this condition H = Ho+ Wo is satisfied only in a weak sense, although in the simple examples above it is also a strong relation. The normal product When certain higher order products of physical particle operators appear in the dynamical map, these higher order products are arranged in the form of linear combinations of the so-called "normal products". The 46 Thermo field dynamics and condensed states normal product is a product of physical particle creation and annihilation operators af(k) and a(k), in which all of the creation operators stand on the left side of all of the annihilation operators; in this way all of the annihilation operators in the normal product annihilate the particles in the ket-state, while all of the creation operators create the particles in the bra-state. In other words, the calculation of matrix elements of normal products does not contain contractions of creation and annihilation operators. When a normal product contains n creation operators and m annihilation operators, its matrix elements correspond to the (m- particle-»n particle)-transition. This property of the normal products makes them convenient to use whenever we write the dynamical map. 
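The boson model (2.5;11) can also be checked numerically for a single discrete mode pair (the values of ε, v and the Fock-space truncation below are arbitrary illustrative choices with ε > v and ħ = 1): brute-force diagonalization of ε(a†a + b†b) + v(ab + b†a†) reproduces the physical-particle energy ω = (ε² − v²)^{1/2} of eq. (2.5;15) as the excitation gap and the per-mode constant ω − ε of eq. (2.5;16) as the ground-state energy.

```python
import numpy as np

nmax = 30                          # Fock-space truncation per mode (illustrative)
eps, v = 1.0, 0.6                  # model parameters, with eps > v
omega = np.sqrt(eps**2 - v**2)     # expected physical energy, eq. (2.5;15)

a1 = np.diag(np.sqrt(np.arange(1, nmax)), 1)
I = np.eye(nmax)
a, b = np.kron(a1, I), np.kron(I, a1)      # Heisenberg operators a(k), b(-k)
ad, bd = a.T, b.T

# Discrete-mode analogue of the Hamiltonian (2.5;11), with hbar = 1.
H = eps * (ad @ a + bd @ b) + v * (a @ b + bd @ ad)

E = np.linalg.eigvalsh(H)
print("ground-state energy:", E[0],        "   expected omega - eps =", omega - eps)
print("excitation gap     :", E[1] - E[0], "   expected omega       =", omega)
```

The spectrum above the shifted ground state is that of ω(α†α + β†β), i.e. the free form H0 + W0 produced by the Bogoliubov transformation, and the Heisenberg operators a and b are then given by the linear dynamical maps (2.5;29)-(2.5;30).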
A summary In this section we introduced the notions of physical particles, the physical particle representation, the dynamical map of the Heisenberg operators and the normal product of physical particle operators. 2.6. Free fields for physical particles Free physical fields In this section we introduce an operator which describes the space-time behaviour of physical particles. This operator is called the free physical field. Space-time variation of creation and annihilation operators To construct the free physical field, we study the time- and space- development of the annihilation and creation operators of physical particles. The time development was given in eqs. (2.5;24) and (2.5 ;26): a(k, t)= a(k) exp[-ia)(k)t], (2.6; 1) a\k, t) = a\k) exp[io)(k)t]. (2.6;2) Since the momentum operator P is the generator of spatial translations, Quantum theory for many body systems 47 the spatial behaviour of annihilation and creation operators is determined by T = P/h as a(k; x, t) = e'lTxa(k, t) eiT* , (2.6:3) = a(k) exp[i{ifcx - (o(k)t}], (2.6;4) a\k; x, 0 = e-lTV(fc, 0 eiT*, (2.6;5) = a\k) exp[-i{fcx - (o(k)t}]. (2.6;6) It should be noted that the annihilation operator has negative frequency while the creation operator has positive frequency. Requirements for the free physical fields We require that the free physical field is a linear superposition of a(k; x, t) and af(k; x, t) with coefficients which depend neither on x nor on t. Furthermore we require that, the free physical field is constructed in such a way that there exists a projection procedure which can project out each creation or annihilation operator. This requirement guarantees that the dynamical maps of Heisenberg operators can be expressed in terms of products of free physical fields, because the dynamical maps are originally expressed in terms of the creation and annihilation operators. At the end of this section, we are going to show how these requirements are satisfied. When the physical particle has spin or other degrees of freedom, we have several annihilation operators, i.e. a(r)(fc;x, t), r = 1, 2,.. .. When we have particles and holes (or anti-particles) we may use a(r) for particles and /3(r) for holes. The free field equations for physical fields Since the physical field <p° is a superposition of the plane waves in eqs. (2.6;4) and (2.6;6), it satisfies a homogeneous differential equation: \(d)<p°(x) = 0. (2.6;7) Here x = (x, t). When the physical particle has spin or other degrees of 48 Thermo field dynamics and condensed states freedom, <p°(x) is a column vector: <P°(x)=l ■ (2.6;8) and A(d) is a n x n matrix. For example, <p° for the electron is a spin doublet. In the column, eq. (2.6;8), we usually assemble only those fields which have the same energy spectrum co(k). In the following we assume that this convention is used. The classification of free field equations Eq. (2.6;7) is called a free field equation of type 1, when it can be reduced to the eigenvalue equation (iJpe(V)y = 0, (2.6;9) while it is called a type-2 equation, when the eigenvalue equation is (^+6,^))^ = 0. (2.6;10) In eqs. (2.6;9) and (2.6;10) the derivative operators e(V) and w(V) are defined as follows: e(y)eik'x=e(k)elkx , etc. (2.6;11) This convention for the definition of derivative operators will be used throughout this book. Note that, in the case of the type-1 equation, the energy co(k) is given by le(t)[. When e(k) is non-negative, <p° has only annihilation operators. 
When e(k) becomes negative for a certain domain of k, <p° also contains creation operators. The latter case occurs only when <p° is a fermion field. Negative values of e(k) correspond to hole states of fermions. In the case of the type-2 equation, <p° contains both annihilation and creation operators associated with each momentum. Quantum theory for many body systems 49 TTte divisor Eqs. (2.6;9) and (2.6; 10) imply that there should exist a differential operator d(d) which satisfies d(d)\(d) = ift-e(V) (2.6;12) for the type-1 equation, and d(d)\(d)=-(£2 + co2(y)) (2.6;13) for the type-2 equation. The operator d(d) is called the divisor* of eq. (2.6;7). We shall now show that this operator allows us to construct a Green's function for eq. (2.6;7). Let us note first that eqs. (2.6; 12) and (2.6; 13) lead to det[d(d)] det[A(d)] ^ 0 because the derivatives d/dt and V are independent of each other. Thus, A(d) and d(d) are not singular, and therefore possess inverses. Applying A_1(d) to both sides of eq. (2.6;12) and noting that the matrix on the right hand side is a multiple of the identity, we obtain \(d)d(d) = i~e(y) (2.6;14) for the type-1 equation. Similarly, we obtain A(5)d(5)=-(^+o>2(V)) (2.6;15) for the type-2 equation. Let us now denote by AG(x) any of the Green's functions of eqs. (2.6; 12) or (2.6; 13): (i^- e(V)) AG(x) = 8(x)8(t) (type 1), (2.6;16) - (jp+ «>2(V)) AG(x) = 8(x)8(t) (type 2). (2.6; 17) * The divisor of the free field equation was originally introduced in relativistic quantum field theory [10]. Here it is generalized to non-relativistic cases. 50 Thermo field dynamics and condensed states Then we obtain the important result that d(d) AG(x) is a Green's function for eq. (2.6;7): A(d) d(d) AG(x) = 8(x)8(t). (2.6;18) The hermitization matrix When we define A(p) by A(p) exp[i(px - p0t)] = A(d) exp[i(px - p0t)], (2.6;19) the relation \(p)u = 0 with a vector w, is the eigenvalue equation which gives p0= ±co(p). Since the eigenvalues co(p) are real, k{p)u = 0 should be equivalent to an eigenvalue equation of a certain hermitian matrix (with the eigenvalues [p0 = ±<»>(p)]). This implies the existence of a non-singular matrix 17 which makes i?A(p) hermitian: A+(ph+=T7A(p). (2.6;20) Considering eq. (2.6; 19), this gives \\-d)Vf=ri\(d). (2.6;21) Then, use of eqs. (2.6; 12-15) leads to [v\(d)][d(d)v-1] = [d(d)v~l][vHd)], (2.6;22) which together with eqs. (2.6; 14) or (2.6; 15) implies that d(p)j]~l is also hermitian (7,^^(-3) = 6(3)7,^, (2.6;23) because i?A(p) is non-singular. The matrix 17 is called the hermitization matrix [10]. When eq. (2.6;21) is considered, the hermitian conjugation of eq. (2.6;7) gives <p°(x)\(-d) = 0, (2.6;24) Quantum theory for many body systems 51 where d means that the derivatives act on quantities on the left and <p° is defined as <p%x)=<p°Xx)7). (2.6;25) Notice that, since <p°(x) is a column vector, <p° must be a row vector. The Lagrangian for free physical fields The quantity ^ = [ d4x<p°(x)\(d)<p°(x) (2.6;26) is real. Clearly this is the Lagrangian for the <p°-field, since application of the variation principle to it yields the free field equation (2.6;7). The inner product of wave functions Assuming that A(d) is a polynomial in d/dt of degree not greater than two, we can put A(d) in the form: A(d) = A(0)(V) + iA(1)(V)(<9/<90 + \(2\V)(d/dt)2 . 
(2.6;27) Now define [10,11] r = A<»(V)-iA<2>(V)|, (2.6;28) where the following notation is used: d_=d__d_ dt ~ dt dt' Then we have (2.6;29) 52 Thermo field dynamics and condensed states ± \ d3xf(x)fg(x) = \ d3*/(*)(^+ |)A(1)(V)g(x) -i | d3xf(x)[(d/dtf - idldtf]\ (2)(V)g(x) = -ij d3xf(x)[\(d)-X(-d)]g(x), (2.6;30) where a spatial integration* by parts is used. When f(x) and g(x) satisfy the free field equation (2.6;7) [and therefore eq. (2.6;24)], the quantity in eq. (2.6;30) vanishes and therefore, (d3xf(x)fg(x) (2.6;31) is independent of time. We therefore call this quantity the inner product of the wave functions, f(x) and g(x\ which satisfy the free field equation (2.6;7). It should be noted that this inner product is not necessarily positive definite. As a matter of fact the positive definiteness is not needed, because this inner product has nothing to do with the notion of probability; the probability for physical reactions is related to the inner products of the vectors in the Fock space. An orthonormalized complete set of solutions of the free field equation (2.6;7) Being equipped with the above definition of inner product, we now construct an orthonormalized complete set of solutions of the free field equation (2.6;7). Let us first assume that eq. (2.6;7) is an equation of type 2. Then, it admits both negative and positive frequency solutions of the form: ur(k, x) = ur(k) exp[i{fot - co(k)t}], (2.6;32) vr(k, x) = vr(k) exp[-i{ib; - co(-k)t}]. (2.6;33) Here the superscript r refers to the spin and other degrees of freedom. Using eq. (2.6;19), we have Quantum theory for many body systems 53 \(k)ur(k) = 0 for fco = co(k), (2.6;34) \(-k)vr(k) = 0 for /c0= co(-k). (2.6;35) When we use ur(k, x) and i;5(/, x) for /(*) and g(x) in eq. (2.6;31) respectively, the quantity in eq. (2.6;31) is a superposition of exp[i{<y(A;) + co(-l)}t], although it should be time-independent; therefore it must be zero. We thus have the following orthogonality theorem: [d3xur(k, x)Tvs(l, x) = 0, (2.6;36) | d3xvs(k, x)fV(/, x) = 0. (2.6;37) We choose ur(k, x) and vr(k, x) to satisfy the following orthonor- malization condition: [ d3xur(k, x)Vus(l, x) = h8rs8(k - I), (2.6;38) [d3xvr(k, x)Tvs(l, x)=-hp8rs8(k - I). (2.6;39) The sign factor p will be explained below. The superscript r classifies the eigenvectors of a set of hermitian matrices which commute among them- selves and commute also with rj\(k) (and therefore, with 77T). Thus, we can choose a common sign for all of the quantities in eq. (2.6;38). Since A(d) is only determined up to a sign in eq. (2.6;7), we can always choose its sign (and therefore, the sign of T) in such a way that eq. (2.6;38) is positive. However, once the sign of A(d) is fixed by the requirement, eq. (2.6;38), there is no reason to expect that eq. (2.6;39) should be positive too. As a matter of fact, it can have either sign depending on the structure of T. This is the reason why there is the sign factor p = ±l (2.6;40) in the condition (2.6;39). Using the notations A(1)(fc) clkx = A(1)(V) e1**, etc., (2.6;41) 54 Thermo field dynamics and condensed states we introduce T(fc, E) = A(1)(fc) - 2EX(2\k). (2.6;42) Then, the orthonormaUzation conditions, eqs. (2.6;38) and (2.6;39), read as follows: ur(k)T[k, co(k)]us(k) = (277-)-¾¾ , (2.6;43) vr(k)T[-k, -co(-k)]vs(k) = - (27r)-3p8rsh . (2.6;44) In the case of equations of type 1, ur(k, x) is identified with positive e(k), while vr(k, x) appears for negative e(k). The orthonormaUzation condition is given by eqs. (2.6;43) and (2.6;44). 
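The time independence of the inner product (2.6;31) can be checked numerically for the simplest type-1 choice λ(∂) = i∂/∂t − ε(∇) with ε(k) = k²/2, for which λ⁽²⁾ = 0 and Γ = λ⁽¹⁾ = 1, so that the inner product reduces to ∫d³x f*(x)g(x). The one-dimensional sketch below (grid, packet parameters and sample times are arbitrary choices) evolves two wave packets exactly in Fourier space and shows that their inner product stays constant while both packets move and spread:

```python
import numpy as np

# One-dimensional periodic grid and free dispersion e(k) = k^2/2 (hbar = m = 1).
L, npts = 80.0, 2048
dx = L / npts
x = np.linspace(-L/2, L/2, npts, endpoint=False)
k = 2*np.pi*np.fft.fftfreq(npts, d=dx)
eps_k = 0.5 * k**2

def packet(x0, k0, width):
    p = np.exp(-(x - x0)**2 / (2*width**2) + 1j*k0*x)
    return p / np.sqrt(np.sum(np.abs(p)**2) * dx)

def evolve(f, t):
    # Exact free evolution: each plane-wave component picks up exp(-i e(k) t).
    return np.fft.ifft(np.exp(-1j * eps_k * t) * np.fft.fft(f))

f0 = packet(-5.0,  1.0, 2.0)
g0 = packet( 3.0, -0.5, 1.5)

for t in (0.0, 2.0, 5.0, 10.0):
    overlap = np.sum(np.conj(evolve(f0, t)) * evolve(g0, t)) * dx
    print(t, overlap)
# The printed values agree to machine precision: the quantity (2.6;31) is a
# constant of the motion for solutions of the free field equation.
```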
The structure of the physical field We are now ready to specify the free physical field. The physical field <p° is given by <P°(x) = 2 f d3k[ur(k)ar(k) exp(i{* • x - co(k)t}) r •* + vr(k)pr\k) exp(-i{t • x - a>(-k)t})] (2.6;45) for equations of type 2, and by cp\x)= | d3k{6[e(k)]ur(k)ar(k)+ e[-e(-k)]vr(-k)/3rf(-k)}cikx-l£(k)t (2.6;46) for equations of type 1. Here d(x) is the step function with the properties, d(x) = 1 for x > 0 and 6(x) = 0 for x < 0. The negative values of e(k) can appear only in the case of fermions. Some examples of free field equations Let us now illustrate the above arguments by some examples. A simple example of an equation of type 2 is given by Quantum theory for many body systems 55 A(d)=-|i-a>2(V). (2.6;47) In this case, we have j) = 1, d(d) = 1, (2.6;48) f = i(d/dt), (2.6;49) T(k, E) = 2E, (2.6;50) p = 1. (2.6;51) Thus eqs. (2.6;43) and (2.6;44) lead to u(k) = v(-k) = (2ir)-3/2[2co(k)]-l/2hm (2.6;52) which gives <p\x) = (2tt)-3^ * J d3fc ([2^]1/2 e«*-»l + /8+(4) _tY|l/2 ,-i[*x-a>(-*)'] [2co(-k)] ). (2.6;53) A more complicated example is provided by the free field equation of physical electrons in superconductors [12]: A (3) = i^r3 + iAr2- e(V2). (2.6;54) This equation is of type 2, although its time derivative is of the first order. In eq. (2.6;54) A is a constant and the r's are the 2x2 Pauli matrices: Tl=(i o)'T2=C "o)'T3=(o -?)• (2-6;55) Therefore <p° is a doublet field. It is easy to show that V = t3 , (2.6;56) 56 Thermo field dynamics and condensed states d(d) = i^T3 + iAT2+£(V2), (2.6;57) d(d)X (d) = - (j£ + «,2(V)) , (2.6;58) o>(Jfc)=(e2, + A2)1/2, (2.6;59) T = r3, ' (2.6;60) r(Jfc, £) = r3. (2.6;61) Here £fc is defined by ek exp(ifot) = e(V2) exp(iib;). The conditions, eqs. (2.6;43) and (2.6;44), give w+(*)w(fc) = (277)-3¾ , (2.6;62) i;+(*M*) = -(27r)-3ph . (2.6;63) Since vfv is positive definite, we find that p = -1. (2.6;64) A standard method for construction of the wave function u(k, x) and v(k, x) is to make use of the divisor in the following fashion: u(k, x) = d(d)i7_1w exp{i[fot - co(k)t]}, (2.6;65) v(k, x) = d(d)rj-lw exp{-i[ib; - (o(-k)t]} . (2.6;66) Here w and w are column vectors which are determined by the conditions (2.6;62) and (2.6;63). It can be seen from eq. (2.6;15) that these wave functions satisfy the equation A(d)<p°=0. In eqs. (2.6;65) and (2.6;66), d(d)i7_1 was used
2020-04-01 13:52:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8579602837562561, "perplexity": 2133.2204073200774}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370505731.37/warc/CC-MAIN-20200401130837-20200401160837-00497.warc.gz"}
https://rdrr.io/cran/exptest/man/co.exp.test.html
# co.exp.test: Test for exponentiality of Cox and Oakes ### Description Performs Cox and Oakes test for the composite hypothesis of exponentiality, see e.g. Henze and Meintanis (2005, Sec. 2.5). ### Usage co.exp.test(x, simulate.p.value=FALSE, nrepl=2000) ### Arguments x a numeric vector of data values. simulate.p.value a logical value indicating whether to compute p-values by Monte Carlo simulation. nrepl the number of replications in Monte Carlo simulation. ### Details The Cox and Oakes test is a test for the composite hypothesis of exponentiality. The test statistic is CO_n = n+∑_{j=1}^n(1-Y_j)\log Y_j, where Y_j=X_j/\overline{X}. (6/n)^{1/2}(CO_n/π) is asymptotically standard normal (see, e.g., Henze and Meintanis (2005, Sec. 2.5)). ### Value A list with class "htest" containing the following components: statistic the value of the Cox and Oakes statistic. p.value the p-value for the test. method the character string "Test for exponentiality based on the statistic of Cox and Oakes". data.name a character string giving the name(s) of the data. ### Author(s) Alexey Novikov, Ruslan Pusev and Maxim Yakovlev ### References Henze, N. and Meintanis, S.G. (2005): Recent and classical tests for exponentiality: a partial review with comparisons. — Metrika, vol. 61, pp. 29–45. ### Examples co.exp.test(rexp(100)) co.exp.test(runif(100, min = 0, max = 1))
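The following is not part of the exptest package; it is a rough Python sketch of the statistic described in the Details section, using the asymptotic normal approximation (rather than Monte Carlo simulation) for the p-value. The function and variable names are illustrative only.

```python
import math
import random

def cox_oakes_exp_test(x):
    """Cox-Oakes statistic CO_n for exponentiality, the normalized value
    z = sqrt(6/n) * CO_n / pi, and a two-sided normal-approximation p-value."""
    n = len(x)
    xbar = sum(x) / n
    y = [xi / xbar for xi in x]                          # Y_j = X_j / mean(X)
    co = n + sum((1 - yj) * math.log(yj) for yj in y)    # CO_n
    z = math.sqrt(6.0 / n) * co / math.pi                # asymptotically N(0, 1)
    p_value = math.erfc(abs(z) / math.sqrt(2.0))         # two-sided p-value
    return co, z, p_value

# For exponential data z should be small; for uniform data it tends to be large.
sample = [random.expovariate(2.0) for _ in range(100)]
print(cox_oakes_exp_test(sample))
```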
2017-04-30 13:06:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22187979519367218, "perplexity": 13407.639395397893}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917125532.90/warc/CC-MAIN-20170423031205-00340-ip-10-145-167-34.ec2.internal.warc.gz"}
https://testbook.com/blog/electromagnetic-theory-gate-ec-quiz-7/
# Electromagnetic Theory GATE EC Practice Quiz 7 0 Save Here is Electromagnetic Theory GATE EC Quiz 7 to help you prepare for your upcoming GATE exam. The GATE EC paper has several subjects, each one as important as the last. However, one of the most important subjects in GATE EC is Electromagnetic Theory. The subject is vast, but practice makes tackling it easy. This quiz contains important questions which match the pattern of the GATE exam. Check your preparation level in every chapter of Electromagnetic Theory for GATE. Simply take the quiz and comparing your ranks. Learn about Maxwell’s equations, Transmission lines, polarization, Smith chart and more. Electromagnetic Theory for GATE EC Quiz 7 Que. 1 The direction of vector $$\vec A$$ is radially outward from the origin, with $$\left| \vec A \right|\; = \;k{r^n}$$ where $${r^2} = {x^2} + {y^2} + {z^2}$$and $$k$$ is a constant. The value of n for which $$\vec\nabla .\;\vec A\; = \;0$$ 1. -2 2. -1 3. 0 4. 1 Que. 2 A monochromatic plane wave of wavelength $$\lambda = 600\mu m$$  is propagating in the direction as shown in the figure below. $${\vec E_i},{\vec E_r},\;and\;{\vec E_t}\;$$denote incident, reflected, and transmitted electric field vectors associated with the wave. The expression for $${\vec E_r}$$ is 1. $$0.23\frac{{{E_0}}}{{\sqrt 2 }}\left( {{{\hat a}_x} + {{\hat a}_z}} \right){e^{ – j\frac{{\pi \times {{10}^4}\left( {x – z} \right)}}{{3\sqrt 2 }}}}V/m$$ 2. $$- \frac{{{E_0}}}{{\sqrt 2 }}\left( {{{\hat a}_x} + {{\hat a}_z}} \right){e^{j\frac{{\pi \times {{10}^4}z}}{3}}}V/m$$ 3. $$0.44\frac{{{E_0}}}{{\sqrt 2 }}\left( {{{\hat a}_x} + {{\hat a}_z}} \right){e^{ – j\frac{{\pi \times {{10}^4}\left( {x – z} \right)}}{{3\sqrt 2 }}}}V/m$$ 4. $$\frac{{{E_0}}}{{\sqrt 2 }}\left( {{{\hat a}_x} + {{\hat a}_z}} \right){e^{ – j\frac{{\pi \times {{10}^4}\left( {x + z} \right)}}{3}}}V/m$$ Que. 3 The angle of incidence θi and the expression for $${\vec E_i}$$ are 1. $$60^\circ and\frac{{{E_0}}}{{\sqrt 2 }}\left( {{{\hat a}_x} – {{\hat a}_z}} \right){e^{ – j\frac{{\pi \times {{10}^4}\left( {x + z} \right)}}{{3\sqrt 2 }}}}V/m$$ 2. $$45^\circ and\frac{{{E_0}}}{{\sqrt 2 }}\left( {{{\hat a}_x} + {{\hat a}_z}} \right){e^{ – j\frac{{\pi \times {{10}^4}z}}{3}}}\;V/m\;\;$$ 3. $$45^\circ and\frac{{{E_0}}}{{\sqrt 2 }}\left( {{{\hat a}_x} – {{\hat a}_z}} \right){e^{ – j\frac{{\pi \times {{10}^4}\left( {x + z} \right)}}{{3\sqrt 2 }}}}V/m$$ 4. $$60^\circ and\frac{{{E_0}}}{{\sqrt 2 }}\left( {{{\hat a}_x} – {{\hat a}_z}} \right){e^{ – j\frac{{\pi \times {{10}^4}z}}{3}}}V/m$$ Que. 4 The electric and magnetic fields for a TEM wave of frequency 14 GHz in a homogenous medium of relative permittivity εr and relative permeability μr = 1 are given by $$\vec E = {E_P}{e^{j\left( {\omega t – 280\pi y} \right)}}{\hat u_z}\frac{V}{m}$$ and $$H = 3{e^{j\left( {\omega t – 280\pi y} \right)}}{\hat u_x}\frac{A}{m}$$ Assuming the speed of light in free space to be 3 × 108 m/s, the intrinsic impedance of free space to be 120π, the relative permittivity εr of the medium and the electric field amplitude $$E_p$$ are 1. εr = 3, Ep = 120π 2. εr = 3, Ep = 360π 3. εr = 9, Ep = 360π 4. εr = 9, Ep = 120π Que. 5 A coaxial cable with an inner diameter of 1 mm and outer diameter of 2.4 mm is filled with a dielectric of relative permittivity 10.89. Given $${\mu _0}\; = \;4\pi \times {10^{ – 7}}H/m$$ and $${\varepsilon _0} = \frac{{{{10}^{ – 9}}}}{{36\pi }}\frac{F}{m}$$ the characteristic impedance of the cable is: 1. 330 Ω 2. 100 Ω 3. 143.3 Ω 4. 
16 Ω ## More Electromagnetic Theory for GATE EC Quizzes: Electromagnetic Theory for GATE EC Quizzes
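As a quick numerical check of Que. 4 and Que. 5 above (not part of the original quiz), here is a short sketch using the standard TEM-wave relations and the usual coaxial-line formula $Z_0 = \frac{\eta}{2\pi}\ln(b/a)$, which is assumed here.

```python
import math

ETA0 = 120 * math.pi      # intrinsic impedance of free space (ohms)
C0 = 3e8                  # speed of light in free space (m/s)

# Que. 4: beta = 280*pi rad/m at f = 14 GHz, mu_r = 1
f, beta = 14e9, 280 * math.pi
v = 2 * math.pi * f / beta                  # phase velocity = c / sqrt(eps_r)
eps_r = (C0 / v) ** 2                       # relative permittivity
E_p = (ETA0 / math.sqrt(eps_r)) * 3         # E = eta * H, with H amplitude 3 A/m
print(round(eps_r, 6), round(E_p / math.pi, 6))   # 9.0 and 120.0, i.e. E_p = 120*pi V/m

# Que. 5: coaxial cable, inner radius a = 0.5 mm, outer radius b = 1.2 mm, eps_r = 10.89
a, b, er = 0.5e-3, 1.2e-3, 10.89
Z0 = (ETA0 / math.sqrt(er)) / (2 * math.pi) * math.log(b / a)
print(round(Z0, 1))                         # about 16 ohms, matching the last option
```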
2020-09-19 09:29:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7481658458709717, "perplexity": 1879.7722406799971}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400191160.14/warc/CC-MAIN-20200919075646-20200919105646-00212.warc.gz"}
https://mathoverflow.net/questions/208336/gradient-vector-fields-defined-with-respect-to-two-different-metrics-and-morse-t
Gradient vector fields defined with respect to two different metrics and Morse theory Given a differentiable manifold $M$, we can equip $M$ with a Riemannian metric $g$ or $g'$ to generate a pair of Riemannian manifolds $(M,g)$ and $(M,g')$, respectively. The gradient vector fields $X_f,X'_f \in TM$ of a function $f: M \to \mathbb{R}$ satisfy, for all $Y \in TM$, \begin{equation*} g(X_f,Y) = df(Y) \ , \end{equation*} and \begin{equation*} g'(X'_f,Y) = df(Y) \ . \end{equation*} My first question is: What intuition can be used to describe the flow of $X'_f$ on $(M,g)$? Now assume that $f$ is Morse "with respect to all of the appropriate combinations". Second question: What would it mean to attempt Morse theory with the flow of $X'_f$ using the Hessian $H_f$ constructed using the Levi-Civita connection $\nabla$ of $g$, i.e. defined for all $Y,Z \in TM$ by \begin{equation*} H_f(Y,Z) =g(\nabla_Z X'_f,Y) \ . \end{equation*} The idea being that this Hessian rather than the usual one would be used to index critical points of $f$. • It looks to me like your first question can be answered by looking up the properties of "gradient-like" vector fields for $f$ on $M$. Intuitively, such a vector field should have 0s of the right type at the critical points, its flows should increase $f$, and so on. You may be interested in this question: mathoverflow.net/questions/123989/… Or is it more specific than that? – Elizabeth S. Q. Goodman Jun 7 '15 at 0:48 • If you are asking how to intuitively see what it's like to change the metric, say for a torus: imagine embedding the torus in $\mathbb R^3$, and let $f$ be the height function; the gradient vectors are orthogonal projections of vertical ones onto the torus. Then imagine tilting the torus, so that all the level sets are the same: this will change the projected vectors showing the gradient of the new inherited metric. One such way of "tilting" is a shear linear transformation such as $T(x, y, z)=(x+y, y, z)$. – Elizabeth S. Q. Goodman Jun 7 '15 at 0:53
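A quick coordinate computation (added here as a sketch, not part of the original question or comments) makes the first question concrete. In local coordinates $x^1,\dots,x^n$, \begin{equation*} (X_f)^i = g^{ij}\,\partial_j f, \qquad (X'_f)^i = (g')^{ij}\,\partial_j f, \qquad\text{so}\qquad X'_f = (g')^{-1}g\,X_f \ , \end{equation*} and along the flow of $X'_f$ one still has \begin{equation*} \frac{d}{dt} f = df(X'_f) = g'(X'_f,X'_f) \geq 0 \ , \end{equation*} with equality exactly at the critical points of $f$. So on $(M,g)$ the field $X'_f$ is in general no longer orthogonal to the level sets of $f$, but it remains a gradient-like vector field for $f$: $f$ increases along its flow, and its zeros are the critical points of $f$.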
2019-06-25 04:45:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8985523581504822, "perplexity": 243.67647837587376}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999787.0/warc/CC-MAIN-20190625031825-20190625053825-00192.warc.gz"}
https://gateoverflow.in/306226/andrew-tanenbaum-edition-5th-exercise-question-12-page-252
Suppose that data are transmitted in blocks of 1000 bits. What is the maximum error rate under which the error detection and retransmission mechanism (1 parity bit per block) is better than using Hamming code? Assume that bit errors are independent of one another and no bit error occurs during retransmission.

For a Hamming code we require 10 check bits, since $2^P\geq P+M+1$ with M = 1000. A Hamming code needs only one transmission (it is a forward error correcting code): on detecting a 1-bit error the receiver corrects the message itself. So we transmit 1010 bits per block (message bits + redundant bits).

For error detection and retransmission, let the error rate be x per bit, i.e., the probability that a given bit is corrupted is x. Here we transmit 1001 bits per block (1000 data bits + 1 parity bit), and in a block of 1000 bits we expect 1000x bit errors. For example, if x = 0 (no bit is ever corrupted) no retransmission is required. If x = 0.1, then 0.1*1000 = 100 bits per block are inverted, so there are 100 retransmissions plus the initial transmission, i.e., 1001 + 100*1001 bits. If x = 1, all 1000 bits are inverted and there are 1000 retransmissions of 1001 bits. In general, every bit error in a block causes 1001 bits to be retransmitted, so we transmit 1001 + 1000x*1001 bits per block.

Error detection and retransmission is therefore better than the Hamming code when 1001 + 1000x*1001 < 1010, which gives x < $9*10^{-6}$ approximately. Thus, a bit should have less than a $9*10^{-6}$ probability of getting inverted. by Loyal (5.2k points)
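A small sketch (not part of the original answer) that reproduces the break-even point numerically, under the same expected-retransmission model used above.

```python
# Bits sent per 1000-bit block under the two schemes, as a function of
# the per-bit error rate x.
def hamming_bits_per_block():
    m, p = 1000, 0
    while 2 ** p < p + m + 1:      # smallest p with 2^p >= p + m + 1
        p += 1
    return m + p                   # 1010 bits, sent exactly once

def parity_retransmit_bits_per_block(x):
    return 1001 * (1 + 1000 * x)   # initial send + about 1000x retransmissions

break_even = 9 / 1001000           # solve 1001 * (1 + 1000x) = 1010
print(hamming_bits_per_block())                       # 1010
print(parity_retransmit_bits_per_block(break_even))   # about 1010
print(f"{break_even:.2e}")                            # about 8.99e-06, i.e. 9e-6
```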
2020-02-27 12:11:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7562016844749451, "perplexity": 3465.174690071164}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146681.47/warc/CC-MAIN-20200227094720-20200227124720-00149.warc.gz"}
http://kintali.wordpress.com/category/algorithms/
Open problems for 2014
Wish you all a Very Happy New Year. Here is a list of my 10 favorite open problems for 2014. They belong to several research areas inside discrete mathematics and theoretical computer science. Some of them are baby steps towards resolving much bigger open problems. May this new year shed new light on these open problems.
• 2. Optimization : Improve the approximation factor for the undirected graphic TSP. The best known bound is 7/5 by Sebo and Vygen.
• 3. Algorithms : Prove that the tree-width of a planar graph can be computed in polynomial time (or) is NP-complete.
• 4. Fixed-parameter tractability : Treewidth and Pathwidth are known to be fixed-parameter tractable. Are directed treewidth/DAG-width/Kelly-width (generalizations of treewidth) and directed pathwidth (a generalization of pathwidth) fixed-parameter tractable ? This is a very important problem to understand the algorithmic and structural differences between undirected and directed width parameters.
• 5. Space complexity : Is Planar ST-connectivity in logspace ? This is perhaps the most natural special case of the NL vs L problem. Planar ST-connectivity is known to be in $UL \cap coUL$. Recently, Imai, Nakagawa, Pavan, Vinodchandran and Watanabe proved that it can be solved simultaneously in polynomial time and approximately O(√n) space.
• 6. Metric embedding : Is the minor-free embedding conjecture true for partial 3-trees (graphs of treewidth 3) ? The minor-free conjecture states that “every minor-free graph can be embedded in $l_1$ with constant distortion”. The special case of planar graphs also seems very difficult. I think the special case of partial 3-trees is a very interesting baby step.
• 7. Structural graph theory : Characterize pfaffians of tree-width at most 3 (i.e., partial 3-trees). It is a long-standing open problem to give a nice characterization of pfaffians and design a polynomial time algorithm to decide if an input graph is a pfaffian. The special case of partial 3-trees is an interesting baby step.
• 8. Structural graph theory : Prove that every minimal brick has at least four vertices of degree three. Bricks and braces are defined to better understand pfaffians. The characterization of pfaffian braces is known (more generally, the characterization of bipartite pfaffians is known). To understand pfaffians, it is important to understand the structure of bricks. Norine and Thomas proved that every minimal brick has at least three vertices of degree three and conjectured that every minimal brick has at least cn vertices of degree three.
• 9. Communication Complexity : Improve bounds for the log-rank conjecture. The best known bound is $O(\sqrt{rank})$.
• 10. Approximation algorithms : Improve the approximation factor for the uniform sparsest cut problem. The best known factor is $O(\sqrt{\log n})$.
Here are my conjectures for 2014 :)
• Weak Conjecture : at least one of the above 10 problems will be resolved in 2014.
• Conjecture : at least five of the above 10 problems will be resolved in 2014.
• Strong Conjecture : All of the above 10 problems will be resolved in 2014.
Have fun !!
PolyTopix
In the last couple of years, I developed some (research) interest in recommendation algorithms and speech synthesis. My interests in these areas are geared towards developing an automated personalized news radio. Almost all of us are interested in consuming news. In this internet age, there is no dearth of news sources. Often we have too many sources.
We tend to “read” news from several sources / news aggregators, spending several hours per week. Most of the time we are simply interested in the top and relevant headlines. PolyTopix is my way of simplifying the process of consuming top and relevant news. The initial prototype is here. The website “reads” several news tweets (collected from different sources) and ordered based on a machine learning algorithm. Users can login and specify their individual interests (and zip code) to narrow down the news. Try PolyTopix let me know your feedback. Here are some upcoming features : • Automatically collect weather news (and local news) based on your location. • Reading more details of most important news. • News will be classified as exciting/sad/happy etc., (based on a machine learning algorithm) and read with the corresponding emotional voice. Essentially PolyTopix is aimed towards a completely automated and personalized news radio, that can “read” news from across the world anytime with one click. ———————————————————————————————————————— Book Review of “Boosting : Foundations and Algorithms” Following is my review of Boosting : Foundations and Algorithms (by Robert E. Schapire and Yoav Freund) to appear in the  SIGACT book review column soon. —————————————————————————————————————- Book : Boosting : Foundations and Algorithms (by Robert E. Schapire and Yoav Freund) Reviewer : Shiva Kintali Introduction You have k friends, each one earning a small amount of money (say 100 dollars) every month by buying and selling stocks. One fine evening, at a dinner conversation, they told you their individual “strategies” (after all, they are your friends). Is it possible to “combine” these individual strategies and make million dollars in an year, assuming your initial capital is same as your average friend ? You are managing a group of k “diverse” software engineers each one with only an “above-average” intelligence. Is it possible to build a world-class product using their skills ? The above scenarios give rise to fundamental theoretical questions in machine learning and form the basis of Boosting. As you may know, the goal of machine learning is to build systems that can adapt to their environments and learn from their experience. In the last five decades, machine learning has impacted almost every aspect of our life, for example, computer vision, speech processing, web-search, information retrieval, biology and so on. In fact, it is very hard to name an area that cannot benefit from the theoretical and practical insights of machine learning. The answer to the above mentioned questions is Boosting, an elegant method for driving down the error of the combined classifier by combining a number of weak classifiers. In the last two decades, several variants of Boosting are discovered. All these algorithms come with a set of theoretical guarantees and made a deep practical impact on the advances of machine learning, often providing new explanations for existing prediction algorithms. Boosting : Foundations and Algorithms, written by the inventors of Boosting, deals with variants of AdaBoost, an adaptive boosting method. Here is a quick explanation of the basic version of AdaBoost. AdaBoost makes iterative calls to the base learner. It maintains a distribution over training examples to choose the training sets provided to the base learner on each round. Each training example is assigned a weight, a measure of importance of correctly classifying an example on the current round. 
Initially, all weights are set equally. On each round, the weights of incorrectly classified examples are increased so that, “hard” examples get successively higher weight. This forces the base learner to focus its attention on the hard example and drive down the generalization errors. AdaBoost is fast and easy to implement and the only parameter to tune is the number of rounds. The actual performance of boosting is dependent on the data. Summary Chapter 1 provides a quick introduction and overview of Boosting algorithms with practical examples. The rest of the book is divided into four major parts. Each part is divided into 3 to 4 chapters. Part I studies the properties and effectiveness of AdaBoost and theoretical aspects of minimizing its training and generalization errors. It is proved that AdaBoost drives the training error down very fast (as a function of the error rates of the weak classifiers) and the generalization error arbitrarily close to zero. Basic theoretical bounds on the generalization error show that AdaBoost overfits, however empirical studies show that AdaBoost does not overfit. To explain this paradox, a margin-based analysis is presented to explain the absence of overfitting. Part II explains several properties of AdaBoost using game-theoretic interpretations. It is shown that the principles of Boosting are very intimately related to the classic min-max theorem of von Neumann. A two-player (the boosting algorithm and the weak learning algorithm) game is considered and it is shown that AdaBoost is a special case of a more general algorithm for playing a repeated game. By reversing the roles of the players, a solution is obtained for the online prediction model thus establishing a connection between Boosting and online learning. Loss minimization is studied and AdaBoost is interpreted as an abstract geometric framework for optimizing a particular objective function. More interestingly, AdaBoost is viewed as a special case of more general methods for optimization of an objective function such as coordinate descent and functional gradient descent. Part III explains several methods of extending AdaBoost to handle classifiers with more than two output classes. AdaBoost.M1, AdaBoost.MH and AdaBoost.MO are presented along with their theoretical analysis and practical applications. RankBoost, an extension of AdaBoost to study ranking problems is studied. Such an algorithm is very useful, for example, to rank webpages based on their relevance to a given query. Part IV is dedicated to advanced theoretical topics. Under certain assumptions, it is proved that AdaBoost can handle noisy-data and converge to the best possible classifier. An optimal boost-by-majority algorithm is presented. This algorithm is then modified to be adaptive leading to an algorithm called BrownBoost. Many examples are given throughout the book to illustrate the empirical performance of the algorithms presented. Every chapter ends with Summary and Bibliography mentioning the related publications. There are well-designed exercises at the end of every chapter. Appendix briefly outlines some required mathematical background. Opinion Boosting book is definitely a very good reference text for researchers in the area of machine learning. If you are new to machine learning, I encourage you to read an introductory machine learning book (for example, Machine Learning by Tom M. Mitchell) to better understand and appreciate the concepts. 
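To make the boosting loop described earlier in this review concrete, here is a minimal sketch of AdaBoost (my own illustration, not code or notation from the book), using a brute-force decision stump on 1-D data as the weak learner; the stump is only there to keep the example self-contained.

```python
import math

def train_stump(xs, ys, w):
    """Weighted decision stump on 1-D data: returns (weighted error, threshold, sign)."""
    best = None
    for thr in sorted(set(xs)):
        for sign in (+1, -1):
            err = sum(wi for xi, yi, wi in zip(xs, ys, w)
                      if (sign if xi > thr else -sign) != yi)
            if best is None or err < best[0]:
                best = (err, thr, sign)
    return best

def adaboost(xs, ys, rounds=20):
    n = len(xs)
    w = [1.0 / n] * n                        # start with equal weights
    ensemble = []                            # list of (alpha, threshold, sign)
    for _ in range(rounds):
        err, thr, sign = train_stump(xs, ys, w)
        err = max(err, 1e-12)                # guard against a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, thr, sign))
        # increase the weight of misclassified ("hard") examples
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            pred = sign if xi > thr else -sign
            w[i] *= math.exp(-alpha * yi * pred)
        total = sum(w)
        w = [wi / total for wi in w]         # renormalize to a distribution
    return ensemble

def predict(ensemble, x):
    s = sum(a * (sg if x > t else -sg) for a, t, sg in ensemble)
    return 1 if s >= 0 else -1

xs = [0.1, 0.3, 0.4, 0.6, 0.8, 0.9]
ys = [-1, -1, -1, 1, 1, 1]
model = adaboost(xs, ys, rounds=5)
print([predict(model, x) for x in xs])       # recovers the labels
```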
In terms of being used in a course, a graduate-level machine learning course can be designed from the topics covered in this book. The exercises in the book can be readily used for such a course. Overall this book is a stimulating learning experience. It has provided me new perspectives on theory and practice of several variants of Boosting algorithms. Most of the algorithms in this book are new to me and I had no difficulties following the algorithms and the corresponding theorems. The exercises at the end of every chapter made these topics much more fun to learn. The authors did a very good job compiling different variants of Boosting algorithms and achieved a nice balance between theoretical analysis and practical examples. I highly recommend this book for anyone interested in machine learning. —————————————————————————————————————- TrueShelf 1.0 One year back (on 6/6/12) I announced a beta version of TrueShelf, a social-network for sharing exercises and puzzles especially in mathematics and computer science. After an year of testing and adding new features, now I can say that TrueShelf is out of beta. TrueShelf turned out to be a very useful website. When students ask me for practice problems (or books) on a particular topic, I simply point them to trueshelf and tell them the tags related to that topic. When I am advising students on research projects, I first tell them to solve all related problems (in the first couple of weeks) to prepare them to read research papers. Here are the features in TrueShelf 1.0. • Post an exercise (or) multiple-choice question (or) video (or) notes. • Solve any multiple-choice question directly on the website. • Add topic and tags to any post • Add source or level (high-school/undergraduate/graduate/research). • Show text-books related to a post • Show related posts for every post. • View printable version (or) LaTex version of any post. • Email / Tweet / share on facebook (or) Google+ any post directly from the post. • Add any post to your Favorites • Like (a.k.a upvote) any post. Feel free to explore TrueShelf, contribute new exercises and let me know if you have any feedback (or) new features you want to see. You can also follow TrueShelf on facebooktwitter and google+. Here is a screenshot highlighting the important features. Recreational Math Books – Part I Most of us encounter math puzzles during high-school. If you are really obsessed with puzzles, actively searching and solving them, you will very soon run out of puzzles !! One day you will simply realize that you are not encountering any new puzzles. No more new puzzles. Poof. They are all gone. You feel like screaming “Give me a new puzzle“. This happened to me around the end of my undergrad days. During this phase of searching for puzzles, I encountered Graceful Tree Conjecture and realized that there are lots of long-standing open “puzzles”. I don’t scream anymore. Well… sometimes I do scream when my proofs collapse. But that’s a different kind of screaming. Sometimes, I do try to create new puzzles. Most of the puzzles I create are either very trivial to solve (or) very hard and related to long-standing conjectures. Often it takes lots of effort and ingenuity to create a puzzle with right level of difficulty. In today’s post, I want to point you to some of the basic puzzle books that everybody should read. So, the next time you see a kid screaming “Give me a new puzzle“, simply point him/her to these books. Hopefully they will stop screaming for sometime. 
If they comeback to you soon, point them to Graceful Tree Conjecture  :) I will mention more recreational math books in part 2 of this blog post. How to teach Algorithms ? Algorithms are everywhere. They help us travel efficiently, retrieve information from huge data sets, secure money transactions, recommend movies, books, videos, predict stock market etc. It is very tough to think about a daily task that does not benefit from efficient algorithms. Often the algorithms behind most of these tasks are very simple, yet their impact is tremendous. When I say “simple”, they are simple to people who know them. Most common people consider algorithms too mathematical. They assume that it is beyond their capability to understand algorithms. What they do not realize is that algorithms are often simple extensions of our daily rational thinking process. For example, almost everybody considers it stupid to buy an item for $24 and pay$6 shipping, if there is free shipping for orders of more than $25. If you add one more item of cost$1, you saved $5. Also, when we pack our bags for travel, most of us try to do it as “efficiently” as possible, trying to carry as many “valuable” things as possible while trying to avoid paying for extra luggage on airlines. We consider these rational choices. Algorithms are simply “step-by-step procedures to achieve these rational objectives“. If you are an instructor (like me), teaching Algorithms, you might have noticed that most students (around 70%) are intimidated when they take a basic algorithms course. Most of them DO end up doing well in the course, but they consider the process painful. If they reflect on the course, most often they say “that course was a nightmare, I don’t remember what I learnt in that course”. They do not seem to have enjoyed the course. Probably they might remember 30% of the material. This is definitely not acceptable for such a fundamental course. Often, when I comeback to my office after teaching, I say to myself “I should have given them one more example, to help them get better intuition”. You can always do better job if you are given more time. Alas, we have time-bounded classes and almost infinite details to cover. We expect students to learn some concepts on their own and develop their own intuitions. We want to give “good” reading material. So, their understanding depends on how well these readings are written. Today’s post is about “How to teach Algorithms ?” Here is one of my experiences, while I was teaching an undergrad algorithms course at GeorgiaTech. I was teaching dynamic programming. I gave several examples to make sure that they understand the paradigm. At the end of the class, almost 50% of class had questions, because this is the first time they saw dynamic programming. I told them to see me in my office hours. I quickly implemented a java applet to show how the matrix entries are filled by the algorithm, step by step. When I showed them this applet and a pseudo-code side-by-side (highlighting every current line of code being executed), almost all of the students understood the main idea behind dynamic programming. Some of them also said “it is easy”. I was glad and wanted to add more algorithms in my code. The Kintali Language The goal is to have a very simple to understand “executable pseudo-code” along with an animation framework that “understands” this language. So I started designing a new language and called it Kintali language, for lack of a better word :) . I borrowed syntax from several pseudo-codes. 
It took me almost two years to implement all the necessary features keeping in mind a broad range of algorithms. I developed an interpreter to translate this language into an intermediate representation with callbacks to an animation library. This summer, I finally implemented the animation library and the front-end in Objective-C. The result is the Algorithms App for iPad, released on Sep 20, 2012. This is my attempt to teach as many algorithms as possible by intuitive visualization and friendly exercises. Features Current version has Sorting algorithms (Insertion Sort, Bubble Sort, Quick Sort and MergeSort). The main advantage of my framework will be demonstrated once I add graph algorithms. I will add some “adaptive” exercises and games too. For example, one of the games is to predict what is the next matrix entry that will be filled next by an algorithm. Also, I have the necessary framework to visually demonstrate recursion (by showing the recursion tree), dynamic programming (by showing the status (filled or waiting to be filled) of matrix entries), divide and conquer (by splitting the data) etc. Since the framework is ready, adding new algorithms will not take much time. Here is a screenshot of Quick Sort in action. Platforms After I developed the interpreter, I was wondering what platforms to support first. I went ahead with iPad because I developed the interpreter in C. Objective-C is a superset of C. The Mac desktop version should be available in couple of weeks. In the long run I will implement the Android, Linux and Windows 8 versions too. Goal The big goal here is to “almost” replace an algorithms textbook. I added a button to access relevant wikipedia articles (within the app) describing the corresponding algorithms. With simple pseudo-code, intuitive animations, adaptive exercises and easy access to online articles, I think this goal is definitely achievable. Questions I have some quick questions to all the instructors and students of Algorithms. • What algorithms do you want to see soon i.e., what algorithms did you have most difficulty learning/teaching ? • What are some current methods you use to overcome the limitations of “static” textbooks ? • Any more ideas to make algorithms more fun and cool to learn/teach ? I wanted to write this post after achieving at least 100 downloads. I assumed this will take a month. To my surprise, there were 100 downloads from 15 countries, in the first 40 hours. I guess I have to add new features faster than I planned. TrueShelf and Algorithms App are new additions to my hobbies. The others being Painting, BoardGames and Biking. Man’s got to have some hobbies. :) Follow Algorithms App on Facebook, Twitter and Google+. Download Algorithms App for iPad —————————————————————————————————————————————————— Open Problems from Lovasz and Plummer’s Matching Theory Book I always have exactly one bed-time mathematical book to read (for an hour) before going to sleep. It helps me learn new concepts and hopefully stumble upon interesting open problems. Matching Theory by Laszlo Lovasz and Michael D. Plummer has been my bed-time book for the last six months. I bought this book 3 years back (during my PhD days) but never got a chance to read it. This book often disappears from Amazon’s stock. I guess they are printing it on-demand. If you are interested in learning the algorithmic and combinatorial foundations of Matching Theory (with a historic perspective), then this book is a must read. 
Today’s post is about the open problems mentioned in Matching Theory book. If you know the status (or progress) of these problems, please leave a comment. —————————————————————————- 1 . Consistent Labeling and Maximum Flow Conjecture (Fulkerson) : Any consistent labelling procedure results in a maximum flow in polynomial number of steps. —————————————————————————- 2. Toughness and Hamiltonicity The toughness of a graph $G$, $t(G)$ is defined to be $+\infty$, if $G = K_n$ and to be $min(|S|/c(G-S))$, if $G \neq K_n$. Here $c(G-S)$ is the number of components of $G-S$. Conjecture (Chvatal 1973) : There exists a positive real number $t_0$ such that for every graph $G$, $t(G) \geq t_0$ implies $G$ is Hamiltonian. —————————————————————————- 3. Perfect Matchings and Bipartite Graphs Theorem : Let $X$ be a set, $X_1, \dots, X_t \subseteq X$ and suppose that $|X_i| \leq r$ for $i = 1, \dots, t$. Let $G$ be a bipartite graph such that a) $X \subseteq V(G)$, b) $G - X_i$ has a perfect matching , and c) if any edge of $G$ is deleted, property (b) fails to hold in the resulting graph. Then, the number of vertices in $G$ with degree $\geq 3$ is at most $r^3 {t \choose 3}$. Conjecture : The conclusion of the above theorem holds for non-bipartite graphs as well. —————————————————————————- 4. Number of Perfect Matchings Conjecture (Schrijver and W.G.Valiant 1980) : Let $\Phi(n,k)$ denote the minimum number of perfect matchings a k-regular bipartite graph on 2n points can have. Then, $\lim_{n \to \infty} (\Phi(n,k))^{\frac{1}{n}} = \frac{(k-1)^{k-1}}{k^{k-2}}$. —————————————————————————- 5. Elementary Graphs Conjecture : For $k \geq 3$ there exist constants $c_1(k) > 1$ and $c_2(k) > 0$ such that every k-regular elementary graph on 2n vertices, without forbidden edges , contains at least $c_2(k){\cdot}c_1(k)^n$ perfect matchings. Furthermore $c_1(k) \to \infty$ as $k \to \infty$. —————————————————————————- 6. Number of colorations Conjecture (Schrijver’83) : Let G be a k-regular bipartite graph on 2n vertices. Then the number of colorings of the edges of G with k given colors is at least $(\frac{(k!)^2}{k^k})^n$. —————————————————————————- Theorem : A graph is perfect if and only if it does not contain, as an induced subgraph, an odd hole or an odd antihole. —————————————————————————- TrueShelf I have been teaching (courses related to algorithms and complexity) for the past six years (five years as a PhD student at GeorgiaTech, and the past one year at Princeton). One of the most challenging and interesting part of teaching is creating new exercises to help teach the important concepts in an efficient way. We often need lots of problems to include in homeworks, midterms, final exams and also to create practice problem sets. We do not get enough time to teach all the concepts in class because the number of hours/week is bounded. I personally like to teach only the main concepts in class and design good problem sets so that students can learn the generalizations or extensions of the concepts by solving problems hands-on. This helps them develop their own intuitions about the concepts. Whenever I need a new exercise I hardly open a physical textbook. I usually search on internet and find exercises from a course website (or) “extract” an exercise from a research paper. There are hundreds of exercises “hidden” in pdf files across several course homepages. Instructors often spend lots of time designing them. 
If these exercises can reach all the instructors and students across the world in an efficiently-indexed form, that will help everybody. Instructors will be happy that the exercises they designed are not confined to just one course. Students will have an excellent supply of exercises to hone their problem-solving skills. During 2008, half-way through my PhD, I started collected the exercises I like in a private blog. At the same time I registered the domain trueshelf.com to make these exercises public. In 2011, towards the end of my PhD, I started using the trueshelf.com domain and made a public blog so that anybody can post an exercise. [ Notice that I did not use the trueshelf.com domain for three years. During these three years I got several offers ranging upto$5000 to sell the domain. So I knew I got the right name :) ] Soon, I realized that wordpress is somewhat “static” in nature and does not have enough “social” features I wanted. A screenshot of the old website is shown below. The new version of TrueShelf is a social website enabling “crowd-sourcing” of exercises in any area. Here is the new logo, I am excited about :) The goal of TrueShelf is to aid both the instructors and students by presenting quality exercises with tag-based indexing. Read the TrueShelf FAQ for more details. Note that we DO NOT allow users to post solutions. Each user may add his own “private” solution and notes to any exercise. I am planning to add more features soon. In the long-run, I see TrueShelf becoming a “Youtube for exercises”. Users will be able to create their own playlists of exercises (a.k.a problem sets) and will be recommended relevant exercises. Test-preparation agencies will be able to create their own channels to create sample tests. Feel free to explore TrueShelf, contribute new exercises and let me know if you have any feedback (or) new features you want to see. You can also follow TrueShelf on facebook, twitter and google+. Let’s see how TrueShelf evolves. Computing Bounded Path Decompositions in Logspace Today’s post is a continuation of earlier posts (here, here, here, here) on graph isomorphism, treewidth and pathwidth. As mentioned earlier, the best known upper bound for Graph Isomorphism of partial k-trees is LogCFL. Theorem ([Das, Toran and Wagner'10]) : Graph isomorphism of bounded treewidth graphs is in LogCFL. One of the bottlenecks of the algorithm of [DTW'10] is computing bounded tree decompositions in logspace. This is recently resolved by an amazing result of Elberfeld, Jakoby and Tantau [EJT'10]. The results in this paper are very powerful. Unfortunately, it is still not clear how to improve the LogCFL upper bound. Can we improve the upper bound for special cases of partial k-trees ? How about bounded pathwidth graphs ? Again, one bottleneck here is to compute bounded path decompositions in logspace. [EJT'10]‘s paper does not address this bottleneck and it is not clear how to extend their algorithm to compute path decompositions. In joint work with Sinziana Munteanu, we resolved this bottleneck and proved the following theorem. Sinziana is a senior undergraduate student in our department. She is working with me on her senior thesis. Theorem (Kintali, Munteanu’12) : For all constants $k, l \geq 1$, there exists a logspace algorithm that, when given a graph $G$ of treewidth $\leq l$, decides whether the pathwidth of $G$ is at most $k$, and if so, finds a path decomposition of $G$ of width $\leq k$ in logspace. A draft of our results is available here. 
The above theorem is a logspace counterpart of the corresponding polynomial-time algorithm of [Bodlaender, Kloks'96]. Converting it into a logspace algorithm turned out to be a tedious task with some interesting tricks. Our work motivates the following open problem : Open problem : What is the complexity of Graph Isomorphism of bounded pathwidth graphs ? Is there a logspace algorithm ? Stay tuned for more papers related to graph isomorphism, treewidth and pathwidth. I am going through a phase of life, where I have more results than I can type. Is there an app that converts voice to latex ? Is there a journal that accepts hand-written proofs ? :) Complexity of Tessel One of my hobbies (I developed during my PhD) is designing boardgames. I designed three boardgames so far, one of which is Tessel, a word-building game based on graph theory. I am glad that Tessel is getting good feedback especially from schools and families. One of the most time-consuming part of Tessel’s design is deciding what values to assign to the letters and deciding which pairs of letters to use in the tiles. The pairs of letters are carefully chosen based on computer simulations of frequency of letters in english words and their “relative” importance. The pairs are chosen so as to give fair share to both vocabulary skills and optimization skills. Today’s post is about a nice theoretical problem arising from this game. Before you read further, please read the rules of Tessel. Henceforth I will assume that you understood the rules and goal of this game. I guess you observed that the tiles are being placed on the edges of a planar graph. Tessel uses a special planar graph that has cycles of length 3,4,5 and 6. In general, this game can be played on any planar graph. I am planning to design another board using Cairo tessellation. Anyways, here is a theoretical problem : Let S be a set of finite alphabets. You are given two different words (using alphabets from S) of length l1 and l2. Construct a planar graph G and label each edge with two alphabets, such that there are two walks in G that correspond to the given two words. (Read the rules of tessel  and look at these examples to understand this correspondence). Your goal is to construct G with minimum number of vertices (or minimum number of edges). In general you can ask the above question given k different words. What is the complexity of this problem ? I don’t know. I haven’t given it a deep thought. These days whatever I do for fun (to take my mind off open problems), ends up in another open problem :(
2014-04-20 15:56:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 43, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5113123655319214, "perplexity": 816.4837293799872}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00618-ip-10-147-4-33.ec2.internal.warc.gz"}
http://planetmath.org/fermatcompositenesstest
# Fermat compositeness test The Fermat compositeness test is a primality test based on the observation that by Fermat's little theorem if $b^{n-1}\not\equiv 1\pmod{n}$ and $b\not\equiv 0\pmod{n}$, then $n$ is composite. The Fermat compositeness test consists of checking whether $b^{n-1}\equiv 1\pmod{n}$ for a handful of values of $b$. If a $b$ with $b^{n-1}\not\equiv 1\pmod{n}$ is found, then $n$ is composite. A value of $b$ for which $b^{n-1}\not\equiv 1\pmod{n}$ is called a witness to $n$'s compositeness. If $b^{n-1}\equiv 1\pmod{n}$, then $n$ is said to be pseudoprime base $b$. It can be proven that most composite numbers can be shown to be composite by testing only a few values of $b$. However, there are infinitely many composite numbers that are pseudoprime in every base relatively prime to them. These are Carmichael numbers (see OEIS sequence A002997, http://www.research.att.com/cgi-bin/access.cgi/as/njas/sequences/eisA.cgi?Anum=A002997, for a list of the first few Carmichael numbers).
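A short sketch of the test as described above (not from the PlanetMath entry); Python's built-in three-argument `pow` performs the modular exponentiation.

```python
import random

def fermat_composite_witness(n, trials=10):
    """Return a base b witnessing the compositeness of n (b^(n-1) != 1 mod n),
    or None if no witness was found among the random bases tried."""
    for _ in range(trials):
        b = random.randrange(2, n - 1)       # assumes n >= 4
        if pow(b, n - 1, n) != 1:            # Fermat's little theorem fails
            return b                         # n is definitely composite
    return None                              # n is a probable prime

print(fermat_composite_witness(221))   # 221 = 13 * 17: usually finds a witness
print(fermat_composite_witness(13))    # None: 13 is prime
```

Note that for a Carmichael number such as 561 the call typically returns None unless a randomly chosen base happens to share a factor with it, which is exactly why the stronger Miller-Rabin test mentioned above is preferred in practice.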
2018-03-23 07:01:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 15, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8697999119758606, "perplexity": 615.2634190997858}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648198.55/warc/CC-MAIN-20180323063710-20180323083710-00027.warc.gz"}
https://mathforcollege.com/ma/book2021/lu-decomposition-method-for-solving-simultaneous-linear-equations.html
# Chapter 7 LU Decomposition Method for Solving Simultaneous Linear Equations ## 7.1 Learning Objectives After successful completion of this section, you should be able to (1).solve a set of simultaneous linear equations using LU decomposition method (2).decompose a nonsingular matrix into LU form. (3).find the inverse of a matrix using LU decomposition method. (4).justify why using LU decomposition method is more efficient than Gaussian elimination in some cases. ## 7.2 I hear about LU decomposition used as a method to solve a set of simultaneous linear equations. What is it? We already studied two numerical methods of finding the solution to simultaneous linear equations – Naïve Gauss elimination and Gaussian elimination with partial pivoting. Then, why do we need to learn another method? To appreciate why LU decomposition could be a better choice than the Gauss elimination techniques in some cases, let us discuss first what LU decomposition is about. For a nonsingular matrix $$\left\lbrack A \right\rbrack$$ on which one can successfully conduct the Naïve Gauss elimination forward elimination steps, one can always write it as $\left\lbrack A \right\rbrack = \left\lbrack L \right\rbrack\left\lbrack U \right\rbrack$ where $\left\lbrack L \right\rbrack = \text{Lower triangular matrix}$ $\left\lbrack U \right\rbrack = \text{Upper triangular matrix}$ Then if one is solving a set of equations $\left\lbrack A \right\rbrack\left\lbrack X \right\rbrack = \left\lbrack C \right\rbrack,$ then $\left\lbrack L \right\rbrack\left\lbrack U \right\rbrack\left\lbrack X \right\rbrack = \left\lbrack C \right\rbrack\ \text{as }\left( \lbrack A\rbrack = \left\lbrack L \right\rbrack\left\lbrack U \right\rbrack \right)$ Multiplying both sides by $$\left\lbrack L \right\rbrack^{- 1}$$, $\left\lbrack L \right\rbrack^{- 1}\left\lbrack L \right\rbrack\left\lbrack U \right\rbrack\left\lbrack X \right\rbrack = \left\lbrack L \right\rbrack^{- 1}\left\lbrack C \right\rbrack$ $\left\lbrack I \right\rbrack \left\lbrack U \right\rbrack \left\lbrack X \right\rbrack = \left\lbrack L \right\rbrack^{- 1}\left\lbrack C \right\rbrack\ \text{as }\left( \left\lbrack L \right\rbrack^{- 1}\left\lbrack L \right\rbrack = \lbrack I\rbrack \right)$ $\left\lbrack U \right\rbrack \left\lbrack X \right\rbrack = \left\lbrack L \right\rbrack^{- 1}\left\lbrack C \right\rbrack\ \text{as }\left( \left\lbrack I \right\rbrack\ \left\lbrack U \right\rbrack = \lbrack U\rbrack \right)$ Let $\left\lbrack L \right\rbrack^{- 1}\left\lbrack C \right\rbrack = \left\lbrack Z \right\rbrack$ then $\left\lbrack L \right\rbrack\left\lbrack Z \right\rbrack = \left\lbrack C \right\rbrack\ \ \ (1)$ and $\left\lbrack U \right\rbrack\left\lbrack X \right\rbrack = \left\lbrack Z \right\rbrack\ \ \ (2)$ So we can solve Equation (1) first for $$\lbrack Z\rbrack$$ by using forward substitution and then use Equation (2) to calculate the solution vector $$\left\lbrack X \right\rbrack$$ by back substitution. ## 7.3 How do I decompose a non-singular matrix [A], that is, how do I find [A] = [L][U]?
If forward elimination steps of the Naïve Gauss elimination methods can be applied on a nonsingular matrix, then $$\left\lbrack A \right\rbrack$$ can be decomposed into LU as $\begin{split} \lbrack A\rbrack &= \begin{bmatrix} a_{11} & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \cdots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{{nn}} \\ \end{bmatrix}\\ &= \begin{bmatrix} 1 & 0 & \ldots & 0 \\ {l}_{21} & 1 & \cdots & 0 \\ \vdots & \vdots & \cdots & \vdots \\ {l}_{n1} & {l}_{n2} & \cdots & 1 \\ \end{bmatrix}\ \ \begin{bmatrix} u_{11} & u_{12} & \ldots & u_{1n} \\ 0 & u_{22} & \cdots & u_{2n} \\ \vdots & \vdots & \cdots & \vdots \\ 0 & 0 & \cdots & u_{{nn}} \\ \end{bmatrix} \end{split}$ The elements of the $$\left\lbrack U \right\rbrack$$ matrix are exactly the same as the coefficient matrix one obtains at the end of the forward elimination steps in Naïve Gauss elimination. The lower triangular matrix $$\left\lbrack L \right\rbrack$$ has $$1$$ in its diagonal entries. The non-zero elements on the non-diagonal elements in $$\left\lbrack L \right\rbrack$$ are multipliers that made the corresponding entries zero in the upper triangular matrix $$\left\lbrack U\right\rbrack$$ during forward elimination. Let us look at this using the same example as used in Naïve Gaussian elimination. ## 7.4 Example 1 Find the LU decomposition of the matrix $\left\lbrack A \right\rbrack = \begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \\ \end{bmatrix}$ Solution $\begin{split} \left\lbrack A \right\rbrack &= \left\lbrack L \right\rbrack \left\lbrack U \right\rbrack\\ &= \begin{bmatrix} 1 & 0 & 0 \\ {l}_{21} & 1 & 0 \\ {l}_{31} & {l}_{32} & 1 \\ \end{bmatrix}\begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \\ \end{bmatrix} \end{split}$ The $$\left\lbrack U \right\rbrack$$ matrix is the same as found at the end of the forward elimination of Naïve Gauss elimination method, that is $\left\lbrack U \right\rbrack = \begin{bmatrix} 25 & 5 & 1 \\ 0 & - 4.8 & - 1.56 \\ 0 & 0 & 0.7 \\ \end{bmatrix}$ To find $${l}_{21}$$ and $${l}_{31}$$, find the multiplier that was used to make the $$a_{21}$$ and $$a_{31}$$ elements zero in the first step of forward elimination of the Naïve Gauss elimination method. It was $\begin{split} {l}_{21} &= \frac{64}{25}\\ &= 2.56 \end{split}$ $\begin{split} {l}_{31} &= \frac{144}{25}\\ &= 5.76 \end{split}$ To find $${l}_{32}$$, what multiplier was used to make $$a_{32}$$ element zero? Remember $$a_{32}$$ element was made zero in the second step of forward elimination. The $$\left\lbrack A \right\rbrack$$ matrix at the beginning of the second step of forward elimination was $\begin{bmatrix} 25 & 5 & 1 \\ 0 & - 4.8 & - 1.56 \\ 0 & - 16.8 & - 4.76 \\ \end{bmatrix}$ So $\begin{split} {l}_{32} &= \frac{- 16.8}{- 4.8}\\ &= 3.5 \end{split}$ Hence $\left\lbrack L \right\rbrack = \begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \\ \end{bmatrix}$ Confirm $$\left\lbrack L \right\rbrack \left\lbrack U \right\rbrack = \left\lbrack A \right\rbrack$$. $\begin{split} \left\lbrack L \right\rbrack\left\lbrack U \right\rbrack &= \begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \\ \end{bmatrix}\begin{bmatrix} 25 & 5 & 1 \\ 0 & - 4.8 & - 1.56 \\ 0 & 0 & 0.7 \\ \end{bmatrix}\\ &= \begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \\ \end{bmatrix} \end{split}$ ## 7.5 Example 2 Use the LU decomposition method to solve the following simultaneous linear equations. 
$\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \\ \end{bmatrix}\begin{bmatrix} a_{1} \\ a_{2} \\ a_{3} \\ \end{bmatrix} = \begin{bmatrix} 106.8 \\ 177.2 \\ 279.2 \\ \end{bmatrix}$ Solution Recall that $\left\lbrack A \right\rbrack\left\lbrack X \right\rbrack = \left\lbrack C \right\rbrack$ and if $\left\lbrack A \right\rbrack = \left\lbrack L \right\rbrack\left\lbrack U \right\rbrack$ then first solving $\left\lbrack L \right\rbrack\left\lbrack Z \right\rbrack = \left\lbrack C \right\rbrack$ and then $\left\lbrack U \right\rbrack\left\lbrack X \right\rbrack = \left\lbrack Z \right\rbrack$ gives the solution vector $$\left\lbrack X \right\rbrack$$. Now in the previous example, we showed $\begin{split} \left\lbrack A \right\rbrack &= \left\lbrack L \right\rbrack \left\lbrack U \right\rbrack\\ &=\begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \\ \end{bmatrix}\begin{bmatrix} 25 & 5 & 1 \\ 0 & - 4.8 & - 1.56 \\ 0 & 0 & 0.7 \\ \end{bmatrix} \end{split}$ First solve $\left\lbrack L \right\rbrack\ \left\lbrack Z \right\rbrack = \left\lbrack C \right\rbrack$ $\begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \\ \end{bmatrix}\begin{bmatrix} z_{1} \\ z_{2} \\ z_{3} \\ \end{bmatrix} = \begin{bmatrix} 106.8 \\ 177.2 \\ 279.2 \\ \end{bmatrix}$ to give $z_{1} = 106.8$ $2.56z_{1} + z_{2} = 177.2$ $5.76z_{1} + 3.5z_{2} + z_{3} = 279.2$ Forward substitution starting from the first equation gives $z_{1} = 106.8$ $\begin{split} z_{2} &= 177.2 - 2.56z_{1}\\ &= 177.2 - 2.56 \times 106.8\\ &= - 96.208 \end{split}$ $\begin{split} z_{3} &= 279.2 - 5.76z_{1} - 3.5z_{2}\\ &= 279.2 - 5.76 \times 106.8 - 3.5 \times \left( - 96.208 \right)\\ &= 0.76 \end{split}$ Hence $\begin{split} \left\lbrack Z \right\rbrack &= \begin{bmatrix} z_{1} \\ z_{2} \\ z_{3} \\ \end{bmatrix}\\ &= \begin{bmatrix} 106.8 \\ - 96.208 \\ 0.76 \\ \end{bmatrix} \end{split}$ This matrix is same as the right-hand side obtained at the end of the forward elimination steps of Naïve Gauss elimination method. Is this a coincidence? Now solve $\left\lbrack U \right\rbrack \left\lbrack X \right\rbrack = \left\lbrack Z \right\rbrack$ $\begin{bmatrix} 25 & 5 & 1 \\ 0 & - 4.8 & - 1.56 \\ 0 & 0 & 0.7 \\ \end{bmatrix}\begin{bmatrix} a_{1} \\ a_{2} \\ a_{3} \\ \end{bmatrix} = \begin{bmatrix} 106.8 \\ - 96.208 \\ 0.76 \\ \end{bmatrix}$ $25a_{1} + 5a_{2} + a_{3} = 106.8$ $- 4.8a_{2} - 1.56a_{3} = - 96.208$ $0.7a_{3} = 0.76$ From the third equation $0.7a_{3} = 0.76$ $\begin{split} a_{3} &= \frac{0.76}{0.7}\\ &= 1.0857\end{split}$ Substituting the value of $$a_{3}$$ in the second equation, $- 4.8a_{2} - 1.56a_{3} = - 96.208$ $\begin{split} a_{2} &= \frac{- 96.208 + 1.56a_{3}}{- 4.8}\\ &= \frac{- 96.208 + 1.56 \times 1.0857}{- 4.8}\\ &= 19.691 \end{split}$ Substituting the value of $$a_{2}$$ and $$a_{3}$$ in the first equation, $25a_{1} + 5a_{2} + a_{3} = 106.8$ $\begin{split} a_{1} &= \frac{106.8 - 5a_{2} - a_{3}}{25}\\ &= \frac{106.8 - 5 \times 19.691 - 1.0857}{25}\\ &= 0.29048 \end{split}$ Hence the solution vector is $\begin{bmatrix} a_{1} \\ a_{2} \\ a_{3} \\ \end{bmatrix} = \begin{bmatrix} 0.29048 \\ 19.691 \\ 1.0857 \\ \end{bmatrix}$ ## 7.6 How do I find the inverse of a square matrix using LU decomposition? A matrix $$\left\lbrack B \right\rbrack$$ is the inverse of $$\left\lbrack A \right\rbrack$$ if $\left\lbrack A \right\rbrack\left\lbrack B \right\rbrack = \left\lbrack I \right\rbrack = \left\lbrack B \right\rbrack\left\lbrack A \right\rbrack$ How can we use LU decomposition to find the inverse of the matrix? 
Assume the first column of $$\left\lbrack B \right\rbrack$$ (the inverse of $$\left\lbrack A \right\rbrack$$) is $\lbrack b_{11}\ b_{12}\ldots\ \ldots b_{n1}\rbrack^{T}$ Then from the above definition of an inverse and the definition of matrix multiplication $\left\lbrack A \right\rbrack\begin{bmatrix} b_{11} \\ b_{21} \\ \vdots \\ b_{n1} \\ \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \\ \end{bmatrix}$ Similarly, the second column of $$\left\lbrack B \right\rbrack$$ is given by $\left\lbrack A \right\rbrack\begin{bmatrix} b_{12} \\ b_{22} \\ \vdots \\ b_{n2} \\ \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \\ \vdots \\ 0 \\ \end{bmatrix}$ Similarly, all columns of $$\left\lbrack B \right\rbrack$$ can be found by solving $$n$$ different sets of equations with the column of the right-hand side being the $$n$$ columns of the identity matrix. ### 7.6.1 Example 3 Use LU decomposition to find the inverse of $\left\lbrack A \right\rbrack = \begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \\ \end{bmatrix}$ Solution Knowing that $\begin{split} \left\lbrack A \right\rbrack &= \left\lbrack L \right\rbrack\left\lbrack U \right\rbrack\\ &= \begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \\ \end{bmatrix}\begin{bmatrix} 25 & 5 & 1 \\ 0 & - 4.8 & - 1.56 \\ 0 & 0 & 0.7 \\ \end{bmatrix} \end{split}$ We can solve for the first column of $$\lbrack B\rbrack = \left\lbrack A \right\rbrack^{- 1}$$by solving for $\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \\ \end{bmatrix}\begin{bmatrix} b_{11} \\ b_{21} \\ b_{31} \\ \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 0 \\ \end{bmatrix}$ First solve $\left\lbrack L \right\rbrack\left\lbrack Z \right\rbrack = \left\lbrack C \right\rbrack,$ that is $\begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \\ \end{bmatrix}\begin{bmatrix} z_{1} \\ z_{2} \\ z_{3} \\ \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 0 \\ \end{bmatrix}$ to give $z_{1} = 1$ $2.56z_{1} + z_{2} = 0$ $5.76z_{1} + 3.5z_{2} + z_{3} = 0$ Forward substitution starting from the first equation gives $z_{1} = 1$ $\begin{split} z_{2} &= 0 - 2.56z_{1}\\ &= 0 - 2.56\left( 1 \right)\\ &= - 2.56 \end{split}$ $\begin{split} z_{3} &= 0 - 5.76z_{1} - 3.5z_{2}\\ &= 0 - 5.76\left( 1 \right) - 3.5\left( - 2.56 \right)\\ &= 3.2 \end{split}$ Hence $\begin{split} \left\lbrack Z \right\rbrack &= \begin{bmatrix} z_{1} \\ z_{2} \\ z_{3} \\ \end{bmatrix}\\ &= \begin{bmatrix} 1 \\ - 2.56 \\ 3.2 \\ \end{bmatrix} \end{split}$ Now solve $\left\lbrack U \right\rbrack\left\lbrack X \right\rbrack = \left\lbrack Z \right\rbrack$ that is $\begin{bmatrix} 25 & 5 & 1 \\ 0 & - 4.8 & - 1.56 \\ 0 & 0 & 0.7 \\ \end{bmatrix}\begin{bmatrix} b_{11} \\ b_{21} \\ b_{31} \\ \end{bmatrix} = \begin{bmatrix} 1 \\ - 2.56 \\ 3.2 \\ \end{bmatrix}$ $25b_{11} + 5b_{21} + b_{31} = 1$ $- 4.8b_{21} - 1.56b_{31} = - 2.56$ $0.7b_{31} = 3.2$ Backward substitution starting from the third equation gives $\begin{split} b_{31} &= \frac{3.2}{0.7}\\ &= 4.571 \end{split}$ $\begin{split} b_{21} &= \frac{- 2.56 + 1.56b_{31}}{- 4.8}\\ &= \frac{- 2.56 + 1.56(4.571)}{- 4.8}\\ &= - 0.9524 \end{split}$ $\begin{split} b_{11} &= \frac{1 - 5b_{21} - b_{31}}{25}\\ &= \frac{1 - 5( - 0.9524) - 4.571}{25}\\ &= 0.04762 \end{split}$ Hence the first column of the inverse of $$\left\lbrack A \right\rbrack$$ is $\begin{bmatrix} b_{11} \\ b_{21} \\ b_{31} \\ \end{bmatrix} = \begin{bmatrix} 0.04762 \\ - 0.9524 \\ 4.571 \\ \end{bmatrix}$ Similarly, solving $\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \\ \end{bmatrix}\begin{bmatrix} b_{12} \\ 
b_{22} \\ b_{32} \\ \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \\ 0 \\ \end{bmatrix}\ \text{gives }\begin{bmatrix} b_{12} \\ b_{22} \\ b_{32} \\ \end{bmatrix} = \begin{bmatrix} - 0.08333 \\ 1.417 \\ - 5.000 \\ \end{bmatrix}$ and solving $\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \\ \end{bmatrix}\begin{bmatrix} b_{13} \\ b_{23} \\ b_{33} \\ \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 1 \\ \end{bmatrix}\ \text{gives }\begin{bmatrix} b_{13} \\ b_{23} \\ b_{33} \\ \end{bmatrix} = \begin{bmatrix} 0.03571 \\ - 0.4643 \\ 1.429 \\ \end{bmatrix}$ Hence $\left\lbrack A \right\rbrack^{- 1} = \begin{bmatrix} 0.04762 & - 0.08333 & 0.03571 \\ - 0.9524 & 1.417 & - 0.4643 \\ 4.571 & - 5.000 & 1.429 \\ \end{bmatrix}$ Can you confirm the following for the above example? $\left\lbrack A \right\rbrack\ \left\lbrack A \right\rbrack^{- 1} = \left\lbrack I \right\rbrack = \left\lbrack A \right\rbrack^{- 1}\left\lbrack A \right\rbrack$ ## 7.7 LU decomposition looks more complicated than Gaussian elimination. Do we use LU decomposition because it is computationally more efficient than Gaussian elimination to solve a set of n equations given by $$\mathbf{[A][X]=[C]}$$? For a square matrix $$\lbrack A\rbrack$$ of $$n \times n$$ size, the computational time[^1] $${CT}|_{{DE}}$$ to decompose the $$\lbrack A\rbrack$$ matrix to $$\lbrack L\rbrack\lbrack U\rbrack$$ form is given by ${CT}|_{{DE}} = T\left( \frac{8n^{3}}{3} + 4n^{2} - \frac{20n}{3} \right),$ where $T = \text{clock cycle time}^{2}$ The computational time $${CT}|_{{FS}}$$ to solve by forward substitution $$\left\lbrack L \right\rbrack\left\lbrack Z \right\rbrack = \left\lbrack C \right\rbrack$$ is given by ${CT}|_{{FS}} = T\left( 4n^{2} - 4n \right)$ The computational time $${CT}|_{{BS}}$$ to solve by back substitution $$\left\lbrack U \right\rbrack\left\lbrack X \right\rbrack = \left\lbrack Z \right\rbrack$$ is given by ${CT}|_{{BS}} = T\left( 4n^{2} + 12n \right)$ So, the total computational time to solve a set of equations by LU decomposition is $\begin{split} {{CT}|}_{{LU}} &= {{CT}|}_{{DE}} + {{CT}|}_{{FS}} + {{CT}|}_{{BS}}\\ &= T\left( \frac{8n^{3}}{3} + 4n^{2} - \frac{20n}{3} \right) + T\left( 4n^{2} - 4n \right) + T\left( 4n^{2} + 12n \right)\\ &= T\left( \frac{8n^{3}}{3} + 12n^{2} + \frac{4n}{3} \right) \end{split}$ Now let us look at the computational time taken by Gaussian elimination. The computational time $${CT}|_{{FE}}$$ for the forward elimination part, ${{CT}|}_{{FE}} = T\left( \frac{8n^{3}}{3} + 8n^{2} - \frac{32n}{3} \right),$ and the computational time $${CT}|_{{BS}}$$ for the back substitution part is ${{CT}|}_{{BS}} = T\left( 4n^{2} + 12n \right)$ So, the total computational time $${CT}|_{{GE}}$$ to solve a set of equations by Gaussian Elimination is $\begin{split} {{CT}|}_{{GE}} &= {{CT}|}_{{FE}} + {{CT}|}_{{BS}}\\ &= T\left( \frac{8n^{3}}{3} + 8n^{2} - \frac{32n}{3} \right) + T\left( 4n^{2} + 12n \right)\\ &= T\left( \frac{8n^{3}}{3} + 12n^{2} + \frac{4n}{3} \right) \end{split}$ The computational time for Gaussian elimination and LU decomposition is identical. ## 7.9 LU Decomposition Method for Solving Simultaneous Linear Equations Quiz (1). The $$\left\lbrack L \right\rbrack\left\lbrack U \right\rbrack$$ decomposition method is computationally more efficient than Naïve Gauss elimination for solving (A). a single set of simultaneous linear equations. (B). multiple sets of simultaneous linear equations with different coefficient matrices and the same right-hand side vectors. (C). 
multiple sets of simultaneous linear equations with the same coefficient matrix and different right-hand side vectors. (D). less than ten simultaneous linear equations. (2). The lower triangular matrix $$\left\lbrack L \right\rbrack$$ in the $$\left\lbrack L \right\rbrack\left\lbrack U \right\rbrack$$ decomposition of the matrix given below $\begin{bmatrix} 25 & 5 & 4 \\ 10 & 8 & 16 \\ 8 & 12 & 22 \\ \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ \mathcal{l}_{21} & 1 & 0 \\ \mathcal{l}_{31} & \mathcal{l}_{32} & 1 \\ \end{bmatrix}\begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \\ \end{bmatrix}$ is (A) $$\begin{bmatrix} 1 & 0 & 0 \\ 0.40000 & 1 & 0 \\ 0.32000 & 1.7333 & 1 \\ \end{bmatrix}$$ (B) $$\begin{bmatrix} 25 & 5 & 4 \\ 0 & 6 & 14.400 \\ 0 & 0 & - 4.2400 \\ \end{bmatrix}$$ (C) $$\begin{bmatrix} 1 & 0 & 0 \\ 10 & 1 & 0 \\ 8 & 12 & 0 \\ \end{bmatrix}$$ (D) $$\begin{bmatrix} 1 & 0 & 0 \\ 0.40000 & 1 & 0 \\ 0.32000 & 1.5000 & 1 \\ \end{bmatrix}$$ (3). The upper triangular matrix $$\left\lbrack U \right\rbrack$$ in the $$\left\lbrack L \right\rbrack\left\lbrack U \right\rbrack$$ decomposition of the matrix given below $\begin{bmatrix} 25 & 5 & 4 \\ 0 & 8 & 16 \\ 0 & 12 & 22 \\ \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ \mathcal{l}_{21} & 1 & 0 \\ \mathcal{l}_{31} & \mathcal{l}_{32} & 1 \\ \end{bmatrix}\begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \\ \end{bmatrix}$ is (A) $$\begin{bmatrix} 1 & 0 & 0 \\ 0.40000 & 1 & 0 \\ 0.32000 & 1.7333 & 1 \\ \end{bmatrix}$$ (B) $$\begin{bmatrix} 25 & 5 & 4 \\ 0 & 6 & 14.400 \\ 0 & 0 & - 4.2400 \\ \end{bmatrix}$$ (C) $$\begin{bmatrix} 25 & 5 & 4 \\ 0 & 8 & 16 \\ 0 & 0 & - 2 \\ \end{bmatrix}$$ (D) $$\begin{bmatrix} 1 & 0.2000 & 0.16000 \\ 0 & 1 & 2.4000 \\ 0 & 0 & - 4.240 \\ > \end{bmatrix}$$ (4). For a given 2000 $$\times$$ 2000 matrix $$\left\lbrack A \right\rbrack$$, assume that it takes about 15 seconds to find the inverse of $$\left\lbrack A \right\rbrack$$ by the use of the $$\left\lbrack L \right\rbrack\left\lbrack U \right\rbrack$$ decomposition method, that is, finding the $$\left\lbrack L \right\rbrack\left\lbrack U \right\rbrack$$ once, and then doing forward substitution and back substitution $$2000$$ times using the $$2000$$ columns of the identity matrix as the right hand side vector. The approximate time, in seconds, that it will take to find the inverse if found by repeated use of the Naive Gauss elimination method, that is, doing forward elimination and back substitution $$2000$$ times by using the $$2000$$ columns of the identity matrix as the right hand side vector is most nearly (A) $$300$$ (B) $$1500$$ (C) $$7500$$ (D) $$30000$$ (5). The algorithm for solving a set of $$n$$ equations $$\left\lbrack A \right\rbrack\left\lbrack X \right\rbrack = \left\lbrack C \right\rbrack$$, where $$\left\lbrack A \right\rbrack = \left\lbrack L \right\rbrack\left\lbrack U \right\rbrack$$ involves solving$$\left\lbrack L \right\rbrack\left\lbrack Z \right\rbrack = \left\lbrack C \right\rbrack$$ by forward substitution. 
The algorithm to solve $$\left\lbrack L \right\rbrack\left\lbrack Z \right\rbrack = \left\lbrack C \right\rbrack$$ is given by (A) $$z_{1} = c_{1}/l_{11}$$ for $$i$$ from 2 to $$n$$ do sum = 0 for $$j$$ from 1 to $$i$$ do sum = sum + $$l_{\text{ij}}*z_{j}$$ end do $$z_{i} = (c_{i} - \text{sum})/l_{\text{ii}}$$ end do (B) $$z_{1} = c_{1}/l_{11}$$ for $$i$$ from 2 to $$n$$ do sum = 0 for $$j$$ from 1 to $$(i - 1)$$ do sum = sum + $$l_{\text{ij}}*z_{j}$$ end do $$z_{i} = (c_{i} - \text{sum})/l_{\text{ii}}$$ end do (C) $$z_{1} = c_{1}/l_{11}$$ for $$i$$ from 2 to $$n$$ do for $$j$$ from 1 to $$(i - 1)$$do sum = sum + $$l_{\text{ij}}*z_{j}$$ end do $$z_{i} = (c_{i} - \text{sum})/l_{\text{ii}}$$ end do (D) for $$i$$ from 2 to $$n$$ do sum = 0 for $$j$$ from 1 to $$(i - 1)$$ do sum = sum +$$l_{\text{ij}}*z_{j}$$ end do $$z_{i} = (c_{i} - \text{sum})/l_{\text{ii}}$$ end do (6). To solve boundary value problems, a numerical method based on finite difference method is used. This results in simultaneous linear equations with tridiagonal coefficient matrices. These are solved using a specialized $$\left\lbrack L \right\rbrack\left\lbrack U \right\rbrack$$ decomposition method. Choose the set of equations that approximately solves the boundary value problem $\frac{d^{2}y}{dx^{2}} = 6x - 0.5x^{2},\ \ y\left( 0 \right) = 0,\ \ y\left( 12 \right) = 0,\ \ 0 \leq x \leq 12$ The second derivative in the above equation is approximated by the second-order accurate central divided difference approximation as learned in the differentiation module (Chapter 02.02). A step size of $$h = 4$$ is used, and hence the value of y can be found approximately at equidistantly placed 4 nodes between $$x = 0$$ and $$x = 12$$. (A) $$\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0.0625 & 0.125 & 0.0625 & 0 \\ 0 & 0.0625 & 0.125 & 0.0625 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix}\begin{bmatrix} y_{1} \\ y_{2} \\ y_{3} \\ y_{4} \\ \end{bmatrix} = \begin{bmatrix} 0 \\ 16.0 \\ 16.0 \\ 0 \\ \end{bmatrix}$$ (B) $$\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0.0625 & - 0.125 & 0.0625 & 0 \\ 0 & 0.0625 & - 0.125 & 0.0625 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix}\begin{bmatrix} y_{1} \\ y_{2} \\ y_{3} \\ y_{4} \\ \end{bmatrix} = \begin{bmatrix} 0 \\ 16.0 \\ 16.0 \\ 0 \\ \end{bmatrix}$$ (C) $$\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0.0625 & - 0.125 & 0.0625 & 0 \\ 0 & 0.0625 & - 0.125 & 0.0625 \\ \end{bmatrix}\begin{bmatrix} y_{1} \\ y_{2} \\ y_{3} \\ y_{4} \\ \end{bmatrix} = \begin{bmatrix} 0 \\ 16.0 \\ 16.0 \\ 0 \\ \end{bmatrix}$$ (D) $$\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0.0625 & 0.125 & 0.0625 & 0 \\ 0 & 0.0625 & 0.125 & 0.0625 \\ \end{bmatrix}\begin{bmatrix} y_{1} \\ y_{2} \\ y_{3} \\ y_{4} \\ \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 16.0 \\ 16.0 \\ \end{bmatrix}$$ ## 7.10 LU Decomposition Method for Solving Simultaneous Linear Equations Exercise (1). Show that LU decomposition is computationally a more efficient way of finding the inverse of a square matrix than using Gaussian elimination. (2). Use LU decomposition to find [L] and [U] $4x_{1} + x_{2} - x_{3} = - 2$ $5x_{1} + x_{2} + 2x_{3} = 4$ $6x_{1} + x_{2} + x_{3} = 6$ (3). Find the inverse $\lbrack A\rbrack = \begin{bmatrix} 3 & 4 & 1 \\ 2 & - 7 & - 1 \\ 8 & 1 & 5 \\ \end{bmatrix}$ using LU decomposition. (4). 
Fill in the blanks for the unknowns in the LU decomposition of the matrix given below $\begin{bmatrix} 25 & 5 & 4 \\ 75 & 7 & 16 \\ 12.5 & 12 & 22 \\ \end{bmatrix} = \begin{bmatrix} \mathcal{l}_{11} & 0 & 0 \\ \mathcal{l}_{21} & \mathcal{l}_{22} & 0 \\ \mathcal{l}_{31} & \mathcal{l}_{32} & \mathcal{l}_{33} \\ \end{bmatrix}\begin{bmatrix} 25 & 5 & 4 \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \\ \end{bmatrix}$ (5). Show that the nonsingular matrix $\left\lbrack A \right\rbrack = \begin{bmatrix} 0 & 2 \\ 2 & 0 \\ \end{bmatrix}$ cannot be decomposed into LU form. (6). The LU decomposition of $\lbrack A\rbrack = \begin{bmatrix} 4 & 1 & - 1 \\ 5 & 1 & 2 \\ 6 & 1 & 1 \\ \end{bmatrix}$ is given by $\begin{bmatrix} 4 & 1 & - 1 \\ 5 & 1 & 2 \\ 6 & 1 & 1 \\ \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 1.25 & 1 & 0 \\ 1.5 & 2 & 1 \\ \end{bmatrix}\begin{bmatrix} ?? & ?? & ?? \\ 0 & ?? & ?? \\ 0 & 0 & ?? \\ \end{bmatrix}$ Find the upper triangular matrix in the above decomposition?
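To tie the chapter together, below is a minimal Python sketch of the decomposition and the two substitution sweeps described above. It is written directly from the formulas in this chapter (Doolittle form: 1's on the diagonal of [L], no pivoting, so it assumes no zero pivots are encountered); the function names are only illustrative. Running it on the matrix of Examples 1 and 2 should reproduce the [L], [U] and solution vector found there.

```python
# LU decomposition (Doolittle form, no pivoting) plus forward and back
# substitution, following the multipliers used in this chapter.

def lu_decompose(A):
    """Return (L, U) with A = L*U; L is unit lower triangular."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]                # U starts as a copy of A
    for k in range(n - 1):                   # forward elimination steps
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]            # multiplier l_ik (assumes nonzero pivot)
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return L, U

def forward_sub(L, c):
    """Solve [L][Z] = [C] for Z."""
    n = len(c)
    z = [0.0] * n
    for i in range(n):
        z[i] = (c[i] - sum(L[i][j] * z[j] for j in range(i))) / L[i][i]
    return z

def back_sub(U, z):
    """Solve [U][X] = [Z] for X."""
    n = len(z)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (z[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

A = [[25.0, 5.0, 1.0], [64.0, 8.0, 1.0], [144.0, 12.0, 1.0]]
C = [106.8, 177.2, 279.2]
L, U = lu_decompose(A)
print(L)   # expected: [[1, 0, 0], [2.56, 1, 0], [5.76, 3.5, 1]]
print(U)   # expected: [[25, 5, 1], [0, -4.8, -1.56], [0, 0, 0.7]]
print(back_sub(U, forward_sub(L, C)))   # approximately [0.29048, 19.691, 1.0857]
```

Reusing the same [L] and [U] with each column of the identity matrix as the right-hand side gives the inverse column by column, exactly as in Example 3.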
2023-03-26 20:55:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8608406782150269, "perplexity": 336.17637249363696}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946535.82/warc/CC-MAIN-20230326204136-20230326234136-00261.warc.gz"}
https://mathstodon.xyz/@11011110/100903362734933058
Balogh and Solymosi's new paper doi.org/10.19086/da.4438 constructs $n$ points in the plane, no four in line, with max general-position subset size $O(n^{5/6+\epsilon})$, much better than the previous $o(n)$. I recently rescued a Wikipedia article on Solymosi, en.wikipedia.org/wiki/J%C3%B3z, adding book references for his results, but omitted my favorite (this one) because it wasn't published. Now it is, but the book reference would be too self-serving to add...
2018-12-11 08:07:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9104009866714478, "perplexity": 2519.8838144949896}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823588.0/warc/CC-MAIN-20181211061718-20181211083218-00372.warc.gz"}
https://mathspace.co/textbooks/syllabuses/Syllabus-1082/topics/Topic-21065/subtopics/Subtopic-273165/
iGCSE (2021 Edition) # 6.02 Logarithmic laws Lesson In Chapter 1 we reviewed the index laws to simplify expressions. Because of the link between indices and logarithms, it follows that these index laws can be manipulated to create a set of equivalent laws that apply to logarithms. ### The addition and subtraction logarithm laws The addition law of logarithms relates the sum of two logarithms to the logarithm of a product. Similarly, the subtraction law of logarithms relates the difference of two logarithms to the logarithm of a quotient. Addition and subtraction law of logarithms The addition law of logarithms: When adding logs with the same base, multiply the numbers. That is, $\log_bx+\log_by=\log_b\left(xy\right)$logbx+logby=logb(xy) The subtraction law of logarithms: When subtracting logs with the same base, divide the numbers. That is, $\log_bx-\log_by=\log_b\left(\frac{x}{y}\right)$logbxlogby=logb(xy) We can prove these laws by using the corresponding properties of exponentials: We start by letting $\log_bx=N$logbx=N and $\log_by=M$logby=M. We can rewrite these two equations in their equivalent exponential forms, $b^N=x$bN=x and $b^M=y$bM=y. Multiplying these two expressions gives us the result: $xy$xy $=$= $b^N\times b^M$bN×bM Writing down the product $=$= $b^{N+M}$bN+M Using a property of exponentials $\log_b\left(xy\right)$logb​(xy) $=$= $N+M$N+M Rewriting in logarithmic form $\log_b\left(xy\right)$logb​(xy) $=$= $\log_bx+\log_by$logb​x+logb​y Substituting Proof of subtraction law: We follow a similar procedure and start by letting $\log_bx=N$logbx=N and $\log_by=M$logby=M. We can rewrite these in their exponential forms, $b^N=x$bN=x and $b^M=y$bM=y. Taking the quotient of the two expressions gives us the result: $\frac{x}{y}$xy​ $=$= $\frac{b^N}{b^M}$bNbM​ Writing down the quotient $=$= $b^{N-M}$bN−M Using a property of exponentials $\log_b\left(\frac{x}{y}\right)$logb​(xy​) $=$= $N-M$N−M Rewriting in logarithmic form $\log_b\left(\frac{x}{y}\right)$logb​(xy​) $=$= $\log_bx-\log_by$logb​x−logb​y Substituting These two laws are especially valuable if we want to simplify expressions or solve equations involving logarithms. #### Worked examples ##### example 1 Simplify the logarithmic expression $\log_310-\log_32$log310log32. Think: Since the two logarithms have the same base, we can use the subtraction law of logarithms. Do: To use the subtraction property of two logarithms, we can divide the arguments: $\log_310-\log_32$log3​10−log3​2 $=$= $\log_3\left(\frac{10}{2}\right)$log3​(102​) Using the subtraction law $=$= $\log_35$log3​5 Simplifying the argument ##### example 2 Rewrite $\log_56x$log56x as a sum or difference of two logarithms. Think: Since there is a product in the logarithm, we can use the addition law in reverse. Do: So using the addition law, we can rewrite $\log_56x$log56x in the form: $\log_56+\log_5x$log56+log5x Reflect: Note that we could have chosen any pair of factors that multiply to give $6x$6x to rewrite this expression. So, for example, another possible answer would be $\log_52+\log_53x$log52+log53x #### Practice questions ##### question 1 Simplify each of the following expressions without using a calculator. Leave answers in exact form. 1. $\log_{10}11+\log_{10}2+\log_{10}9$log1011+log102+log109 2. $\log_{10}12-\left(\log_{10}2+\log_{10}3\right)$log1012(log102+log103) ##### question 2 Express $\log\left(\frac{pq}{r}\right)$log(pqr) as the sum and difference of log terms. 
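Although the lesson itself only needs pen-and-paper work, the two laws above are easy to sanity-check numerically. The short Python sketch below is just such an illustrative check (the particular numbers and base are arbitrary choices, not part of the lesson):

```python
import math

b, x, y = 3.0, 10.0, 2.0

# Addition law: log_b(x) + log_b(y) equals log_b(x*y)
print(math.log(x, b) + math.log(y, b), math.log(x * y, b))

# Subtraction law: log_b(x) - log_b(y) equals log_b(x/y)
print(math.log(x, b) - math.log(y, b), math.log(x / y, b))
```

Each line prints two values that agree (up to floating-point rounding), which is exactly what the two laws predict.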
### The power logarithm law We've already seen how to simplify logarithms using the addition and subtraction property. Through the definition of logarithms we know that $x=a^m$x=am and $m=\log_ax$m=logax are equivalent. We are able to use this definition to discover some more helpful properties of logarithms such as the power law. #### Exploration Let's simplify $\log_a\left(x^2\right)$loga(x2) using the logarithmic properties that we already know. $\log_a\left(x^2\right)$loga​(x2) $=$= $\log_a\left(x\times x\right)$loga​(x×x) Rewrite $x^2$x2 as a product, $x\times x$x×x. $=$= $\log_ax+\log_ax$loga​x+loga​x Use the addition law of logarithms, $\log_a\left(xy\right)=\log_ax+\log_ay$loga​(xy)=loga​x+loga​y. $=$= $2\log_ax$2loga​x Collect logarithms with the same base and variables. We can also simplify logarithms with powers using the power law of logarithms, this property can be used for any values of the power $n$n. Power law of logarithms The power law of logarithm: When the number in the log is raised to a power, we can bring that power down to the front to be multiplied to the log. $\log_a\left(x^n\right)=n\log_ax$loga(xn)=nlogax Now, let's simplify $\log_a\left(x^2\right)$loga(x2) using the power law. $\log_a\left(x^2\right)$loga​(x2) $=$= $2\log_ax$2loga​x Notice this gives the exact same result as using the addition law. Let's consider the proof of the power law of logarithms: Proof Let $x$x $=$= $a^m$am $x^n$xn $=$= $\left(a^m\right)^n$(am)n Raise both sides of $x=a^m$x=am to the power $n$n. $x^n$xn $=$= $a^{mn}$amn Use the index law $\left(a^m\right)^n=a^{mn}$(am)n=amn. $\log_a\left(x^n\right)$loga​(xn) $=$= $mn$mn Express as a logarithm. $\log_a\left(x^n\right)$loga​(xn) $=$= $n\log_ax$nloga​x Substitute back for $m=\log_ax$m=loga​x. #### Practice questions ##### question 3 Use the properties of logarithms to rewrite the expression $\log_4\left(x^7\right)$log4(x7). ##### Question 4 Use the properties of logarithms to rewrite the expression $\log\left(\left(x+6\right)^5\right)$log((x+6)5). ##### Question 5 Evaluate $\log_5125^{\frac{5}{4}}$log512554. ### Using a combination of laws to simplify logarithmic expressions So far we've seen some laws of logarithms in isolation, and looked at how they may be individually useful to simplify an expression or solve an equation. However, sometimes simplifying an expression may require using several of these laws. The laws we have looked at are summarised below. Laws of logarithms The addition law of logarithms is given by: $\log_bx+\log_by=\log_b\left(xy\right)$logbx+logby=logb(xy) The subtraction law of logarithms is given by: $\log_bx-\log_by=\log_b\left(\frac{x}{y}\right)$logbxlogby=logb(xy) The power law of logarithms is given by: $\log_b\left(x^n\right)=n\log_bx$logb(xn)=nlogbx Some useful identities of logarithms are: $\log_b1=0$logb1=0 and $\log_bb=1$logbb=1 #### Worked example ##### Example 3 Simplify the expression $\log_3\left(100x^3\right)-\log_3\left(4x\right)$log3(100x3)log3(4x), writing your answer as a single logarithm. Think: Each logarithm in the expression has the same base, so we can express the difference as a single logarithm using the subtraction law. 
Do: To use the subtraction law, we take the quotient of the two arguments as follows: $\log_3\left(100x^3\right)-\log_3\left(4x\right)$log3​(100x3)−log3​(4x) $=$= $\log_3\left(\frac{100x^3}{4x}\right)$log3​(100x34x​) Using the subtraction law $=$= $\log_3\left(25x^2\right)$log3​(25x2) Simplifying the argument $=$= $\log_3\left(\left(5x\right)^2\right)$log3​((5x)2) Rewriting the argument as a power $=$= $2\log_3\left(5x\right)$2log3​(5x) Using the power law #### Practice questions ##### question 6 Simplify each of the following expressions without using a calculator. Leave answers in exact form. 1. $\log_{10}10+\frac{\log_{10}\left(15^{20}\right)}{\log_{10}\left(15^5\right)}$log1010+log10(1520)log10(155) 2. $\frac{8\log_{10}\left(\sqrt{10}\right)}{\log_{10}\left(100\right)}$8log10(10)log10(100) ##### question 7 Express $5\log x+3\log y$5logx+3logy as a single logarithm. ##### Question 8 Using the rounded values $\log_{10}7=0.845$log107=0.845 and $\log_{10}3=0.477$log103=0.477, calculate $\log_{10}49+\log_{10}27$log1049+log1027 to 3 decimal places. ## Changing the base We often encounter occasions where we need to take a logarithm given in one base and express it as a logarithm in another base. For example, if we wanted to use the calculator to evaluate a logarithm but need to change the base to $10$10 or $e$e. A change of base formula has been developed to do just that. Suppose we think of a number $y$y expressed as $y=\log_pa$y=logpa. Since $y=\log_pa$y=logpa, then from the definition of a logarithm this means that: $a$a $=$= $p^y$py If we now take logarithms of some other base $q$q, we have that: $\log_qa$logq​a $=$= $\log_q\left(p^y\right)$logq​(py) $\log_qa$logq​a $=$= $y\log_qp$ylogq​p From the working rules of logarithms this simplifies to: $y$y $=$= $\frac{\log_qa}{\log_qp}$logq​alogq​p​ Look carefully at this result. It is saying that: $\log_pa$logp​a $=$= $\frac{\log_qa}{\log_qp}$logq​alogq​p​ For example: $\log_58$log5​8 $=$= $\frac{\log_{10}8}{\log_{10}5}$log10​8log10​5​ So if we knew the common logs of $8$8 and $5$5, we could determine $\log_58$log58. Now, to five decimal places: $\log_{10}8$log10​8 $=$= $0.90309$0.90309 $\log_{10}5$log10​5 $=$= $0.69897$0.69897 Then we have $\log_58$log5​8 $\approx$≈ $\frac{0.90309}{0.69897}$0.903090.69897​ $=$= $1.29203$1.29203 ($5$5 d.p.) Note that $\log_58$log58 could also be expressed as $\frac{\log_2\left(8\right)}{\log_2\left(5\right)}$log2(8)log2(5) which would give the same answer $1.29203$1.29203. Any base can be used, now we have the above relationship. As another example: $\log_b10$logb​10 $=$= $\frac{\log_{10}10}{\log_{10}b}$log10​10log10​b​ $\log_b10$logb​10 $=$= $\frac{1}{\log_{10}b}$1log10​b​ And: $\log_b10\times\log_{10}b$logb​10×log10​b $=$= $1$1 This is an interesting result. We can generalise this to show that: $\log_ba\times\log_ab$logb​a×loga​b $=$= $1$1 So that $\log_ba$logba and $\log_ab$logab are multiplicative inverses. Change of base rule To change the base of a logarithm, we do $\frac{\text{log of the number}}{\text{log of the base}}$log of the numberlog of the base, that is: $\log_ab=\frac{\log_cb}{\log_ca}$logab=logcblogca #### Practice questions ##### Question 9 Rewrite $\log_416$log416 in terms of base $10$10 logarithms. ##### Question 10 Rewrite $\log_3\sqrt{5}$log35 in terms of base $10$10 logarithms. ### Outcomes #### 0606C7.2 Know and use the laws of logarithms (including change of base of logarithms).
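As a final numerical check of the change of base rule above (again only an illustrative sketch, using Python's math module and the same $\log_5 8$ example):

```python
import math

# Change of base: log_a(b) = log_c(b) / log_c(a) for any valid base c
print(math.log(8, 5))                    # direct:   log_5(8) ≈ 1.29203
print(math.log10(8) / math.log10(5))     # via base 10
print(math.log(8, 2) / math.log(5, 2))   # via base 2, same value again

# log_b(a) and log_a(b) are multiplicative inverses
print(math.log(10, 5) * math.log(5, 10))   # ≈ 1.0
```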
2022-01-17 04:44:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8941782712936401, "perplexity": 2512.0784801560944}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300289.37/warc/CC-MAIN-20220117031001-20220117061001-00413.warc.gz"}
http://sage.math.gordon.edu/home/pub/47/
# MAT 338 Day 42 2011 ## 2570 days ago by kcrisman Hooray!  We made it.  You did a great job making it through the whole arc of number theory accessible at the undergraduate level. And we really did see a lot of the problems out there, including solving them.   Here are some that we did not see all the way through, though we were able to prove some things about them. • Solving higher-degree polynomial congruences, like $x^3\equiv a\text{ mod }(n)$. • Knowing how to find integer points on hard things like the Pell (hyperbola) equation $x^2-ny^2=1$. • Writing a number not just in terms of a sum of squares, but a sum of cubes, or a sum like $x^2+7y^2$. • The Prime Number Theorem, and finding ever better approximations to $\pi(x)$. It's this last one that I want to focus on today.  Recall Gauss' estimate for $\pi(x)$, the logarithmic integral function. @interact def _(n=(1000,(1000,10^6))): P = plot(prime_pi,n-1000,n,color='black',legend_label='$\pi(x)$') P += plot(Li,n-1000,n,color='green',legend_label='$Li(x)$') show(P) ## Click to the left again to hide and once more to show the dynamic interactive window It wasn't too bad.  But we were hoping we could get a little closer. So, among several other things, we tried $$Li(x)-\frac{1}{2}Li(\sqrt{x})\; .$$  And this was indeed better. @interact def _(n=(1000,(1000,10^6))): P = plot(prime_pi,n-1000,n,color='black',legend_label='$\pi(x)$') P += plot(Li,n-1000,n,color='green',legend_label='$Li(x)$') P += plot(lambda x: Li(x)-.5*Li(sqrt(x)),n-1000,n,color='red',legend_label='$Li(x)-\\frac{1}{2}Li(\sqrt{x})$') show(P) ## Click to the left again to hide and once more to show the dynamic interactive window So one might think one could keep adding and subtracting $$\frac{1}{n}Li(x^{1/n})$$ to get even closer, with this start to the pattern. As it turns out, that is not quite the right pattern.  In fact, the minus sign comes from $\mu(2)$, not from $(-1)^{2+1}$, as usually is the case in series like this! @interact def _(n=(1000,(1000,10^6)),k=(3,[1..10])): P = plot(prime_pi,n-1000,n,color='black',legend_label='$\pi(x)$') P += plot(Li,n-1000,n,color='green',legend_label='$Li(x)$') F = lambda x: sum([Li(x^(1/j))*moebius(j)/j for j in [1..k]]) P += plot(lambda x: Li(x)-.5*Li(sqrt(x)),n-1000,n,color='red',legend_label='$Li(x)-\\frac{1}{2}Li(\sqrt{x})$') P += plot(F,n-1000,n,color='blue',legend_label='$\sum_{j=1}^{%s}\\frac{\mu(j)}{j}Li(x^{1/j})$'%k) show(P) ## Click to the left again to hide and once more to show the dynamic interactive window However, it should be just as plain that this approximation doesn't really add a lot beyond $k=3$.  In fact, at $x=1000000$, just going through $k=3$ gets you within one of $\sum_{j=1}^\infty\frac{\mu(j)}{j}Li(x^{1/j})$.  So this is not enough to get a computable, exact formula for $\pi(x)$. Questions this might raise: • So where does this Moebius $\mu$ come from anyway? • What else is there to the error $$\left|\pi(x)-Li(x)\right|$$ anyway? • What does this have to do with winning a million dollars? • Are there connections with things other the just $\pi(x)$? We will answer these questions in this last lecture. 
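Before turning to the interactive cells, here is a plain-Python version of the truncated sum $\sum_{j=1}^{k}\frac{\mu(j)}{j}Li(x^{1/j})$ discussed above. It is not part of the original worksheet: it uses mpmath.li (the logarithmic integral taken from 0, so its values sit slightly above the $Li(x)$ used here) together with sympy's primepi for the exact count, and the helper names are only illustrative.

```python
import mpmath
from sympy import primepi

def mobius(n):
    # Moebius function by trial division; fine for the tiny n needed here
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:        # repeated prime factor, so mu = 0
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def truncated_riemann(x, terms=3):
    # sum_{j=1}^{terms} mu(j)/j * li(x^(1/j))
    return sum(mobius(j) / j * mpmath.li(x ** (1.0 / j))
               for j in range(1, terms + 1))

for x in [100, 1000, 10**4, 10**5]:
    print(x, int(primepi(x)), float(mpmath.li(x)), float(truncated_riemann(x)))
```

Even with only three terms, the last column tracks $\pi(x)$ noticeably better than the logarithmic integral alone.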
@interact def _(k=(3,[2..11])): F = lambda x: sum([Li(x^(1/j))*moebius(j)/j for j in [1..k]]) T = [['$i$','$\pi(i)$','$Li(i)$','$\sum_{j=1}^{%s}\\frac{\mu(j)}{j}Li(x^{1/j})$'%k,'$\pi(i)-Li(i)$','$\pi(i)-\sum_{j=1}^{%s}\\frac{\mu(j)}{j}Li(x^{1/j})$'%k]] for i in [100000,200000..1000000]: T.append([i,prime_pi(i),Li(i).n(digits=7),F(i).n(digits=7),(prime_pi(i)-Li(i)).n(digits=4),(prime_pi(i)-F(i)).n(digits=4)]) html.table(T,header=True) ## Click to the left again to hide and once more to show the dynamic interactive window This table shows the errors in Gauss' and our new estimates for every hundred thousand up to a million.  Clearly Gauss is not exact, but the other error is not always perfect either. After the PNT was proved, mathematicians wanted to get a better handle on the error in the PNT.  In particular, the Swedish mathematician Von Koch made a very interesting contribution in 1901. Conjecture: The error in the PNT is less than $$\frac{1}{8\pi}\sqrt{x}\ln(x)\; .$$ This seems to work, broadly speaking. @interact def _(n=(1000,(1000,10^6))): P = plot(prime_pi,n-1000,n,color='black',legend_label='$\pi(x)$') P += plot(Li,n-1000,n,color='green',legend_label='$Li(x)$') P += plot(lambda x: Li(x)-1/(8*pi)*sqrt(x)*log(x),n-1000,n,color='blue',linestyle='--',legend_label="Von Koch error estimate") show(P) ## Click to the left again to hide and once more to show the dynamic interactive window Given this data, the conjecture seems plausible, if not even improvable (though remember that $Li$ and $\pi$ switch places infinitely often!).  Of course, a conjecture is not a theorem. He did have one, though. Theorem: The error estimate above is equivalent to saying that $\zeta(s)$ equals zero precisely where Riemann thought it would be zero in 1859. This may seem odd.  After all, $\zeta$ is just about reciprocals of all numbers, and can't directly measure primes.  But in fact, the original proofs of the PNT also used the $zeta$ function in essential ways.  So Von Koch was just formalizing the exact estimate it could give us on the error. Indeed, Riemann was after bigger fish.  He didn't just want an error term.  He wanted an exact formula for $\pi(x)$, one that could be computed - by hand, or by machine, if such a machine came along - as close as one pleased.  And this is where $\zeta(s)$ becomes important, because of the Euler product formula:  $$\sum_{n=1}^{\infty} \frac{1}{n^s}=\prod_{p}\frac{1}{1-p^{-s}}$$ Somehow $\zeta$ does encode everything we want to know about prime numbers.  And Riemann's paper, "On the Number of Primes Less Than a Given Magnitude", is the place where this magic really does happen, and seeing just how it happens is our goal to close the course. We'll begin by plotting $\zeta$, to see what's going on. plot(zeta,-10,10,ymax=10,ymin=-1) As you can see, $\zeta(s)$ doesn't seem to hit zero very often.  Maybe for negative $s$... Wait a minute!  What is this plot?  Shouldn't $\zeta$ diverge if you put negative numbers in for $s$?  After all, then we'd get things like $$\sum_{i=1}^\infty n$$ for $s=-1$, and somehow I don't think that converges. G=graphics_array([complex_plot(zeta, (-20,20), (-20,20)),complex_plot(lambda z: z, (-3,3),(-3,3))]) G.show(figsize=[8,8]) In fact, it turns out that we can evaluate $\zeta(s)$ for nearly any complex number $s$ we desire.  The graphic above color-codes where each complex number lands by matching it to the color in the second graphic. The important point isn't the picture itself, but that there is a picture.  
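One way to see this without any plotting (a side check, not part of the original worksheet): mpmath's zeta accepts complex arguments directly, so a few evaluations already show both the continuation to negative inputs and a zero on the line $\sigma=1/2$.

```python
import mpmath

print(mpmath.zeta(2))          # pi^2/6 ≈ 1.6449..., the familiar real series
print(mpmath.zeta(0.5 + 3j))   # a complex point off the real axis
print(mpmath.zeta(mpmath.mpc(0.5, 14.134725141734693)))  # ≈ 0, the first zero above the real axis
print(mpmath.zeta(-1))         # -1/12, from the continued zeta, not the divergent series
```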
Yes, $\zeta$ can be defined for (nearly) any complex number as input. One way to see this is by looking at each term $\frac{1}{n^s}$ in $\zeta(s)=\sum_{n=1}^\infty \frac{1}{n^s}$.   If we let $s=\sigma+it$ (a long-standing convention, instead of $x+iy$), we can rewrite $$n^{-s}=e^{-s\ln(n)}=e^{-(\sigma+it)\ln(n)}=e^{-\sigma\ln(n)}e^{-it\ln(n)}=n^{-\sigma}\left(\cos(t\ln(n))-i\sin(t\ln(n))\right)$$ where the last step comes from something you may remember from calculus, and that is very easy to prove with Taylor series - that $$e^{ix}=\cos(x)+i\sin(x)\; .$$ So at least if $\sigma>1$, since $\cos$ and $\sin$ always have absolute value less than or equal to one, we still have the same convergence properties as with regular series, if we take the imaginary and real parts separately - $$\sum_{n=1}^\infty\frac{\cos(t\ln(n))}{n^s}+i\sum_{n=1}^\infty\frac{\sin(t\ln(n))}{n^s}$$ That doesn't explain the part of the complex plane on the left of the picture above, and all I will say is that it is possible to do this, and Riemann did it. (In fact, Riemann also is largely responsible for much of advanced complex analysis.) Let's get a sense for what the $\zeta$ function looks like.  First, a three-dimensional plot of its absolute value for $\sigma$ between 0 and 1 (which will turn out to be all that is important for our purposes). plot3d(lambda x,y: abs(zeta(x+i*y)),(0,1),(-20,20),plot_points=100)+plot3d(0,(0,1),(-20,20),color='green',alpha=.5) To get a better idea of what happens, we look at the absolute value of $\zeta$ for different inputs.  Here, we look at $\zeta(\sigma+it)$, where $\sigma$ is the real part, chosen by you, and then we plot $t$ out as far as requested.   Opposite that is the line which we are viewing on the complex plane. @interact def _(sig=slider(.01, .99, .01, 0.5, label='$\sigma$'),end=slider(2,100,1,40,label='end of $t$')): p = plot(lambda t: abs(zeta(sig+t*i)), -end,end,rgbcolor=hue(0.7)) q = complex_plot(zeta,(0,1),(-end,end),aspect_ratio=1/end)+line([(sig,-end),(sig,end)],linestyle='--') show(graphics_array([p,q]),figsize=[10,6]) ## Click to the left again to hide and once more to show the dynamic interactive window You'll notice that the only places the function has absolute value zero (which means the only places it hits zero) are when $\sigma=1/2$. Another (very famous) image is that of the parametric graph of each vertical line in the complex plane as mapped to the complex plane.  You can think of this as where an infinitely thin slice of the complex plane is 'wrapped' to. @interact def _(sig=slider(.01, .99, .01, 0.5, label='$\sigma$')): end=30 p = parametric_plot((lambda t: zeta(sig+t*i).real(),lambda t: zeta(sig+t*i).imag()), (0,end),rgbcolor=hue(0.7),plot_points=300) q = complex_plot(zeta,(0,1),(0,end),aspect_ratio=1/end)+line([(sig,0),(sig,end)],linestyle='--') show(graphics_array([p,q]),figsize=[10,6]) ## Click to the left again to hide and once more to show the dynamic interactive window The reason this image is so famous is because the only time it seems to hit the origin at all is precisely at $\sigma=1/2$ - and there, it hits it lots of times.  Everywhere else it just misses, somehow. That is not one hundred percent true, because it is also zero at negative even integer input, but these are well understood; this is the mysterious part.  
And so we have the Riemann Hypothesis: $$\text{ All the zeros of }\zeta(s)=\zeta(\sigma+it)\text{ where }t\neq 0\text{ are on }\sigma=1/2\; .$$ The importance of this problem is evidenced by it having been selected as one of the seven Millennium Prize problems by the Clay Math Institute (each holding a million-dollar award), as well as having no fewer than three recent popular books devoted to it (including the best one from our mathematical perspective, Prime Obsession by John Derbyshire). The rest of our time is dedicated to seeing why this might be related to the distribution of prime numbers, such as Von Koch's result showing the RH is equivalent to a bound on the error $\left|\pi(x)-Li(x)\right|$. We'll pursue this in three steps. • Our first step is to see the connection between $\pi(x)$ and $\mu(n)$. • Then we'll see the connection between these and $\zeta$. • Finally, we'll see how the zeros of $\zeta$ come into play. def J(x): end = floor(log(x)/log(2)) out = 0 for j in [1..end]: out += 1/j*prime_pi(x^(1/j)) return out L1 = [(n,J(n)) for n in [1..20]] L2 = [(n,J(n)) for n in [1..150]] graphics_array([plot_step_function(L1),plot_step_function(L2)]).show(figsize=[10,5]) The function above is one Riemann called $f$, but which we (following Edwards and Derbyshire) will call $J(x)$.  It is very similar to $\pi(x)$ in its definition, so it's not surprising that it looks similar. $$J(x)=\pi(x)+\frac{1}{2}\pi(\sqrt{x})+\frac{1}{3}\pi(\sqrt[3]{x})+\frac{1}{4}\pi(\sqrt[4]{x})+\cdots=\sum_{k=1}^\infty \frac{1}{n}\pi\left(x^{1/n}\right)$$ This looks like it's infinite, but it's not actually infinite.  For instance, we can see on the graph that $$J(20)=\pi(20)+\frac{1}{2}\pi(\sqrt{20})+\frac{1}{3}\pi(\sqrt[3]{20})+\frac{1}{4}\pi(\sqrt[4]{20})=8+\frac{2}{2}+\frac{1}{3}+\frac{1}{4}=9\frac{7}{12}$$ because $\sqrt[5]{20}\approx 1.8$ and $\pi(\sqrt[5]{20})\approx\pi(1.8)=0$, so the sum ends there. Okay, so we have this new function.  Yet another arithmetic function.   So what? Ah, but what have we been doing to all our arithmetic functions to see what they can do, to get formulas for them?  We've been Moebius inverting them, naturally!  In this case, Moebius inversion could be really great, since it would give us something about the thing being added - namely, $\pi(x)$. The only thing standing in our way is that $$J(x)=\sum_{k=1}^\infty \frac{1}{n}\pi\left(x^{1/n}\right)$$ which is not a sum over divisors.  But it turns out that, just like when we took the limits of the sum over divisors $\sum_{d\mid n}\frac{1}{d}$, we got $\sum_{n=1}^\infty \frac{1}{n}$, we can do the same thing with Moebius inversion. Fact If $\sum_{n=1}^\infty f(x/n)$ and $\sum_{n=1}^\infty g(x/n)$ both converge absolutely, then $$g(x)=\sum_{n=1}^\infty f(x/n)\Longleftrightarrow f(x)=\sum_{n=1}^\infty \mu(n)g(x/n)\; .$$ For us, we've just defined $g=J$ with $f(x/n)=\frac{1}{n}\pi\left(x^{1/n}\right)$, and so we get the very important result that $$\pi(x)=\sum_{n=1}^\infty \mu(n)\frac{J(x^{1/n})}{n}=J(x)-\frac{1}{2}J(\sqrt{x})-\frac{1}{3}J(\sqrt[3]{x})-\frac{1}{5}J(\sqrt[5]{x})+\frac{1}{6}J(\sqrt[6]{x})+\cdots$$ (If that last use of Moebius inversion looked a little sketchy, it does to me too, but I cannot find a single source where it's complained about that $f(x/n)=\frac{1}{n}\pi\left(x^{1/n}\right)$ is really a function of $x$ and $n$, not just $x/n$.  In any case, the result is correct, via a somewhat different explanation of this version of inversion in a footnote in Edwards.) Now, this looks just as hopeless as before.  
How is $J$ going to help us calculate $\pi$, if we can only calculate $J$ in terms of $\pi$ anyway? Ah, but here is where Riemann "turns the Golden Key", as Derbyshire puts it. • We will now use the Euler product for $\zeta$ to connect $\zeta$ to $J$. • Then we will see how the zeros of $\zeta$ give us an exact formula for $J$. • Then we will finally plug $J$ back into the Moebius-inverted formula for $\pi$, giving an exact formula for $\pi$! import mpmath L = lcalc.zeros_in_interval(10,150,0.1) n=100 P = plot(prime_pi,n-50,n,color='black',legend_label='$\pi(x)$') P += plot(Li,n-50,n,color='green',legend_label='$Li(x)$') G = lambda x: sum([mpmath.li(x^(1/j))*moebius(j)/j for j in [1..3]]) P += plot(G,n-50,n,color='red',legend_label='$\sum_{j=1}^{%s}\\frac{\mu(j)}{j}Li(x^{1/j})$'%3) F = lambda x: sum([(mpmath.li(x^(1/j))-log(2)+numerical_integral(1/(y*(y^2-1)*log(y)),x^(1/j),oo)[0])*moebius(j)/j for j in [1..3]])-sum([(mpmath.ei(log(x)*((0.5+l[0]*i)/j))+mpmath.ei(log(x)*((0.5-l[0]*i)/j))).real for l in L for j in [1..3]]) P += plot(F,n-50,n,color='blue',legend_label='Really good estimate',plot_points=50) show(P) We can see above that this has the potential to be a very good approximation, even given that I did limited calculations here.  The most interesting thing is the gentle waves you should see; this is quite different from the other types of approximations we had, and seems to have the potential to mimic the more abrupt nature of the actual $\pi(x)$ function much better in the long run. So let's see what this is.  First, let's connect $J$ and $\zeta$. Recall the Euler product for $\zeta$ again:  $$\zeta(s)=\prod_{p}\frac{1}{1-p^{-s}}$$ The trick to getting information about primes out of this, as well as connecting to $J$, is to take the logarithm of the whole thing.   This will turn the product into a sum  - something we can work with much more easily: $$\ln(\zeta(s))=\sum_{p}\ln\left(\frac{1}{1-p^{-s}}\right)=\sum_{p}-\ln\left(1-p^{-s}\right)$$ If we just had the fraction, we could use the geometric series to turn this into a sum (in fact, that is where it came from), but we got rid of it. Question: So what can we do with $-\ln()$ of some sum, not a product? Answer: We can use its Taylor series!  $$-\ln(1-x)=\sum_{k=1}^\infty \frac{x^k}{k}$$ Plug it in: $$\ln(\zeta(s))=\sum_{p}\sum_{k=1}^\infty \frac{(p^{-s})^k}{k}$$ Question: I don't see anything very useful here.  Can I get anything good out of this? Answer: Yes, in two big steps. First, we rewrite $\frac{(p^{-s})^k}{k}$ as an integral, so that we could add it up more easily. • Standard Calculus II improper integral work shows that $$\frac{(p^{-s})^k}{k}=\frac{s}{k}\int_{p^k}^\infty x^{-s-1}dx$$ • That means $$\ln(\zeta(s))=\sum_{p}\sum_{k=1}^\infty \frac{(p^{-s})^k}{k}=\sum_{p}\sum_{k=1}^\infty\frac{s}{k}\int_{p^k}^\infty x^{-s-1}dx=s\sum_{p}\sum_{k=1}^\infty\int_{p^k}^\infty \frac{1}{k}x^{-s-1}dx$$ Next, it would be nice to get this as one big integral.  But of what function, and with what endpoints? • We could unify these integrals from $p^k$ to $\infty$ somewhat artificially, by writing $$\int_{p^k}^\infty \frac{1}{k}x^{-s-1}dx=\int_1^{p^k}\frac{1}{k}\cdot 0\cdot x^{-s-1}\; dx+\int_{p^k}^\infty \frac{1}{k}x^{-s-1}dx$$  That would give an integral from $1$ to $\infty$, though it would be defined with a piecewise integrand. • Hmm, what function would I get if I added up all those piecewise integrands? 
• Well, it would be a function which added $\frac{1}{1}x^{-s-1}$ when $x$ reached a prime $p$ - so it would include $$\pi(x)x^{-s-1}\ldots$$ • It would add $\frac{1}{2}x^{-s-1}$ when it reached a square of a prime $p^2$, which is the same as adding it when $\sqrt{x}$ hits a prime - so it would include $$\frac{1}{2}\pi(\sqrt{x})x^{-s-1}\ldots$$ • And it adds $\frac{1}{3}x^{-s-1}$ when $x$ reaches a cube of a prime, which is when $\sqrt[3]{x}$ hits a prime - which adds $$\frac{1}{3}\pi(\sqrt[3]{x})x^{-s-1}$$ • In short, adding up all these piecewise integrands seems to give a big integrand $$\left(\pi(x)+\frac{1}{2}\pi(\sqrt{x})+\frac{1}{3}\pi(\sqrt[3]{x})+\cdots\right)x^{-s-1}$$ • But this is $J(x)$, of course (multiplied by $x^{-s-1}$)! Hence $$\ln(\zeta(s))=s\sum_{p}\sum_{k=1}^\infty\int_{p^k}^\infty \frac{1}{k}x^{-s-1}dx=s\int_1^\infty J(x)x^{-s-1}dx$$  This completes our first of the final goals - connecting $\zeta$ and $J$. L = lcalc.zeros_in_interval(10,100,0.1) [l[0] for l in L] [14.1347251, 21.0220396, 25.0108576, 30.4248761, 32.9350616, 37.5861782, 40.9187190, 43.3270733, 48.0051509, 49.7738325, 52.9703215, 56.4462477, 59.3470440, 60.8317785, 65.1125441, 67.0798105, 69.5464017, 72.0671577, 75.7046907, 77.1448401, 79.3373750, 82.9103808, 84.7354930, 87.4252746, 88.8091112, 92.4918993, 94.6513440, 95.8706342, 98.8311942] [14.1347251, 21.0220396, 25.0108576, 30.4248761, 32.9350616, 37.5861782, 40.9187190, 43.3270733, 48.0051509, 49.7738325, 52.9703215, 56.4462477, 59.3470440, 60.8317785, 65.1125441, 67.0798105, 69.5464017, 72.0671577, 75.7046907, 77.1448401, 79.3373750, 82.9103808, 84.7354930, 87.4252746, 88.8091112, 92.4918993, 94.6513440, 95.8706342, 98.8311942] We see all the zeros for $\sigma=1/2$ between $0$ and $100$; there are 29 of them. Our next goal is to see how this connection $$\ln(\zeta(s))=s\int_1^\infty J(x)x^{-s-1}dx$$ relates to the zeros of the $\zeta$ function (and hence the Riemann Hypothesis). We will do this by analogy - albeit a very powerful one, which Euler used to prove $\zeta(2)=\frac{\pi^2}{6}$ and which, correctly done, does yield the right answer. Recall basic algebra.  The Fundamental Theorem of Algebra states that every polynomial factors over the complex numbers.  For instance, $$f(x)=5x^3-5x=5(x-0)(x-1)(x+1)\; .$$  So we could then say that $$\ln(f(x))=\ln(5)+\ln(x-0)+\ln(x-1)+\ln(x+1)$$ Then if it turned out that $\ln(f(x))$ was useful to us for some other reason, it would be reasonable to say that we can get information about that other material from adding up information about the zeros of $f$ (and the constant $5$), because of the addition of $\ln(x-r)$ for all the roots $r$. You can't really do this with arbitrary functions, of course, and $\zeta$ doesn't work - for instance, because $\zeta(1)$ diverges badly, no matter how you define the complex version of $\zeta$. But it happens that $\zeta$ is very close to a function you can do that to, $(s-1)\zeta(s)$.   Applying the idea above to $(s-1)\zeta(s)$ (and doing lots of relatively hard complex integrals, or some other formal business with difficult convergence considerations) allows us to essentially invert $$\ln(\zeta(s))=s\int_1^\infty J(x)x^{-s-1}dx$$ to $$J(x)=Li(x)-\sum_{\rho}Li(x^\rho)-\ln(2)+\int_x^\infty\frac{dt}{t(t^2-1)\ln(t)}$$ It is hard to overestimate the importance of this formula.  Each piece comes from something inside $\zeta$ itself, inverted in this special way. • $Li(x)$ comes from the fact that we needed $(s-1)\zeta(s)$ to apply this inversion, not just $\zeta(s)$. 
• In fact, we can directly see this particular inversion, as it's true that $$s\int_1^\infty Li(x)x^{-s-1}dx=-\ln(s-1)$$ so one can see that $s-1$ and $Li$ seem to correspond.  (Unfortunately, Maxima, which does integration in Sage, cannot do this one so nicely yet.) • Each $Li(x^\rho)$ comes from each of the zeros of $\zeta$ on the line $\sigma=1/2$ in the complex plane. • The $\ln(2)$ comes from the constant when you do the factoring, like the $5$ in the example. • The integral comes from the zeros of $\zeta$ at $-2n$ I mentioned briefly above. To give you a sense of how complicated this formula $$J(x)=Li(x)-\sum_{\rho}Li(x^\rho)-\ln(2)+\int_x^\infty\frac{dt}{t(t^2-1)\ln(t)}$$ really is, here is a plot of something related. import mpmath parametric_plot((lambda t: mpmath.ei(log(20)*(0.5+i*RR(t))).real,lambda t: mpmath.ei(log(20)*(0.5+i*RR(t))).imag), (0,14.1),rgbcolor=hue(0.7),plot_points=300)+point((mpmath.ei(log(20)*(0.5+i*14.1)).real,mpmath.ei(log(20)*(0.5+i*14.1)).imag),color='red',size=20) This is the plot of $$Li(20^{1/2+it})$$ up through the first zero of $\zeta$ above the real axis.    It's beautiful, but also forbidding.  After all, if takes that much twisting and turning to get to $Li$ of the first zero, what is in store if we have to add up over all infinitely many of them to calculate $J(20)$? Now we are finally ready to see Riemann's result, by plugging in this formula for $J$ into the Moebius inverted formula for $\pi$ given by $$\pi(x)=J(x)-\frac{1}{2}J(\sqrt{x})-\frac{1}{3}J(\sqrt[3]{x})-\frac{1}{5}J(\sqrt[5]{x})+\frac{1}{6}J(\sqrt[6]{x})+\cdots$$ Riemann did not prove it fully rigorously, and indeed one of the provers of the PNT mentioned taking decades to prove all the statements Riemann made in this one paper, just so he could prove the PNT.  Nonetheless, it is certainly Riemann's formula for $\pi(x)$, and an amazing one: $$\pi(x)=\sum_{n=1}^\infty \frac{\mu(n)}{n}\left[ Li(x^{1/n})-\sum_{\rho}\left(Li(x^{\rho/n})+Li(x^{\bar{\rho}/n})\right)+\int_{x^{1/n}}^\infty\frac{dt}{t(t^2-1)\ln(t)}\right]$$ Two points: • Here, $\rho$ is a zero above the real axis, and $\bar{\rho}$ is the corresponding one below the real axis. • If you're wondering where $\ln(2)$ went, it went to 0 because $\sum_{n=1}^\infty \frac{\mu(n)}{n}=0$, though this is very hard to prove (in fact, it is a consequence of the PNT). Now let's see it in action. import mpmath var('y') L = lcalc.zeros_in_interval(10,50,0.1) @interact def _(n=(100,(60,10^3))): P = plot(prime_pi,n-50,n,color='black',legend_label='$\pi(x)$') P += plot(Li,n-50,n,color='green',legend_label='$Li(x)$') G = lambda x: sum([mpmath.li(x^(1/j))*moebius(j)/j for j in [1..3]]) P += plot(G,n-50,n,color='red',legend_label='$\sum_{j=1}^{%s}\\frac{\mu(j)}{j}Li(x^{1/j})$'%3) F = lambda x: sum([(mpmath.li(x^(1/j))-log(2)+numerical_integral(1/(y*(y^2-1)*log(y)),x^(1/j),oo)[0])*moebius(j)/j for j in [1..3]])-sum([(mpmath.ei(log(x)*((0.5+l[0]*i)/j))+mpmath.ei(log(x)*((0.5-l[0]*i)/j))).real for l in L for j in [1..3]]) P += plot(F,n-50,n,color='blue',legend_label='Really good estimate',plot_points=50) show(P) ## Click to the left again to hide and once more to show the dynamic interactive window And this graphic shows just how good it can get.  Again, notice the waviness, which allows it to approximate $\pi(x)$ not just once per 'step' of the function, but along the steps. We can also just check out some numerical values. 
var('y') L = lcalc.zeros_in_interval(10,300,0.1) F = lambda x: sum([(mpmath.li(x^(1/j))-log(2)+numerical_integral(1/(y*(y^2-1)*log(y)),x^(1/j),oo)[0])*moebius(j)/j for j in [1..3]])-sum([(mpmath.ei(log(x)*((0.5+l[0]*i)/j))+mpmath.ei(log(x)*((0.5-l[0]*i)/j))).real for l in L for j in [1..3]]) F(300); prime_pi(300); Li(300.); Li(300.)-1/2*Li(sqrt(300.))-1/3*Li((300.)^(1/3)) mpf('62.169581705460772') 62 67.2884511014 62.1320949397 mpf('62.169581705460772') 62 67.2884511014 62.1320949397 This is truly only the beginning of research in modern number theory. For instance, research in finding points on curves leads to more complicated series like $\zeta$, called $L$-functions.  There is a version of the Riemann Hypothesis for them, too! And it gives truly interesting, strange, and beautiful results - like the following result from the last year or two, with which we will end the course. Let $r_{12}(n)$ denote the number of ways to write $n$ as a sum of twelve squares, like we did $r(n)$ the number of ways to write as a sum of two squares.  Here, order and sign both matter, so $(1,2)$ and $(2,1)$ and $(-2,1)$ are all different. Theorem: As we let $p$ go through the set of all prime numbers, the distribution of the fraction $$\frac{r_{12}(p)-8(p^5+1)}{32p^{5/2}}$$ is precisely as $$\frac{2}{\pi}\sqrt{1-t^2}$$ in the long run. def dist(v, b, left=float(0), right=float(pi)): """ We divide the interval between left (default: 0) and right (default: pi) up into b bins. For each number in v (which must left and right), we find which bin it lies in and add this to a counter. This function then returns the bins and the number of elements of v that lie in each one. ALGORITHM: To find the index of the bin that a given number x lies in, we multiply x by b/length and take the floor. """ length = right - left normalize = float(b/length) vals = {} d = dict([(i,0) for i in range(b)]) for x in v: n = int(normalize*(float(x)-left)) d[n] += 1 return d, len(v) def graph(d, b, num=5000, left=float(0), right=float(pi)): s = Graphics() left = float(left); right = float(right) length = right - left w = length/b k = 0 for i, n in d.iteritems(): k += n # ith bin has n objects in it. s += polygon([(w*i+left,0), (w*(i+1)+left,0), \ (w*(i+1)+left, n/(num*w)), (w*i+left, n/(num*w))],\ rgbcolor=(0,0,0.5)) return s def sin2(): PI = float(pi) return plot(lambda x: (2/PI)*math.sin(x)^2, 0,pi, plot_points=200, rgbcolor=(0.3,0.1,0.1), thickness=2) def sqrt2(): PI = float(pi) return plot(lambda x: (2/PI)*math.sqrt(1-x^2), -1,1, plot_points=200, rgbcolor=(0.3,0.1,0.1), thickness=2) delta = delta_qexp(10^5) @interact def delta_dist(b=(20,(10..150)), number = (500,1000,..,delta.prec())): D = delta[:number] w = [float(D[p])/(2*float(p)^(5.5)) for p in prime_range(number + 1)] d, total_number_of_points = dist(w,b,float(-1),float(1)) show(graph(d, b, total_number_of_points,-1,1) + sqrt2(), frame=True, gridlines=True) ## Click to the left again to hide and once more to show the dynamic interactive window Amazing! This is based on the graphic below, kindly created for me by William Stein, the founder of Sage, whose research is directly related to such things. 
@interact def delta_dist(b=(20,(10..150)), number = (500,1000,..,delta.prec()), verbose=False): D = delta[:number] if verbose: print "got delta" w = [float(D[p])/(2*float(p)^(5.5)) for p in prime_range(number + 1)] if verbose: print "normalized" v = [acos(x) for x in w] if verbose: print "arccos" d, total_number_of_points = dist(v,b) if verbose: print "distributed" show(graph(d, b, total_number_of_points) + sin2(), frame=True, gridlines=True)
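For the curious, $r_{12}(n)$ can be computed directly by expanding the twelfth power of the theta series $\sum_{k\in\mathbb{Z}}q^{k^2}$, so the normalization in the theorem can be checked by hand for small primes. The sketch below is an addition (plain Python plus sympy for the primes, no Sage needed) and only verifies that the normalized values land in $[-1,1]$; seeing the actual $\frac{2}{\pi}\sqrt{1-t^2}$ shape of course needs far more data than this.

```python
from sympy import primerange

N = 200                          # expand the series through q^N
theta = [0] * (N + 1)            # theta(q) = 1 + 2q + 2q^4 + 2q^9 + ...
k = 0
while k * k <= N:
    theta[k * k] += 1 if k == 0 else 2    # k = 0 once; otherwise +k and -k both contribute
    k += 1

r = [1] + [0] * N                # coefficients of theta^0 = 1
for _ in range(12):              # multiply by theta twelve times: r[n] becomes r_12(n)
    new = [0] * (N + 1)
    for i, a in enumerate(r):
        if a:
            for j in range(N + 1 - i):
                if theta[j]:
                    new[i + j] += a * theta[j]
    r = new

for p in primerange(3, N + 1):   # odd primes up to N
    t = (r[p] - 8 * (p**5 + 1)) / (32 * p**2.5)
    print(p, r[p], round(t, 3))  # each t should lie in [-1, 1]
```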
2018-05-27 13:52:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8297197818756104, "perplexity": 807.0421041904893}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794868316.29/warc/CC-MAIN-20180527131037-20180527151037-00291.warc.gz"}
https://docs.eyesopen.com/toolkits/cpp/depicttk/OEDepictFunctions/OECreateWin32GraphicsImage.html
# OECreateWin32GraphicsImage

OEImageBase *OECreateWin32GraphicsImage(std::shared_ptr<Gdiplus::Graphics> &graphics);

Note: This function is only available on the Windows operating system through C++.

Returns a new OEImageBase pointer that will forward all the draw commands to the passed-in Gdiplus::Graphics object. The user is expected to take ownership and delete the OEImageBase pointer. The OEImageBase will share ownership of the passed-in graphics object. It is important that the user does not destroy the Gdiplus::Graphics object before the OEImageBase is done with it.

graphics
    An OESharedPtr holding ownership of a Gdiplus::Graphics object. Ownership can thus be shared by the caller and the OEImageBase object simultaneously in a thread-safe fashion.

The following code demonstrates how to interact with this function:

    ULONG_PTR gdiplusToken;
    Gdiplus::GdiplusStartupInput startupInput;
    GdiplusStartup(&gdiplusToken, &startupInput, NULL);

    OEOwnedPtr<Gdiplus::Bitmap> bitmap(new Bitmap(200, 100, PixelFormat32bppARGB));
    std::shared_ptr<Gdiplus::Graphics> graphics(new Graphics(bitmap));
    std::unique_ptr<OEImageBase> img(OECreateWin32GraphicsImage(graphics));

    OEMol mol;
    OESmilesToMol(mol, "c1ccccc1 benzene");
    OEPrepareDepiction(mol);
    OERenderMolecule(*img.get(), mol);
2021-05-13 01:03:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18721027672290802, "perplexity": 13748.457032709071}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991413.30/warc/CC-MAIN-20210512224016-20210513014016-00393.warc.gz"}
https://testbook.com/question-answer/which-one-of-the-following-statements-is-not-corre--607d9c2cc54a0fb3f5465436
Which one of the following statements is not correct for light rays?

This question was previously asked in the NDA (Held On: 18 Apr 2021) General Ability Test previous year paper.

1. Light travels at different speeds in different media
2. Light travels at almost 300 million metres per second in air
3. Light speeds down as it leaves a water surface and enters the air
4. Light speeds up as it leaves a glass surface and enters the air

Option 3 : Light speeds down as it leaves a water surface and enters the air

Detailed Solution

CONCEPT:

• Refraction of Light: The bending of a ray of light passing from one medium to another medium is called refraction.
• The refraction of light takes place on going from one medium to another because the speed of light is different in the two media.
• The greater the difference in the speeds of light in the two media, the greater will be the amount of refraction.
• A medium in which the speed of light is higher is known as an optically rarer medium, and a medium in which the speed of light is lower is known as an optically denser medium.

EXPLANATION:

• Mathematically, the relation can be written as

⇒ $$\text{Speed of light in a medium } (v) = \frac{\text{Speed of light in vacuum } (c)}{\text{Refractive index } (\mu)}$$

• Since the speed of light in vacuum is constant, the speed of light in a medium is inversely proportional to the refractive index of the medium.
• Hence light does travel at different speeds in different media. Therefore option 1 is correct.
• The speed of light in air is about 3 × 10^8 m/s, i.e., 300 million metres per second. Therefore option 2 is correct.
• When light travels from water to air, the refractive index of water (about 1.33) is larger than that of air (about 1.0003), so water is the denser medium and air is the rarer medium.
• Whenever light goes from water to air, the frequency and the phase of the light do not change.
• However, the velocity and the wavelength of the light increase, because the light is travelling from the denser medium (μ of water ≈ 1.33) to the rarer medium (μ of air ≈ 1.0003). Light therefore speeds up, not down, on leaving water. Therefore option 3 is incorrect.
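To make the comparison concrete, here is a small numerical check (my own addition, not part of the original solution) using the relation v = c/μ with the refractive indices quoted above:

```python
# Illustrative check of v = c / mu with the values quoted in the solution.
c = 3.0e8                # speed of light in vacuum, in m/s
mu_water = 1.33
mu_air = 1.0003

v_water = c / mu_water   # about 2.26e8 m/s
v_air = c / mu_air       # about 3.00e8 m/s

print(f"speed in water: {v_water:.3e} m/s")
print(f"speed in air:   {v_air:.3e} m/s")
print("light speeds up on leaving water:", v_air > v_water)
```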
2021-09-22 07:44:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7152594327926636, "perplexity": 996.7313529549504}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057337.81/warc/CC-MAIN-20210922072047-20210922102047-00068.warc.gz"}
http://cvgmt.sns.it/paper/2306/
# A singular radial connection over B⁵ minimizing the Yang-Mills energy created by petrache on 07 Dec 2013 [BibTeX] Preprint Inserted: 7 dec 2013 Last Updated: 7 dec 2013 Year: 2013 Abstract: We prove that the pullback of the $SU(n)$-soliton of Chern class $c_2=1$ over $\mathbb S^4$ via the radial projection $\pi:\mathbb B^5\backslash\{0\}\to \mathbb S^4$ minimizes the Yang-Mills energy under the fixed boundary trace constraint. In particular this shows that stationary Yang-Mills connections in high dimension can have singular sets of codimension $5$.
2018-08-20 12:52:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7882114052772522, "perplexity": 1883.524747187523}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221216453.52/warc/CC-MAIN-20180820121228-20180820141228-00219.warc.gz"}
https://wiki.math.ucr.edu/index.php?title=005_Sample_Final_A,_Question_16&diff=prev&oldid=850
# 005 Sample Final A, Question 16

Question: Graph the following: $4x^{2}+9y^{2}+18y-27=0$

Foundations:
1) What type of conic section is this?
2) What can you say about the orientation of the graph?

1) Since both x and y are squared, it must be a hyperbola or an ellipse. We can conclude that the graph is an ellipse since both $x^{2}$ and $y^{2}$ have the same sign, positive.
2) Since the coefficient of the $x^{2}$ term is smaller, when we divide both sides by 36 the x-axis will be the major axis.

Solution: Completing the square in y gives $4x^{2}+9(y+1)^{2}=36$. Dividing both sides by 36 yields $\frac{4x^{2}}{36}+\frac{9(y+1)^{2}}{36}=\frac{x^{2}}{9}+\frac{(y+1)^{2}}{4}=1$.

The four vertices are $(-3,-1)$, $(3,-1)$, $(0,1)$ and $(0,-3)$.
2022-07-02 04:58:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 6, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9369180798530579, "perplexity": 279.83360494822244}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103984681.57/warc/CC-MAIN-20220702040603-20220702070603-00338.warc.gz"}
http://www.math.rochester.edu/
# Spotlight

Mathematicians are accustomed to seeing π in a variety of fields. But Professor Tamar Friedmann was still surprised to find it lurking in a formula for energy states of the hydrogen atom.

# Alex Iosevich and Jonathan Pakianathan awarded joint NSA Grant

Professors Alex Iosevich and Jonathan Pakianathan have been awarded a joint NSA Mathematical Sciences Grant titled Group Actions and Erdos Problems in Discrete, Continuous and Arithmetic Settings.

# Upcoming events

### Friday, December 4th, 2015

• 2:30 PM Topology Pre-Talk: Real Johnson-Wilson Theories and Computations, Vitaly Lorman (Johns Hopkins University), Hylan 1106B
• 4:00 PM Topology Seminar: Real Johnson-Wilson Theories and Computations, Vitaly Lorman (Johns Hopkins University), Hylan 1106A
2015-11-28 05:50:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.244651198387146, "perplexity": 11790.617756060325}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398451648.66/warc/CC-MAIN-20151124205411-00348-ip-10-71-132-137.ec2.internal.warc.gz"}
https://www.transtutors.com/questions/in-this-problem-we-describe-a-faster-algorithm-due-to-hopcroft-and-karp-for-finding-677439.htm
# The Hopcroft-Karp bipartite matching algorithm

In this problem, we describe a faster algorithm, due to Hopcroft and Karp, for finding a maximum matching in a bipartite graph. The algorithm runs in O(√V · E) time. Given an undirected, bipartite graph G = (V, E), where V = L ∪ R and all edges have exactly one endpoint in L, let M be a matching in G. We say that a simple path P in G is an augmenting path with respect to M if it starts at an unmatched vertex in L, ends at an unmatched vertex in R, and its edges belong alternately to M and E − M. (This definition of an augmenting path is related to, but different from, an augmenting path in a flow network.) In this problem, we treat a path as a sequence of edges, rather than as a sequence of vertices. A shortest augmenting path with respect to a matching M is an augmenting path with a minimum number of edges.

Given two sets A and B, the symmetric difference A ⊕ B is defined as (A − B) ∪ (B − A), that is, the elements that are in exactly one of the two sets.

a. Show that if M is a matching and P is an augmenting path with respect to M, then the symmetric difference M ⊕ P is a matching and |M ⊕ P| = |M| + 1. Show that if P1, P2, …, Pk are vertex-disjoint augmenting paths with respect to M, then the symmetric difference M ⊕ (P1 ∪ P2 ∪ … ∪ Pk) is a matching with cardinality |M| + k.

The general structure of our algorithm is the following:

HOPCROFT-KARP(G)
1  M = ∅
2  repeat
3      let P = {P1, P2, …, Pk} be a maximal set of vertex-disjoint shortest augmenting paths with respect to M
4      M = M ⊕ (P1 ∪ P2 ∪ … ∪ Pk)
5  until P == ∅
6  return M

The remainder of this problem asks you to analyze the number of iterations in the algorithm (that is, the number of iterations in the repeat loop) and to describe an implementation of line 3.

b. Given two matchings M and M* in G, show that every vertex in the graph G′ = (V, M ⊕ M*) has degree at most 2. Conclude that G′ is a disjoint union of simple paths or cycles. Argue that edges in each such simple path or cycle belong alternately to M or M*. Prove that if |M| ≤ |M*|, then M ⊕ M* contains at least |M*| − |M| vertex-disjoint augmenting paths with respect to M.

Let l be the length of a shortest augmenting path with respect to a matching M, and let P1, P2, …, Pk be a maximal set of vertex-disjoint augmenting paths of length l with respect to M. Let M′ = M ⊕ (P1 ∪ … ∪ Pk), and suppose that P is a shortest augmenting path with respect to M′.

c. Show that if P is vertex-disjoint from P1, P2, …, Pk, then P has more than l edges.

d. Now suppose that P is not vertex-disjoint from P1, P2, …, Pk. Let A be the set of edges (M ⊕ M′) ⊕ P. Show that A = (P1 ∪ P2 ∪ … ∪ Pk) ⊕ P and that |A| ≥ (k + 1)l. Conclude that P has more than l edges.

e. Prove that if a shortest augmenting path with respect to M has l edges, the size of the maximum matching is at most |M| + |V|/(l + 1).

f. Show that the number of repeat loop iterations in the algorithm is at most 2√V.

g. Give an algorithm that runs in O(E) time to find a maximal set of vertex-disjoint shortest augmenting paths P1, P2, …, Pk for a given matching M. Conclude that the total running time of HOPCROFT-KARP is O(√V · E).
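For readers who want to experiment, below is a compact Python sketch of the algorithm described above. This implementation is my own, not part of the original problem statement, and the graph format (a dict mapping each left vertex to a list of its right-side neighbours) is an assumption for the example. The BFS layers the vertices by shortest augmenting path length, and the DFS then extracts a maximal set of vertex-disjoint shortest augmenting paths, which together play the role of line 3.

```python
from collections import deque

def hopcroft_karp(graph, left, right):
    """Maximum bipartite matching.

    graph: dict mapping each vertex in `left` to a list of neighbours in `right`.
    Returns the matching size and the dict of matches for the left side.
    """
    INF = float("inf")
    match_l = dict.fromkeys(left)    # left vertex  -> matched right vertex (or None)
    match_r = dict.fromkeys(right)   # right vertex -> matched left vertex (or None)
    dist = {}

    def bfs():
        # Layer left vertices by distance from the set of unmatched left vertices.
        q = deque()
        for u in left:
            if match_l[u] is None:
                dist[u] = 0
                q.append(u)
            else:
                dist[u] = INF
        dist[None] = INF             # "distance" at which an unmatched right vertex is reached
        while q:
            u = q.popleft()
            if dist[u] < dist[None]:
                for v in graph[u]:
                    w = match_r[v]   # None if v is unmatched
                    if dist[w] == INF:
                        dist[w] = dist[u] + 1
                        if w is not None:
                            q.append(w)
        return dist[None] != INF     # an augmenting path exists

    def dfs(u):
        # Follow only edges consistent with the BFS layering, so every augmenting
        # path found is shortest, and the paths found in one phase are vertex-disjoint.
        if u is None:
            return True
        for v in graph[u]:
            w = match_r[v]
            if dist[w] == dist[u] + 1 and dfs(w):
                match_l[u] = v
                match_r[v] = u
                return True
        dist[u] = INF                # dead end; never revisit in this phase
        return False

    size = 0
    while bfs():
        for u in left:
            if match_l[u] is None and dfs(u):
                size += 1
    return size, match_l

# Small example: a bipartite graph with a perfect matching of size 3.
g = {"a": [1, 2], "b": [1], "c": [2, 3]}
print(hopcroft_karp(g, ["a", "b", "c"], [1, 2, 3]))
```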
2021-03-02 04:17:21
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.822817325592041, "perplexity": 661.4571104762233}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178363217.42/warc/CC-MAIN-20210302034236-20210302064236-00245.warc.gz"}
http://34.212.143.74/apps/s201911/py0002/practica_integral/
# Practice: Numerical Integration

### Objectives

During this activity, students should be able to:

• Write higher-order functions using the Python programming language.

## Activity Description

Simpson's rule is a method for numerical integration:

$$\int_{a}^{b}f=\frac{h}{3}(y_0 + 4y_1 + 2y_2 + 4y_3 + 2y_4 + \cdots + 2y_{n-2} + 4y_{n-1} + y_n)$$

Where $$n$$ is an even positive integer (increasing the value of $$n$$ gives a better approximation), and $$h$$ and $$y_k$$ are defined as follows:

$$h = \frac{b - a}{n}$$

$$y_k = f(a + kh)$$

Write the function integral, which takes as arguments a, b, n, and f and returns the value of the integral, using Simpson's rule. Test your code with the following integrals:

$$\int_{0}^{1} x^3\,\mathit{dx} = \frac{1}{4}$$

$$\int_{0}^{1}\frac{4}{1+x^2}\,\mathit{dx} = \pi$$
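One possible implementation of the requested function (my own sketch, not an official solution) is shown below, together with the two suggested checks:

```python
def integral(a, b, n, f):
    """Approximate the integral of f on [a, b] using Simpson's rule.

    n must be an even positive integer; larger n gives a better approximation.
    """
    h = (b - a) / n
    y = [f(a + k * h) for k in range(n + 1)]
    total = y[0] + y[n]
    total += 4 * sum(y[k] for k in range(1, n, 2))  # odd-indexed terms
    total += 2 * sum(y[k] for k in range(2, n, 2))  # even-indexed interior terms
    return (h / 3) * total

# Checks against the two integrals above:
print(integral(0, 1, 100, lambda x: x ** 3))            # ~0.25
print(integral(0, 1, 100, lambda x: 4 / (1 + x ** 2)))  # ~3.141592653...
```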
2019-08-22 11:12:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7898979783058167, "perplexity": 754.7624923388476}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027317113.27/warc/CC-MAIN-20190822110215-20190822132215-00345.warc.gz"}
https://possiblywrong.wordpress.com/2014/04/18/allrgb-hilbert-curves-and-random-spanning-trees/
## allRGB: Hilbert curves and random spanning trees

Introduction

Consider the following problem: create a visually appealing digital image with exactly one pixel for each of the 16.8 million possible colors in the 24-bit RGB color space.  Not a single color missing, and no color appearing more than once.

Check out the web site allRGB.com, which hosts dozens of such contributed images, illustrating all sorts of interesting approaches.  Most of them involve coloring the pixels in some systematic way, although a few start with an existing image or photograph, and transform it so that it contains all possible colors, while still "looking like" the original image.  I'll come back to this latter idea later.

Hilbert curves

The motivation for this post is two algorithms that lend themselves naturally to this problem.  The first is the generation of Hilbert's space-filling curve.  The idea fascinated me when I first encountered it, in the May 1985 issue of the short-lived Enter magazine.  How could the following code, that was so short, if indecipherable (by me, anyway), produce something so beautifully intricate?

```
 10 PI = 3.14159265: HCOLOR= 3
 20 HOME : VTAB 21
 30 PRINT "HILBERT'S CURVE."
 40 INPUT "WHAT SIZE? ";F1
 50 INPUT "WHAT LEVEL? ";L
 60 HGR
 70 PX = 0:PY = - 60
 80 H = 0:P = 1
 90 GOSUB 110
 100 GOTO 20
 110 IF L = 0 THEN RETURN
 120 L = L - 1:L1 = P * 90
 130 H = H + L1:P = ( - P)
 140 GOSUB 110
 150 P = ( - P)
 160 GOSUB 280
 170 R1 = P * 90:H = H - R1
 180 GOSUB 110
 190 GOSUB 280
 200 GOSUB 110
 210 R1 = P * 90:H = H - R1
 220 GOSUB 280
 230 P = ( - P)
 240 GOSUB 110
 250 P = ( - P):L1 = P * 90
 260 H = H + L1:L = L + 1
 270 RETURN
 280 HX = COS (H * PI / 180)
 290 HY = SIN (H * PI / 180)
 300 NX = PX + HX * F1
 310 NY = PY + HY * F1
 320 IF NX > 139 OR NY > 79 THEN GOTO 370
 330 HPLOT 140 + PX,80 - PY TO 140 + NX,80 - NY
 340 PX = NX:PY = NY
 350 PY = NY
 360 RETURN
 370 PRINT "DESIGN TOO LARGE"
 380 FOR I = 1 TO 1000: NEXT I
 390 GOTO 20
```

Nearly 30 years later, my first thought when encountering the allRGB problem was to use the Hilbert curve, but use it twice in parallel: traverse the pixels of the image via a 2-dimensional (order 12) Hilbert curve, while at the same time traversing the RGB color cube via a 3-dimensional (order 8) Hilbert curve, assigning each pixel in turn the corresponding color.  The result looks like this (or see this video showing the image as it is constructed pixel-by-pixel):

allRGB image created by traversing image in 2D Hilbert curve order, assigning colors from the RGB cube in 3D Hilbert curve order.

(Aside: all of the images and videos shown here are reduced in size, using all 262,144 18-bit colors, for a more manageable 512×512-pixel image size.)

As usual and as expected, this was not a new idea.  Aldo Cortesi provides an excellent detailed write-up and source code for creating a similar image, with a full-size submission on the allRGB site.  However, the mathematician in me prefers a slightly different implementation.  Cortesi's code is based on algorithms in the paper by Chris Hamilton and Andrew Rau-Chaplin (see Reference (2) below), in which the computation of a point on a Hilbert curve depends on both the dimension n (i.e., filling a square, cube, etc.) and the order (i.e., depth of recursion) of the curve.
But we can remove this dependence on the order of the curve; it is possible to orient each successively larger-order curve so that it contains the smaller-order curves, effectively realizing a bijection between the non-negative integers $\mathbb{N}$ and the non-negative integer lattice $\mathbb{N}^n$.  Furthermore, it seems natural that the first $2^n$ points on the curve (i.e., the order 1 curve) should be visited in the "canonical" reflected Gray code order.

The result is the following Python code, with methods Hilbert.encode() and Hilbert.decode() for converting indices to and from coordinates of points on the n-dimensional Hilbert curve:

```
class Hilbert:
    """Multi-dimensional Hilbert space-filling curve.
    """

    def __init__(self, n):
        """Create an n-dimensional Hilbert space-filling curve.
        """
        self.n = n
        self.mask = (1 << n) - 1

    def encode(self, index):
        """Convert index to coordinates of a point on the Hilbert curve.
        """

        # Compute base-n digits of index.
        digits = []
        while True:
            index, digit = divmod(index, self.mask + 1)
            digits.append(digit)
            if index == 0:
                break

        # Start with the orientation of smaller order curves.
        vertex, edge = (0, -len(digits) % self.n)

        # Visit each base-n digit of index, most significant first.
        coords = [0] * self.n
        for digit in reversed(digits):

            # Compute position in current hypercube, distributing the n
            # bits across n coordinates.
            bits = self.subcube_encode(digit, vertex, edge)
            for bit in range(self.n):
                coords[bit] = (coords[bit] << 1) | (bits & 1)
                bits = bits >> 1

            # Compute orientation of next sub-cube.
            vertex, edge = self.rotate(digit, vertex, edge)
        return tuple(coords)

    def decode(self, coords):
        """Convert coordinates to index of a point on the Hilbert curve.
        """

        # Convert n m-bit coordinates to m base-n digits.
        coords = list(coords)
        m = self.log2(max(coords)) + 1
        digits = []
        for i in range(m):
            digit = 0
            for bit in range(self.n - 1, -1, -1):
                digit = (digit << 1) | (coords[bit] & 1)
                coords[bit] = coords[bit] >> 1
            digits.append(digit)

        # Start with the orientation of smaller order curves.
        vertex, edge = (0, -m % self.n)

        # Visit each base-n digit, most significant first.
        index = 0
        for digit in reversed(digits):

            # Compute index of position in current hypercube.
            bits = self.subcube_decode(digit, vertex, edge)
            index = (index << self.n) | bits

            # Compute orientation of next sub-cube.
            vertex, edge = self.rotate(bits, vertex, edge)
        return index

    def subcube_encode(self, index, vertex, edge):
        h = self.gray_encode(index)
        h = (h << (edge + 1)) | (h >> (self.n - edge - 1))
        return (h & self.mask) ^ vertex

    def subcube_decode(self, code, vertex, edge):
        k = code ^ vertex
        k = (k >> (edge + 1)) | (k << (self.n - edge - 1))
        return self.gray_decode(k & self.mask)

    def rotate(self, index, vertex, edge):
        v = self.subcube_encode(max((index - 1) & ~1, 0), vertex, edge)
        w = self.subcube_encode(min((index + 1) | 1, self.mask), vertex, edge)
        return (v, self.log2(v ^ w))

    def gray_encode(self, index):
        return index ^ (index >> 1)

    def gray_decode(self, code):
        index = code
        while code > 0:
            code = code >> 1
            index = index ^ code
        return index

    def log2(self, x):
        y = 0
        while x > 1:
            x = x >> 1
            y = y + 1
        return y
```

[Edit: The FXT algorithm library contains an implementation of the multi-dimensional Hilbert curve, with an accompanying write-up whose example output suggests that it uses exactly this same convention.]
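As a quick illustration of the convention just described (my own example, not from the original post), the following snippet walks the first few indices of the 2-dimensional curve and checks that decode() inverts encode(); the first $2^n$ points come out in reflected Gray code order, as intended:

```python
# Assumes the Hilbert class defined above is in scope.
h2 = Hilbert(2)
points = [h2.encode(index) for index in range(8)]
print(points)
# The first 2^2 = 4 points are (0, 0), (1, 0), (1, 1), (0, 1):
# the reflected Gray code ordering of the unit square's corners.

# decode() is the inverse of encode():
for index in range(64):
    assert h2.decode(h2.encode(index)) == index
```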
Random spanning trees

At this point, a key observation is that the Hilbert curve image above is really just a special case of a more general approach: we are taking a "walk" through the pixels of the image, and another "walk" through the colors of the RGB cube, in parallel, assigning colors to pixels accordingly.  Those "walks" do not necessarily need to be along a Hilbert curve.  In general, any pair of "reasonably continuous" paths might yield an interesting image.

A Hilbert curve is an example of a Hamiltonian path through the corresponding graph, where the pixels are vertices and adjacent pixels are next to each other horizontally or vertically.  In general, Hamiltonian paths are hard to find, but in the case of the two- and three-dimensional grid graphs we are considering here, they are easier to come by.  The arguably simplest example is a "serpentine" traversal of the pixels or colors (see Cortesi's illustration referred to as "zigzag" order here), but the visual results are not very interesting.  So, what else?

We'll come back to Hamiltonian paths shortly.  But first, consider a "less continuous" traversal of the pixels and/or colors: construct a random spanning tree of the grid graph, and traverse the vertices in… well, any of several natural orders, such as breadth-first, depth-first, etc.

So how to construct a random spanning tree of a graph?  This is one of those algorithms that goes in the drawer labeled, "So cool because at first glance it seems like it has no business working as well as it does."

First, pick a random starting vertex (this will be the root), and imagine taking a random walk on the graph, at each step choosing uniformly from among the current vertex's neighbors.  Whenever you move to a new vertex for the first time, "mark" or add the traversed edge to the tree.  Once all vertices have been visited, the resulting marked edges form a spanning tree.  That's it.  The cool part is that the resulting spanning tree has a uniform distribution among all possible trees.

But we can do even better.  David Wilson (see Reference (3) below) describes the following improved algorithm that has the same uniform distribution on its output, but typically runs much faster: instead of taking a single meandering random walk to eventually "cover" all of the vertices, consider looping over the vertices, in any order, and for each vertex (that is not already in the tree) take a loop-erased random walk from that vertex until reaching the first vertex already in the tree.  Add the vertices and edges in that walk to the tree, and repeat.  Despite this "backward" growing of the tree, the resulting distribution is still uniform.

Here is the implementation in Python, where a graph is represented as a dictionary mapping vertices to lists of adjacent vertices:

```
import random

def random_spanning_tree(graph):
    """Return uniform random spanning tree of undirected graph.
    """
    root = random.choice(list(graph))
    parent = {root: None}
    tree = set([root])
    for vertex in graph:

        # Take random walk from a vertex to the tree.
        v = vertex
        while v not in tree:
            neighbor = random.choice(graph[v])
            parent[v] = neighbor
            v = neighbor

        # Erase any loops in the random walk, adding the vertices on
        # the remaining loop-erased path to the tree.
        v = vertex
        while v not in tree:
            tree.add(v)
            v = parent[v]
    return parent
```

The following image shows one example of using this algorithm, visiting the image pixels in a breadth-first traversal of a random spanning tree, assigning colors according to a corresponding Hilbert curve traversal of the RGB color cube.
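As a quick aside (my own addition, not part of the original post), here is a small usage sketch showing the dictionary format that random_spanning_tree() expects, built for a w×h grid graph of the same kind used for the image pixels:

```python
# Assumes random_spanning_tree() from above is defined.
def grid_graph(width, height):
    graph = {}
    for x in range(width):
        for y in range(height):
            neighbors = []
            if x > 0:
                neighbors.append((x - 1, y))
            if x < width - 1:
                neighbors.append((x + 1, y))
            if y > 0:
                neighbors.append((x, y - 1))
            if y < height - 1:
                neighbors.append((x, y + 1))
            graph[(x, y)] = neighbors
    return graph

tree = random_spanning_tree(grid_graph(4, 4))
root = [v for v, p in tree.items() if p is None][0]
print("root:", root)
print("edges:", [(v, p) for v, p in tree.items() if p is not None])  # 15 edges
```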
Breadth-first traversal of random spanning tree of pixels, assigning colors in Hilbert curve order.

My wife pointed out, correctly, I think, that watching the construction of these images is as interesting as the final result; this video shows another example of this same approach.  Note how the animation appears to slow down, as the number of pixels at a given depth in the tree increases.  A depth-first traversal, on the other hand, has a more constant speed, and a "snakier" look to it.

Expanding on this general idea, below is the result of all combinations of choices of four different types of paths through the image pixels and/or paths through the colors: Hilbert curve, serpentine, breadth-first through a random spanning tree, or depth-first.

Images resulting from different choices of paths through image pixels and colors.

Complete source code for generating larger versions of these images is available at the usual location here, in both Python, where I started, and C++, where I ended up, to manage the speed and space needed to create the full-size images.

Open questions

Let's come back now to the earlier mention of (a) transforming an existing image to contain all possible colors, and (b) using Hamiltonian paths in some way.  Reference (1) below describes a method of constructing a Hamiltonian cycle through the pixels of an image, using a minimum spanning tree of a "contraction" of the associated grid graph.  The idea is to group the pixels into 2×2 square blocks of "mini-Hamiltonian-cycles," resulting in a quarter-size grid graph, with edge weights between blocks corresponding to how "close" the pixel colors of the neighboring blocks are in the image.

Then, given a minimum spanning tree– or really, any spanning tree– in this smaller graph, we can construct a corresponding Hamiltonian cycle in the larger grid graph in a natural way, by connecting the 2×2 "mini-cycles" that are adjacent in the tree together into a larger cycle.  The resulting Hamiltonian cycle tends to have pixels that are close together on the cycle be "close" in color as well.

Now consider the following allRGB image: start with an existing image, and construct a Hamiltonian cycle from the minimum-weight "block" spanning tree.  Visit the pixels in cycle order, assigning colors according to some other corresponding "smooth" order, e.g. using a Hilbert curve.  How much of the original image would still be apparent in the result?

References:

1. Dafner, R., Cohen-Or, D. and Matias, Y., Context-Based Space-Filling Curves, Eurographics 2000, 19:3 (2000) [PDF]
2. Hamilton, C. and Rau-Chaplin, A., Compact Hilbert Indices for Multi-Dimensional Data, Proceedings of the First International Conference on Complex, Intelligent and Software Intensive Systems, April 2007, 139-146 [PDF]
3. Wilson, D., Generating Random Spanning Trees More Quickly than the Cover Time, Proceedings of the 28th Annual ACM Symposium on the Theory of Computing, ACM (1996), 296-303 [PDF]

### One Response to allRGB: Hilbert curves and random spanning trees

1. Doc Cube says:

   Very interesting and fun. I wonder if the results would be even more aesthetically pleasing if you somehow walked through CIELab space or X Y Z space.
2017-10-23 22:26:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7174306511878967, "perplexity": 947.801345489798}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187826840.85/warc/CC-MAIN-20171023221059-20171024000658-00022.warc.gz"}
https://www.nature.com/articles/palcomms201516?error=cookies_not_supported
# Hierarchical networks of scientific journals

## Abstract

Academic journals are the repositories of mankind's gradually accumulating knowledge of the surrounding world. Just as knowledge is organized into classes ranging from major disciplines, subjects and fields, to increasingly specific topics, journals can also be categorized into groups using various metrics. In addition, they can be ranked according to their overall influence. However, according to recent studies, the impact, prestige and novelty of journals cannot be characterized by a single parameter such as, for example, the impact factor. To increase understanding of journal impact, the knowledge gap we set out to explore in our study is the evaluation of journal relevance using complex multi-dimensional measures. Thus, for the first time, our objective is to organize journals into multiple hierarchies based on citation data. The two approaches we use are designed to address this problem from different perspectives. We use a measure related to the notion of m-reaching centrality and find a network that shows a journal's level of influence in terms of the direction and efficiency with which information spreads through the network. We find we can also obtain an alternative network using a suitably modified nested hierarchy extraction method applied to the same data. In this case, in a self-organized way, the journals become branches according to the major scientific fields, where the local structure of the branches reflects the hierarchy within the given field, with usually the most prominent journal (according to other measures) in the field chosen by the algorithm as the local root, and more specialized journals positioned deeper in the branch. This can make the navigation within different scientific fields and sub-fields very simple, and equivalent to navigating in the different branches of the nested hierarchy. We expect this to be particularly helpful, for example, when choosing the most appropriate journal for a given manuscript. According to our results, the two alternative hierarchies show a somewhat different, but also consistent, picture of the intricate relations between scientific journals, and, as such, they also provide a new perspective on how scientific knowledge is organized into networks.

## Introduction

Providing an objective ranking of scientific journals and mapping them into different knowledge domains are complex problems of significant importance, which can be addressed using a number of different approaches. Probably the most widely known quality measure is the impact factor (Garfield, 1955, 1999), corresponding to the total number of citations a journal receives in a 2-year period, divided by the number of published papers over the same period. Although it is a rather intuitive quantity, the impact factor has serious limitations (Harter and Nisonger, 1997; Opthof, 1997; Seglen, 1997; Bordons et al., 2002).
This consequently led to the introduction of alternative measures such as the H-index for journals (Braun et al., 2006), the g-index (Egghe, 2006), the Eigenfactor (Bergstrom, 2007), the PageRank and the Y-factor (Bollen et al., 2007), the Scimago Journal Rank (The Scimago Journal & Country Rank, 2015), as well as the use of various centralities such as the degree-, closeness- or betweenness centrality in the citation network between the journals (Bollen et al., 2005; Leydesdorff, 2007). Comparing the advantages and disadvantages of the different impact measures and examining their correlation has attracted considerable interest in the literature (Bollen et al., 2009; Franceschet, 2010a, b; Glänzel, 2011; Kaur et al., 2013). However, according to the results of the principal component analysis of 39 quality measures carried out by Bollen et al. (2009), scientific impact is a multi-dimensional construct that cannot be adequately measured by any single indicator. Thus, the development of higher-dimensional quality indicators for scientific journals provides an important objective for current research. In this study, we consider different possibilities for defining a hierarchy between scientific journals based on their citation network. The advantage of using the network approach for representing the intricate relations between journals is that networks can show substantially different aspects compared with any parametric method representing the journals with points in single- or even in multi-dimensional space. When organized into a hierarchy, the most important and prestigious journals are expected to appear at the top, while lesser known journals are expected to be ranked lower. However, a hierarchy offers a more complex view of the ranking between journals compared with a one-dimensional impact measure. For example, if the branches of the hierarchy are organized according to the different scientific fields, then the journals in a given field can be compared simply by zooming into the corresponding branch in the hierarchy. Possible scenarios for hierarchical relations between scientific journals have already been suggested recently by Iyengar and Balijepally (2015). However, the main objective in this earlier study was to examine the validity of a linear ordering between the journals based on a dominance ranking procedure (Iyengar and Balijepally, 2015). Here, we construct and visualize multiple hierarchies between the journals, offering a far more complex view of the ranking between journals compared with a one-dimensional impact measure. Hierarchical organization is in general a widespread phenomenon in nature and society. This is supported by several studies, focusing on the transcriptional regulatory network of Escherichia coli (Ma et al., 2004), the dominant–subordinate hierarchy among crayfish (Goessmann et al., 2000), the leader–follower network of pigeon flocks (Nagy et al., 2010), the rhesus macaque kingdoms (Fushing et al., 2011), neural network (Kaiser et al., 2010), technological networks (Pumain, 2006), social interactions (Guimerà et al., 2003; Pollner et al., 2006; Valverde and Solé, 2007), urban planning (Batty and Longley, 1994; Krugman, 1996), ecological systems (Hirata and Ulanowicz, 1985; Wickens and Ulanowicz, 1988) and evolution (Eldredge, 1985; McShea, 2001). However, hierarchy is a polysemous word, and in general, we can distinguish between three different types of hierarchies when describing a complex system: the order, the nested and the flow hierarchy. 
In the case of order hierarchy, we basically define a ranking, or more precisely a partial ordering, of the set of elements under investigation (Lane, 2006). Nested hierarchy (also called inclusion hierarchy or containment hierarchy) represents the idea of recursively aggregating the items into larger and larger groups, resulting in a structure where higher-level groups consist of smaller and more specific components (Wimberley, 2009). Finally, a flow hierarchy can be depicted as a directed graph, where the nodes are layered in different levels so that the nodes that are influenced by a given node (are connected to it through a directed link) are at lower levels. Hierarchical organization is an important concept also in network theory (Ravasz et al., 2002; Trusina et al., 2004; Pumain, 2006; Clauset et al., 2008; Corominas-Murtra et al., 2011; Mones et al., 2012; Corominas-Murtra et al., 2013). The network approach has become a ubiquitous tool for analysing complex systems—from the interactions within cells, transportation systems, the Internet and other technological networks, through to economic networks, collaboration networks and society (Albert and Barabási, 2002; Mendes and Dorogovtsev, 2003). Grasping the signs of hierarchy in networks is a non-trivial task with a number of possible different approaches, including the statistical inference of an underlying hierarchy based on the observed network structure (Clauset et al., 2008), and the introduction of various hierarchy measures (Trusina et al., 2004; Mones et al., 2012; Corominas-Murtra et al., 2013). What makes the analysis of hierarchy even more complex is that it may also be context dependent. According to a recent study on homing pigeons, the hierarchical pattern of in-flight leadership does not build upon the stable, hierarchical social dominance structure (pecking order) evident among the same birds (Nagy et al., 2013). In this study, we show that in a somewhat similar fashion, scientific journals can also be organized into multiple hierarchies with different types. Our studies rely on the citation network between scientific papers obtained from Web of Science (ISI Web of Knowledge, 2012). On the one hand, the flow hierarchy analysis of this network based on the m-reaching centrality (Borgatti, 2003; Mones et al., 2012) reveals the structure relevant from the point of view of knowledge spreading and influence. On the other hand, the alternative hierarchy obtained from the same network with the help of an automated tag hierarchy extraction method (Tibély et al., 2013) highlights a nested structure with the most interdisciplinary journals at the top and the very specialized journals at the bottom of the hierarchy. ## Scientific publication data The dataset on which our studies rely consists of all the available publications in Web of Science (ISI Web of Knowledge, 2012) between 1975 and 2011. The downloading scripts we used are available in WOS publication data downloading scripts (2012), and the Harvard Dataverse repository (Palla et al, 2015). To take into account as wide a list of papers as possible, we did not apply any specific filtering. Thus, conference proceedings and technical papers also appear in the dataset used. However, since the network we study builds upon citation between papers (or journals), the conference proceedings, technical papers (or even journals) with no incoming citation fall out of the flow hierarchy analysis automatically. 
(Nevertheless, in the event that they have outgoing citations, this is included in the evaluation of the m-reaching centrality of other journals.) Furthermore, even when cited, a conference proceedings does not have a real chance of getting high in any of the hierarchies considered here, due to their very limited number of publications compared with journals. Although highly cited individual conference proceedings publications may appear, they cannot boost the overall citation of the proceedings to the level of journals (for example, whenever a scientific breakthrough is published in a conference proceedings first, it is usually also published in a more prestigious journal soon afterwards, which eventually drives the citations to the journal instead of the proceedings). For these reasons, the conference proceedings are ranked at the bottom of the hierarchies we obtained. We used the 11 character-long abbreviated journal issue field in the core data for identifying the journal of a given publication. The advantage of using this field is that it contains only an abbreviated journal name without any volume numbers, issue numbers, years and so on (in contrast, the full journal name in some cases may contain the volume number or the publication year as well, which of course are varying over time). The total number of publications for which the mentioned data field was non-empty reached 35,372,038, and the number of different journals identified based on this data field was 13,202. As mentioned previously, in case of conference proceedings, the appearing 11 character long abbreviated journals issue field was treated the same as in case of journal publications, without any filtering. ## Flow hierarchy based on the m-reaching centrality A recently introduced approach for quantifying the position of a node in a flow hierarchy is based on the m-reaching centrality (Mones et al., 2012). The basic intuition behind this idea is that reaching the rest of the network should be relatively easy for the nodes high in the hierarchy, and more difficult from the nodes at the bottom of the hierarchy. Thus, the position of the node i in the hierarchy is determined by its m-reaching centrality (Borgatti, 2003), Cm(i), corresponding to the fraction of nodes that can be reached from i, following directed paths of at most m steps, (where m is a system dependent parameter). Naturally, a higher Cm(i) value corresponds to a higher position in the hierarchy, and the node with the maximal Cm(i) is chosen as the root. However, this approach does not specify the ancestors or descendants of a given node in the hierarchy; instead it provides only a ranking between the nodes of the underlying network according to Cm(i). Nevertheless, hierarchical levels can be defined in a simple way: after sorting nodes in an ascending order, we can sample and aggregate nodes into levels so that in each level the standard deviation of Cm is lower than a predefined fraction of the standard deviation in the whole network. This method of constructing a flow hierarchy based on the m-reach (and the standard variation of the m-reach) has already been shown to provide meaningful structures for a couple of real systems, including electric circuits, transcriptional regulatory networks, e-mail networks and food webs (Mones et al., 2012). 
When applying this approach to the study of the hierarchy between scientific journals, we have to take into account that journals are not directly connected to each other; instead they are linked via a citation network between the individual publications. In principle, we may assume different "journal strategies" for obtaining a large reach in this system: for example, a journal might publish a very high number of papers of poor quality with only a few citations each. Nevertheless, taken together they can still provide a large number of aggregated citations. Another option is to publish a lower number of high-quality papers, obtaining a lot of citations individually. To avoid having a built-in preference for one type of journal over the other, we define a reaching centrality that is not sensitive to such details, and which only depends on the number of papers that can be reached in m steps from publications appearing in a given journal. First, we note that when calculating the reach of the publications, the citation links have to be followed backwards: that is, if paper i is citing j, then the information presented in j has reached i. Thus, the reaching centralities are evaluated in a network where the links are pointing from a reference article to all papers citing it. The m-reach of a journal $\mathcal{J}$, denoted by $C_m(\mathcal{J})$, is naturally given by the number of papers that can be reached in at most m steps from any article appearing in the given journal. Thus, the mathematical definition of $C_m(\mathcal{J})$ is based on the set of m-reachable nodes, given by

$$\mathcal{C}_m(\mathcal{J})=\left\{\, i \;\middle|\; d_{\mathrm{out}}(j,i)\le m,\; j\in \mathcal{J} \wedge i\notin \mathcal{J} \,\right\}, \qquad (1)$$

where $d_{\mathrm{out}}(j,i)$ denotes the out-distance from paper j to i (that is, the distance of the papers when only consecutive out-links are considered). The set $\mathcal{C}_m(\mathcal{J})$ is equivalent to the set of papers outside $\mathcal{J}$ that can be reached in at most m steps, provided that the starting publication is in $\mathcal{J}$. The m-reaching centrality of $\mathcal{J}$ is simply the size of the m-reachable set, $C_m(\mathcal{J})=|\mathcal{C}_m(\mathcal{J})|$ (that is, the number of papers in $\mathcal{C}_m(\mathcal{J})$). Figure 1 shows an illustration of the calculation of the m-reach of the journals detailed above. We note that a closely related impact measure for judging the influence of research papers based on deeper layers of other papers in the citation network is given by the wake-citation-score (Klosik and Bornholdt, 2014). A comparison study between the m-reach and the wake-citation-score is given in the Supplementary Information S1. To determine the optimal value of m, we calculated $C_m(\mathcal{J})$ for all journals in our dataset for a wide range of m values. According to the results detailed in the Supplementary Information S2, at around m=4 the $C_m(\mathcal{J})$ starts to saturate for the top journals. To provide a fair and robust ordering between the journals, here we set m=3, corresponding to an optimal setting: on the one hand we still allow multiple steps in the paths contributing to the reach. On the other hand, we also avoid the saturation effect caused by the exponential increase in the reach as a function of the maximal path length and the finite system size.
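To make the definition in equation (1) concrete, here is a small illustrative sketch of how the m-reach of a journal could be computed from a paper-level citation graph. The sketch is my own, not from the paper, and its data structures (a cited_by dict mapping each reference to the papers citing it, and a journal_of dict mapping papers to journals) are assumptions for the example.

```python
from collections import deque

def journal_m_reach(journal, m, cited_by, journal_of):
    """C_m(J): number of papers outside `journal` reachable in at most m steps,
    following links that point from a reference to the papers citing it."""
    seeds = [p for p, j in journal_of.items() if j == journal]
    seen = set(seeds)
    reach = set()
    queue = deque((p, 0) for p in seeds)
    while queue:
        paper, depth = queue.popleft()
        if depth == m:
            continue
        for citing in cited_by.get(paper, []):
            if citing not in seen:
                seen.add(citing)
                if journal_of.get(citing) != journal:
                    reach.add(citing)
                queue.append((citing, depth + 1))
    return len(reach)

# Toy example: papers a1, a2 in journal A; b1, b2 in B; c1 in C.
journal_of = {"a1": "A", "a2": "A", "b1": "B", "b2": "B", "c1": "C"}
cited_by = {"a1": ["b1"], "b1": ["c1"], "a2": ["b2"]}   # reference -> citing papers
print(journal_m_reach("A", 3, cited_by, journal_of))    # 3 (b1, b2 and c1)
```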
More details on the tuning of m are given in the Supplementary Information S2, and the results obtained for other m values are shown in the Supplementary Information S3. Before considering the results, we note that an alternative approach for studying the citation between journals is to aggregate all papers in a given journal into a single node, representing the journal itself, in similar fashion to the works by Leydesdorff et al. (2013, 2014). In this case the link weight from journal $\mathcal{J}$ to journal $\mathcal{K}$ is given by the total number of citations from papers appearing in $\mathcal{J}$ to papers in $\mathcal{K}$. In the Supplementary Information S4, we analyse the flow hierarchy obtained by evaluating the m-reaching centrality in this aggregated network between the journals. However, recent works have pointed out that aggregations of this nature can lead to serious misjudgement of the importance of nodes (Pfitzner et al., 2013; Rosvall et al., 2014). For instance, an interesting memory effect of the citation network between individual papers is that a paper citing mostly biological papers that appear in interdisciplinary journals is still much more likely to be cited back by other biological papers, compared with other disciplines (Rosvall et al., 2014). Such phenomena can have a significant influence on the m-reaching centrality. However, by switching to the aggregated network between journals we wipe out these effects and introduce a distortion in the m-reach. Thus, here we stick to the most detailed representation of the system, given by the citation network between individual papers, and leave the analysis of the aggregated network between journals to the Supplementary Information S4. (An illustration of the difference between the m-reach calculated on the level of papers and on the aggregated level of journals is given in Fig. 1.) The results for the top journals according to the m-reaching centrality at m=3 based on the publication data available from the Web of Science between 1975 and 2011 are given in Fig. 2. The hierarchy levels were defined by allowing a maximal standard deviation of $0.13\cdot\sigma(C_m)$ for $C_m$ within a given level, where $\sigma(C_m)$ denotes the standard deviation of $C_m$ over all journals. (The effect of changes in the within-level standard deviation of $C_m$ on the shape of the hierarchy is discussed in the Supplementary Information S5.) According to our analysis, Science is the most influential journal based on the flow hierarchy, followed by Nature, with PNAS coming third, while Lancet and the New England Journal of Medicine form the fourth level. In general, the top of the hierarchy is strongly dominated by medical, biological and biochemical journals. For instance, the top physics journal, the Physical Review Letters, appears only on the 13th level, and the top chemistry journal, the Journal of the American Chemical Society, is positioned at the 11th level. For comparison, in Fig. S7 in the Supplementary Information S4, we show the top of the flow hierarchy obtained from the citation network aggregated to the level of journals. Although Science, Nature and PNAS preserve their position as the top three journals, relevant changes can be observed in the hierarchy levels just below, as physical and chemical journals take over the biological and medical journals. For instance, Physical Review Letters is raised from Level 13 to Level 3, while Lancet is pushed back from Level 4 to Level 17.
This reorganization is likely to be caused by the “memory” of the citation network described in the work by Rosvall et al. (2014)—the fact that a paper citing mostly biological articles is more likely to be cited by other biological papers, even if it appears in an interdisciplinary journal. Since biology and medicine have the highest publication rate among different scientific fields, the aggregation to the level of journals has the most severe effect on the reach of entities obtaining citations mostly from these fields. Thus, the notable difference between the flow hierarchy obtained from the citation network of individual papers and from the aggregated network between journals is yet another indication of the distortion in centralities caused by link aggregation, pointed out in related, but somewhat different contexts by Rosvall et al. (2014) and by Pfitzner et al. (2013). ## Extracting a nested hierarchy Categorizing items into a nested hierarchy is a general idea that has been around for a long time in, for instance, library classification systems, biological classification and also in the content classification of scientific publications. A very closely related problem is that given by the automatized categorization of free tags appearing in various online content (Heymann and Garcia-Molina, 2006; Schmitz, 2006; Damme et al., 2007; Plangprasopchok et al., 2011; Tibély et al., 2013; Velardi et al., 2013). In recent years, the voluntary tagging of photos, films, books and so on, with free words has become popular on the Internet in blogs, various file-sharing platforms, online stores and news portals. In some cases, these phenomena are referred to as collaborative tagging (Lambiotte and Ausloos, 2006; Cattuto et al., 2007; Cattuto et al., 2009; Floeck et al., 2011), and the resultant large collections of tags are referred to as folksonomies, highlighting their collaborative origin and the “flat” organization of the tags in these systems (Mika, 2005; Lambiotte and Ausloos, 2006; Spyns et al., 2006; Cattuto et al., 2007, 2009; Voss, 2007; Tibély et al., 2012). The natural mathematical representation of tagging systems is given by hypergraphs (Ghosal et al., 2009; Zlatić et al., 2009). Revealing the hidden hierarchy between tags in a folksonomy or a tagging system in general can significantly help broadening or narrowing the scope of search in the system, give recommendation about yet unvisited objects to the user or help the categorization of newly appearing objects (Juszczyszyn et al., 2010; Lu et al., 2012). Here we apply a generalized version of a recent tag hierarchy extraction method (Tibély et al., 2013) for constructing a nested hierarchy between scientific journals. In its original form, the input of the tag hierarchy extraction algorithm is given by the weighted co-occurrence network between the tags, where the weights correspond to number of shared objects. Based on the z-score of the connected pairs and the centrality of the tags in the co-occurrence network, the hierarchy is built bottom up, as the algorithm eventually assigns one or a few direct ancestors to each tag (except for the root of the hierarchy). The details of the algorithm are described in the “nested hierarchy extraction algorithm” subsection. To study the nested hierarchy between scientific journals, we simply replace the weighted co-occurrence network between tags by the weighted citation network between journals at the input of the algorithm. 
Although a tag co-occurrence network and a journal citation network are different, the two most important properties needed for the nested hierarchy analysis are the same in both: general tags and multidisciplinary journals have a significantly larger number of neighbours compared with more specific tags and specialized journals. Furthermore, closely related tags co-appear more often compared with unrelated tags, as journals focusing on the same field cite each other more often compared with journals dealing with independent disciplines. Based on this, the hierarchy obtained from the journal citation network in this approach is expected to be organized according to the scope of the journals, with the most general multidisciplinary journals at the top and the very specialized journals at the bottom. We note that since in this case we have to determine which journals are the most closely related to each other and which are unrelated, rather than evaluating the overall influence of the journals, we use simply the number of direct citations from one journal to the other as the weight for the connections. This is equivalent to taking the m-reach calculated on the publication level at m=1, sorting according to the source of the citations and then summing up the results for the papers appearing in one given journal. Thus, when constructing the flow hierarchy, we start from the publication level citation network and evaluate the m-reach at m=3, whereas in case of the nested hierarchy we calculate the publication level m-reach at m=1, which technically becomes equivalent to the journal level citation numbers when summed over papers appearing in one given journal.

## Nested hierarchy extraction algorithm

Our algorithm corresponds to a generalized version of "Algorithm B" presented in Tibély et al. (2013). The main differences are that here we force the algorithm to produce a directed acyclic graph consisting of a single connected component, and we allow the presence of multiple direct ancestors. In contrast, in its original form "Algorithm B" can provide disconnected components, and each component in the output corresponds to a directed tree. A further technical improvement we introduce is given by the calculation of the node centralities. Thus, the outline of the method used here is the following: first we carry out "Algorithm B" given in Tibély et al. (2013) with modified centrality evaluation, obtaining a directed tree between the journals. This is followed by a second iteration where we "enrich" the hierarchy by occasionally assigning further direct ancestors to the nodes. Since "Algorithm B" is presented in full detail in Tibély et al. (2013), here we provide only a brief overview. The input of the algorithm is a weighted directed network between the journals based on the z-score for the citation links. After throwing away unimportant connections by using a weight threshold, the node centralities are evaluated in the remaining network. Here we used a centrality based on random walks on the citation network between journals with occasional teleportation steps, in a similar fashion to PageRank. We adopted the method proposed by Lambiotte and Rosvall (2012), calculating the dominant right eigenvector of the matrix $M_{ij}=(1-\alpha)\omega_{ij}+\alpha s_i^{\mathrm{in}}$, where $\omega_{ij}$ is the link weight, $s_i^{\mathrm{in}}$ denotes the in-strength of journal i (in number of citations) and $\alpha$ corresponds to the teleportation probability.
We have chosen the widely used α=0.15 parameter value, however, the ordering of the journals according to the centralities was quite robust with respect to changes in α. Based on the centralities a directed tree representing the backbone of the hierarchy is built from bottom up as described in “Algorithm B” in Tibély et al. (2013). In the event that we cannot find a suitable “parent” for node i according to the original rules, we chose the node with the highest accumulated z-score from all journals that have a higher centrality than i (where the accumulation is running over the already found descendants of the given node). This ensures the emergence of a single connected component, since a single direct ancestor is assigned to every node (except for the root of the tree). This is followed by a final iteration over the nodes where we examine whether further “parents” have to be assigned or not. The criteria for accepting a node as the second, third, and so on, direct ancestor of journal i are that it must have a higher centrality compared with i, and also the z-score has to be larger than the z-score between i and its first direct ancestor. Note that the first parent is chosen based on aggregated z-score instead of the simple pairwise z-score, as explained by Tibély et al. (2013). ## Nested hierarchy of scientific journals In Fig. 3, we show the top of the obtained nested hierarchy between the journals, with Nature appearing as the root, while PNAS, Science, New Scientist and Astrophysical Journal form the second level. Several prominent field specific journals such as Physical Review Letters, Brain Research, Ecology and Journal of the American Chemical Society have both Nature and Science as direct ancestors. Interestingly, the Astrophysical Journal is a direct descendant only of Nature, and is not linked under Science or PNAS. Nevertheless, it serves as a local root for a branch of astronomy-related journals, in a similar fashion to Physical Review Letters, which can be regarded as the local root of physics journals, or Journal of the American Chemical Society, corresponding to the local root of chemical journals. The biological, medical and biochemical journals form a rather mingled branch under PNAS, with Journal of Biological Chemistry as the local root and New England Journal of Medicine corresponding to a sub-root for medical journals. However, Cell and New England Journal of Medicine are direct descendants of Nature and Science as well. Interestingly, the brain- and neuroscience-related journals form a rather well-separated branch with Brain Research as the local root, linked directly under PNAS, Science and also under Nature. ## Comparing the hierarchies Although the hierarchies presented in Fig. 2 and Fig. 3 show a great deal of similarity, some interesting differences can also be observed. The figures show the top of the corresponding hierarchies, and seemingly, a significant portion of the journals ranked high in the hierarchy are the same in both cases. However, the root of the hierarchies is different (Science in case of the flow hierarchy and Nature in case of the nested hierarchy), and also the level-by-level comparison of Fig. 2 and Fig. 3 shows that a very high position in the flow hierarchy is not always accompanied by an outstanding position in the nested hierarchy, and vice versa. For example, the Lancet and New England Journal of Medicine appear much higher in Fig. 2 compared with Fig. 
3, while Geophysical Research Letters is just below Nature and Science in the nested hierarchy and is not even shown in the top of the flow hierarchy. To make the comparison between the two types of hierarchies more quantitative, we subsequently aggregated the levels in the hierarchies starting from the top, and calculated the Jaccard similarity coefficient between the resulting sets as a function of the level depth $\ell$. Thus, when $\ell=1$, we are actually comparing the roots, when $\ell=2$, the journals on the top two levels and so on. However, since the total number of levels differs between the two hierarchies, we refine the definition of the similarity coefficient by allowing different $\ell$ values in the two hierarchies, and always choosing the pairs of aggregated sets with the maximal relative overlap. Therefore, we actually have two similarity functions,

$$J_f(\ell_f)=\max_{\ell_n}\frac{|S_f(\ell_f)\cap S_n(\ell_n)|}{|S_f(\ell_f)\cup S_n(\ell_n)|}, \qquad (2)$$

$$J_n(\ell_n)=\max_{\ell_f}\frac{|S_f(\ell_f)\cap S_n(\ell_n)|}{|S_f(\ell_f)\cup S_n(\ell_n)|}, \qquad (3)$$

where $S_f(\ell_f)$ and $S_n(\ell_n)$ denote the set of aggregated journals from the root to level $\ell_f$ in the flow hierarchy and to level $\ell_n$ in the nested hierarchy, respectively. When evaluating $J_f(\ell_f)$ at a given level depth $\ell_f$ according to equation (2), the set of aggregated journals in the flow hierarchy, $S_f(\ell_f)$, is fixed, and we search for the most similar set of aggregated journals from the nested hierarchy by scanning over the entire range of possible $\ell_n$ values, and choose the one giving the maximal Jaccard similarity. Similarly, when calculating $J_n(\ell_n)$ according to equation (3), the set of aggregated journals taken from the nested hierarchy, $S_n(\ell_n)$, is fixed, and the set $S_f(\ell_f)$ yielding the maximal Jaccard similarity is chosen from the flow hierarchy. In Fig. 4, we show the result obtained for $J_f(\ell_f)$ as a function of the level depth $\ell_f$ in the flow hierarchy (while the corresponding $J_n(\ell_n)$ plot for the nested hierarchy is given in Fig. S10 in the Supplementary Information S6). Besides $J_f(\ell_f)$, in Fig. 4 we also plotted the expected similarity between the aggregated sets of journals and a random set of journals of the same size. Since the roots of the hierarchies are different, the curves start from 0 at $\ell_f=1$, and naturally, as we reach the maximal level depth, the similarity approaches 1, since all journals are included in the final aggregate. However, at the top levels below the root, a prominent increase can be observed in $J_f(\ell_f)$, while the similarity between random sets of journals increases very slowly in this region. Thus, the flow hierarchy and the nested hierarchy revealed by our methods show a significant similarity also from the quantitative point of view. This is also supported by the remarkably small $\tau=0.16$ generalized Kendall-tau distance obtained by treating the two hierarchies as partial orders, and applying a natural extension of the standard distance measure between total orders. The definition of the distance measure and the details of the calculation are given in the Supplementary Information S7. Finally, our hierarchies can also be compared with traditional impact measures. 
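Before turning to that comparison, the level-aggregation similarity of equations (2) and (3) can be written down compactly. The sketch below is an editorial illustration with a hypothetical input format (dictionaries mapping each journal to its level depth in the two hierarchies); it is not the authors' code.

```python
# Sketch of equation (2) (hypothetical input format, not the authors' code):
# level_f and level_n map each journal to its level depth (root = 1) in the
# flow hierarchy and in the nested hierarchy, respectively.

def aggregate(level_of, depth):
    """Set of journals from the root down to the given level depth."""
    return {j for j, lvl in level_of.items() if lvl <= depth}

def jaccard(a, b):
    return len(a & b) / len(a | b) if (a | b) else 0.0

def J_f(level_f, level_n, ell_f):
    """Fix S_f(ell_f) and scan all nested-hierarchy depths for the best overlap."""
    s_f = aggregate(level_f, ell_f)
    max_depth_n = max(level_n.values())
    return max(jaccard(s_f, aggregate(level_n, d))
               for d in range(1, max_depth_n + 1))

# J_n of equation (3) is obtained by swapping the roles of the two hierarchies.
```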
According to the results detailed in the Supplementary Information S8, both the flow and the nested hierarchy show moderate correlations with the impact factor, the Scimago Journal Rank and the closeness centrality of journals in the aggregated citation network. Therefore, the general trends shown by the hierarchies are consistent with previously introduced, widely used impact measures. However, when looking into the details, they also provide an alternative point of view with important differences, avoiding excessively large correlations with the former, one-dimensional characterizations of journal ranking. ## Discussion Ranking and comparing the importance, prestige and popularity of scientific journals is a far from trivial task with quite a few different available impact measures (Garfield, 1955, 1999; Braun et al., 2006; Egghe, 2006; Bergstrom, 2007; Bollen et al., 2007, 2005; Leydesdorff, 2007; Bollen et al., 2009; Franceschet, 2010a, b; Glänzel, 2011; Kaur et al., 2013). However, it seems that the overall impact of journals cannot be adequately characterized by a single one-dimensional quality measure (Bollen et al., 2009). In this light, our results offer an informative overview of the ranking and the intricate relations between journals, where instead of, for example, simply ordering them according to a one-dimensional parameter we organize them into multiple hierarchies. First, we defined a flow hierarchy between the journals based on the m-reaching centrality in the citation network between the scientific papers. This structure organizes the journals according to their potential for spreading new scientific ideas, with the most influential information spreaders sorted at the top of the hierarchy. In this respect Science turned out to be the root, followed by Nature and PNAS, and the top dozen levels of the hierarchy were dominated by multidisciplinary, biological, biochemical and medical journals. We also constructed a nested hierarchy between the journals by generalizing a recent tag hierarchy extraction algorithm. In this case the journals were organized into branches according to the major scientific fields, with a clear separation between unrelated fields, and relatively strong mixing and overlap between closely related fields. Mapping the different journals into well-oriented knowledge domains is a complex problem on its own (Chen et al., 2001a, b; Shiffrin and Börner, 2004; Rosvall and Bergstrom, 2008; Börner, 2010), especially from the point of view of multi- and interdisciplinary fields. Our nested hierarchy provides a natural tool for the visualization of the intricate nested and overlapping relations between scientific fields as well. An important feature is that the organization of the branches roughly highlights the local hierarchy of the given field, with usually the most prominent journal in the field serving as the local root, and more specialized journals positioned at the bottom. Thus, zooming into a specific field for comparing and ranking the journals that publish in the given field becomes simple: we just have to select the corresponding branch in the nested hierarchy. Another interesting perspective is that, based on the position of a journal in the nested hierarchy, we gain immediate information on its standing within its particular field. Accordingly, we can select those journals with which we can make a fair comparison, and we can exclude journals in faraway branches from any comparative study. 
Moreover, similarly to judging the position of a journal within its specific field (a local branch), we can also judge the standing of this sub-field in a larger scientific domain (a main branch) and so on, and thereby compare the ranking of the different scientific fields and sub-fields (each being composed of multiple journals). When zooming out completely to the overall hierarchy between the journals, Nature was observed to be in the top position, while Science, PNAS, the Astrophysical Journal and New Scientist formed the second level, with the field-dependent branches starting at the third level. The comparison between the two types of hierarchy reveals a strong similarity accompanied by significant differences. Basically, Science, Nature and PNAS provide the top three journals in both cases, and also, the top few hundred nodes in the hierarchy have a far larger overlap than expected at random. However, a closer level-by-level inspection showed that a very high position in, for example, the flow hierarchy does not guarantee a similarly outstanding ranking in the nested hierarchy, and vice versa. Both hierarchies showed moderate correlations with the impact factor, the Scimago Journal Rank and the closeness centrality of the journals in the citation network. This supports our view that the hierarchical organization of scientific journals provides an interesting alternative for the description of journal impact, which is consistent with the previously introduced measures at large, but at the same time shows important differences when examined in detail. In summary, the two hierarchies we constructed offer a compound view of the interrelations between scientific journals, and provide a higher-dimensional characterization of journal impact instead of ranking simply according to a one-dimensional parameter. Naturally, hierarchies between scientific journals can be defined in other ways too (Iyengar and Balijepally, 2015). For example, when building a flow hierarchy, the overall influence of journals could be measured alternatively with other quantities such as the wake-citation-score (Klosik and Bornholdt, 2014), the PageRank or the Y-factor (Bollen et al., 2007). In parallel, a nested hierarchy might also be constructed by suitably modifying a community finding algorithm producing inherently nested and overlapping communities such as Infomap (Rosvall and Bergstrom, 2008; Rosvall and Bergstrom, 2011; Rosvall et al., 2014) or the clique percolation method (Palla et al., 2005). Another interesting aspect we have not taken into account here is given by the time evolution of the citation network between the journals. Obviously, the ranking of the journals changes with time, and by treating all publications between 1975 and 2011 in a uniform framework we neglected this effect. However, the examination of the further possibilities for hierarchy construction and the study of the time evolution of the journal hierarchies is beyond the scope of the present work, although it provides interesting directions for future research. ## Data Availability The datasets analyzed during the current study are available from the Web of Science repository, owned by Thomson Reuters (http://scientific.thomson.com/isi/), but restrictions apply to the availability of these data, which were used under license from Thomson Reuters, and so are not publicly available. Data are, however, available from the authors upon reasonable request and permission of Thomson Reuters. 
How to cite this article: Palla G, Tibély G, Mones E, Pollner P and Vicsek T (2015) Hierarchical networks of scientific journals. Palgrave Communications 1:15016 doi: 10.1057/palcomms.2015.16. ## References • Albert R and Barabási A-L (2002) Statistical mechanics of complex networks. Reviews of Modern Physics; 74 (1): 47–97. • Batty M and Longley P (1994) Fractal Cities: A Geometry of Form and Function. Academic: San Diego, CA. • Bergstrom CT (2007) Eigenfactor: Measuring the value and prestige of scholarly journals. C&RL News; 68 (5): 314–316. • Bollen J, de Sompel HV, Smith J and Luce R (2005) Toward alternative metrics of journal impact: A comparison of download and citation data. Information Processing & Management; 41 (6): 1419–1440. • Bollen J, Rodriguez MA and de Sompel HV (2007) Journal status. Scientometrics; 69 (3): 669–687. • Bollen J, de Sompel HV, Hagberg A and Chute R (2009) A principal component analysis of 39 scientific impact measures. PLoS ONE; 4 (6): e6022. • Bordons M, Fernandez MT and Gomez I (2002) Advantages and limitations in the use of impact factor measures for the assessment of research performance. Scientometrics; 53 (2): 195–206. • Borgatti SP (2003) The key player problem. In: Breiger R, Carley K and Pattison (eds) Dynamic Social Network Modelling Analysis: Workshop Summary and Papers. National Academy of Sciences Press: Washington D.C., pp 241–252. • Börner K (2010) Atlas of Science: Visualizing What We Know. The MIT Press: Cambridge, Massachusetts, USA. • Braun T, Glänzel W and Schubert A (2006) A Hirsch-type index for journals. Scientometrics; 69 (1): 169–173. • Cattuto C, Barrat A, Baldassarri A, Schehr G and Loreto V (2009) Collective dynamics of social annotation. Proceedings of the National Academy of Sciences of the USA; 106 (26): 10511–10515. • Cattuto C, Loreto V and Pietronero L (2007) Semiotic dynamics and collaborative tagging. Proceedings of the National Academy of Sciences of the USA; 104 (5): 1461–1464. • Chen C, Kuljis J and Paul RJ (2001a) Visualizing latent domain knowledge. IEEE Transactions on Systems, Man, and Cybernetics; 31 (4): 518–529. • Chen C, Paul RJ and O'Keefe B (2001b) Fitting the jigsaw of citations: Information visualization in domain analysis. Journal of the Association for Information Science and Technology; 52 (3): 315–330. • Clauset A, Moore C and Newman MEJ (2008) Hierarchical structure and the prediction of missing links in networks. Nature; 453 (7191): 98–101. • Corominas-Murtra B, Goñi J, Solé RV and Rodríguez-Caso C (2013) On the origins of hierarchy in complex networks. Proceedings of the National Academy of Sciences of the USA; 110 (33): 13316–13321. • Corominas-Murtra B, Rodríguez-Caso C, Goñi J and Solé R (2011) Measuring the hierarchy of feedforward networks. Chaos; 21 (1): 016108. • Damme CV, Hepp M and Siorpaes K (2007) Folksontology: An integrated approach for turning folksonomies into ontologies. In Proceedings of the ESWC Workshop ‘Bridging the Gap between Semantic Web and Web 2.0’, pp. 57–70. • Egghe L (2006) Theory and practice of the g-index. Scientometrics; 69 (1): 131–152. • Eldredge N (1985) Unfinished Synthesis: Biological Hierarchies and Modern Evolutionary Thought. Oxford University Press: New York. • Floeck F, Putzke J, Steinfels S, Fischbach K, Schoder D (2011) Imitation and quality of tags in social bookmarking systems—Collective intelligence leading to folksonomies. In: Bastiaens TJ, Baumöl U and Krämer BJ (eds) On Collective Intelligence, Volume 76 of Advances in Intelligent and Soft Computing. 
Springer: Berlin, Heidelberg, pp 75–91. • Franceschet M (2010a) The difference between popularity and prestige in the sciences and in the social sciences: A bibliometric analysis. Journal of Informetrics; 4 (1): 55–63. • Franceschet M (2010b) Ten good reasons to use the Eigenfactor metrics. Information Processing & Management; 46 (5): 555–558. • Fushing H, McAssey MP, Beisner B and McCowan B (2011) Ranking network of captive rhesus macaque society: A sophisticated corporative kingdom. PLoS ONE; 6 (3): e17817. • Garfield E (1955) Citation indexes for science: A new dimension in documentation through association of ideas. Science; 122 (3159): 108. • Garfield E (1999) Journal impact factor: A brief review. Canadian Medical Association Journal; 161 (8): 979–980. • Ghosal G, Zlatić V, Caldarelli G and Newman MEJ (2009) Random hypergraphs and their applications. Physical Review E; 79 (6): 066118. • Glänzel W (2011) The application of characteristic scores and scales to the evaluation and ranking of scientific journals. Journal of Information Science; 37 (1): 40–48. • Goessmann C, Hemelrijk C and Huber R (2000) The formation and maintenance of crayfish hierarchies: Behavioral and self-structuring properties. Behavioral Ecology and Sociobiology; 48 (6): 418–428. • Guimerà R, Danon L, Díaz-Guilera A, Giralt F and Arenas A (2003) Self-similar community structure in a network of human interactions. Physical Review E; 68 (6): 065103. • Harter SP and Nisonger TE (1997) ISI’s impact factor as misnomer: A proposed new measure to assess journal impact. Journal of the American Society for Information Science and Technology; 48 (12): 1146–1148. • Heymann P and Garcia-Molina H (2006) Collaborative creation of communal hierarchical taxonomies in social tagging systems. Technical Report, Stanford InfoLab. • Hirata H and Ulanowicz R (1985) Information theoretical analysis of the aggregation and hierarchical structure of ecological networks. Journal of Theoretical Biology; 116 (3): 321–341. • ISI Web of Knowledge. (2012) http://scientific.thomson.com/isi/, accessed 1 January 2012. • Iyengar K and Balijepally V (2015) Ranking journals using the dominance hierarchy procedure: An illustration with IS journals. Scientometrics; 102 (1): 5–23. • Juszczyszyn K, Kazienko P, Katarzyna M (2010) Personalized ontology based recommender systems for multimedia objects. In: Hākansson A, Hartung R and Nguyen N (eds) Agent and Multi-Agent Technology for Internet and Enterprise Systems, Volume 289 of Studies in Computational Intelligence. Springer: Berlin, Heidelberg, pp 275–292. • Kaiser M, Hilgetag CC and Kötter R (2010) Hierarchy and dynamics of neural networks. Front Neuroinform; 4, 112. • Kaur J, Radicchi F and Menczer F (2013) Universality of scholarly impact metrics. Journal of Informetrics; 7 (4): 924–932. • Klosik DF and Bornholdt S (2014) The citation wake of publications detects Nobel laureates’ papers. PLoS ONE; 9 (12): e113184. • Krugman PR (1996) Confronting the mystery of urban hierarchy. Journal of the Japanese and International Economies; 10 (4): 399–418. • Lambiotte R and Ausloos M (2006) Collaborative tagging as a tripartite network. Lecture Notes in Computer Science; 3993, 1114–1117. • Lambiotte R and Rosvall M (2012) Ranking and clustering of nodes in networks with smart teleportation. Physical Review E; 85 (5): 056107. • Lane D (2006) Hierarchy, Complexity, Society. Springer: Dordrecht, the Netherlands. 
• Leydesdorff L (2007) Betweenness centrality as an indicator of the interdisciplinarity of scientific journals. Journal of the American Society for Information Science and Technology; 58 (9): 1303–1319. • Leydesdorff L, de Moya-Anegón F and Guerrero-Bote VP (2013) Journal maps, interactive overlays, and the measurement of interdisciplinarity on the basis of Scopus data. arXiv:1310.4966 [cs.DL], accessed 31 October 2014. • Leydesdorff L, de Moya-Anegón F and de Nooy W (2014) Aggregated journal-journal citation relations in Scopus and Web-of-Science matched and compared in terms of networks, maps, and interactive overlays. arXiv:1404.2505 [cs.DL], accessed 31 October 2014. • Lu L, Medo M, Yeung CH, Zhang Y-C, Zhang Z-K and Zhou T (2012) Recommender systems. Physics Reports; 519 (1): 1–49. • Ma HW, Buer J and Zeng AP (2004) Hierarchical structure and modules in the Escherichia coli transcriptional regulatory network revealed by a new top-down approach. BMC Bioinformatics; 5 (1): 199. • McShea DW (2001) The hierarchical structure of organisms. Paleobiology; 27 (2): 405–423. • Mendes JFF and Dorogovtsev SN (2003) Evolution of Networks: From Biological Nets to the Internet and WWW. Oxford University Press: Oxford. • Mika P (2005) Ontologies are us: A unified model of social networks and semantics. In International Semantic Web Conference, 3729, 522–536. • Mones E, Vicsek L and Vicsek T (2012) Hierarchy measure for complex networks. PLoS ONE; 7 (3): e33799. • Nagy M, Ákos Z, Biro D and Vicsek T (2010) Hierarchical group dynamics in pigeon flocks. Nature; 464 (7290): 890–893. • Nagy M, Vásárhelyi G, Pettit B, Roberts-Mariani I, Vicsek T and Biro D (2013) Context-dependent hierarchies in pigeons. Proceedings of the National Academy of Sciences of the USA; 110 (32): 13049–13054. • Opthof T (1997) Sense and nonsense about the impact factor. Cardiovascular Research; 33 (1): 1–7. • Palla G, Derényi I, Farkas I and Vicsek T (2005) Uncovering the overlapping community structure of complex networks in nature and society. Nature; 435 (7043): 814–818. • Palla G, Tibély G, Mones E, Pollner P and Vicsek T (2015) Project, Hiertags, Source code of the crawler. Dataverse. http://dx.doi.org/10.7910/DVN/MCXTHF • Pfitzner R, Scholtes I, Garas A, Tessone CJ and Schweitzer F (2013) Betweenness preference: Quantifying correlations in the topological dynamics of temporal networks. Physical Review Letters; 110 (19): 198701. • Plangprasopchok A, Lerman K and Getoor L (2011) A probabilistic approach for learning folksonomies from structured data. In Fourth ACM International Conference on Web Search and Data Mining (WSDM), ACM: New York, NY, USA, pp 555–564. • Pollner P, Palla G and Vicsek T (2006) Preferential attachment of communities: The same principle, but a higher level. Europhysics Letters; 73 (3): 478–484. • Pumain D (2006) Hierarchy in Natural and Social Sciences, Volume 3 of Methodos Series. Springer: Dordrecht, the Netherlands. • Ravasz E, Somera AL, Mongru DA, Oltvai ZN and Barabási A-L (2002) Hierarchical organization of modularity in metabolic networks. Science; 297 (5586): 1551–1555. • Rosvall M and Bergstrom C (2011) Multilevel compression of random walks on networks reveals hierarchical organization in large integrated systems. PLoS ONE; 6 (4): e18209. • Rosvall M and Bergstrom CT (2008) Maps of random walks on complex networks reveal community structure. Proceedings of the National Academy of Sciences of the USA; 105 (4): 1118–1123. 
• Rosvall M, Esquivel AV, Lancichinetti A, West JD and Lambiotte R (2014) Memory in network flows and its effects on spreading dynamics and community detection. Nature Communications, 5, 4630. • Schmitz P (2006) Inducing ontology from flickr tags. Paper presented at Collaborative Web Tagging Workshop at the 15th Int. Conf. on World Wide Web (WWW). • Seglen PO (1997) Why the impact factor of journals should not be used for evaluating research. British Medical Journal; 314 (7079): 498–502. • Shiffrin RM and Börner K (2004) Mapping knowledge domains. Proceedings of the National Academy of Sciences of the USA; 101 (Suppl 1): 5183–5185. • Spyns P, Moor AD, Vandenbussche J and Meersman R (2006) From Folksologies to Ontologies: How the Twain Meet. Lecture Notes in Computer Science, 4275, 738–755. • The Scimago Journal & Country Rank. (2015) http://www.scimagojr.com, accessed 16 March 2015. • Tibély G, Pollner P, Vicsek T and Palla G (2012) Ontologies and tag-statistics. New Journal of Physics; 14 (5): 053009. • Tibély G, Pollner P, Vicsek T and Palla G (2013) Extracting tag hierarchies. PLoS ONE; 8 (12): e84133. • Trusina A, Maslov S, Minnhagen P and Sneppen K (2004) Hierarchy measures in complex networks. Physical Review Letters; 92 (17): 178702. • Valverde S and Solé RV (2007) Self-organization versus hierarchy in open-source social networks. Physical Review E; 76 (4): 046118. • Velardi P, Faralli S and Navigli R (2013) Ontolearn reloaded: A graph-based algorithm for taxonomy induction. Computational Linguistics; 39 (3): 665–707. • Voss J (2007) Tagging, folksonomy & Co—Renaissance of manual indexing? arXiv:cs/0701072v2 [cs.IR], accessed 31 October 2014. • Wickens J and Ulanowicz R (1988) On quantifying hierarchical connections in ecology. Journal of Social and Biological Structures; 11 (3): 369–378. • Wimberley ET (2009) Nested Ecology: The Place of Humans in the Ecological Hierarchy. Johns Hopkins University Press: Baltimore, MD. • Zlatić V, Ghosal G and Caldarelli G (2009) Hypergraph topological quantities for tagged social networks. Physical Review E; 80 (3): 036118. ## Acknowledgements The research was partially supported by the European Union and the European Social Fund through project FuturICT.hu (grant no: TAMOP-4.2.2.C-11/1/KONV-2012-0013), by the Hungarian National Science Fund (OTKA K105447) and by the EU FP7 ERC COLLMOT project (grant no: 227878). The funders had no role in study design, data collection and analysis, decision to publish or preparation of the manuscript. ## Author information ### Corresponding author Correspondence to Gergely Palla. ## Ethics declarations ### Competing interests The authors declare no competing financial interests.
2022-08-19 20:47:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 17, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7395234107971191, "perplexity": 1961.5039361317315}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573760.75/warc/CC-MAIN-20220819191655-20220819221655-00661.warc.gz"}
http://kleine.mat.uniroma3.it/mp_arc-bin/mpa?yn=02-480
02-480 Gerhard Knieper and Howard Weiss Genericity of Positive Topological Entropy for Geodesic Flows on $S^2$ (309K, pdf) Nov 22, 02 Abstract , Paper (src), View paper (auto. generated pdf), Index of related papers Abstract. We show that there is a $C^\infty$ open and dense set of positively curved metrics on $S^2$ whose geodesic flow has positive topological entropy, and thus exhibits chaotic behavior. The geodesic flow for each of these metrics possesses a horseshoe and it follows that these metrics have an exponential growth rate of hyperbolic closed geodesics. The positive curvature hypothesis is required to ensure the existence of a global surface of section for the geodesic flow. Our proof uses a new and general topological criterion for a surface diffeomorphism to exhibit chaotic behavior. Very shortly after this manuscript was completed, the authors learned about remarkable recent work by Hofer, Wysocki, and Zehnder \cite{HWZ1, HWZ2} on three dimensional Reeb flows. In the special case of geodesic flows on $S^2$, they show that the geodesic flow for a $C^\infty$ dense set of Riemannian metrics on $S^2$ possesses either a global surface of section or a heteroclinic connection. It then immediately follows from the proof of our main theorem that there is a $C^\infty$ open and dense set of Riemannian metrics on $S^2$ whose geodesic flow has positive topological entropy. This concludes a program to show that every orientable compact surface has a $C^\infty$ open and dense set of Riemannian metrics whose geodesic flow has positive topological entropy. Files: 02-480.src( 02-480.keywords , ger2.pdf.mm )
2018-08-18 12:04:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7175601720809937, "perplexity": 458.7283236813907}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221213666.61/warc/CC-MAIN-20180818114957-20180818134957-00656.warc.gz"}
https://www.cuemath.com/numbers/multiply/
# Multiply Let us play a simple game. There is a track marked with numbers. Tarru, the frog, has to jump on every third tile without stepping on any number in between. Tarru starts jumping from number 3. Let us color each 3rd tile as he jumps on the track. If you list the tiles that he jumps on, you can observe that they are multiples of 3: 3, 6, 9, 12, 15, 18, and so on. Learning about multiplication helps us in getting the answer to such questions. We don't want you to worry, as we, at Cuemath, understand how this topic can be confusing for you. We are here to help you understand this concept in a clear way. In this section, you will learn about the definition of multiply in maths, multiply fractions, and synonyms of multiply. You can check out the interactive simulations to know more about the lesson and try your hand at solving a few interesting practice questions at the end of the page. So, let's start! ## Lesson Plan 1 What Is Multiply? 2 Solved Examples on Multiply 3 Challenging Questions on Multiply 4 Important Notes on Multiply ## What Is Multiply? ### Definition of Multiply in Maths In mathematics, to multiply means adding a number to itself a particular number of times. Multiplication can be viewed as a process of repeated addition. For example, $$4 \times 3$$ is the same as $$4+4+4=12$$. In fact, repeated addition is the first pillar of multiplication. The synonym of multiply is to find the product of two numbers. For example, here you have 3 groups of 7 cupcakes. So, instead of adding seven three times, it can be written as $$7 \times 3 = 21$$. This means there are 21 cupcakes in all. ## How to Multiply Two Numbers? ### How to Multiply Fractions? Maria has a ribbon of length 9 inches. She wants to cut it into 4 equal parts. How long will each part be? Each part will be $$\frac{9}{4}$$ inches long. She took one part and divided it into 2 equal parts. Now, each part will be $$\frac{1}{2}\times\frac{9}{4}$$ inches long. Follow these steps to multiply the fractions. • Multiply the numerators $$1 \times 9 = 9$$ • Multiply the denominators $$2 \times 4 = 8$$ • Since it's already in its lowest term we can leave it as is. $\frac{1}{2}\times\frac{9}{4} = \frac{9}{8}$ ### Fun Facts • If you multiply a number by 1, the result is the number itself. • If you multiply any number by 0, the result is always 0. • Changing the order of two numbers in multiplication does not make any difference in the result. ## Solved Examples Example 1 Tom got 3 packs of trading cards and each pack had 5 cards in it. So, how many cards did Tom get in total? ### Solution Number of cards in one pack of trading cards = 5 Number of cards in 3 packs of trading cards = $$5 \times 3=15$$ $$\therefore$$ Tom got 15 cards in all. Example 2 Jacky is a special cook who comes only on party days. He was called for 28 days. For each day, he was being paid $15. Can you tell how much he was paid for 28 days? ### Solution Jacky's pay for 1 day = $15 Jacky's pay for 28 days = $$28\times 15 = 420$$ $$\therefore$$ Jacky is paid $420 for 28 days. Example 3 Today is Cherry's birthday. She wants to give 4 candies to each student in her class. If there are 34 students in her class, how many candies should she distribute? ### Solution Number of candies for each student = 4 Number of candies for 34 students = $$34 \times 4 = 136$$ $$\therefore$$ Cherry needs 136 candies to distribute in her class. 
Example 4 Joe attends music classes every fifth day. Starting from the 5th of August, can you color those dates in the calendar? Write down the series to check the number whose multiples are shown. Solution After coloring every 5th day, the series that we get is: 5, 10, 15, 20, 25 and 30. $$\therefore$$ Multiples of 5 are shown. Challenging Questions 1. The following card contains an incomplete 4-column grid. Do appropriate calculations to find the missing numbers. Finally, there are five selected numbers on each grid. Transfer them to the final column, and the last challenge is to sum all of these numbers. Important Notes 1. When we multiply two whole numbers greater than 0, the result is greater than or equal to either number. 2. The result of the multiplication of two numbers is called a product. 3. The number which we multiply is called a multiplicand. 4. The number by which we multiply is called a multiplier. ## Let's Summarize We hope you enjoyed learning about Multiply with the simulations and practice questions. Now, you will be able to easily solve problems on multiplication in maths, multiply fractions, and synonyms of multiply. At Cuemath, our team of math experts is dedicated to making learning fun for our favorite readers, the students! Through an interactive and engaging learning-teaching-learning approach, the teachers explore all angles of a topic. Be it worksheets, online classes, doubt sessions, or any other form of relation, it's the logical thinking and smart learning approach that we, at Cuemath, believe in. ## 1. Why do we multiply numbers? We multiply numbers to know the total number of items in all, by multiplying the number of objects in each group by the number of such equal groups. ## 2. What is the result of multiplication called? The result of the multiplication is called the product. ## 3. What is the first number in a multiplication called? The first number in multiplication is called the multiplicand.
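As an optional extra that is not part of the original lesson, the same ideas can be checked with a few lines of Python — repeated addition for whole numbers, and the numerator-times-numerator, denominator-times-denominator rule for fractions.

```python
# Optional check of the ideas above (not part of the original lesson).
from fractions import Fraction

# Multiplication as repeated addition: 7 x 3 = 7 + 7 + 7
print(sum([7] * 3))                      # 21 cupcakes

# Multiplying fractions: multiply numerators, multiply denominators
print(Fraction(1, 2) * Fraction(9, 4))   # 9/8, already in lowest terms

# Multiplying by 1 keeps the number; multiplying by 0 gives 0
print(25 * 1, 25 * 0)                    # 25 0
```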
2021-02-25 16:59:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5877062678337097, "perplexity": 1111.2300238027171}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178351374.10/warc/CC-MAIN-20210225153633-20210225183633-00286.warc.gz"}
https://www.physicsforums.com/threads/finding-the-value-of-constants-in-f-x-as-x-0.216552/
# Finding the value of constants in f(x) as x->0 1. Feb 19, 2008 ### XenoWolf I'm not looking for the complete answer (from what I've read in the intro posts, you won't/shouldn't give it to me anyways)... I just need to figure out where to start. This is my first time taking calc, and I'm pretty lost. Thanks in advance. 1. The problem statement, all variables and given/known data Find the values of the constants a and b such that lim (x$$\rightarrow$$0) [ ( $$\sqrt{a+bx}$$ - $$\sqrt{3}$$ ) / x ] = $$\sqrt{3}$$ 3. The attempt at a solution I've attempted to solve it a couple of ways in an algebraic style, but the fact that there are three 'variables' has me stumped. I also tried using the limit property that states the limit of h(x)=f(x)/g(x) as x->c is L/K (I hope I got that right.. hah.) but the fact that K ends up being zero screws that up... I'm just completely lost as to where I need to start the problem. I don't know if I should be solving for a variable, doing trial-and-error stuff, using some kind of limit property, etc. 2. Feb 19, 2008 ### Rainbow Child Let $$f(x)=\frac{\sqrt{a+b\,x}-\sqrt3}{x}=\frac{g(x)}{x}$$ and solve for $g(x)$ What's the limit $$\lim_{x\to 0}g(x)$$ ?
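Editorial note added here (not part of the original thread): for readers who want to see where Rainbow Child's hint leads, a quick symbolic check is sketched below. It assumes the sympy library is available and simply confirms the values of $a$ and $b$ that make the limit finite and equal to $\sqrt{3}$.

```python
# A quick symbolic check of where the hint leads (assumes sympy is installed;
# an illustration only, not part of the original thread).
import sympy as sp

x, a, b = sp.symbols('x a b', positive=True)
f = (sp.sqrt(a + b*x) - sp.sqrt(3)) / x

# For the limit to be finite, the numerator g(x) must vanish at x = 0,
# which forces a = 3.
a_val = sp.solve(sp.sqrt(a) - sp.sqrt(3), a)[0]       # -> 3

# With a = 3 the limit is b/(2*sqrt(3)); setting it equal to sqrt(3) gives b = 6.
L = sp.limit(f.subs(a, a_val), x, 0)                  # -> sqrt(3)*b/6
b_val = sp.solve(sp.Eq(L, sp.sqrt(3)), b)[0]          # -> 6

print(a_val, b_val)   # 3 6
```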
2018-01-24 08:15:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7794280052185059, "perplexity": 418.3892160865044}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084893530.89/warc/CC-MAIN-20180124070239-20180124090239-00482.warc.gz"}
https://kmr.dialectica.se/wp/research/math-rehab/what-is-mathematics/mathematical-courtesy/
# Mathematical Courtesy This page is a sub-page of our page on What is Mathematics? /////// Mathematical Courtesy: //////// Quoting from E.T. Jaynes, Probability Theory – The Logic of Science, Cambridge University Press, 2009 (2003), pp. 675-676: A few years ago the writer attended a seminar talk by a young mathematician who had just received his Ph.D. degree and, we understood, had a marvellous new limit theorem of probability theory. He started to define the sets he proposed to use, but three blackboards were not enough for them, and he never got through the list. At the end of the hour, having to give up the room, we walked out in puzzlement, not knowing even the statement of his theorem. A ‘19th century mathematician’ like Poincaré would have been into the meat of the calculation within a few minutes and would have completed the proof and pointed out its consequences in time for discussion. The young man is not to be blamed; he was only doing what he had been taught a ‘20th century mathematician’ must do. Although he has perhaps now learned to plan his talks a little better, he is surely still wasting much of his own time and that of others in reciting all the preliminary incantations that are demanded in 20th century mathematics before one is allowed to proceed to the actual problem. He is a victim of what we consider to be, not a higher standards or rigor, but studied mathematical discourtesy. Nowadays, if you introduce a variable $\, x \,$ without repeating the incantation that it is in some set or ‘space’ $\, X$, you are accused of dealing with an undefined problem. If you differentiate a function $\, f(x) \,$ without first having stated that it is differentiable, you are accused of lack of rigor. If you note that your function $\, f(x) \,$ has some special property natural to the application, you are accused of lack of generality. In other words, every statement you make will receive the discourteous interpretation. Obviously, mathematical results cannot be communicated without some decent standards of precision in our statements. But a fanatical insistence on one particular form of precision and generality can be carried so far that it defeats its own purpose; 20th century mathematics often degenerates into an idle adversary game instead of a communication process. The fanatic is not trying to understand your substantive message at all, but only trying to find fault with your style of presentation. He will strive to read nonsense into what you are saying, if he can possibly find a way of doing so. In self-defense, writers are obliged to concentrate their attention on every tiny, irrelevant, nitpicking detail of how things are said rather than on what is said. The length grows; the content shrinks. Mathematical communication would be much more efficient and pleasant if we adopted a different attitude. For one who makes the courteous interpretation of what others write, the fact that $\, x \,$ is introduced as a variable already implies that there is some set $\, X \,$ of possible values. Why should it be necessary to repeat that incantation every time a variable is introduced, thus using up two symbols where one would do? (Indeed, the range of values is usually indicated more clearly at the point where it matters, by adding conditions such as $\, (0 < x < 1) \,$ after an equation.) For a courteous reader, the fact that a writer differentiates $\, f(x) \,$ twice already implies that he considers it twice differentiable; why should he be required to say everything twice? 
If he proves proposition $\, A \,$ in enough generality to cover his application, why should he be obliged to use additional space for irrelevancies about the most general possible conditions under which $\, A \,$ would be true? A source as annoying as the fanatic is his cousin, the compulsive mathematical nitpicker. We expect that an author will define his technical terms, and then use them in a way consistent with his definitions. But if any other author has ever used the term with a slightly different shade of meaning, the nitpicker will be right there accusing you of inconsistent terminology. The writer has been subjected to this many times; and colleagues report the same experience. Nineteenth century mathematicians were not being non-rigorous by their style; they merely, as a matter of course, extended simple civilized courtesy to others, and expected to receive it in return. This will lead one to try to read sense into what others write, if it can possibly be done in view of the whole context; not to pervert our reading of every mathematical work into a witch-hunt for deviations of the Official Style. Therefore, sympathizing with the young man’s plight but not intending to be enslaved like him, we issue the following: Every variable $\, x \,$ that we introduce is understood to have some set $\, X \,$ of possible values. Every function $\, f(x) \,$ that we introduce is supposed to be sufficiently well-behaved so that what we do with it makes sense. We undertake to make every proof general enough to cover the application we make of it. It is an assigned homework problem for the reader who is interested in the question to find the most general conditions under which the result would hold. We could convert many 19th century mathematical works to 20th century standards by making a rubber stamp containing the Proclamation, with perhaps another sentence using the terms sigma-algebra, Borel field, Radon-Nikodym derivative, and stamping it on the first page. Modern writers could shorten their works substantially, with improved readability and no decrease in content, by including such a Proclamation in the copyright message, and writing thereafter in the 19th century style. Perhaps some publishers, seeing these words, may demand that they do this for economic reasons; it would be a service to science. /////// End of Quote from E.T. Jaynes
2022-08-12 03:42:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 13, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6903290748596191, "perplexity": 808.7031111801645}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571538.36/warc/CC-MAIN-20220812014923-20220812044923-00575.warc.gz"}
https://blog.zilin.one/21-259-fall-2013/recitation-12/
# Recitation 12 Section 12.5 Problem 12: Evaluate the triple integral. $\iiint_E xy dV$, where $E$ is bounded by the parabolic cylinders $y=x^2$ and $x=y^2$ and the planes $z=0$ and $z=x+y$. Comment: Denote the region on the $xy$-plane bounded by $y=x^2$ and $x=y^2$ by $L$. We obtain $\iiint_E xy dV = \iint_L xy(x+y)dA = \int_0^1\int_{x^2}^{\sqrt{x}}xy(x+y)dydx = \frac{3}{28}$. Section 12.5 Problem 18: Use a triple integral to find the volume of the given solid. The solid enclosed by the paraboloids $y = x^2 + z^2$ and $y=8-x^2-z^2$. Solution: The projection of the solid on the $xz$-plane is a disc with radius 2 centered at the origin, denoted by $D$. The volume is given by $\iint_D 8-x^2-z^2-(x^2+z^2) dA$. Under polar coordinates in the $xz$-plane, this integral becomes $\int_0^{2\pi}\int_0^2 (8-2r^2)r drd\theta = 16\pi$. Section 12.5 Problem 35: Evaluate the triple integral using only geometric interpretation and symmetry. $\iiint_C (4 + 5x^2yz^2) dV$, where $C$ is the cylindrical region $x^2 +y^2 \leq 4,-2\leq z\leq 2$. Solution: By symmetry $\iiint_C (4 + 5x^2yz^2) dV = \iiint_C 4 dV$. By geometric interpretation, this is 4 times the volume of the cylindrical region whose volume is $\pi\times 2^2\times 4=16\pi$. Hence the answer is $64\pi$. Section 12.6 Problem 16: Sketch the solid whose volume is given by the integral and evaluate the integral. $\int_0^2\int_0^{2\pi}\int_0^r r dz d\theta dr$. Answer: This is part of the region inside the cylinder $x^2+y^2=2^2$ that lies above the plane $z=0$ and below the cone $z=\sqrt{x^2+y^2}$. The value of the integral is $\int_0^2\int_0^{2\pi} r^2 d\theta dr = \frac{16\pi}{3}$. Section 12.6 Problem 18: Use cylindrical coordinates. Evaluate $\iiint_E z dV$, where $E$ is enclosed by the paraboloid $z=x^2 +y^2$ and the plane $z=4$. Solution: The projection of the solid on the $xy$-plane is the disc centered at the origin with radius 2, denoted by $D$. Then $\iiint_E z dV = \iint_D\int_{x^2+y^2}^4 z dz dA$. Under polar coordinates, this is equal to $\int_0^{2\pi}\int_0^2\int_{r^2}^4 z dz r dr d\theta = \frac{64}{3}\pi$. Section 12.6 Problem 19: Use cylindrical coordinates. Evaluate $\iiint_E (x + y + z) dV$, where $E$ is the solid in the first octant that lies under the paraboloid $z = 4 - x^2 - y^2$. Solution: The projection of the solid onto the $xy$-plane is the part of the disc with radius 2 centered at the origin in the first quadrant. Under the cylindrical coordinates, $\iiint_E (x + y + z) dV = \int_0^{\pi/2}\int_0^2\int_0^{4-r^2} (r\cos\theta+r\sin\theta+z)dz r dr d\theta=\frac{128}{15}+\frac{8\pi}{3}$. Section 12.7 Problem 15: A solid lies above the cone $z = \sqrt{x^2 + y^2}$ and below the sphere $x^2 + y^2 + z^2 = z$. Write a description of the solid in terms of inequalities involving spherical coordinates. Solution: Under the spherical coordinates, the cone is described by $\phi = \pi / 4$ and the sphere by $\rho=\cos\phi$. Therefore, the solid is described by $0\leq\theta\leq 2\pi, 0\leq \phi\leq \pi/4, 0\leq\rho\leq \cos\phi$. Section 12.7 Problem 25: Use spherical coordinates. Evaluate $\iiint_E xe^{x^2+y^2+z^2} dV$, where $E$ is the portion of the unit ball $x^2 +y^2 +z^2 \leq 1$ that lies in the first octant. Solution: The portion of the unit ball that lies in the first octant is described by $0\leq\theta\leq\pi/2, 0\leq\phi\leq\pi/2, 0\leq\rho\leq 1$. Under the spherical coordinates, $\iiint_E xe^{x^2+y^2+z^2} dV=\int_0^{\pi/2}\int_0^{\pi/2}\int_0^1 \rho\sin\phi\cos\theta e^{\rho^2}\rho^2\sin\phi d\rho d\phi d\theta = \int_0^{\pi/2} \cos\theta d\theta\int_0^{\pi/2} \sin^2\phi d\phi\int_0^1 \rho^3e^{\rho^2}d\rho = \frac{\pi}{8}$.
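A quick numerical cross-check of two of the values above (an editorial addition, not part of the original recitation notes; it assumes numpy and scipy are installed):

```python
# Numerical cross-checks of two of the answers above (illustration only).
import numpy as np
from scipy import integrate

# Section 12.5, Problem 12: int_0^1 int_{x^2}^{sqrt(x)} x*y*(x+y) dy dx = 3/28
val12, _ = integrate.dblquad(lambda y, x: x*y*(x + y),
                             0, 1, lambda x: x**2, lambda x: np.sqrt(x))
print(val12, 3/28)                       # both ~0.107142857...

# Section 12.6, Problem 19: first-octant solid under z = 4 - x^2 - y^2,
# integrand x + y + z in cylindrical coordinates (extra Jacobian factor r).
val19, _ = integrate.tplquad(lambda z, r, t: (r*np.cos(t) + r*np.sin(t) + z)*r,
                             0, np.pi/2,                           # theta
                             lambda t: 0, lambda t: 2,             # r
                             lambda t, r: 0, lambda t, r: 4 - r**2)  # z
print(val19, 128/15 + 8*np.pi/3)         # both ~16.911
```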
2020-05-26 02:24:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 51, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9649427533149719, "perplexity": 123.80242394693329}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347390442.29/warc/CC-MAIN-20200526015239-20200526045239-00444.warc.gz"}
http://www.compchemhighlights.org/2015_06_01_archive.html
## Thursday, June 25, 2015 ### Accurately Modeling Nanosecond Protein Dynamics Requires at least Microseconds of Simulation Gregory R. Bowman Journal of Computational Chemistry 2015 Contributed by +Jan Jensen This paper compares computed and experimental order parameters for two proteins, ubiquitin and RNase H, computed using 10, 100, 1000, and 10000 ns (10 $\mu$s) explicit molecular dynamics simulations. Order parameters quantify the order of particular bonds (typically amide NH and methyl CH) and are typically measured via relaxation-dispersion NMR experiments that are insensitive to dynamics longer than the molecular tumbling time. For ubiquitin and RNase H the tumbling times are 3.5 and 8.5 ns, respectively, so the usual assumption would be that a 100 ns simulation is more than enough for both cases. Figure 3. Correlation coefficients (r) and RMSDs between experimental backbone order parameters for RNase H and those calculated from simulations with the Amber03 force field. Mean values from 50 bootstrapped samples are shown as a function of the simulation length for the long-time limit approximation (open diamonds) and from the truncated average approximation (closed circles). The error bars represent one standard deviation. (c) 2015 John Wiley and Sons. Reproduced with permission. Bowman shows that this assumption is not good. For example, for RNase H (Figure 3 from the paper), 100 ns is barely adequate (especially for r) and a 1 $\mu$s simulation is a minimum requirement to demonstrate convergence. As Bowman points out: Since order parameters are an ensemble average, they have contributions from all populated states, including those separated from the native state by high-energy barriers, which are unlikely to be accessed during nanoseconds of simulation. Using Amber99sb-ILDN and Charmm27 results in similar conclusions. The fact that the agreement with experiment continues to improve as the length of the simulation increases also suggests that modern force fields predict that the experimentally observed protein structure is in fact a minimum of the free energy surface, which has not always been the case in the past. ## Tuesday, June 23, 2015 ### A Practicable Real-Space Measure and Visualization of Static Electron-Correlation Effects Grimme, S. and Hansen, A., Angew Chem Int Edit, (2015) Contributed by Tobias Schwabe The question of how to deal with multireference (MR) cases in DFT has a longstanding history. Of course, the exact functional would also include multireference effects (or non-dynamical/non-local/static electron correlation, as these effects are also called) and no special care is needed. But when it comes to today's density functional approximations (DFAs) within the Kohn-Sham framework, everything is a little bit more complicated. For example, Baerends and co-workers have shown that it is the exchange part in GGA-DFAs that actually accounts for static electron correlation.[1] These studies, among others, led to the conclusion that the (erroneous) electron self-interaction in DFAs accounts for some of the MR character in a system. A good review about how these things are interconnected can be found in Ref. [2]. Instead of searching for better and better DFAs, another approach to the problem is to apply ensemble DFT which introduces the free electron energy and also the concept of entropy into DFT.[3] The key concept here is to allow for fractional occupation numbers in Kohn-Sham orbitals and to look at the system at T > 0 K. 
In the case of systems with MR character which cannot be described with a single Slater determinant, fractional occupation will result (e.g., when computing natural occupation numbers). The interesting thing about ensemble DFT is that it allows one to find these numbers directly via a variational approach without computing an MR wavefunction first. Grimme and Hansen now turned this approach into a tool for a qualitative analysis of molecular systems. They do so by plotting what they call the fractional orbital density (FOD). That is, only those molecular orbitals with non-integer occupation numbers contribute to the density – and only at a finite temperature. This density vanishes completely at T = 0 K. So, the analysis literally shows MR hot spots. Integrating the FOD also yields an absolute scalar which allows one to quantify the MR character and to compare different molecules. According to the authors, this value correlates well with other quantities which attempt to provide such information. A great advantage of the approach is that now the MR character can be located (geometrically) within the molecule. The findings presented in the application part of the paper go along well with chemical intuition. The analysis might help to visualize and to interpret MR phenomena. The tool can provide insight when the nature of the electronic structure is not obvious – for example, when dealing with biradicals in a singlet spin state. It might also be a good starting point to identify relevant regions/orbitals which should be included when one wants to treat a system on a higher level than DFT, for example with WFT-in-DFT based on projector techniques. Last but not least, it can help to identify chemical systems to which standard DFAs should not (or only with great care) be applied. References: [1] a) O. V. Gritsenko, P. R. T. Schipper, and E. J. Baerends, J. Chem. Phys. (1997), 107, 5007 b) P. R. T. Schipper, O. V. Gritsenko, and E. J. Baerends, Phys. Rev. A (1998), 57, 1729 c) P. R. T. Schipper, O. V. Gritsenko, and E. J. Baerends, J. Chem. Phys. (1999), 111, 4056 [2] A. J. Cohen, P. Mori-Sánchez, and W. Yang, Chem. Rev. (2012), 112, 289 [3] R. G. Parr and W. Yang, Density-Functional Theory of Atoms and Molecules, Oxford University Press (1989) ## Thursday, June 18, 2015 ### Ring Planarity Problem of 2-Oxazoline Revisited Using Microwave Spectroscopy and Quantum Chemical Calculations Samdal, S.; Møllendal, H.; Reine, S.; Guillemin, J.-C. J. Phys. Chem. A 2015, 119, 4875–4884 Contributed by Steven Bachrach. Reposted from Computational Organic Chemistry with permission A recent reinvestigation of the structure of 2-oxazoline demonstrates the difficulties that many computational methods can still have in predicting structure. Samdal et al. report the careful examination of the microwave spectrum of 2-oxazoline and find that the molecule is puckered in the ground state.1 It's not puckered by much, and the barrier for inversion of the pucker, through a planar transition state, is only 49 ± 8 J mol-1. The lowest vibrational frequency in the non-planar ground state, which corresponds to the puckering vibration, has a frequency of 92 ± 15 cm-1. This low barrier is a great test case for quantum mechanical methodologies. And the outcome here is not particularly good. HF/cc-pVQZ, M06-2X/cc-pVQZ, and B3LYP/cc-pVQZ all predict that 2-oxazoline is planar. More concerning is that CCSD and CCSD(T) with either the cc-pVTZ or cc-pVQZ basis sets also predict a planar structure. 
CCSD(T)-F12 with the cc-pVDZ basis set predicts a non-planar ground state with a barrier of only 8.5 J mol-1, but this barrier shrinks to 5.5 J mol-1 with the larger cc-pVTZ basis set. The only method that has good agreement with experiment is MP2. This method predicts a non-planar ground state with a pucker barrier of 11 J mol-1 with cc-pVTZ, 39.6 J mol-1 with cc-pVQZ, and 61 J mol-1 with the cc-pV5Z basis set. The non-planar ground state and the planar transition state of 2-oxazoline are shown in Figure 1. The computed puckering vibrational frequency does not reproduce the experiment as well; at MP2/cc-pV5Z the predicted frequency is 61 cm-1, which lies outside of the error range of the experimental value.

Figure 1. MP2/cc-pV5Z optimized geometry of the non-planar ground state and the planar transition state of 2-oxazoline.

### References

(1) Samdal, S.; Møllendal, H.; Reine, S.; Guillemin, J.-C. "Ring Planarity Problem of 2-Oxazoline Revisited Using Microwave Spectroscopy and Quantum Chemical Calculations," J. Phys. Chem. A 2015, 119, 4875–4884, DOI: 10.1021/acs.jpca.5b02528.

### InChIs

2-oxazoline: InChI=1S/C3H5NO/c1-2-5-3-4-1/h3H,1-2H2 InChIKey=IMSODMZESSGVBE-UHFFFAOYSA-N

## Wednesday, June 3, 2015

### Charge-Enhanced Acidity and Catalyst Activation

Samet, M.; Buhle, J.; Zhou, Y.; Kass, S. R. J. Am. Chem. Soc. 2015, 137, 4678-4680

Contributed by Steven Bachrach. Reposted from Computational Organic Chemistry with permission

Kass and coworkers looked at a series of substituted phenols to tease out ways to produce stronger acids in non-polar media.1 First they established a linear relationship between the vibrational frequency shifts of the hydroxyl group, in going from CCl4 as solvent to CCl4 doped with 1% acetonitrile, and the experimental pKa in DMSO. They also showed a strong relationship between this vibrational frequency shift and gas phase acidity (both experimental and computed deprotonation energies). A key recognition was that a charged substituent (like say ammonium) has a much larger effect on the gas-phase (and non-polar solvent) acidity than on the acidity in a polar solvent, like DMSO. This can be attributed to the lack of a medium able to stabilize charge build-up in non-polar solvent or in the gas phase. This led them to 1, for which B3LYP/6-31+G(d,p) computations of the analogous dipentyl derivative 2 (see Figure 1) indicated a deprotonation free energy of 261.4 kcal mol-1, nearly 60 kcal mol-1 smaller than any other substituted phenol they previously examined. Subsequent measurement of the OH vibrational frequency shift showed the largest shift, indicating that 1 is extremely acidic in non-polar solvent. Further computational exploration led to 3 (see Figure 1), for which computations predicted an even smaller deprotonation energy of 231.1 kcal mol-1. Preparation of 4 and experimental observation of its vibrational frequency shift revealed an even larger shift than for 1, making 4 extraordinarily acidic.

Figure 1. B3LYP/6-31+G(d,p) optimized geometries of 2 and 3 and their conjugate bases.

### Reference

(1) Samet, M.; Buhle, J.; Zhou, Y.; Kass, S. R. "Charge-Enhanced Acidity and Catalyst Activation," J. Am. Chem. Soc. 2015, 137, 4678-4680, DOI: 10.1021/jacs.5b01805.
### InChI

1 (cation only): InChI=1S/C23H41NO/c1-4-6-8-10-12-14-20-24(3,21-15-13-11-9-7-5-2)22-16-18-23(25)19-17-22/h16-19H,4-15,20-21H2,1-3H3/p+1 InChIKey=HIQMXPFMEWRQQG-UHFFFAOYSA-O

2: InChIKey=WMOPRSHYZNVZKF-UHFFFAOYSA-O

3: InChI=1S/C6H7NO/c1-7-4-2-3-6(8)5-7/h2-5H,1H3/p+1 InChIKey=FZVAZYLFYPULKX-UHFFFAOYSA-O

4 (cation only): InChI=1S/C13H21NO/c1-2-3-4-5-6-7-10-14-11-8-9-13(15)12-14/h8-9,11-12H,2-7,10H2,1H3/p+1 InChIKey=HSFRKOBOATYXAH-UHFFFAOYSA-O
2017-04-29 21:17:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6153264045715332, "perplexity": 2889.489520373097}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123590.89/warc/CC-MAIN-20170423031203-00126-ip-10-145-167-34.ec2.internal.warc.gz"}
https://aptitude.gateoverflow.in/1204/cat-2000-question-67
What is the number of distinct triangles with integral valued sides and perimeter $14?$
1. $6$
2. $5$
3. $4$
4. $3$
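A quick way to check a question like this is brute-force enumeration. The short Python sketch below is an added illustration (not part of the original question page): it counts triples $a \le b \le c$ with $a+b+c=14$ that satisfy the triangle inequality.

# count triangles with integer sides and a fixed perimeter by brute force
def count_triangles(perimeter):
    count = 0
    for a in range(1, perimeter + 1):
        for b in range(a, perimeter + 1):
            c = perimeter - a - b
            if c >= b and a + b > c:  # sides in non-decreasing order; triangle inequality
                count += 1
    return count

print(count_triangles(14))  # 4, i.e. the triangles (2,6,6), (3,5,6), (4,4,6), (4,5,5)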
2022-12-06 06:19:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9206834435462952, "perplexity": 382.45307503173007}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711074.68/warc/CC-MAIN-20221206060908-20221206090908-00153.warc.gz"}
https://eccc.weizmann.ac.il/keyword/19031/
Reports tagged with best-partition communication complexity:

TR15-151 | 14th September 2015
#### New Extractors for Interleaved Sources
Revisions: 1
We study how to extract randomness from a $C$-interleaved source, that is, a source comprised of $C$ independent sources whose bits or symbols are interleaved. We describe a simple approach for constructing such extractors that yields: (1) For some $\delta>0, c > 0$, explicit extractors for $2$-interleaved sources on ... more >>>

TR18-177 | 1st October 2018
Alexander Knop
#### The Diptych of Communication Complexity Classes in the Best-partition Model and the Fixed-partition Model
Most of the research in communication complexity theory is focused on the fixed-partition model (in this model the partition of the input between Alice and Bob is fixed). Nonetheless, the best-partition model (the model that allows Alice and Bob to choose the partition) has a lot of ... more >>>
2023-02-05 07:01:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6803068518638611, "perplexity": 6508.6587374663795}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500250.51/warc/CC-MAIN-20230205063441-20230205093441-00186.warc.gz"}
https://math.stackexchange.com/questions/3107263/selecting-conditional-states-depending-on-previous-states
# Selecting conditional states depending on previous states

I've seen a post which was started as a joke saying: "Well, Guess the code?" (4 digit code)

Apart from the joke, I was thinking, well, how many combinations do we have here, knowing that 8 and 0 are a must.

Sure, the logic says: 2^4 minus (bad options, which are 0000, 8888) = 14

But this is not a mathematical calculation but a human logic being involved. I was thinking about something like this:

Let's take 8. It has 4 places to be one or more times. But if it appears first, then at least one 0 must be on the other 3 places ... and so on and so on

Same for the 0: It has 4 places to be one or more times. But if it appears first, then at least one 8 must be on the other 3 places ... and so on and so on

Question: How can I yield 14 by a mathematical proof? (and not by a human logic: "let's take all the options and remove the bad combinations")

EDIT (after some answers)

The demo above was an easy part. But I'm after a more generalized formula: Say not only $$8$$ and $$0$$ were bold. Let's say $$0$$, $$8$$, $$5$$, $$2$$ were bold and the password was $$5$$ digits long. So then what? Removing the count of all $$00000$$, $$88888$$, $$55555$$, $$22222$$ AND $$08555$$ (because $$2$$ and $$0$$ must be present also)? and $$08222$$? (because $$5$$ and $$0$$ must be present also).......?? Do you see what I mean? Human logic is more complicated here. It is not a general formula, but specific to this case. I'm after a generalized formula.

In other words the question can be reduced to: How many options can we select from $$N$$ numbers, at length of $$L$$ ($$L\geq N$$), where at least each number from $$N$$ must appear?

so N = 0,8,5,2 L = 5
08520 valid
85200 valid
08555 invalid (missing 2)
08222 invalid (missing 5)
etc...

First of all, your original argument looks like a mathematical proof to me; you want to count all sequences of $$0$$'s and $$8$$'s of length $$4$$ that contain at least one $$0$$ and one $$8$$. That's the same as all sequences of length $$4$$ that aren't all $$0$$'s or all $$8$$'s, and so you get $$2^4-2=14$$.

But your second line of thought gives another way to reach the same conclusion, albeit more roundabout: A sequence must start with either $$0$$ or $$8$$:

• If it starts with $$8$$, then the remainder of the sequence is a sequence of length $$3$$ that is not all $$8$$'s. There are $$2^3-1=7$$ such sequences.
• If it starts with $$0$$, then the remainder of the sequence is a sequence of length $$3$$ that is not all $$0$$'s. There are $$2^3-1=7$$ such sequences.

This yields a total of $$7+7=14$$ sequences. Of course this is in essence the same reasoning as your original argument, with a small adjustment by distinguishing cases based on the first digit. You could also keep distinguishing cases for the second, third and fourth digit and in this way count all sequences separately. This comes down to drawing a tree diagram of all possible sequences.

For the more general case in the edit, the inclusion-exclusion principle is of great help. The generality in which it is stated on the linked page may be a bit intimidating, but the idea is the following;

1. You count the total number of sequences, let's call this $$N$$. For your example, this would be all sequences consisting of $$0$$, $$2$$, $$5$$ and $$8$$; there are $$4^4=256$$ such sequences, so $$N=256$$. We want to subtract from this the number of sequences that do not contain one of the digits.
2. You don't want the sequences that do not contain a $$0$$, do not contain a $$2$$, do not contain a $$5$$ and do not contain an $$8$$. There are $$N_0=3^4=81$$ sequences that do not contain a $$0$$, and similarly $$N_2=N_5=N_8=81$$ sequences that do not contain a $$2$$, a $$5$$ or an $$8$$. Now we could compute the number of sequences that contain each digit as $$N-(N_0+N_2+N_5+N_8)=256-4\times81=-68,$$ which is clearly wrong, because we have counted the sequences that do not contain a $$0$$ and do not contain a $$2$$ twice.

3. So next add the number of sequences that do not contain a $$0$$ and a $$2$$; there are $$N_{0,2}=2^4=16$$ such sequences, and similarly $$N_{0,5}=N_{0,8}=N_{2,5}=N_{2,8}=N_{5,8}=16.$$ Adding these back we get $$\begin{eqnarray*} N&-&(N_0+N_2+N_5+N_8)+(N_{0,2}+N_{0,5}+N_{0,8}+N_{2,5}+N_{2,8}+N_{5,8})\\ &=&256-4\times81+16\times6=28. \end{eqnarray*}$$ But now we have a problem with the sequences that are missing three digits; we first subtracted them three times, then added them back three times, so we should subtract them again. To clarify, for example the sequence $$8888$$ was counted in $$N_0$$ and $$N_2$$ and $$N_5$$ and subtracted, and then counted in $$N_{0,2}$$ and $$N_{0,5}$$ and $$N_{2,5}$$ and added again.

4. Finally we count the sequences that do not contain a $$0$$, a $$2$$ and a $$5$$; there is of course only $$N_{0,2,5}=1$$ such sequence, which is $$8888$$. Similarly $$N_{0,2,8}=N_{0,5,8}=N_{2,5,8}=1$$. Then the total number of sequences is $$\begin{eqnarray*} N&-&(N_0+N_2+N_5+N_8)+(N_{0,2}+N_{0,5}+N_{0,8}+N_{2,5}+N_{2,8}+N_{5,8})\\ &-&(N_{0,2,5}+N_{0,2,8}+N_{0,5,8}+N_{2,5,8})\\ &=&256-4\times81+6\times16-4\times1=24. \end{eqnarray*}$$

And this is easily verified; if a $$4$$-digit sequence must contain each of the $$4$$ digits once, then it is a permutation of the $$4$$ digits. There are $$4!=4\times3\times2\times1=24$$ permutations, as there are $$4$$ choices for the first digit, then $$3$$ for the second, then $$2$$ for the third and then only $$1$$ for the last.

EDIT: I just saw that the example in your edit concerns sequences of length $$5$$, whereas my example treats sequences of length $$4$$. The number of such sequences of length $$5$$ is $$240$$, which you can verify yourself with this method.

EDIT2: I just saw that you asked for a formula for the number of sequences of length $$L$$ that contain each of $$N$$ digits at least once, with $$N\leq L$$. The above shows that the number of possible sequences is $$\sum_{i=0}^N(-1)^i\binom{N}{i}(N-i)^L.$$ In particular, for the original question on the number of sequences of length $$L=4$$ with the $$N=2$$ digits $$0$$ and $$8$$, we indeed get $$\sum_{i=0}^N(-1)^i\binom{N}{i}(N-i)^L=\binom{2}{0}2^4-\binom{2}{1}1^4+\binom{2}{2}0^4=1\times16-2\times1+1\times0=14.$$

• Please read my edit. – Royi Namir Feb 10 at 10:58
• I have added a reference for a general technique for computing such numbers, and a worked example. – Servaes Feb 10 at 11:26
• I have added a closed form for the total number of sequences of length $L$ with $N$ digits that contain each digit at least once. Also, nice question! – Servaes Feb 10 at 11:36
• Thanks - Can you just (for others) apply the formula to our initial case? (8,0) out of 4? – Royi Namir Feb 10 at 11:38
• Wish I could upvote more. Thanks – Royi Namir Feb 10 at 11:43
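The closed form in EDIT2 is easy to check numerically. The Python sketch below is an added illustration (not from the original thread); it assumes Python 3.8+ for math.comb and compares the inclusion-exclusion formula against a brute-force count.

from itertools import product
from math import comb

def count_formula(N, L):
    # number of length-L strings over N symbols that use every symbol at least once,
    # via inclusion-exclusion: sum_i (-1)^i * C(N, i) * (N - i)^L
    return sum((-1)**i * comb(N, i) * (N - i)**L for i in range(N + 1))

def count_brute(symbols, L):
    # enumerate all strings over the given symbols and keep those using every symbol
    return sum(1 for s in product(symbols, repeat=L) if set(s) == set(symbols))

print(count_formula(2, 4), count_brute("08", 4))    # 14 14  (the original 4-digit question)
print(count_formula(4, 5), count_brute("0258", 5))  # 240 240 (the length-5 case from the edit)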
Let's note • $$\Omega$$ the set of all codes, • $$A_0$$ the set of codes with at least one $$0$$, • $$A_8$$ the set of codes with at least one $$8$$. You want $$\#(A_0\cap A_8)$$. The complementary of this set is $$\overline{A_0}\cup\overline{A_8}$$, and with Poincaré formula : $$\#(\overline{A_0}\cup\overline{A_8}) = \#(\overline{A_0}) + \#(\overline{A_8}) - \#(\overline{A_0}\cap\overline{A_8})$$ $$\overline{A_0}$$ is the set of codes with no $$0$$, there are $$9^4$$ such codes. Similarly, $$\#(\overline{A_8})=9^4$$. And $$\overline{A_0}\cap\overline{A_8}$$ is the set of codes that contain neither $$0$$ nor $$8$$, there are $$8^4$$ of them. So $$\#(\overline{A_0}\cup\overline{A_8}) = 2\times9^4-8^4=9026$$ Finally $$\#(\Omega)=10^4$$, so $$\#(A_0\cap A_8) = 10^4-9026=74$$ • how come 74 when the answer is 14 ? – Royi Namir Feb 10 at 10:46 • Is your question : "what is the number of codes with at least an 8 and a 0?"; or "what is the number of codes with only 0 and 8 (and at least one of each)?" ? I answered the first one. – Nicolas FRANCOIS Feb 10 at 10:49 • The latter...... – Royi Namir Feb 10 at 10:49 • Oh, sorry. Then you have to change the universe a bit : $\#\Omega=2^4$, $\#\overline{A_0}=\#\overline{A_8}=1$ and $\#(\overline{A_0}\cap\overline{A_8})=0$, then $\#(A_0\cap A_8)=2^4-1-1+0=14$. – Nicolas FRANCOIS Feb 10 at 10:53 • As you saw, counting directly the number of codes with at least a 0 and an 8 is far more complicated. – Nicolas FRANCOIS Feb 10 at 10:54
2019-06-18 05:48:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 120, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7843628525733948, "perplexity": 196.98561107873638}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998607.18/warc/CC-MAIN-20190618043259-20190618065259-00064.warc.gz"}
http://conlegium779.org/merle-tottenham-sxuzj/homogeneous-and-non-homogeneous-function-d4eaaa
However, it works at least for linear differential operators $\mathcal D$. ∂ The first question that comes to our mind is what is a homogeneous equation? if all of its arguments are multiplied by a factor, then the value of the function is multiplied by some power of that factor. A (nonzero) continuous function homogeneous of degree k on R n \ {0} extends continuously to R n if and only if Re{k} > 0. α We can note that f(αx,αy,αz) = (αx)2+(αy)2+(αz)2+… . A function is monotone where ∀, ∈ ≥ → ≥ Assumption of homotheticity simplifies computation, Derived functions have homogeneous properties, doubling prices and income doesn't change demand, demand functions are homogenous of degree 0 + = c If fis linearly homogeneous, then the function defined along any ray from the origin is a linear function. And let's say we try to do this, and it's not separable, and it's not exact. Remember that the columns of a REF matrix are of two kinds: See more. x The function (8.122) is homogeneous of degree n if we have . ) Then, Any linear map ƒ : V → W is homogeneous of degree 1 since by the definition of linearity, Similarly, any multilinear function ƒ : V1 × V2 × ⋯ × Vn → W is homogeneous of degree n since by the definition of multilinearity. 1. ) f To solve this problem we look for a function (x) so that the change of dependent vari-ables u(x;t) = v(x;t)+ (x) transforms the non-homogeneous problem into a homogeneous problem. ⁡ Affine functions (the function Homogeneous definition, composed of parts or elements that are all of the same kind; not heterogeneous: a homogeneous population. The general solution of this nonhomogeneous differential equation is. Here, we consider differential equations with the following standard form: dy dx = M(x,y) N(x,y) ) f Find a non-homogeneous ‘estimator' Cy + c such that the risk MSE(B, Cy + c) is minimized with respect to C and c. The matrix C and the vector c can be functions of (B,02). x The degree of homogeneity can be negative, and need not be an integer. Specifically, let Let C be a cone in a vector space V. A function f: C →Ris homogeneous of degree γ if f(tx) = tγf(x) for every x∈ Rm and t > 0. ( ⁡ The mathematical cost of this generalization, however, is that we lose the property of stationary increments. = For example. ln Since = The last display makes it possible to define homogeneity of distributions. The repair performance of scratches. f x ( 3.5). Homogeneous Differential Equation. This feature makes it have a refurbishing function. . ( In particular, if M and N are both homogeneous functions of the same degree in x and y, then the equation is said to be a homogeneous equation. {\displaystyle \mathbf {x} \cdot \nabla } ⁡ In mathematics, a homogeneous function is one with multiplicative scaling behaviour: if all its arguments are multiplied by a factor, then its value is multiplied by some power of this factor. For instance, looking again at this system: we see that if x = 0, y = 0, and z = 0, then all three equations are true. ( x a linear first-order differential equation is homogenous if its right hand side is zero & A linear first-order differential equation is non-homogenous if its right hand side is non-zero. — Suppose that the function f : ℝn \ {0} → ℝ is continuously differentiable. x {\displaystyle f(x)=\ln x} x Thus, if f is homogeneous of degree m and g is homogeneous of degree n, then f/g is homogeneous of degree m − n away from the zeros of g. The natural logarithm Let f : X → Y be a map. ex. 
, + w x2 is x to power 2 and xy = x1y1 giving total power of 1+1 = 2). {\displaystyle \textstyle \alpha \mathbf {x} \cdot \nabla f(\alpha \mathbf {x} )=kf(\alpha \mathbf {x} )} ( ∇ In particular we have R= u t ku xx= (v+ ) t 00k(v+ ) xx= v t kv xx k : So if we want v t kv xx= 0 then we need 00= 1 k R: for all α ∈ F and v1 ∈ V1, v2 ∈ V2, ..., vn ∈ Vn. ( {\displaystyle \textstyle f(\alpha \mathbf {x} )=g(\alpha )=\alpha ^{k}g(1)=\alpha ^{k}f(\mathbf {x} )} What does non-homogeneous mean? φ A function is homogeneous if it is homogeneous of degree αfor some α∈R. Homogeneous Function. The … The constant k is called the degree of homogeneity. This equation may be solved using an integrating factor approach, with solution ∇ Search non homogeneous and thousands of other words in English definition and synonym dictionary from Reverso. Homogeneous applies to functions like f(x) , f(x,y,z) etc, it is a general idea. Theorem 3. , where c = f (1). ) An algorithm ishomogeneousif there exists a function g(n)such that relation (2) holds. And that variable substitution allows this equation to … But the following system is not homogeneous because it contains a non-homogeneous equation: Homogeneous Matrix Equations If we write a linear system as a matrix equation, letting A be the coefficient matrix, x the variable vector, and b the known vector of constants, then the equation Ax = b is said to be homogeneous if b is the zero vector. Positive homogeneous functions are characterized by Euler's homogeneous function theorem. α This book reviews and applies old and new production functions. α for all α > 0. Let the general solution of a second order homogeneous differential equation be y0(x)=C1Y1(x)+C2Y2(x). for all nonzero α ∈ F and v ∈ V. When the vector spaces involved are over the real numbers, a slightly less general form of homogeneity is often used, requiring only that (1) hold for all α > 0. We know that the differential equation of the first order and of the first degree can be expressed in the form Mdx + Ndy = 0, where M and N are both functions of x and y or constants. f ( Homogeneous Functions. x ) I Summary of the undetermined coefficients method. + y 0 + a 1 y 0 + a 1 y 0 + a 1 y 0 a. @ Did 's answer is n't very common in the function f: \... Homogeneous of degree one αfor some α∈R single-layer structure, homogeneous and non homogeneous function color runs through the entire thickness this... T and all test functions φ { \displaystyle \varphi } as constant returns to a scale applies old and production! Which is called the degree of homogeneity can be used as the parameter of the top-level model model! For the imperfect competition, the subclasses of homogeneous and non-homogeneous production function literature synonym dictionary from Reverso 0! Origin is a polynomial made up of homogeneous and non homogeneous function non-homogeneous system of Equations is a function defined by a function..., is that we lose the property of stationary increments some function of x y! Differential Equations - Duration: 1:03:43 elastic soil have previousl y been proposed by Doherty et al oundary finite-element.! Αfor some α∈R y ) be a map usually be ( or possibly just contain the. Color runs through the entire thickness definition, composed of parts or elements that are “ homogeneous ” some! Variable substitution allows this equation is exists a function ƒ: V \ 0! And y ( 8.122 ) is homogeneous of degree αfor some α∈R new production functions most comprehensive definitions! 
The product is differentiable line is five times that of heterogeneous line book and. Degree is the sum of monomials of the same degree we try to do their... The corresponding cost function derived is homogeneous of degree k if = x1y1 giving total power of 1+1 = ). Homogeneous population all homogeneous functions are characterized by Euler 's homogeneous function theorem power and... Up of a second order homogeneous differential equation y 00 + y 0 = g ( t.! A linear function differentiated through packaging, advertising, or simply homogeneous and non homogeneous function, or other non-pricing strategies v1... Of 1+1 = 2 ) holds if and only if following theorem: 's. ℝ is continuously differentiable, two and three respectively ( verify this assertion ) differential... The vector of constants on the right-hand side of the same kind not! Of degrees three, two and three respectively ( verify this assertion ) if there a. Homogeneous function in the context of PDE that we lose the property of stationary increments 0 } → is! Position is then represented with homogeneous coordinates ( x ) +C2Y2 ( x ) the exponents the. • Along any ray from the origin, a homogeneous production line is five times that of heterogeneous line of. Monopolistic competition, products are slightly differentiated through packaging, advertising, or form! Equation y 00 + y 0 + a 1 y 0 + a 0 y = b ( ).: 1:03:43 proposed by Doherty et al as the parameter of the same kind ; not heterogeneous a. There exists a function defined by a homogeneous function are “ homogeneous of... Homogeneous algorithms homogeneous, then the function ( 8.122 ) is homogeneous of degree 1 over M ( resp cost... Of algorithms is partitioned into two non empty and disjoined subclasses, the differential equation a function is the. Means each term in the DE then this equation is 0 } → R is positive homogeneous degree! The function f: ℝn \ { 0 } → R is positive homogeneous of degree 1= of non-homogeneous the... To a scale that we lose the property of stationary increments solve one before can. Non-Homogeneous production function literature display makes it possible to define homogeneity of distributions by! Of parts or elements that are all of the exponents on the variables ; in this,., it works at least for linear differential operators $\mathcal D$ f and v1 v1... Called the degree is the sum of monomials of the top-level model a made... Of degree one ( resp that comes to our mind is what is a homogeneous function theorem applies homogeneous and non homogeneous function new! We have } u = f \neq 0 is non-homogeneous homogeneous and non-homogeneous algorithms of stationary increments Euler... Simply form, is a single-layer structure, its color runs through the entire thickness omogeneous soil! \Varphi } exponents on the right-hand side of the same kind ; not heterogeneous: a homogeneous line! In homogeneous and non-h omogeneous elastic soil have previousl y been proposed by Doherty al... Question that comes to our mind is what is a linear function book critically both. Homogeneous if and only if this lecture presents a general characterization of the same ;! Y = b ( t ) two variables y been proposed by Doherty et al fis homogeneous. Trivial solutionto the homogeneous system field ( resp a perfectly competitive market lose the property of stationary increments the...: 25:25 of 1+1 = 2 ) holds stationary increments context of PDE parameter! 
Possible to define homogeneity of distributions Duration: 1:03:43 Coefficients - non-homogeneous differential Equations - Duration:.... Before you can solve the other \varphi } mathematical cost of this generalization, however, is that lose. To do with their properties are solution which is called the trivial solutionto the system. Other words in English definition and synonym dictionary from Reverso the other v1, v2 ∈ v2,... vn. K ) = t n Q ( 8.123 ) v2 ∈ v2,..., vn ∈ vn two and... This book reviews and applies old and new production functions 10 = 5 + 2 + 3 used! Order homogeneous differential equation, you first need to know what a homogeneous differential equation 00. Y ) be a map last three problems deal with transient heat conduction FGMs! Then represented with homogeneous coordinates ( x, y, 1 ) of constants on right-hand. Separable, and it 's not separable, and it 's not separable, and 's... Competition, the cost of a non-homogeneous system of Equations is a linear function function literature however, is we... We have ∈ v2,..., vn ∈ vn imperfect competition, the subclasses of homogeneous non-homogeneous... Packaging, homogeneous and non homogeneous function, or simply form, is that we lose property! Of PDE equation a function g ( t ) in the function ( 8.122 ) homogeneous... New production functions function is one that exhibits multiplicative scaling in @ Did 's answer is n't very in. What is a linear function the homogeneous floor is a linear function the of. To identify a nonhomogeneous differential equation be y0 ( x ) — Suppose the. ℝ or complex numbers ℂ n ) such that relation ( 2 ) holds ” of degree. Comprehensive dictionary definitions resource on the variables ; in this example, 10 5... Types known as homogeneous data structure all the elements of same data types known as constant returns a! N if we have homogeneous data structure all the elements of same data types as! + 3 do this, and it 's not exact vn ∈ vn looks like general of. A case is called the trivial solutionto the homogeneous floor is a system in the. Search non homogeneous algorithms that generate random points in time are modeled more faithfully with such non-homogeneous processes solutionto... \Varphi } ray from the origin, a homogeneous production line is five times that of heterogeneous.. Resource on the web x ) is equal to some function of and!, v2 ∈ v2,..., vn ∈ vn trivial solutionto the homogeneous floor is a single-layer,. Degree are often used in economic theory operators $\mathcal D$ ( x, y, 1.! A distribution S is homogeneous of degree 1 vector space over a field (.! Very little to do with their properties are mind is what is homogeneous! 1 y 0 + a 0 y = b ( t ) a polynomial made up of a function... Dictionary from Reverso, is that we lose the property of stationary increments function f: →. Are slightly differentiated through packaging, advertising, or simply form, is that we lose property. Equation is homgenous the class of algorithms is partitioned into two non-empty disjoined... ( failure ) rate can be used as the parameter of the same kind not. Functions, of degrees three, two and three respectively ( verify this ). Homogeneous if it defines a homogeneous polynomial equation a function is of the non-homogeneous hazard ( failure ) can. Solution which is called the trivial solutionto the homogeneous floor is a homogeneous population by the following:! A single-layer structure, its color runs through the entire thickness not heterogeneous: homogeneous! 
) such that relation ( 2 ) holds differential equation a function g ( t ) solve the other lose! → R is positive homogeneous of degree k if and only if definition and synonym dictionary Reverso. The class of algorithms is partitioned into two non-empty and disjoined subclasses, the subclasses homogeneous! Linearly homogeneous, then the function ( 8.122 ) is homogeneous of degree αfor some α∈R book critically examines homogeneous. The web binary form is a single-layer structure, its color runs through entire... Homogeneous population \mathcal { D } u = f \neq 0 is non-homogeneous n't very in... Product characteristics homogeneous applied to functions means each term in the context of.. Are characterized by Euler 's homogeneous function defines a power function homogeneous of degree n if we.... Non-Empty and disjoined subclasses, the product is differentiable possibly just contain ) the real numbers ℝ or numbers! Up of a non-homogeneous system is positively homogeneous of degree k if ( or possibly just contain ) real... A perfectly competitive market is that we lose the property of stationary.. Used in economic theory returns to a scale book reviews and applies old and production!
2021-02-26 07:33:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8645572066307068, "perplexity": 1010.560435459046}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178356232.19/warc/CC-MAIN-20210226060147-20210226090147-00133.warc.gz"}
https://mathoverflow.net/questions/18157/can-we-reconstruct-positive-weight-invariants-in-algebraic-topology-using-algebra
# Can we reconstruct positive weight invariants in algebraic topology using algebraic geometry?

I can't really say that I understand what a weight is, but the qualitative distinction between weight zero and positive weight has come up a couple times in MathOverflow questions:

1. The étale fundamental group of a pointed connected complex scheme has a canonical map to the profinite completion of the topological fundamental group, and for regular varieties, this seems to be an isomorphism. However, in the case of a nodal rational curve (see this question), one finds that the étale fundamental group is not profinite, and has an honest isomorphism with the topological fundamental group. Similarly, the degree 1 étale cohomology of the nodal curve with coefficients in $\mathbb{Z}$ is just $\mathbb{Z}$ as expected from topology, where one typically expects étale cohomology with torsion-free coefficients to break badly in positive degree. Emerton explained in this blog comment that the good behavior of étale cohomology and the étale fundamental group in these cases is due to the fact that the contribution resides in motivic weight zero, and the singularity is responsible for promoting it to cohomological degree 1.

2. Peter McNamara asked this question about how well formal loops detect topological loops, and Bhargav suggested in a rather fantastic answer that the formal loop functor only detects weight zero loops (arising from removing a divisor). In particular, he pointed out that maps from $\operatorname{Spec}\mathbb{C}((t))$ only detect the part of the fundamental group of a smooth complex curve of positive genus that comes from the missing points.

I have a pre-question, namely, how does one tell the weight of a geometric structure, such as a contribution to cohomology, or the fact that removing a divisor yields a weight zero loop?

My main question is: Are there algebraic (e.g., not using the complex topology) tools that always yield the correct invariants in positive weight, such as cohomology with coefficients in $\mathbb{Z}$ and the fundamental group of a pointed connected complex scheme?

I've heard a claim that motivic cohomology has a Betti realization that yields the right cohomology, but I don't know enough about that to understand how. Any hints/references?

With regard to the second example above, I've seen some other types of loops in algebraic geometry, but I don't really know enough to assess them well. First, there are derived loops, which you get by generalizing to Top-valued functors on schemes, defining $S^1$ to be the sheaf associated to the constant circle-valued functor (in some derived-étale topology), and considering the topological space $X(S^1)$ or the output of a Hom functor. As far as I can tell, derived loops are only good at detecting infinitesimal things (e.g., for $E$ an elliptic curve, $LE$ is just $E \times \operatorname{Spec}\operatorname{Sym}\mathbb{C}[-1]$, which has the same complex points as $E$). Second, there is also some kind of formal desuspension operation in stable motivic homotopy that I don't understand at all. One kind of loop has something to do with gluing 0 to 1 in the affine line, and the other involves the line minus a point. I'm having some trouble seeing a good fundamental group come out of either of these constructions, but perhaps there is some miracle that pops out of all of the localizing.

- The correct phrase is: "motives have a Betti realization that yields singular cohomology".
There seems to be no algebraic construction for it at the moment. Moreover, such a construction probably cannot exist at all since (by a result of François Charles) the cohomology rings of conjugate varieties don't have to be isomorphic; see people.math.jussieu.fr/~nperrin/charles.html – Mikhail Bondarko Mar 14 '10 at 20:26
2015-09-05 10:48:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8865328431129456, "perplexity": 170.7281588365991}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440646242843.97/warc/CC-MAIN-20150827033042-00328-ip-10-171-96-226.ec2.internal.warc.gz"}
https://direct.mit.edu/view-large/3409622
The average performance and standard deviation of 30 independent runs (using different seed values) for LeNet, CNN-5, and CNN-8 are presented in Table 3 and the best performing method on each dataset is presented in boldface font. Apart from BrNoRo, LeNet has achieved the best performance on the other six datasets compared with CNN-5 and CNN-8.

Table 3: Average accuracy (%) of three CNN methods on the seven texture images datasets ($\bar{x} \pm s$).

|       | BrNoRo | BrWiRo | OutexTC00 | OutexTC10 | KySinHw | KyNoRo | KyWiRo |
|-------|--------|--------|-----------|-----------|---------|--------|--------|
| LeNet | 19.64 ± 6.56 + | 12.03 ± 2.38 + | 12.50 ± 2.33 + | 7.49 ± 1.35 + | 6.36 ± 1.78 + | 8.79 ± 3.12 + | 6.31 ± 2.00 + |
| CNN-5 | 21.36 ± 6.56 + | 12.01 ± 2.38 + | 5.03 ± 2.33 + | 4.81 ± 1.35 + | 6.09 ± 1.78 + | 5.39 ± 3.12 + | 4.80 ± 2.00 + |
| CNN-8 | 16.10 ± 3.97 + | 9.60 ± 2.64 + | 7.01 ± 3.39 + | 5.82 ± 1.78 + | 6.22 ± 1.95 + | 6.29 ± 2.39 + | 5.23 ± 1.81 + |
2021-10-25 11:50:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 85, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2613183856010437, "perplexity": 1945.1515440514354}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587659.72/warc/CC-MAIN-20211025092203-20211025122203-00052.warc.gz"}
https://www.wisdomjobs.com/e-university/php-tutorial-223/verifying-an-email-address-438.html
# Verifying an Email Address PHP

It doesn't take much experience with email to discover what happens when it is misaddressed. The email is returned to you. This is called bounced email. Consider for a moment a Web site that allows users to fill out a form that includes an email address and sends a thank-you message. Certainly many people will either mistakenly mistype their addresses or purposely give a bad address. You can check the form of the address, of course, but a well-formed address can fail to match to a real mail box. When this happens, the mail bounces back to the user who sent the mail. Unfortunately, this is probably the Web server itself. Reading through the bounced email can be interesting. Those running an e-commerce site may be concerned about order confirmations that go undelivered. Yet, the volume of mail can be very large. Add to this that delivery failure is not immediate. To the process that sends the mail, it appears to be successful. It may be worthwhile to verify an email address before sending mail.

RFC 821 describes the SMTP protocol, which is used for exchanging email. You can read it at the faqs. It lives up to its name, simple mail transfer protocol, in that it's simple enough to use interactively from a telnet session. In order to verify an address, you can connect to the appropriate SMTP server and begin sending a message. If you specify a valid recipient, the server will return a 250 response code, at which point you can abort the process.

It sounds easy, but there's a catch. The domain name portion of an address, the part after the @, is not necessarily the same machine that receives email. Domains are associated with one or more mail exchangers—machines that accept SMTP connections for delivery of local mail. The getmxrr function returns the mail exchanger (MX) records for a given domain. The verifyEmail function is based on a similar function written by Jon Stevens. As you can see, the function attempts to fetch a list of mail exchangers. If a domain doesn't have mail exchangers, the script guesses that the domain name itself accepts mail.

<?
/*
** Function: verifyEmail
** Input: STRING address, REFERENCE error
** Output: BOOLEAN
** Description: Attempts to verify an email address by
** contacting a mail exchanger. Registered mail
** exchangers are requested from the domain controller first,
** then the exact domain itself. The error argument will
** contain relevant text if the address could not be
** verified.
*/
function verifyEmail($address, &$error)
{
    global $SERVER_NAME;

    list($user, $domain) = split("@", $address, 2);

    //make sure the domain has a mail exchanger
    if(checkdnsrr($domain, "MX"))
    {
        //get mail exchanger records
        if(!getmxrr($domain, $mxhost, $mxweight))
        {
            $error = "Could not retrieve mail exchangers!<BR>\n";
            return(FALSE);
        }
    }
    else
    {
        //if no mail exchanger, maybe the host itself
        //will accept mail
        $mxhost[] = $domain;
        $mxweight[] = 1;
    }

    //create sorted array of hosts
    for($i = 0; $i < count($mxhost); $i++)
    {
        $weighted_host[($mxweight[$i])] = $mxhost[$i];
    }
    ksort($weighted_host);

    //loop over each host
    foreach($weighted_host as $host)
    {
        //connect to host on SMTP port
        if(!($fp = fsockopen($host, 25)))
        {
            //couldn't connect to this host, but
            //the next might work
            continue;
        }

        /*
        ** skip over 220 messages
        ** give up if no response for 10 seconds
        */
        set_socket_blocking($fp, FALSE);
        $stopTime = time() + 10;
        $gotResponse = FALSE;
        while(TRUE)
        {
            //try to get a line from mail server
            $line = fgets($fp, 1024);
            if(substr($line, 0, 3) == "220")
            {
                //reset timer
                $stopTime = time() + 10;
                $gotResponse = TRUE;
            }
            elseif(($line == "") AND ($gotResponse))
            {
                break;
            }
            elseif(time() > $stopTime)
            {
                break;
            }
        }

        if(!$gotResponse)
        {
            //this host was unresponsive, but
            //maybe the next will be better
            continue;
        }

        set_socket_blocking($fp, TRUE);

        //sign in
        fputs($fp, "HELO $SERVER_NAME\r\n");
        fgets($fp, 1024);

        //set from
        fputs($fp, "MAIL FROM: <info@$domain>\r\n");
        fgets($fp, 1024);

        //try address
        fputs($fp, "RCPT TO: <$address>\r\n");
        $line = fgets($fp, 1024);

        //close connection
        fputs($fp, "QUIT\r\n");
        fclose($fp);

        if(substr($line, 0, 3) != "250")
        {
            //mail server doesn't recognize
            $error = $line;
            return(FALSE);
        }
        else
        {
            return(TRUE);
        }
    }

    $error = "Unable to reach a mail exchanger!";
    return(FALSE);
}

if(verifyEmail("[email protected]", &$error))
{
    print("Verified!<BR>\n");
}
else
{
    print("Could not verify!<BR>\n");
    print("Error: $error<BR>\n");
}
?>

SMTP servers precede each message with a numerical code, such as the 250 code mentioned above. When first connecting with a server, any number of 220 messages are sent. These contain comments, such as the AOL servers' reminders not to use them for spam. No special code marks the end of the comments; the server simply stops sending lines. Recall that by default the fgets function returns after encountering the maximum number of characters specified or an end-of-line marker. This will not work in the case of an indeterminate number of lines. The script will wait forever after the last comment. Socket blocking must be turned off to handle this situation.

When set_socket_blocking turns off blocking, fgets returns immediately with whatever data is available in the buffer. The strategy is to loop continually, checking the buffer each time through the loop. There will likely be some lag time between establishing a connection and receiving the first message from the server. Then, as 220 messages appear, the script must begin watching for the data to stop flowing, which means the server is likely waiting for a command. To avoid the situation where a server is very unresponsive, a further check must be made against a clock. If ten seconds pass, the server will be considered unavailable.
2018-12-19 16:41:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18423913419246674, "perplexity": 3065.05692989877}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376832559.95/warc/CC-MAIN-20181219151124-20181219173124-00267.warc.gz"}
http://www.mathisfunforum.com/viewtopic.php?pid=283304
## #1 2013-09-02 08:05:10 mathstudent2000 Full Member Offline

### trig problems

1. What is the sine of an acute angle whose cosine is 7/25?
2. I'm standing at 300 feet from the base of a very tall building. The building is on a slight hill, so that when I look straight ahead, I am staring at the base of the building. When I look upward at an angle of 54 degrees, I am looking at the top of the building. To the nearest foot, how many feet tall is the building?
3. If A is an acute angle such that $\tan A + \sec A = 2$, then find $\cos A$.
4. In triangle GHI, we have GH = HI = 25 and GI = 30. What is $\sin\angle GIH$?
5. In triangle GHI, we have GH = HI = 25 and GI = 40. What is $\sin\angle GHI$? (Note: This is NOT the exact same as the previous problem!)

Genius is one percent inspiration and ninety-nine percent perspiration

## #2 2013-09-02 08:57:38 bobbym Online

### Re: trig problems

Hi;

In mathematics, you don't understand things. You just get used to them. Some cause happiness wherever they go; others, whenever they go. If you can not overcome with talent...overcome with effort.

## #3 2013-09-02 08:59:17 anonimnystefy Real Member Offline

### Re: trig problems

Hi

Last edited by anonimnystefy (2013-09-02 09:05:07)

The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment

## #4 2013-09-02 09:06:17 bobbym Online

### Re: trig problems

Hi;

In mathematics, you don't understand things. You just get used to them. Some cause happiness wherever they go; others, whenever they go. If you can not overcome with talent...overcome with effort.

## #5 2013-09-02 09:15:15 anonimnystefy Real Member Offline

### Re: trig problems

Hi bobbym

The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment

## #6 2013-09-02 09:18:05 bobbym Online

### Re: trig problems

All triangles should be named ABC by law. What if you have a bunch of them? No difference!

In mathematics, you don't understand things. You just get used to them. Some cause happiness wherever they go; others, whenever they go. If you can not overcome with talent...overcome with effort.

## #7 2013-09-02 09:19:07 anonimnystefy Real Member Offline

### Re: trig problems

And, it would make even topologists happy. They already think all triangles are the same!

The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment

## #8 2013-09-02 09:20:29 bobbym Online

### Re: trig problems

Yea, they are weird.

In mathematics, you don't understand things. You just get used to them. Some cause happiness wherever they go; others, whenever they go. If you can not overcome with talent...overcome with effort.

## #9 2013-09-02 09:36:40 anonimnystefy Real Member Offline

### Re: trig problems

Definitely! And there's so many of 'em!
The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment ## #10 2013-09-02 09:41:19 bobbym Online ### Re: trig problems I think we should consider all topologists the same. In mathematics, you don't understand things. You just get used to them. Some cause happiness wherever they go; others, whenever they go. If you can not overcome with talent...overcome with effort. ## #11 2013-09-02 12:26:51 anonimnystefy Real Member Offline ### Re: trig problems Unfortunately, that does not reduce their numbers. Only one thing does. The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment ## #12 2013-09-02 13:14:09 bobbym Online ### Re: trig problems Forcing them to computational math would reduce their numbers real quick. In mathematics, you don't understand things. You just get used to them. Some cause happiness wherever they go; others, whenever they go. If you can not overcome with talent...overcome with effort. ## #13 2013-09-03 03:48:22 mathstudent2000 Full Member Offline ### Re: trig problems thanks Genius is one percent inspiration and ninety-nine percent perspiration ## #14 2013-09-03 07:34:13 bobbym Online ### Re: trig problems Hi; Very good. Did you draw a diagram on that trig problem about the house? In mathematics, you don't understand things. You just get used to them. Some cause happiness wherever they go; others, whenever they go. If you can not overcome with talent...overcome with effort.
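As an added worked illustration of problem 1 (not quoted from the thread): since the angle is acute, $\sin A = \sqrt{1 - \cos^2 A} = \sqrt{1 - \left(\tfrac{7}{25}\right)^2} = \sqrt{\tfrac{576}{625}} = \tfrac{24}{25}$.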
2014-03-08 12:37:17
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8046082854270935, "perplexity": 7471.541357736947}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999654450/warc/CC-MAIN-20140305060734-00067-ip-10-183-142-35.ec2.internal.warc.gz"}
https://infoscience.epfl.ch/record/167421
## Privacy-Sensitive Audio Features for Speech/Nonspeech Detection

The goal of this paper is to investigate features for speech/nonspeech detection (SND) having "minimal" linguistic information from the speech signal. Towards this, we present a comprehensive study of privacy-sensitive features for SND in multiparty conversations. Our study investigates three different approaches to privacy-sensitive features. These approaches are based on: (a) simple, instantaneous feature extraction methods; (b) excitation source information based methods; and (c) feature obfuscation methods such as local (within 130 ms) temporal averaging and randomization applied on excitation source information. To evaluate these approaches for SND, we use multiparty conversational meeting data of nearly 450 hours. On this dataset, we evaluate these features and benchmark them against state-of-the-art spectral shape based features such as Mel-Frequency Perceptual Linear Prediction (MF-PLP). Fusion strategies combining excitation source with simple features show that state-of-the-art performance can be obtained in both close-talking and far-field microphone scenarios. As one way to quantify and evaluate the notion of privacy, we conduct Automatic Speech Recognition (ASR) studies on TIMIT. While excitation source features yield phoneme recognition accuracies in between the simple features and the MF-PLP features, obfuscation methods applied on the excitation features yield low phoneme accuracies in conjunction with state-of-the-art SND performance.

Year: 2011 Publisher: Idiap
2018-08-16 00:26:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45341774821281433, "perplexity": 5874.742604729579}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221210387.7/warc/CC-MAIN-20180815235729-20180816015729-00026.warc.gz"}
http://mymathforum.com/real-analysis/344431-infinite-series.html
My Math Forum – infinite series (Real Analysis Math Forum)
June 17th, 2018, 03:39 AM #1 Member, Joined: Oct 2012, Posts: 64
infinite series: Find 3/13 + 33/13^2 + 333/13^3 + 3333/13^4 + ....
June 17th, 2018, 05:35 AM #2 Senior Member, Joined: Sep 2015, From: USA, Posts: 2,094
Write the series as $\displaystyle \sum\limits_{k=1}^{\infty} \dfrac{10^k-1}{3\cdot 13^k}$; this is then the scaled difference of two geometric series whose values should be easy enough to compute.
Thanks from skeeter, fahad nasir and Country Boy
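Carrying out the computation hinted at above (this closed form is not stated in the thread itself), the two geometric series give

$$\sum_{k=1}^{\infty}\frac{10^k-1}{3\cdot 13^k}=\frac{1}{3}\left(\sum_{k=1}^{\infty}\left(\frac{10}{13}\right)^{k}-\sum_{k=1}^{\infty}\left(\frac{1}{13}\right)^{k}\right)=\frac{1}{3}\left(\frac{10}{3}-\frac{1}{12}\right)=\frac{13}{12}.$$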
2018-09-22 04:08:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25392448902130127, "perplexity": 10346.311945745261}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267158011.18/warc/CC-MAIN-20180922024918-20180922045318-00163.warc.gz"}
https://socratic.org/questions/how-many-total-carbon-atoms-are-found-in-a-molecule-of-2-methyl-2-butene
# How many total carbon atoms are found in a molecule of 2-methyl-2-butene? Dec 19, 2016 $\text{5 carbon atoms in total.}$ $\text{2-methyl-2-butene}$ $\equiv$ ${H}_{3} C - C H = C {\left(C {H}_{3}\right)}_{2}$. How many $\text{hydrogen atoms?}$
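A short follow-up count (not part of the original answer): reading the hydrogens off the structure above gives $3 + 1 + 2\times 3 = 10$, so 2-methyl-2-butene is $C_5 H_{10}$, consistent with the general alkene formula $C_n H_{2n}$.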
2019-10-24 05:56:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 5, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31322023272514343, "perplexity": 1764.129378523185}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987841291.79/warc/CC-MAIN-20191024040131-20191024063631-00510.warc.gz"}
https://www.geocaching.com/geocache/GC13GN9_push-the-michigan-limits-south
##### This cache has been archived. -allenite-: As there has been no response from the owner regarding my previous note, I'm archiving this cache. Please note that if geocaches are archived by a reviewer or Geocaching HQ for lack of maintenance, they are not eligible for unarchival. ## Push the Michigan Limits - South A cache by "we two want to play too" Hidden: 06/07/2007 Size: (small) ## Geocache Description: This is one of four in the 'Push the Michigan Limits' series. One at each compass point. This is the South point and, due to my wife's request, I didn't crowd out the virtual that is in the corner. This will cost me $25. Please see the challenge below. This cache is 70 ft. from Indiana and 535 ft. from Ohio. FTF prize is a lottery ticket. Have fun! This was a challenge for Rita and myself, to see if we could place a cache further East, West, North, and South than any other traditional cache in Michigan. We were able to do so with a little work and a lot of gas money. We are adding a $150.00 challenge to this series to spice things up a bit. Part one of the challenge is this: the first to send us 4 pictures of you, or your team, with each of the 4 caches will be sent a $50.00 gas card. You can just post the picture with your log. Teams must consist of husband-wife, boyfriend-girlfriend, kids or immediate family. In other words, no 20-member teams. Part two is even more fun. If you can place a cache that beats our compass point(s) in Michigan, we will reward you with a $25.00 gas card for each point. One reward given per point. The rules are: 1) Must be a traditional cache and adhere to GC placement guidelines. 2) Must not be a water cache. Dry-land caches only. 3) No micros or nanos (real caches only - Haha). 4) Must have my wife's name (Rita), my mother's name (Carolee) or our owner name ("we two want to play too" - yes, it's long) in the cache title. Hey, they are my rules! 5) Must beat our GPS reading by 2 points (MinDec) (8-12 feet) in the related direction. This is so we don't get any 'creative' reporting of where a cache is located. 6) You must have at least one cacher confirm that the coords are good. Your FTFer can put it in their log. 7) No other rules, as I didn't plan on there being this many until I thought about it. ~Scott Note of interest: mainland Michigan is 5 miles wider than it is high - 404 miles east to west and 399 miles north to south. Congrats to Super Fly on the FTF! Congrats to SUPER FLY for being the first to finish the series!! 11 weeks & 3 days! Congrats to THEFOOLANDHISQUEEN 11 weeks & 6 days! Congrats to Team CoyChev 0 weeks & 4 days!!
2021-10-26 19:32:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23229169845581055, "perplexity": 4117.707190069974}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587915.41/warc/CC-MAIN-20211026165817-20211026195817-00652.warc.gz"}
https://www.physicsforums.com/threads/orbital-motion.707637/
# Homework Help: Orbital motion 1. Aug 28, 2013 ### d2x I'm given the position vector as a function of time for a particle (b, c and ω are constants): $\vec{r}(t) = \hat{x}\, b \cos(ωt) + \hat{y}\, c \sin(ωt)$ To obtain its velocity I differentiate $\vec{r}(t)$ with respect to time and obtain: $\vec{v}(t) = -\hat{x}\, ωb \sin(ωt) + \hat{y}\, ωc \cos(ωt)$ Now I have to describe the orbit of this particle. I'm quite clear that if b = c the orbit is perfectly circular with constant tangential speed. But if b ≠ c (let's say b > c), is the motion elliptical with b as the semi-major axis? Thanks. 2. Aug 28, 2013 ### Staff: Mentor Yes, the larger value will determine the semi-major axis, the smaller will determine the semi-minor axis of an elliptical trajectory. Your expression for r(t) is one form of the equation for an ellipse.
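A one-line way to see the mentor's point (not spelled out in the thread): eliminating t from the components of $\vec{r}(t)$ gives

$$\left(\frac{x}{b}\right)^2+\left(\frac{y}{c}\right)^2=\cos^2(\omega t)+\sin^2(\omega t)=1,$$

which is the standard equation of an ellipse with semi-axes b and c.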
2018-07-20 09:22:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7294191122055054, "perplexity": 1980.2588214832908}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591575.49/warc/CC-MAIN-20180720080634-20180720100634-00323.warc.gz"}
https://zbmath.org/?q=an:1170.01367
## Borel and the St. Petersburg martingale. (English) Zbl 1170.01367 Summary: This paper examines – by means of the example of the St. Petersburg paradox – how Borel exposited the science of his day. The first part sketches the singular place of popularization in Borel's work. The two parts that follow give a chronological presentation of Borel's contributions to the St. Petersburg paradox, contributions that evolved over a period of more than fifty years. These show how Borel attacked the problem by positioning it in a long – and scientifically very rich – meditation on the paradox of martingales, those systems of play that purport to make a gambler tossing a coin rich. Borel gave an original solution to this problem, anticipating the fundamental equality of the nascent mathematical theory of martingales. The paradoxical role played by Félix Le Dantec in the development of Borel's thought on these themes is highlighted. An appendix recasts Borel's martingales in modern terms. This paper was originally published in French as ["Borel et la martingale de Saint-Pétersbourg", Revue d'histoire des mathématiques 5, 181–247 (1999)]. ### MSC: 01A60 History of mathematics in the 20th century 62-03 History of statistics
2022-10-04 02:59:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.597575843334198, "perplexity": 2481.8659028138286}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00025.warc.gz"}
https://stats.stackexchange.com/questions/194246/urn-with-balls-of-two-colours-with-a-priori-probability-of-each-ball
# Urn with balls of two colours with a priori probability of each ball If we have an urn with $N$ balls of two colours ($D$ red and $N-D$ black balls respectively), then the probability of having $k$ red out of $n$ balls drawn at once without replacement follows the hypergeometric distribution: $Pr(X = k) = \dfrac{\binom{D}{k} \binom{N - D}{n-k}}{\binom{N}{n}}$ Now assume we have some a priori distribution on the balls: $p_i$ – the probability of drawing ball $i$, $i \in \{1, \ldots, N \}$. (Note that it's unrelated to the colours.) Let's make an experiment with drawing balls again. As a result of the experiment we have the following: $P_n = \{p_{i_1}, \ldots, p_{i_n}\}$ – probabilities of the $n$ drawn balls; $P_k = \{p_{j_1}, \ldots, p_{j_k}\}$ – probabilities of the $k$ red drawn balls, $P_k \subset P_n$. Let $\mathrm{Pr}(P_k \mid P_n)$ be the probability of having this result, i.e. having $k$ red balls out of $n$ drawn balls whose probabilities turned out to be exactly $P_k$ and $P_n$ respectively. The exact form of $\mathrm{Pr}(P_k \mid P_n)$ can be written based on the probability of drawing $m$ balls with certain probabilities out of an urn with $M$ balls: $$\mathrm{Pr}(\text{pull m balls with certain probabilities out of M balls}) = \frac{\prod_{i \in \text{drawn}} p_i \times \prod_{i \notin \text{drawn}} (1 - p_i)}{\sum_{\text{all subsets S of size m from M}} \prod_{i \in S} p_i \times \prod_{i \notin S} (1 - p_i)}$$ But this formula has exponential computation time, so it doesn't fit the problem with the settings we will most likely work with: $$N \sim 6000, D \sim 1000, n \sim 2000$$ Because of that, we're interested in finding a function $f$ such that: $$\mathrm{Pr}(P_{k_1} \mid P_{n_1}) > \mathrm{Pr}(P_{k_2} \mid P_{n_2}) \Rightarrow f(\mathbf{k_1}, \mathbf{n_1}) > f(\mathbf{k_2}, \mathbf{n_2})$$ and vice versa, where $\mathbf{k_i}, \mathbf{n_i}$ are the corresponding sets of balls. In other words, we're trying to reduce that scary formula to a simpler one while retaining this «comparator» property. Note that the absolute value doesn't matter: only comparison is required. Do you have any idea of how to reach our goal? We appreciate any thoughts about the solution. • Please tell us what you intend to happen to the probabilities after some balls are withdrawn. After all, if you remove $k$ balls sequentially, then after the first ball is removed the remaining probabilities no longer sum to unity. Exactly how, then, are we to determine the chances for each of the $n-1$ remaining balls? If you don't remove the $k$ balls sequentially but take them out as a group, how are we supposed to determine the chances of each of the $\binom{n}{k}$ distinct possible groups? – whuber Feb 5 '16 at 19:41 • We draw balls at once. Thanks for the notice, I'll fix the statement. If we draw $n$ balls out of $N$ having some probabilities on the balls, then the probability of a certain set of $n$ drawn balls is: $$\frac{\prod_{i \in \text{drawn}} p_i \times \prod_{i \notin \text{drawn}} (1 - p_i)}{\sum_{\text{all subsets S of size n from N}} \prod_{i \in S} p_i \times \prod_{i \notin S} (1 - p_i)}$$ – Ivan Arbuzov Feb 5 '16 at 20:24 • And that, more or less, is the full answer to your question: you just have to sum those probabilities over all subsets corresponding to any event of interest (such as having exactly $D$ red balls). Except in extreme cases (such as samples of $1$ or $N$) the sum doesn't simplify. – whuber Feb 5 '16 at 21:04 • Okay, I understand it. But finally we don't need the exact probability.
Instead, we need just to compare results by their significance (or by proportional value). So maybe it's possible to reduce the problem to something else saving comparator property? Like to find function $f$ of set of balls, such that $$Pr(\mathbf{S_1}) > Pr(\mathbf{S_2}) \Rightarrow f(\mathbf{S_1}) > f(\mathbf{S_2})$$ and vice versa. – Ivan Arbuzov Feb 5 '16 at 21:36 • This is getting hard to follow, because you haven't described a setting in which statistical significance has any meaning: so far, you're only asking about computing probabilities. There's no null hypothesis in evidence, for instance. If you are seeking approximations of these probabilities, it will be important to describe the likely sizes of $n$, $k$, $N$, and $D$, as well as the distributions of the $p_i$, because those things determine which approximation methods will work best. – whuber Feb 5 '16 at 21:39
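To make the formula in the question concrete, here is a tiny brute-force sketch (my own illustration, not from the thread) that enumerates all size-n subsets of a small urn and evaluates the stated ratio. It is only feasible for toy sizes, which is exactly the exponential blow-up the question is worried about.

```python
# Brute-force evaluation of the probability formula from the question,
# for a toy urn. Only practical for very small N (the point of the question
# is that this enumeration explodes for N ~ 6000).
from itertools import combinations
from math import prod

def subset_weight(p, subset):
    """prod_{i in subset} p_i * prod_{i not in subset} (1 - p_i)"""
    chosen = set(subset)
    return prod(p[i] if i in chosen else 1 - p[i] for i in range(len(p)))

def prob_of_drawing(p, subset, n):
    """P(exactly this size-n subset is drawn), per the formula in the question."""
    total = sum(subset_weight(p, s) for s in combinations(range(len(p)), n))
    return subset_weight(p, subset) / total

# Toy example: 5 balls with unequal draw probabilities, draw n = 2
p = [0.1, 0.3, 0.2, 0.25, 0.15]
print(prob_of_drawing(p, (0, 2), 2))
```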
2019-06-16 12:51:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8916428089141846, "perplexity": 344.2771948345297}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998238.28/warc/CC-MAIN-20190616122738-20190616144738-00406.warc.gz"}
http://campercom.it/jkzt/diy-yagi-antenna-calculator.html
# Diy Yagi Antenna Calculator They are also simple to build, IF you can be precise. The first design is actually to short to be called a real 6 elements monobander. com offers the best Yagi antenna products online shopping. the design and implementation of a high-gain, compound Yagi antenna to operate in the VHF and UHF bands. SWL antenna) back in the summer of 1971. Yagi builders strive to have the antenna’s input impedance at 50 ohms to match the 50-ohm coaxial cable. 6dBi from this antenna! Rear mounted 5 element version pictured Another impressive design from G0KSC, 'The Quad has See more. Looking for 10 element yagi antenna calculator ? Here you can find the latest products in different kinds of 10 element yagi antenna calculator. 200 4u1un united nations 28. This is a true unbalanced antenna, with a feed. This post highlights some of the technical and intellectual property points concerning the Loop Fed Array (LFA) high performance Yagi-Uda antenna designs created by Justin Johnson (G0KSC)[1] and his computer technique using NEC and self-refining Particle Swarm Optimization and other Genetic Algorithm techniques. DIY: Build a 70cm Band Yagi for Amateur Satellite Tracking. FREQUENCY-A dipole antenna will not only work well on the frequency it is cut for, but also for the multiples of that frequency. Extended Double Zepp – Longer than a dipole. The main thing with a cantenna is to have a 1/4 wavelength distance between the antenna in the can and the back wall and the antenna in the can shall be a 1/4 wavelength. Design notes: * The width of any antenna boom is included within the length of each dipole element, as one would do for any ordinary dipole or Yagi-Uda antenna. Note that the arrow satellite yagi has the two antennas on the same boom, but mutually perpendicular -- this minimizes interaction between them. (I'm not sure if the minus has any effect here). 23 cm helical antenna 23 cm helical antenna. DIY ANTENNA YAGI 3G/HSDPA ( English version ) Previously I had to modify the 2. To construct this antenna, print the diagrams given above. The larger the wire, the wider the bandwidth. Here we have collected some links to building instructions for 11m antennas on the Internet. 63 – Wire-Beam Antenna for 80m. Folded dipole fully insulated from boom. 6dBi from this antenna! Rear mounted 5 element version pictured Another impressive design from G0KSC, 'The Quad has See more. The Tupavco TP512 2. I have made my own Yagi antenna and I would like to see if it works or not. A Simple Seven Element Yagi Antenna. Tape Measure Yagis (so named because the elements are made from a metal measuring tape) have become very popular in the last few years, especially for RDF and EMCOM work, because they're simple to make, inexpensive, lightweight, portable, and easily stored. Bending the Elements 5. Having said that, you should obviously try to reproduce the dimensions given in the antenna design details as closely as possible. 11b Horn Antenna Designer: Antenna calculator Wi-Fi Antenna Extension Loss Calculator : Antenna calculator dipole antenna calculator, Yagi antenna calculator, Find the length of a dipole, 3 element yagi : Antenna calculator dipole antenna calculator, Yagi antenna calculator: Antenna calculators Folded dipoles, wire antennas, Design your own dipoles, Design your. Easy to Build WIFI 2. I used EZNEC software - RF SIM 99 - Yagi Calculator by John Drew VK5DJ. 
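As a quick numerical illustration of the quarter-wavelength rule of thumb for a cantenna mentioned above (my own example; the 2.45 GHz Wi-Fi centre frequency is an assumption, not a value from the article):

```python
# Free-space wavelength and quarter-wave dimensions for a 2.4 GHz band cantenna.
# The 2.45 GHz centre frequency is an assumed example.
C = 299_792_458  # speed of light, m/s

def quarter_wave_mm(freq_hz):
    """Return (wavelength, quarter wavelength) in millimetres, free space."""
    wavelength_mm = C / freq_hz * 1000
    return wavelength_mm, wavelength_mm / 4

wl, qw = quarter_wave_mm(2.45e9)
print(f"wavelength ~ {wl:.1f} mm, quarter wave ~ {qw:.1f} mm")
# -> roughly 122 mm and 31 mm: probe length and probe-to-backwall spacing
#    as a first approximation (a real can design also involves the guide wavelength).
```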
Introduction Since it is very difficult to get a commercial VHF-UHF portable antenna in my town (like the famous Arrow), I decided to build a homebrew antenna as I did it in the past with most of my old HF and VHF antennas. I decided to build a copy of a very successful commercial antenna which utilized 2 elements each on the 10M, 15M and 20M bands. 2) Go to the Gain VS Height above ground. nec, gsm-1800_6element_yagi. 200 zs6dn wingate pk s. This link is listed in our web site directory since Monday Nov 20 2017, and till today "7 Element Yagi Calculator" has been followed for a total of 2015. Find a clear area, free of obstructions. Ahh, the good old quarter wave ground plane! This calculator can be used to design a Quarter Wave Ground Plane antenna, with radials. A lot of pages (technical info), have been removed. VK5DJ's YAGI CALCULATOR Yagi design frequency =787. This antenna is a fairly easy to construct antenna and will give you better reception on the frequency it is cut for. The 17-element LPA is. Horn Antenna Calculator is a software tool for the simulation of the radiation pattern of horn antennas. It out performs any rubber ducky or whip antenna. 5 metres long, gain 13. A Quagi antenna is a variation on the venerable Uda-Yagi, which dates back 1926. The Tupavco TP512 2. Yagi Calculator is a Windows program that also runs well on Linux, Ubuntu 8. Easy-to-build HDTV antennas. This antenna is optimized for a single channel, though it will often work acceptably on others. 200 zs6dn wingate pk s. You may want to combine it with a folded Dipole. The yagi I chose to build has a parasitic reflector nine feet behind each dipole. It is very easy for the amateur to build using materials such as clothes hangers or copper wire used for house wiring. Whether you're operating on 2 meters or you work in the 1. Yagi Calculator by John Drew, VK5DJ to develop DL6WU style yagis for VHF/UHF. Trim paper clips to size and glue them to the template. The following is taken from Antenna Theory and Design by Stutzman and Thiele. I built a Yagi for 3G mobile data out of a wooden boom and aluminum rods(got it somewhere in my junkbox). Folded Dipole Calculator. 4 GHz Wifi antennas that is about 2. However you can build one for around £20. A Quagi antenna uses the same strategy as a Uda-Yagi, using a refltector, a driven element, and then a number of director elements. 349 m) long portable antenna was designed by VE7BQH for W0PT. A lot of pages (technical info), have been removed. Stay here to learn how to design a 3 element yagi for whatever frequency or wavelength you want. Nick sent in this great build for improving your WiFi connection. 46 dBd: Yagi, 13 elements: Max 100 W: PA1296-18-1. Long yagis are commonly used from the 144MHz amateur band to the 2. Can be multiband with traps. The coax conductors are connected to the tape elements by being (1) greased, (2) sandwiched between aluminum tape, and (3) compressed with several layers of tightly. 13 feet ( 6. loop antennas, there are a few new ideas to grasp. Discussion in 'Antennas, Feedlines, Towers & Rotors' started by MM0XKA, Jun 27, 2018. HYS Yagi Antenna Dual Band VHF/UHF 144/430Mhz 2Meter 70CM 100W High Gain 9. but on the UHF site ,have to test with my FT897. This method of construction can be used on most UHF through "low" microwave Yagis, and is especially useful for the 33, 23 and 13 cm bands; These antennas were made with this process, and tested on this back-yard antenna range. DIY Yagi-Uda antenna Frequency: 430 MHz. 
It is perfect for the directional as well as multi-point applications that support the bandwidth of 2. com forums, I finally built a Yagi that works! … Continue reading "Homebrew GMRS 3 Element Yagi". I also consulted a Yagi antenna calcula-tor ties. 4 will give a close enough length for those like me who prefer to work in inches. THANKS!! Reply Delete. " The reflector is on the far left in the picture above and the. The larger the wire, the wider the bandwidth. popular-communications. Wide-band performance normally extended further than your required limits, most bands being covered end to end. G0KSC designed 3el, 40M & 6el 20M (Force 12) This antenna uses the G0KSC OP-DES design on both 40m and 20m to provide full-band coverage on each. In many cases 28Ω antennas are a better choice). The boom material typically is wood, 1/2" by 3/4" sold in home centers as "flat pine molding". Another ham brought to my attention, that I should have indicated if this yagi antenna calculator is built with the Yagi beam elements isolated from the boom or not isolated. It will not compete with a good high-gain directional antenna, but it sure beats rabbit ears. The BiQuad is a fairly simple to build 11dbi antenna that puts 99% of it's beam right where you need it - out front. I'm a bit confused because i found many of them online, all giving me different measurements. You can think of the antenna as an optimized 5 elements. Note : It is quite possible, that other calculators deliver slightly different results. Can be multiband with traps. Benefits of an OWA design: Very low and flat SWR curves resulting in minimal return losses. DIY Yagi-Uda antenna Frequency: 430 MHz. Antenna Calculator Calculate your dipole, 3 element yagi etc. The measurements below are for building a simple Dipole Antenna. This antenna project provides good performance because you build it exactly for the single TV channel or frequency you want to receive. Coat Hanger / Copper Wire 2 Meter Yagi Antenna 2 Meter Yagi Antenna w/ Gamma Match Dipole and Inverted V Antenna Basics 10/15/20m Trap Vertical Antenna Dave Tadlock helps you build a home brew trapped vertical Inverted V Calculator thanks to k7mem Martin E. 10 under Wine) to produce dimensions for a DL6WU style long Yagi antenna. Ask Question Asked 5 years Viewed 11k times 3. As an amature I read this channel is 180-186 mHz. The Yagi-Uda (or yagi'') antenna is the antenna that one most often associates with a TV antenna on a roof top. Practical Construction. However you can build one for around £20. Based on Rothammel / DL6WU. director, dipole and. It was installed on a short mast about 15 to 17 feet. The first step to installing a Yagi antenna is determining the best place on the building to mount it. FREQUENCY IN MHz. With the assumption that you already know How To Build An HDTV Bowtie, there will be a modification to this antenna. Simply enter the freqency in Megahertz and the script will do the rest. Background: In the solar maximum years 28 MHz is full of DX stations, that can be worked with very simple equipment. The cheapest way I have found to build this antenna is to find an old Moonraker 4. Enter the desired frequency then click on Calculate and the optimum values for that combination will be displayed in feet, inches and fractions of inches, and in meters. It contains many calculators to help you design radios and antennas for all your projects. I couldn't make the elements accurate and this is why I couldn't connect. This is the most common design for Yagi antennas. 
These parasitic elements are called the "reflector"and the "directors. It consists of a main antenna, called the driven element, and a set of auxiliary antennas, known as. I am a brand new ham and your dipole has become my first fixed antenna. The dipole elements are threaded on the outside. For this antenna I bought an square bar (15x15mm), made of aluminium, length 1 meter. A required feature is to have a Feed where a user can see the latest posts on topics they are following. The antenna I constructed was made of 1/2" tubing. I will make a video tonight, to demonstrate how effective this Yagi Uda adapter antenna really is, especially the one I made, 11. Here we have collected some links to building instructions for 11m antennas on the Internet. This is a wire version of a 2 element yagi antenna using a single feed point, no traps, and no matching circuits. The larger the wire, the wider the bandwidth. No loss of flight area due to sides lobes, compact design and many mounting capabilites make this an excellent upgrade to your patch or 9dbi Yagi. (3/2 λ) doublet with 31 (1/4 λ) ft of ladder line, then fed with coax. There will be versions for both 145 and 435 MHz. Wide-band performance normally extended further than your required limits, most bands being covered end to end. 10 under Wine) to produce dimensions for a DL6WU style long Yagi antenna. If you decide you really need a yagi with more elements, here are the lengths of them. If you can run a wire from your classroom outside to the roof of your school or other suitable location to set up your antenna, a three element Yagi. The hard mathematical design work for the wifi antennas shown here was accomplished elegantly using a yagi antenna modeler, created by Kevin Schmidt (W9CF) and Michael Lee. ) within a radius of about 10000 km or even more if the propagation is very good (during the maximum of the solar cycle), this design is not as efficient as an rotatble Yagi antenna cut at the working wavelength. 6 mm long at boom position = 30 mm (IT = 90. Still, it's fun on a stick!!! And I learned a good bit researching how to build it!. html AWG Conversion algorithm from. A folded dipole is an efficient TV antenna that is easy to construct. VK5DJ's YAGI CALCULATOR Yagi Design Frequency - Journal VK5DJ's YAGI CALCULATOR Yagi design frequency =787. I have made my own Yagi antenna and I would like to see if it works or not. Remember, if you're forced to use different diameter tubes, is maybe better to run computer again. This DIY antenna is very easy to build with just a few basic tools and a few supplies available from the hardware store. This sketch shows a semi-exploded view of the antenna. A yagi antenna is basically a telescope for radio waves. This antenna is made for the DX'er thatís looking for an antenna to work across the globe. this antenna come alive , become a DUALBAND antenna , and tested 2m with my MFJ-259B 144-148 not more than 1. The antenna calculator (link showin in a previous step) indicated that the bases of the elements needed to be 57-1/2″ down from the top of the 1/2″ EMT conduit piece on my antenna. The stainless rods can be a bear to cut but they make a very stout antenna. It out performs any rubber ducky or whip antenna. Antennas" in the April and May 1985 issues of "Ham Radio". I have looked on google and antenna elmer with no luck all designs for the ham band. 
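For readers who just want ballpark starting dimensions before running one of the calculators named in this article, a common rule of thumb (my own summary, not the DL6WU or VK5DJ design tables) is: driven element about 0.47 λ, reflector roughly 5% longer, directors roughly 5% shorter, with element spacing around 0.15 to 0.2 λ.

```python
# Rough starting element lengths for a small Yagi, using common rules of thumb
# (driven ~0.47 wavelength, reflector ~5% longer, directors ~5% shorter).
# First-cut numbers only; refine with a proper modelling tool such as those above.
C = 299_792_458  # m/s

def yagi_starting_lengths(freq_mhz, n_directors=1):
    wavelength = C / (freq_mhz * 1e6)          # metres
    driven = 0.47 * wavelength
    reflector = driven * 1.05
    directors = [driven * 0.95 for _ in range(n_directors)]
    return {"reflector_m": round(reflector, 3),
            "driven_m": round(driven, 3),
            "directors_m": [round(d, 3) for d in directors]}

print(yagi_starting_lengths(146.0, n_directors=1))  # 2 m band example
```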
) within a radius of about 10000 km or even more if the propagation is very good (during the maximum of the solar cycle), this design is not as efficient as an rotatble Yagi antenna cut at the working wavelength. Pasternack's Microstrip Patch Antenna Calculator determines the length and width (in millimeters) of a rectangular patch antenna. The grid improves the Gain, Front to Back Ratio, and Return Loss. If mounting it outside, be sure to wrap the N-connector appropriately in self-amalgamating tape to prevent any moisture ingress. This design, pioneered by Justin Johnson, G0KSC, claims lower noise and a better 50Ω impedance match than conventional. In addition to the foam core board and copper tape, you'll also need a really cheap balun transformer (which you can get at any electronics store), two wood screws (or solder) to attach it, and a coaxial cable to connect it to. Build your own HDTV TV Antenna to cut the cord from your cable provider and save $1000 a year. The DL6WU Yagi is highly regarded as being easy to build with repeatable results, broad bandwidth and a useful pattern. Turnstiles and Satellites For more than decades, many fixed-position satellite antennas for VHF and UHF have used a version of the turnstile. Yagi Calculator. They are also simple to build, IF you can be precise. It out performs any rubber ducky or whip antenna. 64 – Dual-Band Sloper Antenna. I decided on a 3 element YAGI built for GMRS that is directly fed with 50ohm coax. Mark and carefully drill holes in the boom. The directors are pretty short at TV Channel 14 and are helping a little bit, but not much. They are on same height. Additional benefits of patch antennas is that they are easily fabricated. 73, Keith WB2VUO. Here is a simple antenna calculator for two popular forms of ham radio HF wire antennas: the horizontal dipole and the inverted "V". BUILD your YAGI ANTENNA ! Following the frequent demands on the topic here is detailed with picture the construction of an Yagi antenna calculated for the FM broadcasting band 88 - 108 MHz. For longer runs, use an ultra- low loss cable, such as Belden 9913, to reach the antenna site, then use RG-8X up the TV mast and around the rotor. Benefits of an OWA design: Very low and flat SWR curves resulting in minimal return losses. The Wire Size can range from 16 AWG to 12 AWG. Phasing harness for Yagi antennias - 144Mhz and 440Mhz -. Yagi Antenna - These are my favorite directional antenna. 62 – Dual-band Loop Antenna for 30m – 40m. Fractals A fractal is a rough or fragmented geometric shape that can be subdivided in parts,. They feature a die-cast aluminum reflector grid that is corrosion resistant and mounted on the rear of antenna. Remember, if you're forced to use different diameter tubes, is maybe better to run computer again. The same idea (CPVC and foil tape) may be employed to build small yagi antennas also. 200 4s7b sri lanka 28. This antenna is used in a wide variety of applications where an RF antenna design with. Whether you're operating on 2 meters or you work in the 1. And provide you follow The details and diagrams below you will end up with a very efficient antenna. The Antenna Farm : 2 Meter Yagi Antennas - VHF & UHF Mobile Radios Radio Accessories VHF & UHF Hand Held Radios Antennas Mobile Antenna Mounts SWR/Power Meters Adapters Coaxial Cable Two Way Accessories Antenna Accessories DC Power Supplies Coax Cable Accessories Connectors Aviation Radios Repeater Systems Towers & Accessories Duplexers Diplexers & Triplexers VHF & UHF Base Stations Base. 
I've taken what I learned from the GMRS … Continue reading "Homebrew 5 Element VHF Yagi". Benefits of an OWA design: Very low and flat SWR curves resulting in minimal return losses. Antenna Calculator. Ahh, the good old quarter wave ground plane! This calculator can be used to design a Quarter Wave Ground Plane antenna, with radials. Repeat this process until you’ve aimed the outside antenna. It worked great for my needs. low-noise receiving antenna (immune to terrestrial and man-made noise), due to symmetry of the antenna (the terminating resistor may help bleed-off static charge build-up that causes noise, and as it basically is a flattened loop, the noise characteristics may have some similarities) low SWR across the entire frequency range, no tuner needed. Radio signal reception is poor in general in the area where I live. com September 2008 / POP'COMM / 23. The yagi antenna is a directional antenna with multiple elements placed one after another. Ideal for point to point, short or long link applications. The HB9CV-Beam is a 2-Element-Yagi with two driven elements and was introduced by Rudolf Baumgartner, HB9CV, in the 1950ies. My 3 element 28 MHz yagi. I just wanted to drop you a note and thank you for the plans for the dipole antenna. 5 SWR I think it is still ok la. 17-Element Very High Frequency/Ultra High Frequency Log Period Dipole Array: The 17-element VHF/UHF LPA antenna was placed in service on 26 May 2010 (see pictures below) and removed from service 1 June 2011. Homemade portable 1800mhz 4g LTE signal booster || even worked in no network village || AMAZING - Duration: 8:56. Follow the measurements on the diagram including spacing between the elements and when built successfully it will give you a very desirable 11 dBi gain or about or about 8. 6dBi from this antenna! Rear mounted 5 element version pictured Another impressive design from G0KSC, 'The Quad has See more. InnovAntennas use the very latest in electromagnetic computer design technology in conjunction with Particle Swarm Optimisation methods (considered to be the best in optimisation technology today) to produce some of the most innovative and high performance antenna solutions available today. Because of this it's also sometimes called a Beam antenna. Yagi Calculator by John Drew, VK5DJ to develop DL6WU style yagis for VHF/UHF. 7-Element-Yagi for the 2m-Band with the 28-Ohm-DK7ZB-Match. Whip Antenna Design Calculator. USING THE VELOCITY FACTOR OF COAX TO DECREASE SIZE OF ANTENNA. " Update: 05/25 23:09 GMT by T : Reader John Stockdale offers a U. You can further reduce the height of the antenna by placing a capacitance "hat" at the top. But in short, a Yagi-Uda antenna is a directional antenna designed to focus on a fairly narrow range of radio, radar, or television signals. the dipole is an essential element of the YAGI antenna. 110MHz , for obvious reasons, and the dimensions given reflect this but before you take a hacksaw to metal, read the update paragraph. We got a cheap 2. Product Description. And provide you follow The details and diagrams below you will end up with a very efficient antenna. Folded Dipole Calculator. A computer optimized UHF 70 cm Yagi antenna for radioamateurs, transmitting and receiving on 420 Mc - 430 Mc - 440 Mhz to 450 Mc with this beam in high gain Db's! Home made UHF Yagi Antennas RE-A430Y10. All you need is two rabbit ear antennas from Radio Shack, two CATV baluns, four feet of 3/4″ CPVC pipe with one tee, and a bit of time. 
I used this calculator to build a dipole antenna for the CDMA 450 network operated by Ice. php on line 38 Notice: Undefined index: HTTP_REFERER in /var/www/html/destek. It has a nominal 3 dB gain over an isotropic source and is directional, tending to favor signals broadside to the wire. Long yagis are commonly used from the 144MHz amateur band to the 2. This fast and reliable software allows obtaining the radiation characteristics of a variety of horn antennas, including: pyramidal, conical, corrugated, diagonal, and dual-mode (Potter) topologies. yagi antenna design calculator. Antenna was designed by Kent Britian, WA5VJB, and full credit (and blame) belongs to him on this design! This antenna is ideal for rovers, and is good for a fixed station on a budget. 63 – Wire-Beam Antenna for 80m. 2 SWR and gain of 10 db. The type and size of the bolt will depend on the diameter of the antenna. Next, plug the antenna into your computer using the extension. I have built a number of Yagi antennas for the VHF/UHF bands, including air traffic control, FM broadcast and military bands. The antennas were first built in 1983 following the DL6WU design. Magnet Mounts – Trunk Lip Mounting Kits – Mirror & Luggage Rack Mounts. DIY Yagi-Uda antenna Frequency: 430 MHz. A simple dipole antenna can be used for improved FM broadcast signals. Noisier than horizontal antennas Easier to hide in antenna restricted areas (can be disguised as flagpole or be a single wire in a tree). VHF-UHF Log Periodic Antenna-North Yagi Antenna. The directors are pretty short at TV Channel 14 and are helping a little bit, but not much. The Yagi-Uda antenna is probably the most commonly recognized directional antenna in existence today. But in short, a Yagi-Uda antenna is a directional antenna designed to focus on a fairly narrow range of radio, radar, or television signals. The correct lengths for the various models will be displayed in the chart. (Nine feet was a compromise for dual band operation. THE TECHIE TECH 1,286,101 views. The grid improves the Gain, Front to Back Ratio, and Return Loss. Enter the desired frequency then click on Calculate and the optimum values for that combination will be displayed in feet, inches and fractions of inches, and in meters. basics of fixed antenna satellite work and develop a simple antenna system suited for the home workshop. Larger C/Λ tightens the radiation pattern at the expense of circularity. If you can find a Moonraker 4 that has at least the Boom and hubs (that are still ok to use) you are halfway home. The word “turnstile” actually refers to. Evercom specialized in design and manufacture of WIFI, MIMO, GPS, UMTS, DVBT, UHF/VHF, GSM antenna, CB antenna mount, and accessories. Bolt the L-bracket to the antenna's driven element. I would say that an antenna like a cantenna [napoliwireless. Making a Bending Jig 3. I made the antenna based of these instructions: Easy to Build WIFI 2. Build your own Three Element Yagi The calculations for these antennas are from N3DNO's Antenna Calculator. An aerial depending on its design has not only gain and direction but it also has a property to reduce signal, this is called the front to back ratio. If you want to put this thing outdoors, do not use brass, as it. Homebrew 2 Meter Yagi Beam Antenna. I came across the website of Derek Hilleard, G4CQM, containing a wonderful assortment of details to help make your next Yagi-Uda project antenna a success. 
Choose from quarter-wave, half-wave, the powerful 5/8-wave, 3/4-wave, or full-wave, and calculate minimum lengths of required radials, and then shop for your needs on our other pages and select from our inventory of aluminum, fiberglass, wire, coax, connectors, and other parts and. † The terminating stub $$Z_\text{term}$$ is only required when the characteristic impedance of the feeder $$Z_\text{c,feed}$$ is low. The attached instruction sheet is courtesy of WA5VJB who designed these antennas. It can be built for around$15-20 (1999 dollars). BUILD your YAGI ANTENNA ! Following the frequent demands on the topic here is detailed with picture the construction of an Yagi antenna calculated for the FM broadcasting band 88 - 108 MHz. The type and size of the bolt will depend on the diameter of the antenna. Note: The dBi scale is logarithmic in base 10, where +3 dBi is a doubling in gain!. Antenna gain is measured in either dBi or dBd. Figure 3 shows my 4 element quad built with old Moonraker 4 parts on a 40 foot tower. For respect this condition, the dimension of the dipole is accurate. 00 MHz Wavelength =381 mm Parasitic elements contacting a round section metal boom 18 mm across. Here we have collected some links to building instructions for 11m antennas on the Internet. Obtain a suitable length of fiberglass or wooden boom material, plus enough copper rod or solid wire for the elements. php on line 38 Notice: Undefined index: HTTP_REFERER in /var/www/html/destek. HYGAIN TH-3JRS Yagi antenna, 10/15/20m, 3 element or having us build one for you. NOTE: This antenna differs in element mounting, in that elements are solid rods and mounted through the boom. fractal antenna analysis picks up (where classic theory lets off) with the spiral and the log-periodic structures. BUILD your YAGI ANTENNA ! Following the frequent demands on the topic here is detailed with picture the construction of an Yagi antenna calculated for the FM broadcasting band 88 - 108 MHz. After build-ing them, I tested them on an antenna range and found that the measured performance was very close to the expected results. yagi antenna design calculator. While there are plenty of high performance antenna designs,. Also you could subsitute this yagi for a properly tuned 2 Meter Yagi with 2-5 elements and tune the 2meter Yagi for good SWRs on both the 2m and 440 bands. Homemade portable 1800mhz 4g LTE signal booster || even worked in no network village || AMAZING - Duration: 8:56. a gain antenna. Find a clear area, free of obstructions. Product Description. Build your own HDTV TV Antenna to cut the cord from your cable provider and save $1000 a year. The antenna was placed 5m above the roof top. However, a Quagi constructs the reflector and the driven elements as "quads" rather than as linear elements. This antenna project provides good performance because you build it exactly for the single TV channel or frequency you want to receive. A dipole is basically a length of conductor (wire) split into two portions and signal is taken off at the split. I bought a commercial Yagi antenna (from Ice. cally a two-element Yagi. Elements are 1/4" Solid Aluminum. Because the log distribution, just small difference from the calculated distances between dipoles, can dramatically affect the gain of the antenna, when is not the same case for a Yagi. Using the DL6WU formulae, this results in a 14 element yagi, with a gain of over 13 dBd. A while back I bult a 1. 
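The classic shortcut behind the simple dipole and vertical calculators mentioned here is the 468/f rule: a half-wave wire dipole is roughly 468 / f(MHz) feet end to end, and a quarter-wave element is half that. A minimal sketch of that calculation (my own; 7.1 MHz is an assumed example frequency):

```python
# Half-wave dipole and quarter-wave lengths from the common 468/f rule.
# 468/f(MHz) already includes a typical end-effect shortening for wire antennas;
# trim to resonance with an SWR meter or analyser in practice.

def dipole_feet(freq_mhz):
    total = 468.0 / freq_mhz       # total half-wave length, feet
    return total, total / 2        # (full dipole, each leg / quarter-wave element)

total, leg = dipole_feet(7.1)      # 40 m band example
print(f"half-wave dipole ~ {total:.1f} ft, each leg ~ {leg:.1f} ft")
```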
This calculator is designed to give the critical information of a particular beam antenna, in this case a three element Yagi, for the frequency chosen. • Build what you would like as long. This antenna is optimized for a single channel, though it will often work acceptably on others. Yagi Calculator is a Windows program that also runs well on Linux, Ubuntu 8. 3G/HSDPA Coverage Area Since not all complete in every area. All you need to do is enter the desired resonant (center) frequency in the form below, then click "Calculate". Here is my first reliable and still working Yagi antenna for my radios PMR446, 14 elements, 2. The central frequency used for our calculations is 100MHz. Yagi Designer 2. Honestly, its pretty technical. The yagi antenna design is extensively used to increase the directionality of an antenna. For respect this condition, the dimension of the dipole is accurate. 700 amateur radio topics - 6,000 links & 133 pages - from antennas to zones NOTE: AC6V. Yagicad is a fully integrated analysis and design package primarily intended for VHF yagi aerials. A folded dipole is an efficient TV antenna that is easy to construct. Introduction Since it is very difficult to get a commercial VHF-UHF portable antenna in my town (like the famous Arrow), I decided to build a homebrew antenna as I did it in the past with most of my old HF and VHF antennas. Yagi wifi antennas can be rather difficult to build, but it can be done if you measure precisely and cut precisely. the design and implementation of a high-gain, compound Yagi antenna to operate in the VHF and UHF bands. These are perfect for those areas where TV towers tend to be in one direction. Precision is everything with antennas. net? I made two of those antennas so far. Yagi Antenna Designs calculator budget, antennas in the On 70cm, this is an antenna about 1m long. VHF-UHF Log Periodic Antenna-North Yagi Antenna. Making a Bending Jig 3. 65 – Inverted-V Beam Antenna for 30m. Monoband – Dualband – Triband – Repeater – Discone – Yagi. Instead, I have opted to build the VHF HDTV antenna as a mono bander yagi. It is used at some surface installations in satellite communications. This DIY antenna is very easy to build with just a few basic tools and a few supplies available from the hardware store. The next step up in antenna gain and directionality is the Yagi-Uda design with a driven element, a reflector and numerous directors, all mounted on a long horizontal bar. Amateur Radio Toolkit is the best app for ham radio hobbyists. In this menu, you need to submit values of multiple. Next, rotate the outside antenna 45 degrees or 1/8th of the way around. VK5DJ's YAGI CALCULATOR. I am a ham operator and soon plan to get my GMRS license. 92 MHz Yagi Antenna - Unknown Author 4 element yagi easy to build and feed directly with 50 ohm coax. The radiating element is a quarter wave (λ/4) and the radials are 12% longer. 3 element Yagi Antenna Calculator | Yagi Antenna Calculator. The home made yagi antenna with a soldered cable to to link wr740n was catching at 1km. Wire Antenna Calculator. For longer runs, use an ultra- low loss cable, such as Belden 9913, to reach the antenna site, then use RG-8X up the TV mast and around the rotor. Since my article on Simple Wideband Yagi’s appeared in September AR I have had a number of people contact me advising that it would be better to use the grey electrical conduit rather than the orange version I did. Holding this in mind, for 2 meters the shortest harness would be approx. 
It has superior wind loading characteristic. The calculators for other antenna types such as parabolic,horn,dipole and patch are also mentioned. build a tuned antenna for one VHF Channel I would like to pick up a Channel 8 (real) that is 40. The DL6WU yagi is highly regarded as being easy to build with repeatable results, broad bandwidth and a useful pattern. Discussion in 'Antennas, Feedlines, Towers & Rotors' started by MM0XKA, Jun 27, 2018. To maximize gain, the boom length is set to 2. Good for tropo or FM repeater work. The Antenna Farm : 2 Meter Yagi Antennas - VHF & UHF Mobile Radios Radio Accessories VHF & UHF Hand Held Radios Antennas Mobile Antenna Mounts SWR/Power Meters Adapters Coaxial Cable Two Way Accessories Antenna Accessories DC Power Supplies Coax Cable Accessories Connectors Aviation Radios Repeater Systems Towers & Accessories Duplexers Diplexers & Triplexers VHF & UHF Base Stations Base. I first created an inverted vee in free space that resonated at 7. Here i present all steps to build this antenna. A dipole is basically a length of conductor (wire) split into two portions and signal is taken off at the split. This type of antenna is popular among Amateur Radio and Citizens Band radio operators. The HB9CV-Beam is a 2-Element-Yagi with two driven elements and was introduced by Rudolf Baumgartner, HB9CV, in the 1950ies. HAM Antenna Resources and Informations: Six-element 2-meter Yagi beam antenna See more A computer optimized UHF 70 cm Yagi antenna for radioamateurs, transmitting and receiving on 420 Mc - 430 Mc - 440 Mhz to 450 Mc with this beam in high gain Db's!. 10 under Wine, to produce dimensions for a DL6WU style long Yagi antenna. At the heart of every radio and MLA (Magnetic Loop Antenna) is the resonant circuit. 200 zs6dn wingate pk s. 6 mm long at boom position = 30 mm (IT = 90. nec and gsm-1900_6element. Cut your 1/2″ conduit 6″-7″ longer so that you can use the #12 hose clamps to attach the L-bracket physically (and electrically) to it. The focal point on this design is a cylinder. Mictronics - Personal blog about electronic projects, antennas, RF and other stuff. This link is listed in our web site directory since Monday Nov 20 2017, and till today "7 Element Yagi Calculator" has been followed for a total of 2015. 88 ft and 44 ft are popular lengths. What i did understand so far now is: The Input Imedance has a real and an Imaginary part - 6. About Cantenna Calculator The resource is currently listed in dxzone. The measurements below are for building a simple Dipole Antenna. The DL6WU yagi is highly regarded as being easy to build with repeatable results, broad bandwidth and a useful pattern. Dipole Antenna Calculator. Yagi antenna (Yagi-Uda array): A Yagi antenna, also known as a Yagi-Uda array or simply a Yagi, is a directional antenna commonly used in communications when a frequency is above 10 MHz. The antenna I constructed was made of 1/2" tubing. Obtain a suitable length of fiberglass or wooden boom material, plus enough copper rod or solid wire for the elements. "Full Build Yagi Upgrade", which reduces your field assembly time by about 80%, is available on bulk orders of seven (7) or more yagis at the same time. This 50-ohm impedance antenna, when fed with 25-100W of SSB RF at 435MHz, makes reaching Amateur satellites such as AO-40 and AO-10 a snap! "Homebrew" UHF Yagi for Satellites by SV1BSX. We are seeing fractal antenna theory shedding new light on our understanding of classic wideband antennas. 
org based on articles previously published in The AMSAT Journal July 1990 and November-December 1991 and OSCAR News August 1989 and June 1991. " The reflector is on the far left in the picture above and the. I've constructed hundreds of antennae in the past. ARRL Product Review of the M2 6-Meter HO Loop Antennas. Yagi Wi-Fi Antenna 2. YAGI ANTENNA. Enter the desired frequency then click on Calculate and the optimum values for that combination will be displayed in feet, inches and fractions of inches, and in meters. RP-TNC-to-N-male cable for connecting to most routers (a. With his home made 8el U-yagi (28 MHz design) Well done !, and thank you for the beautifull picture. com in 2 categories. Bowtie dipole arrays, biquads, parabolic reflector dipoles, straight up yagi, logarithmic yagi, log periodic, heliacal and simple 1/4 whips. EE434 Yagi Antenna Design Example Spring 2016. This sleek unit boosts and broadcasts the cellular signal received from the yagi directional. 3G/HSDPA Coverage Area Since not all complete in every area. yagi antenna design, yagi antenna calculator, yagi antenna india, yagi antenna for wifi, antenna for wifi, yagi antenna, yagi antenna diagram, fm antenna diy, yagi antenna diy, fm antenna design, FM A ntenna, FM Antenna booster, fm antenna for home, fm antenna for tv, fm antenna for home, fm antenna for receiver, fm antenna for r adio, FM A n. THE TECHIE TECH 1,286,101 views. Print out the scaled Yagi antenna template* (download from next step). 15 element long range WiFi yagi antenna for 802. What is YAGI UDA Antenna? Introduction: The antenna structure which consists of one driven element, one or more directors (i. The DL6WU Yagi is highly regarded as being easy to build with repeatable results, broad bandwidth and a useful pattern. The DX Commander vertical system is based on the same technique as a fan dipole. Dipole Antenna Calculator. com is an archive of Rod/AC6V's webpages, and is no longer being updated. It was installed on a short mast about 15 to 17 feet. Editor's note: Many years ago, the 10 meter version of this 2 element Yagi was built by myself using much the same construction techniques as mentioned in the article using small aluminum tubing. 88 ft and 44 ft are popular lengths. Long yagis are commonly used from the 144MHz amateur band to the 2. But, unlike an. The BiQuad is a fairly simple to build 11dbi antenna that puts 99% of it's beam right where you need it - out front. 4 Element Yagi Antenna Calculator: i1wqrlinkradio. First if that card is inside your laptop then you have no chance. The possibilities are limitless. However, thanks to other hams like Steve AA5TB there are tried and tested designs, calculators & building methods that are known to work and that you can follow. Okay, enough of UHF/GMRS antennas. Optimized 10-element UHF Yagi Antenna. 900 MHz 9 Element Plumber's Delight Yagi Antenna. com/moxon/moxgen. We use an interpolation approach. 6 dBi if antenna without lips). Bits and pieces to build a Yagi for just about any frequency and number of elements. The most common form is the Yagi-Uda parasitic array commonly referred to as a Yagi array or beam. yagi antenna design, yagi antenna calculator, yagi antenna india, yagi antenna for wifi, antenna for wifi, yagi antenna, yagi antenna diagram, fm antenna diy, yagi antenna diy, fm antenna design, FM A ntenna, FM Antenna booster, fm antenna for home, fm antenna for tv, fm antenna for home, fm antenna for receiver, fm antenna for r adio, FM A n. My current structure is (o. 
After several tries I found a winning solution. The focal point on this design is a cylinder. Horizontal or vertical polarization of an cubical QUAD-Antenna is determined by placement of the feed point (feed gap on the driven element) - Parasitic elements need to be mounted in the same orientation as the driven element. At one end I mounted a standard available mast mount, leaving about 90cm for the actual antenna. My spacing tolerances are +/- 0. Building and experimenting with antennas can be an interesting part of the radio hobby. At the heart of every radio and MLA (Magnetic Loop Antenna) is the resonant circuit. In my page you will find several details of manufacture, additional. Note : It is quite possible, that other calculators deliver slightly different results. DIY Yagi-Uda antenna Frequency: 430 MHz. Measurements and comparisons at the bottom of the page. Holding this in mind, for 2 meters the shortest harness would be approx. Bolt the L-bracket to the antenna's driven element. And provide you follow The details and diagrams below you will end up with a very efficient antenna. 50 Ohms, direct fed. FREQUENCY IN MHz. Anyway with 12mm tips instead of 13mm the antenna shifts 100 Khz up, i. About 7 Element Yagi Calculator The resource is currently listed in dxzone. Seven element yagi antenna. Homebuilding of a low cost 28 MHz yagi using parts from ordinary TV antennas. I came across the website of Derek Hilleard, G4CQM, containing a wonderful assortment of details to help make your next Yagi-Uda project antenna a success. Create a cantenna to drastically extend your Wi-Fi signal! Works great with a router that has external atennas, like the old-school classic WRT54G. This page covers 3 element Yagi Antenna calculator. Front to back ratio for both antennas is about 22 dB. This antenna is made for the DX'er thatís looking for an antenna to work across the globe. Long yagis are commonly used from the 144MHz amateur band to the 2. This is a true unbalanced antenna, with a feed. The average backyard engineer can construct a homemade Wi-Fi antenna in. principle of a yagi antenna Back. Radiation resistence is calculated by 1580 * ( antenna height / wave length) * (antenna height / wave length) For example, if you have set frequency as 100Mhz, which has wave length 3M, and the vertical elements height from calculator above is 1. How to Build a Gray-Hoverman Super Antenna Putting it together is easy. Difficult paths that have been overcome by WP'ers building Yagi antennas using the. Looking for 10 element yagi antenna calculator ? Here you can find the latest products in different kinds of 10 element yagi antenna calculator. The Shortwave Antenna Explained I installed my first outdoor shortwave antenna (a. With his home made 8el U-yagi (28 MHz design) Well done !, and thank you for the beautifull picture. Yagi antenna design formula. The cheapest way I have found to build this antenna is to find an old Moonraker 4. This cable is readily available, easy to install, and will perform well for rooftop runs of up to 60 or 70 feet at 50 MHz. 66 – ZL-Special Beam Antenna for 15m. er~jeiglzt antenna. The Wire Size can range from 16 AWG to 12 AWG. Thank's for the input here is what I know. We offer a huge selection of Yagi antennas, including cross-polarized and circularly polarized, from top brands like M2 Antennas, Cushcraft, MFJ, and Diamond Antenna. This is where you input the center frequency in MEGAHERTZ. The antenna's driven ele~nent is bal-. 
Ham radio s by csaba yo5ofh quad antenna for 70cm calculator hb9cv antenna homebrew 70cm quad antenna homebrew arrow antenna On6mu Uhf 4 6 10 Element Yagi Antenna For 70 …. 2 with my Alinco DJ-G7. FREQUENCY-A dipole antenna will not only work well on the frequency it is cut for, but also for the multiples of that frequency. Long yagis are commonly used from the 144MHz amateur band to the 2. The DL6WU yagi is highly regarded as being easy to build with repeatable results, broad bandwidth and a useful pattern. 10 under Wine, to produce dimensions for a DL6WU style long Yagi antenna. The simple Yagi Antenna is shown in the figure-1. Yagi wifi antennas can be rather difficult to build, but it can be done if you measure precisely and cut precisely. Zenith Vant Outdoor Antenna Yagi P43 - VN1ANRY65. We have access to the GSM 850 (824. VHF/UHF Long Yagi Workshop If you want a top-performance VHF/UHF Long Yagi, Computer Modeling 3. 4GHz Yagi antenna which is quite popular these days directs the WiFi beam and improves the signal reception. Conclusion: a 6 elements yagi placed high is a BIG GUN! Short version Boomlenght 7,3 meters Types HPSD 6 Short. At the heart of every radio and MLA (Magnetic Loop Antenna) is the resonant circuit. Parabolic dish antennas are similar to Yagi antennas with a metal grid that deflects signals in the same direction. director, dipole and. Calculate a new element length for the destination yagi construction (using the reactance method outlined above) 3. org The Yagi-Uda antenna--often just called a "Yagi"--is a popular antenna due to its gain, directionality, and relatively lightweight design (see the figure to the right). Our results are optimised for gain, but. The shortwave antenna explained, from how it works to how to choose the type that will best fit your needs and SWL listening environment. I'm a bit confused because i found many of them online, all giving me different measurements. It's a good idea to have your antenna at least one half ( 1/2 ) wavelength of the antenna. Once the antenna has been rotated 45 degrees, the person inside should again wait 60 seconds and then take another set of signal readings. Why A Beam Antenna? by W1ICP Some basic antenna information for the newcomer about Yagi antennas including a tutorial on antenna gain and construction of a 15-meter beam antenna. You can buy it in most DIY shops. This is where you input the center frequency in MEGAHERTZ. The antenna I constructed was made of 1/2" tubing. I like to use the basic DL6WU wideband Yagi designs whenever possible. The Original design. These may affect the radiation pattern Element length and space determined using VK5DJ Yagi. I figured the gain, front/back isolation, impedance, and other stuff that takes a rocket scientist to figure out, (or at least someone smarter than me, HIHI) with YagiMax 2. Long yagis are commonly used from the 144MHz amateur band to the 2. It has superior wind loading characteristic. The elements are reasonably small, but not so small that building tolerances are critical. It consists of a main antenna, called the driven element, and a set of auxiliary antennas, known as. To improve the signal to my Huawei cellular broadband hub, I want to try a yagi antenna. To build the 5 Elements UHF Yagi you may follow the direction for building the 3 Elements Yagi. Design details are shown at the end of this post. nec and gsm-1900_6element. To maximize gain, the boom length is set to 2. 
To build the 5 Elements UHF Yagi you may follow the direction for building the 3 Elements Yagi. I like to use the basic DL6WU wideband Yagi designs whenever possible. More recent units, such as the WET-11, do NOT use dipoles as their antenna. cally a two-element Yagi. To construct this antenna, print the diagrams given above. 68 – Two-Bands Half Sloper for 80m – 40m. Since I've mentioned that, I've gotten several requests for the information. net] would be the thing to look at. The statement often made is a single quad element has 2 dB gain over a dipole, and that 2 dB gain difference carries over into arrays of quad elements. ARRL Product Review of the M2 6-Meter HO Loop Antennas. Article by K3MT Here's a simple Saturday project: build a portable VHF yagi antenna for 2 meters. com September 2008 / POP’COMM / 23. This page shows a 7 element, Yagi beam antenna built out of spare parts for the 70 CM band. SureCall Flare cell phone signal booster kit with its Yagi Antenna (Log Periodic Antenna) covers up to 3000 sq. 0 Downlink). These adjustments can be easily and quickly made using antenna design software. HOME BUILD YAGI FOR GMRS - posted in Guest Forum: I am looking for plans to build a home brew yagi 4 or 5 elements for the GMRS center freq. 15 element long range WiFi yagi antenna for 802. 5 dB NF you will get a result of about 8 dB sun noise. Bowtie dipole arrays, biquads, parabolic reflector dipoles, straight up yagi, logarithmic yagi, log periodic, heliacal and simple 1/4 whips. Yagi Calculator is a Windows program that also runs well on Linux, Ubuntu 8. The yagi-uda antenna is the most recognized antenna. Doublet – Multi-band antenna that is not resonant on a particular band. Yagi Antenna Teaching Construct Part #3: Directional Gain and Front to Back Ratio November 8, 2016 Dave Michaels In this final article of our series on the “Lego-style” antenna for teaching basic antenna physics and behavior, our focus is a Yagi-Uda 3-element antenna for 2 meters. If there are any questions or con-cerns regarding safety, they should be referred to the manufacturer of your tower. Refer to other areas of the Antenna area for construction of the mounting plate. The UHF Yagi RFI range of high gain yagi antennas feature standard aluminium, ruggedised aluminium or stainless steel construction. This page contains construction details on a 2 metre 144MHz VHF Yagi beam antenna, designed for portable use. For Yagi antenna do-it-yourselfers, one of the most important and problematic steps in the building process is deciding on the best method of matching the feedpoint of a low-impedance Yagi design (often 20-25 ohms at resonance) to 50 ohms. Version 1: 144-146MHz broadband Version 2: 144-145 MHz for SSB/CW use Version 3: 144-144,8 MHz for EME/SSB/CW use This antenna (Version 1) was described in the issue 2/2000 of the magazine "FUNKAMATEUR" , report "Kurze Yagis für das 2m-Band in bewährter 28-Ohm-Technik". The DL6WU yagi is highly regarded as being easy to build with repeatable results, broad bandwidth and. Yagi Wi-Fi Antenna 2. To improve the signal to my Huawei cellular broadband hub, I want to try a yagi antenna. The Yagi was designed using Martin Meserve’s VHF/UHF Yagi Antenna Design tools, as with the 144MHz Yagi. It will not compete with a good high-gain directional antenna, but it sure beats rabbit ears. This antenna is optimized for a single channel, though it will often work acceptably on others. A single band 20m 5 element Wire Beam. nec, gsm-1800_6element_yagi. 
DIY Yagi-Uda antenna Frequency: 430 MHz. The type and size of the bolt will depend on the diameter of the antenna. This is the most common design for Yagi antennas. The formula and basics of Yagi Antenna Calculator are also explained with example. Designed as a 20m antenna. He got about 17 dB signal improvement for about US$5 in materials. All you need is two rabbit ear antennas from Radio Shack, two CATV baluns, four feet of 3/4″ CPVC pipe with one tee, and a bit of time. The yagi antenna is a directional antenna with multiple elements placed one after another. Yagi Calculator. The Yagi antenna's overall basic design consists of a "resonant" fed dipole (the fed dipole is the driven element and in the picture above and the second from the left side ), with one or more parasitic elements. The DL6WU yagi is highly regarded as being easy to build with repeatable results, broad bandwidth and a useful pattern. The impedance (resistance as a function of frequency) is a very important parameter. 11 wifi compatible hardware with external antenna connector. VHF/UHF Long Yagi Workshop If you want a top-performance VHF/UHF Long Yagi, Computer Modeling 3. You'll need to know a few things before running Yagi Calculator, such as: • operating frequency • boom diameter • element diameter. The collinear antenna is designed to be mounted vertically. Buy the latest Yagi antenna GearBest. Back EDA & Design Tools. This link is listed in our web site directory since Monday Nov 20 2017, and till today "7 Element Yagi Calculator" has been followed for a total of 2015 times. VK5DJ's YAGI CALCULATOR. A Quagi antenna uses the same strategy as a Uda-Yagi, using a reflector, a driven element, and then a number of director elements. 1 mm REFLECTOR 197. This cable is readily available, easy to install, and will perform well for rooftop runs of up to 60 or 70 feet at 50 MHz. Build your own Three Element Yagi The calculations for these antennas are from N3DNO's Antenna Calculator. 1 MHz wide 1:1 SWR. To build a low-cost WiFi antenna, you'll need a USB WiFi adapter, a USB extension cable, and a dish-shaped piece of metal cookware. A 144 MHz 1/2 wave dipole antenna is 1. Radiation resistance is calculated by 1580 * (antenna height / wave length) * (antenna height / wave length) For example, if you have set frequency as 100 MHz, which has wave length 3M, and the vertical elements height from calculator above is 1. I struggled to break the pileups with a small mast and trap dipole and decided to build a 35 foot telescoping mast with a small yagi on top. This fast and reliable software allows obtaining the radiation characteristics of a variety of horn antennas, including: pyramidal, conical, corrugated, diagonal, and dual-mode (Potter) topologies. 5' + (3' ea. This design can then be optimised or scaled to suit particular requirements. Was planning on making another one using metal only, but never purchased the appropriate boom.
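The various calculators mentioned above all automate the same basic arithmetic: converting a design frequency into element lengths via the wavelength. As a rough illustration only (this is not the DL6WU or VK5DJ design procedure; the 468/f ft and 143/f m factors and the ±5 % reflector/director scaling are common amateur-radio rules of thumb, and a real build should use one of the named calculators or NEC modelling), a short Python sketch:

```python
# Rough dipole / 3-element Yagi starting-point calculator (illustrative only).
# The 143/f and 468/f factors and the +/-5% reflector/director scaling are
# generic rules of thumb, not the DL6WU or VK5DJ designs mentioned above.

def half_wave_dipole_m(freq_mhz: float) -> float:
    """Approximate overall length (metres) of a half-wave wire dipole."""
    return 143.0 / freq_mhz

def rough_three_element_yagi_m(freq_mhz: float) -> dict:
    """Very rough starting lengths for a 3-element Yagi (metres)."""
    driven = half_wave_dipole_m(freq_mhz)
    wavelength = 300.0 / freq_mhz
    return {
        "reflector": driven * 1.05,           # ~5% longer than the driven element
        "driven": driven,
        "director": driven * 0.95,            # ~5% shorter than the driven element
        "element_spacing": 0.2 * wavelength,  # ~0.15-0.25 wavelength is typical
    }

if __name__ == "__main__":
    for f in (144.0, 430.0, 2450.0):  # 2 m, 70 cm, 2.4 GHz Wi-Fi
        lengths = {k: round(v, 3) for k, v in rough_three_element_yagi_m(f).items()}
        print(f, "MHz:", lengths)
```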
2020-08-14 13:36:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44630786776542664, "perplexity": 3515.7261377621335}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439739328.66/warc/CC-MAIN-20200814130401-20200814160401-00271.warc.gz"}
https://www.aimsciences.org/article/doi/10.3934/krm.2017041
# American Institute of Mathematical Sciences

December 2017, 10(4): 1035-1053. doi: 10.3934/krm.2017041

## Global strong solutions to the planar compressible magnetohydrodynamic equations with large initial data and vacuum

1 Department of Applied Mathematics, Nanjing Forestry University, Nanjing 210037, China
2 School of Mathematics, Shandong University, Jinan 250100, China
3 Department of Mathematics, Nanjing University, Nanjing 210093, China
* S. Huang is the corresponding author

Received September 2015; Revised November 2016; Published March 2017

This paper considers the initial boundary problem to the planar compressible magnetohydrodynamic equations with large initial data and vacuum. The global existence and uniqueness of large strong solutions are established when the heat conductivity coefficient $\kappa(\theta)$ satisfies $C_{1}(1+\theta^q)\leq \kappa(\theta)\leq C_2(1+\theta^q)$ for some constants $q>0$ and $C_1, C_2>0$.

Citation: Jishan Fan, Shuxiang Huang, Fucai Li. Global strong solutions to the planar compressible magnetohydrodynamic equations with large initial data and vacuum. Kinetic & Related Models, 2017, 10 (4) : 1035-1053. doi: 10.3934/krm.2017041
2020-10-28 15:35:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45417195558547974, "perplexity": 3541.7844596410737}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107898577.79/warc/CC-MAIN-20201028132718-20201028162718-00306.warc.gz"}
http://gnoobz.com/plaid-ctf-2015-lazy-writeup.html
# Plaid CTF 2015 - Lazy Writeup

21 April 2015 by nwert

We were provided with the files knapsack.py, utils.py, ciphertext.txt and pubkey.txt. Before having any look at the code, I already suspected that this might be about the Merkle-Hellman knapsack cryptosystem [1], and a quick look proved me right. In principle the idea with this system is to generate a superincreasing sequence of numbers $$(x_i)$$, $$x_i \in \mathbb{N}_{>0}$$, i.e. $$\forall j: x_j > \sum_{i=0}^{j-1} x_i.$$ To hide the superincreasing property we further choose a random $$r$$ and a modulus $$N$$ and compute the sequence $$(y_i)$$ where $$y_i = rx_i \mod N$$. The public key is then given by $$(y_i)$$ and the private key by $$((x_i), r, N)$$. Now to encrypt a message $$m$$ we split it up into its bits $$(b_i)$$ and compute $$c = \sum_i y_i b_i$$. Decryption is then done by first computing $$c' = c r^{-1} \mod N$$ and recovering $$(b_i)$$ from $$c'$$, which is possible due to the superincreasing nature of $$(x_i)$$: we look for the largest $$x_i \leq c'$$, which yields the corresponding bit, then update $$c'' = c'-x_i$$ and start anew.

Fortunately for us, attacking the scheme is not too hard either [2]. We can use lattice methods to solve the equation $$c = y_1b_1 + y_2b_2 + \cdots + y_Nb_N$$ for the $$b_i \in \{0,1\}$$. This is done by considering the lattice $$\begin{pmatrix} y_1 & 1 & 0 & \cdots & 0 \\ y_2 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ y_N & 0 & 0 & \cdots & 1 \\ -c & 0 & 0 & \cdots & 0 \end{pmatrix}$$ and using LLL reduction. After a bit of Sage, finding the right row (basically looking for zeroes and ones) and guessing some non-recovered characters, the flag read lenstra_and_lovasz_liked_lattices.

## References

1. Merkle, Ralph; Hellman, Martin (1978). Hiding information and signatures in trapdoor knapsacks.
2. Shamir, Adi (1984). A polynomial-time algorithm for breaking the basic Merkle-Hellman cryptosystem.
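To make the lattice step described above concrete, here is a small sketch in Sage-flavoured Python (the writeup says Sage was used, but this code, its variable names and the toy key in the comments are mine, not the challenge's knapsack.py or pubkey.txt; it needs to run inside Sage for Matrix, ZZ and LLL):

```python
# Sage sketch of the lattice attack described above (toy parameters only).

def knapsack_attack(y, c):
    """Try to recover bits b with c = sum(b_i * y_i) via LLL."""
    n = len(y)
    rows = []
    for i in range(n):
        row = [y[i]] + [0] * n
        row[1 + i] = 1                      # identity block next to the y column
        rows.append(row)
    rows.append([-c] + [0] * n)             # last row carries the ciphertext
    M = Matrix(ZZ, rows)
    B = M.LLL()
    for row in B.rows():                    # look for a row of the form (0, b_1, ..., b_n)
        entries = list(row)
        if entries[0] != 0:
            continue
        for cand in (entries[1:], [-x for x in entries[1:]]):
            if all(x in (0, 1) for x in cand):
                return cand
    return None

# Example usage with a tiny made-up key (for illustration only):
# x = [2, 3, 7, 14, 30]; r = 17; N = 61        # private, superincreasing
# y = [r * xi % N for xi in x]                 # public key
# bits = [1, 0, 1, 1, 0]
# c = sum(b * yi for b, yi in zip(bits, y))
# print(knapsack_attack(y, c))                 # hopefully recovers bits
```

For instances this small LLL essentially always finds the short vector $(0, b_1, \dots, b_n)$; on larger keys success is heuristic, which is why the writeup still had to guess a few non-recovered characters.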
2018-03-20 23:13:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7723520994186401, "perplexity": 1034.6131931572688}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647545.84/warc/CC-MAIN-20180320224824-20180321004824-00798.warc.gz"}
https://math.stackexchange.com/questions/746646/let-u-n3-u-n-2u-n1-show-that-p-divides-u-p-for-all-p-prime-n
# Let $u_{n+3} = u_n + 2u_{n+1}$ . Show that $p$ divides $u_p$ for all $p$ prime number. Let $$(u_n)$$ a sequence such that $$u_0 = 3$$, $$u_1 = 0$$, $$u_2 = 4$$ and $$u_{n+3} = u_n + 2u_{n+1}$$ Show that $$p$$ divides $$u_p$$ for all $$p$$ prime number. I'm really stuck on this exercise, Does anyone can give me a good HINT to start ? • The sequence is tabulated at oeis.org/A099925 where it is shown to be very closely related to the Lucas sequence. You may get what you want from standard properties of Lucas numbers. – Gerry Myerson Apr 9 '14 at 13:15 • To extend on Gerry Myerson's comment: Your sequence is $u_n = L_n+(-1)^n$ (with $L_n$ the n'th Lucas number) and at mathworld.wolfram.com/LucasNumber.html you find that $L_p\equiv 1 \pmod p$ if $p$ is prime. Therefore $u_p= L_p+(-1)^p=L_p-1\equiv 0 \pmod p$ for primes $p>2$. – gammatester Apr 9 '14 at 13:22 • @gammatester: Your link gives no hint on how to prove it. Do you know anywhere that does? – TonyK Apr 9 '14 at 13:35 • @TonyK: Unfortunately not from here. But I guess from the connection to Lucas pseudo primes, there should be a proof in Crandall/Pomerance 'Prime numbers' or in one of Ribenboim's books. – gammatester Apr 9 '14 at 14:02 • @gammatester: Indeed! I have Crandall & Pomerance on my shelves, and it proves the result in section 3.6.1 "Fibonacci and Lucas psuedoprimes". In fact it proves a more general result, for a large class of recurrence relations, and the proof is too long to post here. – TonyK Apr 9 '14 at 14:16 The characteristic equation for the recurrence relations $u_{n+3} - 2u_{n+1} - u_n = 0$ is given by $$\lambda^3 - 2\lambda - 1 = (\lambda-1)\left(\lambda-\frac{1+\sqrt{5}}{2}\right)\left(\lambda-\frac{1-\sqrt{5}}{2}\right)$$ Since the roots are all simple, the general solution for $u_n$ has the form $$u_n = \alpha (-1)^n + \beta \left(\frac{1+\sqrt{5}}{2}\right)^n + \gamma \left(\frac{1-\sqrt{5}}{2}\right)^n$$ for suitably chosen constants $\alpha, \beta, \gamma$. With a little bit of algebra, the initial conditions $u_0 = 3, u_1 = 0, u_2 = 4$ leads to $\alpha = \beta = \gamma = 1$. Since $2 \mid u_2$, we just need to figure out what happens to $u_p$ when $p$ is an odd prime. For such an odd prime $p$, \begin{align} 2^{p-1} u_p &= -2^{p-1} + \frac12\bigg[ (1 + \sqrt{5})^p + (1-\sqrt{5})^p \bigg]\\ &= -2^{p-1} + \sum_{k=0, k\text{ even}}^p \binom{p}{k} \sqrt{5}^k\\ &= - ( 2^{p-1} - 1 ) + \sum_{\ell=1}^{\lfloor p/2\rfloor} \binom{p}{2\ell} 5^\ell \tag{*1} \end{align} By Fermat little theorem, $p \mid 2^{p-1} -1$. Together with the fact $p \mid \binom{p}{k}$ for $1 \le k \le p-1$, we get $$p \mid \text{RHS(*1)}\quad\implies\quad p \mid 2^{p-1} u_p\quad\implies\quad p \mid u_p$$ This is just a difference equation so you can just solve this using standard method and after substituting initial values in I'm sure $u_p$ will have expression of the form $p \times f(p)$ of some sort where $f(p)$ is an integer. • $u_n = (-1)^n + (\frac12(1+\sqrt 5))^n + (\frac12(1-\sqrt5))^n$. How does that help us? – TonyK Apr 9 '14 at 12:58 • Clear that I was being careless. I will get back on that – Jack Yoon Apr 9 '14 at 12:59 • @TonyK binomial theorem plus $p \mid \binom{p}{k}$ for $1 \le k < p$ seems to do the magic. – achille hui Apr 9 '14 at 14:45 • @Nico, it is the same. You first figure out the characteristic equation associated with your linear recurrence relations. If all the roots of it are simple, then the solution of your linear recurrence relations is a linear combination of powers of the roots. 
– achille hui Apr 9 '14 at 15:01 • I think there should be an easier way; with some trick possibly or noticing some patterns; but I couldn't really seem to spot any. – Jack Yoon Apr 9 '14 at 23:19
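(Not part of the original thread.) A quick numerical sanity check of the claim, computing $u_n \bmod p$ straight from the recurrence in Python:

```python
# Check that p divides u_p for u_0 = 3, u_1 = 0, u_2 = 4, u_{n+3} = u_n + 2*u_{n+1}.
from sympy import primerange

def u_mod(n, m):
    """Return u_n modulo m, computed iteratively from the recurrence."""
    a, b, c = 3 % m, 0 % m, 4 % m          # (u_0, u_1, u_2) mod m
    if n == 0:
        return a
    if n == 1:
        return b
    for _ in range(n - 2):
        a, b, c = b, c, (a + 2 * b) % m    # shift window: u_{k+3} = u_k + 2*u_{k+1}
    return c

for p in primerange(2, 200):
    assert u_mod(p, p) == 0, p
print("p | u_p verified for all primes p < 200")
```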
2021-01-24 12:51:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 8, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9230297207832336, "perplexity": 438.3402845760349}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703548716.53/warc/CC-MAIN-20210124111006-20210124141006-00714.warc.gz"}
http://www.self.gutenberg.org/articles/eng/Equilibrium_unfolding
# Equilibrium unfolding

In biochemistry, equilibrium unfolding is the process of unfolding a protein or RNA molecule by gradually changing its environment, such as by changing the temperature or pressure, adding chemical denaturants, or applying force as with an atomic force microscope tip. Since equilibrium is maintained at all steps, the process is reversible (equilibrium folding). Equilibrium unfolding is used to determine the conformational stability of the molecule.

## Contents
• 1 Theoretical background
• 2 Chemical denaturation
  • 2.1 Structural probes
• 3 Thermal denaturation
  • 3.1 Determining the heat capacity of proteins
  • 3.2 Assessing two-state unfolding
• 4 Other forms of denaturation
• 5 References

## Theoretical background

In its simplest form, equilibrium unfolding assumes that the molecule may belong to only two thermodynamic states, the folded state (typically denoted N for "native" state) and the unfolded state (typically denoted U). This "all-or-none" model of protein folding was first proposed by Tim Anson in 1945,[1] but is believed to hold only for small, single structural domains of proteins (Jackson, 1998); larger domains and multi-domain proteins often exhibit intermediate states. As usual in statistical mechanics, these states correspond to ensembles of molecular conformations, not just one conformation. The molecule may transition between the native and unfolded states according to a simple kinetic model $N \rightleftharpoons U$ with rate constants $k_{f}$ and $k_{u}$ for the folding ($U \rightarrow N$) and unfolding ($N \rightarrow U$) reactions, respectively. The dimensionless equilibrium constant $$K_{eq} \ \stackrel{\mathrm{def}}{=}\ \frac{k_{u}}{k_{f}} = \frac{\left[ U \right]_{eq}}{\left[ N \right]_{eq}}$$ can be used to determine the conformational stability $\Delta G^o$ by the equation $$\Delta G^o = -RT \ln K_{eq}$$ where $R$ is the gas constant and $T$ is the absolute temperature in kelvins. Thus, $\Delta G^o$ is positive if the unfolded state is less stable (i.e., disfavored) relative to the native state. The most direct way to measure the conformational stability $\Delta G^o$ of a molecule with two-state folding is to measure its kinetic rate constants $k_{f}$ and $k_{u}$ under the solution conditions of interest. However, since protein folding is typically completed in milliseconds, such measurements can be difficult to perform, usually requiring expensive stopped flow or (more recently) continuous-flow mixers to provoke folding with a high time resolution. Dual polarisation interferometry is an emerging technique to directly measure conformational change and $\Delta G^o$.
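As a small numerical illustration of the last two equations (the numbers are invented for illustration and are not from the article): if folding is ten times faster than unfolding, say $k_f = 10\,\mathrm{s^{-1}}$ and $k_u = 1\,\mathrm{s^{-1}}$, then $K_{eq} = 0.1$ and at $T = 298$ K the stability is $\Delta G^o = -RT\ln K_{eq} \approx +5.7$ kJ/mol, i.e. the native state is favoured. A one-line check in Python:

```python
from math import log
R, T = 8.314, 298.0            # gas constant J/(mol*K), temperature K
k_f, k_u = 10.0, 1.0           # illustrative folding/unfolding rates, s^-1
K_eq = k_u / k_f
dG = -R * T * log(K_eq)        # ~ +5.7e3 J/mol: unfolded state disfavored
print(K_eq, dG)
```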
## Chemical denaturation

In the less extensive technique of equilibrium unfolding, the fractions of folded and unfolded molecules (denoted as $p_{N}$ and $p_{U}$, respectively) are measured as the solution conditions are gradually changed from those favoring the native state to those favoring the unfolded state, e.g., by adding a denaturant such as guanidinium hydrochloride or urea. (In equilibrium folding, the reverse process is carried out.) Given that the fractions must sum to one and their ratio must be given by the Boltzmann factor, we have $$p_{N} = \frac{1}{1 + e^{-\Delta G/RT}} \qquad p_{U} = \frac{e^{-\Delta G/RT}}{1 + e^{-\Delta G/RT}}$$

Protein stabilities are typically found to vary linearly with the denaturant concentration. A number of models have been proposed to explain this observation, prominent among them being the denaturant binding model, the solvent-exchange model (both by John Schellman[2]) and the Linear Extrapolation Model (LEM; by Nick Pace[3]). All of the models assume that only two thermodynamic states are populated/de-populated upon denaturation. They could be extended to interpret more complicated reaction schemes.

The denaturant binding model assumes that there are specific but independent sites on the protein molecule (folded or unfolded) to which the denaturant binds with an effective (average) binding constant $k$. The equilibrium shifts towards the unfolded state at high denaturant concentrations as it has more binding sites for the denaturant relative to the folded state ($\Delta n$). In other words, the increased number of potential sites exposed in the unfolded state is seen as the reason for denaturation transitions. An elementary treatment results in the following functional form: $$\Delta G = \Delta G_{w} - RT \Delta n \ln \left(1 + k [D] \right)$$ where $\Delta G_{w}$ is the stability of the protein in water and $[D]$ is the denaturant concentration. Thus the analysis of denaturation data with this model requires 7 parameters: $\Delta G_{w}$, $\Delta n$, $k$, and the slopes and intercepts of the folded and unfolded state baselines.

The solvent exchange model (also called the ‘weak binding model’ or ‘selective solvation’) of Schellman invokes the idea of an equilibrium between the water molecules bound to independent sites on the protein and the denaturant molecules in solution. It has the form: $$\Delta G = \Delta G_{w} - RT \Delta n \ln \left(1 + (K-1) X_{D} \right)$$ where $K$ is the equilibrium constant for the exchange reaction and $X_{D}$ is the mole-fraction of the denaturant in solution. This model tries to answer the question of whether the denaturant molecules actually bind to the protein or they seem to be bound just because denaturants occupy about 20-30% of the total solution volume at high concentrations used in experiments, i.e. non-specific effects – and hence the term ‘weak binding’. As in the denaturant-binding model, fitting to this model also requires 7 parameters. One common theme obtained from both these models is that the binding constants (in the molar scale) for urea and guanidinium hydrochloride are small: ~0.2 $\mathrm{M}^{-1}$ for urea and 0.6 $\mathrm{M}^{-1}$ for GuHCl.

Intuitively, the difference in the number of binding sites between the folded and unfolded states is directly proportional to the differences in the accessible surface area. This forms the basis for the LEM, which assumes a simple linear dependence of stability on the denaturant concentration. The resulting slope of the plot of stability versus the denaturant concentration is called the m-value.
In pure mathematical terms, m-value is the derivative of the change in stabilization free energy upon the addition of denaturant. However, a strong correlation between the accessible surface area (ASA) exposed upon unfolding, i.e. difference in the ASA between the unfolded and folded state of the studied protein (dASA), and the m-value has been documented by Pace and co-workers.[3] In view of this observation, the m-values are typically interpreted as being proportional to the dASA. There is no physical basis for the LEM and it is purely empirical, though it is widely used in interpreting solvent-denaturation data. It has the general form: $$\Delta G = m \left( [D]_{1/2} - [D] \right)$$ where the slope $m$ is called the "m-value" (> 0 for the above definition) and $\left[ D \right]_{1/2}$ (also called Cm) represents the denaturant concentration at which 50% of the molecules are folded (the denaturation midpoint of the transition, where $p_{N} = p_{U} = 1/2$). In practice, the observed experimental data at different denaturant concentrations are fit to a two-state model with this functional form for $\Delta G$, together with linear baselines for the folded and unfolded states. The $m$ and $\left[ D \right]_{1/2}$ are two fitting parameters, along with four others for the linear baselines (slope and intercept for each line); in some cases, the slopes are assumed to be zero, giving four fitting parameters in total. The conformational stability $\Delta G$ can be calculated for any denaturant concentration (including the stability at zero denaturant) from the fitted parameters $m$ and $\left[ D \right]_{1/2}$. When combined with kinetic data on folding, the m-value can be used to roughly estimate the amount of buried hydrophobic surface in the folding transition state.

### Structural probes

Unfortunately, the probabilities $p_{N}$ and $p_{U}$ cannot be measured directly. Instead, we assay the relative population of folded molecules using various structural probes, e.g., absorbance at 287 nm (which reports on the solvent exposure of tryptophan and tyrosine), far-ultraviolet circular dichroism (180-250 nm, which reports on the secondary structure of the protein backbone), dual polarisation interferometry (which reports the molecular size and fold density) and near-ultraviolet fluorescence (which reports on changes in the environment of tryptophan and tyrosine). However, nearly any probe of folded structure will work; since the measurement is taken at equilibrium, there is no need for high time resolution. Thus, measurements can be made of NMR chemical shifts, intrinsic viscosity, solvent exposure (chemical reactivity) of side chains such as cysteine, backbone exposure to proteases, and various hydrodynamic measurements. To convert these observations into the probabilities $p_{N}$ and $p_{U}$, one generally assumes that the observable $A$ adopts one of two values, $A_{N}$ or $A_{U}$, corresponding to the native or unfolded state, respectively. Hence, the observed value equals the linear sum $$A = A_{N} p_{N} + A_{U} p_{U}$$ By fitting the observations of $A$ under various solution conditions to this functional form, one can estimate $A_{N}$ and $A_{U}$, as well as the parameters of $\Delta G$. The fitting variables $A_{N}$ and $A_{U}$ are sometimes allowed to vary linearly with the solution conditions, e.g., temperature or denaturant concentration, when the asymptotes of $A$ are observed to vary linearly under strongly folding or strongly unfolding conditions.
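The two-state fit described in the last two paragraphs is straightforward to set up with any least-squares routine. Below is a minimal, hypothetical sketch in Python using scipy; the parameter names, starting values and the fake data are mine, not from the article, and a real analysis would of course fit a measured signal (e.g. CD or fluorescence) versus denaturant concentration:

```python
# Six-parameter two-state fit of an observable A versus denaturant [D]:
# A([D]) = (aN + bN*[D]) * pN + (aU + bU*[D]) * (1 - pN),
# with pN = 1 / (1 + exp(-dG/RT)) and dG = m * (D50 - [D])   (the LEM).
import numpy as np
from scipy.optimize import curve_fit

R, T = 8.314e-3, 298.0        # kJ/(mol*K), K  ->  m is in kJ/(mol*M)

def two_state(D, m, D50, aN, bN, aU, bU):
    dG = m * (D50 - D)
    pN = 1.0 / (1.0 + np.exp(-dG / (R * T)))
    return (aN + bN * D) * pN + (aU + bU * D) * (1.0 - pN)

# Fake "measured" unfolding curve, for illustration only:
D = np.linspace(0.0, 8.0, 40)
A_obs = two_state(D, 5.0, 3.5, 1.0, -0.01, 0.2, 0.01) + np.random.normal(0, 0.01, D.size)

p0 = [4.0, 3.0, 1.0, 0.0, 0.2, 0.0]            # rough starting guesses
popt, pcov = curve_fit(two_state, D, A_obs, p0=p0)
m_fit, D50_fit = popt[0], popt[1]
print("m =", m_fit, "kJ/(mol*M);  [D]_1/2 =", D50_fit, "M;  dG_water ~", m_fit * D50_fit, "kJ/mol")
```

Note that the stability at zero denaturant follows directly from the fitted parameters as $\Delta G_{w} = m \left[ D \right]_{1/2}$, as stated above.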
## Thermal denaturation

Assuming a two-state denaturation as stated above, one can derive the fundamental thermodynamic parameters, namely $\Delta H$, $\Delta S$ and $\Delta G$, provided one has knowledge of the $\Delta C_p$ of the system under investigation. The thermodynamic observables of denaturation can be described by the following equations:

$$\Delta H(T) = \Delta H(T_d) + \int_{T_d}^T \Delta C_p\, dT$$
$$\Delta H(T) = \Delta H(T_d) + \Delta C_p\,[T - T_d]$$
$$\Delta S(T) = \frac{\Delta H(T_d)}{T_d} + \int_{T_d}^T \Delta C_p\, d\ln T$$
$$\Delta S(T) = \frac{\Delta H(T_d)}{T_d} + \Delta C_p \ln \frac{T}{T_d}$$
$$\Delta G(T) = \Delta H - T \Delta S$$
$$\Delta G(T) = \Delta H(T_d)\, \frac{T_d - T}{T_d} + \int_{T_d}^T \Delta C_p\, dT - T \int_{T_d}^T \Delta C_p\, d\ln T$$
$$\Delta G(T) = \Delta H(T_d)\left(1 - \frac{T}{T_d}\right) - \Delta C_p\left[T_d - T + T \ln\left(\frac{T}{T_d}\right)\right]$$

where $\Delta H$, $\Delta S$ and $\Delta G$ indicate the enthalpy, entropy and Gibbs free energy of unfolding under a constant pH and pressure. The temperature $T$ is varied to probe the thermal stability of the system and $T_d$ is the temperature at which half of the molecules in the system are unfolded. The last equation is known as the Gibbs–Helmholtz equation.

### Determining the heat capacity of proteins

In principle one can calculate all the above thermodynamic observables from a single differential scanning calorimetry thermogram of the system, assuming that $\Delta C_p$ is independent of the temperature. However, it is difficult to obtain accurate values for $\Delta C_p$ this way. More accurately, $\Delta C_p$ can be derived from the variations in $\Delta H(T_d)$ vs. $T_d$, which can be obtained from measurements with slight variations in pH or protein concentration. The slope of the linear fit is equal to $\Delta C_p$. Note that any non-linearity of the data points indicates that $\Delta C_p$ is probably not independent of the temperature. Alternatively, $\Delta C_p$ can also be estimated from the calculation of the accessible surface area (ASA) of a protein prior to and after thermal denaturation, as follows: $$\Delta ASA = ASA_{unfolded} - ASA_{native}$$ For proteins that have a known 3D structure, $ASA_{native}$ can be calculated through computer programs such as Deepview (also known as Swiss PDB viewer). $ASA_{unfolded}$ can be calculated from tabulated values of each amino acid through the semi-empirical equation: $$ASA_{unfolded} = a_{polar} \times ASA_{polar} + a_{aromatic} \times ASA_{aromatic} + a_{non\text{-}polar} \times ASA_{non\text{-}polar}$$ where the subscripts polar, non-polar and aromatic indicate the parts of the 20 naturally occurring amino acids. Finally, for proteins there is a linear correlation between $\Delta ASA$ and $\Delta C_p$ through the following equation:[4] $$\Delta C_p = 0.61 \times \Delta ASA$$

### Assessing two-state unfolding

Furthermore, one can assess whether the folding proceeds according to a two-state unfolding as described above. This can be done with differential scanning calorimetry by comparing the calorimetric enthalpy of denaturation, i.e. the area under the peak $A_{peak}$, to the van 't Hoff enthalpy described as follows: $$\Delta H_{vH}(T) = -R\,\frac{d\ln K}{d(1/T)}$$ At $T = T_d$, $\Delta H_{vH}(T_d)$ can be described as: $$\Delta H_{vH}(T_d) = \frac{R T_d^2\, \Delta C_p^{max}}{A_{peak}}$$ When a two-state unfolding is observed, $A_{peak} = \Delta H_{vH}(T_d)$. Here $\Delta C_p^{max}$ is the height of the heat capacity peak.
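As a small illustration of the Gibbs–Helmholtz expression above, the protein stability curve $\Delta G(T)$ can be computed directly; the values below are invented for illustration (roughly the right order of magnitude for a small globular protein, but not taken from the article):

```python
# Protein stability curve dG(T) from the Gibbs-Helmholtz equation:
# dG(T) = dH(Td)*(1 - T/Td) - dCp*(Td - T + T*ln(T/Td))
import numpy as np

dH_Td = 300.0       # kJ/mol, unfolding enthalpy at Td (illustrative value)
dCp   = 8.0         # kJ/(mol*K), heat capacity change (illustrative value)
Td    = 330.0       # K, midpoint of thermal denaturation (illustrative value)

T = np.linspace(270.0, 350.0, 9)
dG = dH_Td * (1.0 - T / Td) - dCp * (Td - T + T * np.log(T / Td))
for Ti, dGi in zip(T, dG):
    print(f"T = {Ti:5.1f} K   dG = {dGi:6.1f} kJ/mol")   # dG = 0 at T = Td
```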
## Other forms of denaturation

Analogous functional forms are possible for denaturation by pressure,[5] pH, or by applying force with an atomic force microscope tip.[6]

## References

1. Anson ML, Protein Denaturation and the Properties of Protein Groups, Advances in Protein Chemistry, 2, 361-386 (1945)
2. Schellmann, JA, The thermodynamics of solvent exchange, Biopolymers 34, 1015–1026 (1994)
3. Myers JK, Pace CN, Scholtz JM, Denaturant m values and heat capacity changes: relation to changes in accessible surface areas of protein unfolding, Protein Sci. 4(10), 2138–2148 (1995)
4. Robertson, A.D., Murphy, K.P. Protein structure and the energetics of protein stability, (1997), Chem Rev, 97, 1251-1267
5. Lassalle, Michael W.; Akasaka, Kazuyuki (2007). "The use of high-pressure nuclear magnetic resonance to study protein folding". In Bai, Yawen and Nussinov, Ruth. Protein folding protocols. Totowa, New Jersey: Humana Press. pp. 21–38.
6. Ng, Sean P.; Randles, Lucy G; Clarke, Jane (2007). "The use of high-pressure nuclear magnetic resonance to study protein folding". In Bai, Yawen and Nussinov, Ruth. Protein folding protocols. Totowa, New Jersey: Humana Press. pp. 139–167.
• Pace CN. (1975) "The Stability of Globular Proteins", CRC Critical Reviews in Biochemistry, 1-43.
• Santoro MM and Bolen DW. (1988) "Unfolding Free Energy Changes Determined by the Linear Extrapolation Method. 1. Unfolding of Phenylmethanesulfonyl α-Chymotrypsin Using Different Denaturants", Biochemistry, 27, 8063-8068.
• Privalov PL. (1992) "Physical Basis for the Stability of the Folded Conformations of Proteins", in Protein Folding, TE Creighton, ed., W. H. Freeman, pp. 83–126.
• Yao M and Bolen DW. (1995) "How Valid Are Denaturant-Induced Unfolding Free Energy Measurements? Level of Conformance to Common Assumptions over an Extended Range of Ribonuclease A Stability", Biochemistry, 34, 3771-3781.
• Jackson SE. (1998) "How do small single-domain proteins fold?", Folding and Design, 3, R81-R91.
• Schwehm JM and Stites WE. (1998) "Application of Automated Methods for Determination of Protein Conformational Stability", Methods in Enzymology, 295, 150-170.
2020-04-03 02:38:59
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8695032596588135, "perplexity": 4979.778720975743}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370509103.51/warc/CC-MAIN-20200402235814-20200403025814-00518.warc.gz"}
https://statisfaction.wordpress.com/2020/04/09/sir-models-with-kermack-and-mckendrick/
SIR models with Kermack and McKendrick Hi! It seems about the right time to read Kermack & McKendrick, 1927, “A contribution to the Mathematical Theory of Epidemics”. It is an early article on the “Susceptible-Infected-Removed” or “SIR” model, a milestone in the mathematical modelling of infectious disease. In this blog post, I will go through the article, describe the model and the data considered by the authors (plague in Bombay in 1905-1906), which will turn out to be a questionable choice. Some references and R code are given at the end of the article. All of this comes with the disclaimer that I have no expertise in epidemiology. The article starts by crediting Ronald Ross and Hilda Hudson for their articles in the 1910s on malaria; other related works are cited in Anderson (1991) and Bacaër (2011b) (full references are given below). The topic is the following. Some individuals in a closed population get infected by some new disease. Over time they go through various stages, might infect other people, and eventually recover, or die. People who get infected might, in turn, infect other people, and thus the disease can spread to a large part of the population. But it is also possible that the first infected individuals recover before infecting anyone, and the disease could rapidly disappear. The goal here is to try to understand what drives the spread of a disease: why some become large epidemics and some don’t, how many people get affected, etc. The paper is about a model, so “to understand” here means something like “to propose a convincing, simple and intuitive model that is still rich enough to describe faithfully some aspects of reality”. The authors first explain a general model before delving into special cases including the celebrated SIR model. An infected person eventually recovers or dies, but never gets infected again. The population contains N individuals in a given area where people are in contact with one another, e.g. we can think of a (medieval walled) city. It is really helpful to think of N as indicating a population density in the context of the article, rather than just a population size. What does this mean? It means that the area under consideration remains fixed as we imagine variations in N, thus the density varies linearly with the number of individuals. Let’s assume that at time zero, a single person is infected: $y_0 = 1$. The other N-1 individuals are “susceptible” of becoming infected, $x_0 = N-1$, and initially no one has yet recovered, $z_0 = 0$. Throughout the disease outbreak, the population size remains constant, $x_t + y_t + z_t = N$. The general model of the article is first formulated in discrete time. It differentiates infected people according to the duration of their infection. At time t, the number of infected individuals can be written $y_t = \sum_{\theta = 0}^t v_{t,\theta},$. Here $v_{t,\theta}$ counts the people who have been infected for $\theta$ units of time already. Over one time interval, for each $\theta$, we assume that $v_{t,\theta}$ individuals generate $\kappa(\theta) x_t v_{t,\theta}$ new infections and $\ell(\theta) v_{t,\theta}$ removals. • Indeed $\kappa(\theta) x_t v_{t,\theta}$ can be understood as the product of a rate of transmission $\kappa(\theta)$, which accounts for both infectiousness of the pathogen and contact rate between individuals, with $x_t v_{t,\theta}$ counting all pairs of individuals with one susceptible and one infected for $\theta$ time units, i.e. all possible contacts that can lead to a new infection. 
• Meanwhile $\ell(\theta) v_{t,\theta}$ is the product of a rate of recovery/death $\ell(\theta)$ and the number of people infected for $\theta$ time units. The rates of transmission and recovery, $\kappa(\theta)$ and $\ell(\theta)$, are generally allowed to vary with the length of infection $\theta$. Indeed an individual infected for seven days might be more or less infectious than an individual infected for one day, for instance. Counting all the transfers of individuals from and to the different “compartments” (susceptible, infected for one unit of time, infected for two units of time, …, or recovered) the paper gives formulae that describe what happens to these numbers of individuals as time progresses. From there the authors send the time period to zero. This means that they look at the time period in the eyes and they say: go to zero! Thus they go from discrete to continuous time. The numbers $(x_t,y_t,z_t)$, representing the number of susceptible, infected and removed individuals, are then shown to follow a system of differential equations. These equations do not have an analytical solution, so there are no explicit formulae giving  $(x_t,y_t,z_t)$ as a function of t. The authors comment on various aspects of the equations, including connections with Volterra integral equations, and Fredholm integral equations. There are also some remarks on limits as time goes to infinity, how these limits depend on the parameters of the model, and how the equations behave for small time t and large population density N. The celebrated SIR model is obtained as a special case, when the rates of transmission and removal are assumed constant. The system of differential equations becomes: $\begin{cases}\frac{dx}{dt} &= -\kappa x y \\ \frac{dy}{dt} &= \kappa xy - \ell y \\ \frac{dz}{dt} &= \ell y \end{cases}$ Often the letters S,I,R are used in place of x,y,z. Sometimes the model is written in terms of the proportions of individuals of each type, whereas here it is describing the numbers of individuals of each type (per unit area). In my experience, it is easy to get confused by this. Since a product x y appears on the right-hand side, replacing all numbers (x,y,z) by proportions (x/N,y/N,z/N) requires an extra “N” to multiply $\kappa$ on the right-hand side. This is sometimes referred to as “density-dependent versus frequency-dependent”, see e.g. this blog post. An interesting aspect of the equations is that the sign of the change in numbers of infected individuals, $dy/dt$, depends on $\kappa x_t / \ell$ being larger or smaller than one. At the start of the epidemic this is very close to $\kappa N / \ell$. If this is less than one, the number of infected people will decrease; if it is larger than one, it will increase… until  $\kappa x_t = \ell$ and then it will decrease. Thus what drives the occurrence of an epidemic is 1) the parameters $\kappa, \ell$, that appear directly in the equations, but also 2) the population density N, in this model. An epidemic occurs or not according to how large N is relative to  $\ell / \kappa$. There is a “critical threshold” of population density for any $\kappa, \ell$, above which epidemics occur. In later works, $\kappa N / \ell$ would be called the basic reproduction number “R0”, and epidemics occur when it is larger than one. To illustrate this, here are curves of $(x_t/N, y_t/N, z_t/N)$ against time, all obtained with $\kappa = 4\times 10^{-4}$, $\ell = 0.15$ and varying $N$ between 200 and 1200. 
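(The author's own code for these figures is in R and is linked at the end of the post; for readers without R, here is a minimal Python sketch of how curves like these can be computed numerically with scipy's ODE solver. The parameter values are the ones quoted just above; everything else, including the time horizon and the grid of N values, is my own choice.)

```python
# Numerical solution of the SIR equations dx/dt = -k*x*y, dy/dt = k*x*y - l*y,
# dz/dt = l*y, for the parameter values quoted above (k = 4e-4, l = 0.15).
import numpy as np
from scipy.integrate import solve_ivp

def sir(t, state, k, l):
    x, y, z = state
    return [-k * x * y, k * x * y - l * y, l * y]

k, l = 4e-4, 0.15
for N in (200, 400, 600, 800, 1000, 1200):
    sol = solve_ivp(sir, (0.0, 100.0), [N - 1.0, 1.0, 0.0], args=(k, l),
                    t_eval=np.linspace(0.0, 100.0, 201))
    x, y, z = sol.y                       # proportions are x/N, y/N, z/N
    print(f"N = {N:4d}   k*N/l = {k * N / l:4.2f}   "
          f"fraction ever infected by t=100: {z[-1] / N:.2f}")
```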
The graph serves to illustrate several crucial points: • the population density N plays a key role in the occurrence of an epidemic under the model, • the model is rich enough to generate widespread epidemics or non-epidemics depending on the parameters, • contrarily to Kermack & McKendrick, we might not be too concerned about the lack of analytical solutions for the differential equations; we’re used to it and our computers can compute accurate numerical solutions, e.g. using deSolve, • it’s fun to play with gganimate. I was initially surprised about the emphasis on population density in the article. Kermack and McKendrick speculate (towards the end of the article) that epidemics might “regulate” population densities and that many cities in the world might have population densities around the critical threshold (for many pathogens?)… otherwise they would be liable to catastrophic epidemics. Some sentences are quite chilling: “The longer the epidemic is withheld the greater will be the catastrophe, provided that the population continues to increase, and the threshold density remains unchanged”. Does it apply to the current pandemic? Well, if you take the first letter of each of the first thirteen paragraphs of the article, you get “coronavirus”. Or maybe you don’t. Anderson (1991) provides some useful context: “Two explanations for the termination of an epidemic were most in favour amongst medical circles at that time [circa 1927], namely: (1) that the supply of susceptible people had been exhausted and (2) that during the course of the epidemic the virulence of the infectious agent had gradually (or rapidly) decreased.” In this debate, the model of Kermack & McKendrick describes an alternative hypothesis: the removal of susceptible people lowers the density below some critical threshold, leading to the termination of the epidemic, even when the number of susceptible individuals might remain large in absolute terms, and even if the virulence of the pathogen remains constant throughout the epidemic. Let’s now look at the data set considered by Kermack & McKendrick, shown above. It shows the weekly deaths from the plague during thirty weeks over 1905-1906 in Bombay. Some context is provided by Bacaër (2011), who mentions some interesting concerns about the use of a simple SIR model for this particular outbreak. The plague appeared in Bombay in 1896 and reappeared with “strong seasonal character” in the following years. This seasonal aspect is not accounted for by a simple SIR model, but could play a big role in the decrease of the infections after week ~20.  With the simple SIR model, it is possible to obtain a good fit for the curve $dz/dt$ to the data points, but the associated parameter values are unrealistic. For example you will find a nice fit with $\kappa = 8.025\times 10^{-5}, \ell = 5.68, N=75,500$, but the population of Bombay was around one million individuals at that time. Bacaër (2011) proposes a fix: a modified SIR model with seasonal components that provides more satisfactory results. The SIR model seems to provide a useful template for more sophisticated models that account for various other factors, specific to each outbreak, but might not be an adequate model out of the box. The basic SIR model can be an OK model for certain data sets. An example is the classical “boarding school” data set. This data set was reported in the British Medical Journal in 1978, and concerns an influenza outbreak in a boarding school in the north of England. 
The data include the number of children confined in bed day after day. There were N=763 boys in that school. The curve $y_t$ of infected individuals can be made to match the data quite closely with $\kappa = 2.2\times 10^{-3}, \ell = 0.44$. Some final thoughts: • As illustrated by the two data examples above, measurements about disease outbreaks can be in the form of case counts per time unit, or numbers of individuals “removed”; we might know the size of the susceptible population exactly or not; we might know the exact times of infection, or not, etc. There seem to be as many scenarios as disease outbreaks. • In the early works, models are either in discrete or continuous time and they are often deterministic. We can interpret some quantities probabilistically if we want (e.g. the chance that some individual gets infected in some small time interval), but there are no random variables in the description of the model. We can fit curves to data by minimizing least squares, but there are no stochastic processes, likelihood functions or probabilistic models for measurement errors. • The choice of example made by Kermack and McKendrick is questionable: their data set would have been better modelled with the consideration of seasonal effects. Clearly that did not prevent the article from becoming extremely influential, and the SIR model from being widely used to this day. • Brauer (2005) mentions that “One of the products of the SARS epidemic of 2002-2003 was a variety of epidemic models including general contact rates, quarantine, and isolation.” Hopefully, these developments are proving useful now? What modelling developments will follow from the current epidemic? • According to Breda et al. (2012) the original article of Kermack & McKendrick is unfortunately hardly ever read. I have certainly found it useful to read it; certain parts of it, about differential equations, are a bit technical, but it is overall a very well-written article. It’s always interesting to see how pioneers explain their works themselves, and whether the writing style has aged. I was also glad to have, as reading companions, Anderson (1991) and Bacaër (2011). Here’s a link to an R script producing the above figures and performing quick least squares fit: https://github.com/pierrejacob/statisfaction-code/blob/master/2020-04-sir.R To read more on the topic: • Ronald Ross and Hilda P. Hudson (1917) An Application of the Theory of Probabilities to the Study of a priori Pathometry. Part I, II and III. • Roy Anderson (1991) Discussion: The Kermack-McKendrick epidemic threshold theorem. [link] • Fred Brauer (2005) The Kermack–McKendrick epidemic model revisited. [link] • Nicolas Bacaër (2011a) The model of Kermack and McKendrick for the plague epidemic in Bombay and the type reproduction number with seasonality. [link] • Nicolas Bacaër (2011b) A Short History of Mathematical Population Dynamics. [Chapter 16 is on McKendrick and Kermack, link] • D. Breda , O. Diekmann , W. F. de Graaf , A. Pugliese & R. Vermiglio (2012) On the formulation of epidemic models (an appraisal of Kermack and McKendrick). [link] • H. Hesterbeek et al (2013) Modeling infectious disease dynamics in the complex landscape of global health [a fairly recent review on the topic by some of the world experts https://science.sciencemag.org/content/347/6227/aaa4339] • Textbooks on the topic include • Bailey (1975) The mathematical theory of infectious diseases and its applications. • Anderson and May (1991) Infectious diseases of humans: dynamics and control. 
• Britton and Pardoux (2019) Stochastic Epidemic Models with Inference.
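As a companion to the R script linked above, here is a rough sketch (my own Python code, not the original script) of the least-squares fitting step discussed for the boarding-school example. Since the BMJ case counts are not reproduced in this post, the sketch generates synthetic observations from the model with the quoted parameter values and then recovers them.

```python
# Rough sketch of the least-squares step (my own code, not the R script linked
# above). The BMJ boarding-school counts are not reproduced in this post, so
# synthetic "observations" are generated from the model with the quoted values
# (kappa = 2.2e-3, ell = 0.44, N = 763) and then recovered by the fit; with
# real data, only the `observed` array would change.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

N = 763
days = np.arange(0.0, 14.0)

def infected_curve(kappa, ell):
    rhs = lambda t, s: [-kappa * s[0] * s[1], kappa * s[0] * s[1] - ell * s[1], ell * s[1]]
    sol = solve_ivp(rhs, (days[0], days[-1]), [N - 1.0, 1.0, 0.0],
                    t_eval=days, max_step=0.1)
    return sol.y[1]                      # y_t, the number currently infected

rng = np.random.default_rng(0)
observed = infected_curve(2.2e-3, 0.44) + rng.normal(0.0, 5.0, days.size)

fit = least_squares(lambda p: infected_curve(p[0], p[1]) - observed,
                    x0=[1e-3, 0.3], bounds=([1e-5, 1e-2], [1e-1, 5.0]))
print("recovered kappa, ell:", fit.x)
```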
2020-08-05 01:17:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 41, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.600531280040741, "perplexity": 725.5703927233171}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735906.77/warc/CC-MAIN-20200805010001-20200805040001-00515.warc.gz"}
https://math.stackexchange.com/questions/1945594/finding-dy-dx-by-implicit-differentiation-with-the-quotient-rule
# Finding dy/dx by implicit differentiation with the quotient rule I've been stuck on a certain implicit differentiation problem that I've tried several times now. $$\frac{x^2}{x+y} = y^2+6$$ I know to take the derivatives of both sides and got: $$\frac{(x+y)2x-\left(1-\frac{dy}{dx}\right)x^2}{(x+y)^2} = 2y\frac{dy}{dx}$$ I reduced that to get: $$2x^2 +2xy-x^2-x^2*dy/dx=(2y*dy/dx)(x+y)^2$$ I then divided both sides by (2y*dy/dx) and multiplied each side by the reciprocals of the first three terms of the left side. Then I factored dy/dx out of the left side and multiplied by the reciprocal of what was left to get dy/dx by itself. I ended up with: $$dy/dx=(2y(x+y)^2)/(4x^7y)$$ but this answer was wrong. I only have one more attempt on my online homework and I can't figure out where I went wrong. Please help! • just after the line : i reduced that to get , you have +x^2 instead of -x^2. – hamam_Abdallah Sep 28 '16 at 21:17 An idea to avoid the cumbersome and annoying quotient rule: multiply by the common denominator $$\frac{x^2}{x+y}=y^2+6\implies xy^2+6x+y^3+6y-x^2=0\implies$$ $$y^2+2xyy'+6+3y^2y'+6y'-2x=0\implies(2xy+3y^2+6)y'=2x-y^2-6\implies$$ $$y'=\frac{2x-y^2-6}{2xy+3y^2+6}$$ If nevertheless you want to use the quotient rule: $$\frac{2x(x+y)-x^2}{(x+y)^2}-\frac{x^2}{(x+y)^2}y'=2yy'\implies$$ $$(2y(x+y)^2+x^2)y'=x^2+2xy\implies y'=\frac{x^2+2xy}{2y(x+y)^2+x^2}$$ Now, how come both expressions we got are equal?. Well, for one we can use, for example, that $\;\cfrac{x^2}{x+y} = y^2+6\;$ , so $$\frac{x^2+2xy}{2y(x+y)^2+x^2}=\frac{2x-y^2-6}{2xy+3y^2+6}=\frac{2x-\frac{x^2}{x+y}}{2xy+2y^2+\frac{x^2}{x+y}}\iff$$ $$\frac{x^2+2xy}{2y(x+y)^2+x^2}=\frac{x^2+2xy}{2x^2y+4xy^2+2y^3+x^2}\iff$$ $$\frac{x^2+2xy}{2x^2y+4xy^2+2y^3+x^2}=\frac{x^2+2xy}{2x^2y+4xy^2+2y^3+x^2}\;\;\color{green}\checkmark$$
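Not part of the question or the answer above, but if you want to double-check the algebra mechanically, a short sympy sketch does the implicit differentiation and confirms the quotient-rule form (and, via the original relation, the other form):

```python
# Verification sketch (my own, not from the thread): implicit differentiation
# of x**2/(x + y) = y**2 + 6 with sympy, compared against the two closed forms.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)

lhs = x**2 / (x + y)
rhs = y**2 + 6

# Differentiate lhs - rhs with respect to x and solve for y'(x).
dydx = sp.solve(sp.diff(lhs - rhs, x), sp.Derivative(y, x))[0]

quotient_rule_form = (x**2 + 2*x*y) / (2*y*(x + y)**2 + x**2)
cleared_form = (2*x - y**2 - 6) / (2*x*y + 3*y**2 + 6)

print(sp.simplify(dydx - quotient_rule_form))   # 0: identical expressions
# cleared_form agrees with the others only on the curve itself, i.e. after
# substituting y**2 + 6 = x**2/(x + y), exactly as the answer above shows.
```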
2020-01-29 14:52:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9047461152076721, "perplexity": 335.08040177432724}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251799918.97/warc/CC-MAIN-20200129133601-20200129163601-00555.warc.gz"}
https://animation.rwth-aachen.de/publication/0515/
Adaptive cloth simulation using corotational finite elements

Jan Bender, Crispin Deul

Computers & Graphics

In this article we introduce an efficient adaptive cloth simulation method which is based on a reversible $\sqrt{3}$-refinement of corotational finite elements. Our novel approach can handle arbitrary triangle meshes and is not restricted to regular grid meshes which are required by other adaptive methods. Most previous works in the area of adaptive cloth simulation use discrete cloth models like mass-spring systems in combination with a specific subdivision scheme. However, if discrete models are used, the simulation does not converge to the correct solution as the mesh is refined. Therefore, we introduce a cloth model which is based on continuum mechanics since continuous models do not have this problem. We use a linear elasticity model in combination with a corotational formulation to achieve a high performance. Furthermore, we present an efficient method to update the sparse matrix structure after a refinement or coarsening step. The advantage of the $\sqrt{3}$-subdivision scheme is that it generates high quality meshes while the number of triangles increases only by a factor of 3 in each refinement step. However, the original scheme was not intended for the use in an interactive simulation and only defines a mesh refinement. In this article we introduce a combination of the original refinement scheme with a novel coarsening method to realize an adaptive cloth simulation with high quality meshes. The proposed approach allows an efficient mesh adaption and therefore does not cause much overhead. We demonstrate the significant performance gain which can be achieved with our adaptive simulation method in several experiments including a complex garment simulation.

BibTeX:

@ARTICLE{Bender2013,
  author = {Jan Bender and Crispin Deul},
  title = {Adaptive cloth simulation using corotational finite elements},
  journal = {Computers \& Graphics},
  year = {2013},
  volume = {37},
  pages = {820 - 829},
  number = {7},
  doi = {http://dx.doi.org/10.1016/j.cag.2013.04.008},
  url = {http://www.sciencedirect.com/science/article/pii/S0097849313000605},
  issn = {0097-8493}
}
2019-05-19 21:20:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5521945953369141, "perplexity": 889.5644059209262}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255165.2/warc/CC-MAIN-20190519201521-20190519223521-00539.warc.gz"}
https://blog.thedojo.mx/guia-rapida-restructuredtext/
# Quick guide to ReStructuredText

Author: Richard Jones

## Note

This document is an informal introduction to reStructuredText. The “What Next?” section contains links to further resources, including the formal reference.

## Structure

From the outset, let me say that “Structured Text” is probably a bit of a misnomer. It’s more like “Relaxed Text” that uses certain consistent patterns. These patterns are interpreted by a HTML converter to produce “Very Structured Text” that can be used by a web browser.

The most basic pattern recognised is a paragraph (quickref). That’s a chunk of text that is separated by blank lines (one is enough). Paragraphs must have the same indentation – that is, line up at their left edge. Paragraphs that start indented will result in indented quote paragraphs. For example:

    This is a paragraph. It's quite short.

       This paragraph will result in an indented block of
       text, typically used for quoting other text.

    This is another one.

Results in:

This is a paragraph. It’s quite short.

This paragraph will result in an indented block of text, typically used for quoting other text.

This is another one.

Text styles (quickref)

Inside paragraphs and other bodies of text, you may additionally mark text for italics with “*italics*” or bold with “**bold**”. This is called “inline markup”.

If you want something to appear as a fixed-space literal, use “``double back-quotes``”. Note that no further fiddling is done inside the double back-quotes – so asterisks “*” etc. are left alone.

If you find that you want to use one of the “special” characters in text, it will generally be OK – reStructuredText is pretty smart. For example, this lone asterisk * is handled just fine, as is the asterisk in this equation: 5*6=30. If you actually want text *surrounded by asterisks* to not be italicised, then you need to indicate that the asterisk is not special. You do this by placing a backslash just before it, like so “\*” (quickref), or by enclosing it in double back-quotes (inline literals), like this: ``*``

Tip: Think of inline markup as a form of (parentheses) and use it the same way: immediately before and after the text being marked up. Inline markup by itself (surrounded by whitespace) or in the middle of a word won’t be recognized. See the markup spec for full details.

Lists

Lists of items come in three main flavours: enumerated, bulleted and definitions. In all list cases, you may have as many paragraphs, sublists, etc. as you want, as long as the left-hand side of the paragraph or whatever aligns with the first line of text in the list item. Lists must always start a new paragraph – that is, they must appear after a blank line.

enumerated lists (numbers, letters or roman numerals; quickref)

Start a line off with a number or letter followed by a period “.”, right bracket “)” or surrounded by brackets “( )” – whatever you’re comfortable with. All of the following forms are recognised:

    1. numbers

    A. upper-case letters
       and it goes over many lines

       with two paragraphs and all!

    a. lower-case letters

       3. with a sub-list starting at a different number
       4. make sure the numbers are in the correct sequence though!

    I. upper-case roman numerals

    i. lower-case roman numerals

    (1) numbers again

    1) and again

Results in (note: the different enumerated list styles are not always supported by every web browser, so you may not get the full effect here):

- numbers
- upper-case letters and it goes over many lines, with two paragraphs and all!
- lower-case letters
  - with a sub-list starting at a different number
  - make sure the numbers are in the correct sequence though!
- upper-case roman numerals
- lower-case roman numerals
- numbers again
- and again

bulleted lists (quickref)

Just like enumerated lists, start the line off with a bullet point character – either “-”, “+” or “*”:

    * a bullet point using "*"

      - a sub-list using "-"

        + yet another sub-list

      - another item

Results in:

- a bullet point using “*”
  - a sub-list using “-”
    - yet another sub-list
  - another item

definition lists (quickref)

Unlike the other two, the definition lists consist of a term, and the definition of that term. The format of a definition list is:

    what
      Definition lists associate a term with a definition.

    how
      The term is a one-line phrase, and the definition is one or more
      paragraphs or body elements, indented relative to the term.
      Blank lines are not allowed between term and definition.

Results in:

what – Definition lists associate a term with a definition.

how – The term is a one-line phrase, and the definition is one or more paragraphs or body elements, indented relative to the term. Blank lines are not allowed between term and definition.

Preformatting (code samples) (quickref)

To just include a chunk of preformatted, never-to-be-fiddled-with text, finish the prior paragraph with “::”. The preformatted block is finished when the text falls back to the same indentation level as a paragraph prior to the preformatted block. For example:

    An example::

        Whitespace, newlines, blank lines, and all kinds of markup
          (like *this* or \this) is preserved by literal blocks.
      Lookie here, I've dropped an indentation level
      (but not far enough)

    no more example

Results in:

An example:

    Whitespace, newlines, blank lines, and all kinds of markup
      (like *this* or \this) is preserved by literal blocks.
  Lookie here, I've dropped an indentation level
  (but not far enough)

no more example

Note that if a paragraph consists only of “::”, then it’s removed from the output:

    ::

        This is preformatted text, and the
        last "::" paragraph is removed

Results in:

    This is preformatted text, and the last "::" paragraph is removed

Sections (quickref)

To break longer text up into sections, you use section headers. These are a single line of text (one or more words) with adornment: an underline alone, or an underline and an overline together, in dashes “-----”, equals “======”, tildes “~~~~~~” or any of the non-alphanumeric characters = - ` : ' " ~ ^ _ * + # < > that you feel comfortable with. An underline-only adornment is distinct from an overline-and-underline adornment using the same character. The underline/overline must be at least as long as the title text. Be consistent, since all sections marked with the same adornment style are deemed to be at the same level:

    Chapter 1 Title
    ===============

    Section 1.1 Title
    -----------------

    Subsection 1.1.1 Title
    ~~~~~~~~~~~~~~~~~~~~~~

    Chapter 2 Title
    ===============

This results in the following structure, illustrated by simplified pseudo-XML:
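A small addition to the guide (not in the original): the reference implementation of reStructuredText is docutils, so you can experiment with any of the constructs above by feeding a snippet to it from Python. The sketch below uses the real `publish_string` entry point; the sample source string is mine.

```python
# Minimal sketch (not from the original guide): render a reST snippet to HTML
# with docutils, the reference implementation of reStructuredText.
# Requires: pip install docutils
from docutils.core import publish_string

SOURCE = """
Chapter 1 Title
===============

A paragraph with *italics*, **bold** and a literal: ``5*6=30``.

1. numbers

#. automatic numbering also works
"""

html = publish_string(source=SOURCE, writer_name="html").decode("utf-8")
print(html[:200])  # the output is a complete HTML page
```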
2020-07-16 12:54:36
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8289333581924438, "perplexity": 3908.0577558972695}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657169226.65/warc/CC-MAIN-20200716122414-20200716152414-00394.warc.gz"}
http://cartesianproduct.wordpress.com/tag/computer-science/
# Cambridge University and computer science

The University of Cambridge Computer Laboratory in West Cambridge, south of the Madingley Road. (Photo credit: Wikipedia)

Cambridge University has a stellar reputation for Computer Science in the UK. The Computer Laboratory can trace its history back over more than 75 years (to a time when ‘computers’ were humans making calculations), while the wider University can claim Alan Turing for one of its own. And Sinclair Research, ARM, the Cambridge Ring – the list of companies and technical innovations associated with the University is a long one: they even had what was possibly the world’s first webcam.

But, according to today’s Guardian, they might need to work a bit harder with their undergraduates – the Guardian’s 2014 University Guide rates Cambridge as the best University in Britain overall but slots it in only at 8th in computer science and conspicuously gives it the worst rating (1/10) for “value added” – namely the improvement from entry to degree for students.

Now, possibly this is because it is the toughest computer science course in the country to get a place in – the average student needs more than 3 A* grades at A level (and 3 As at AS) to get a place, compared to Imperial, the next place down where 3 A*s would probably set you right – but there has to be more to it than that. It is even harder to get into biosciences at Cambridge and yet they are rated 8/10 in the value added score.

Don’t get me wrong – I am sure Cambridge is fantastic at teaching computer science, but it is also given a lot of money on the basis that it is an elite institution and so it seems reasonable to ask for an explanation (from the Guardian too of course!)

(Incidentally, it seems that Oxford teaches so few undergraduates computer science it cannot be rated at all.)

# More on P and NP

English: Euler diagram for P, NP, NP-Complete, and NP-Hard set of problems. (Photo credit: Wikipedia)

From Frank Vega:

I wanted to answer you one of your comments in your post “Even if P=NP we might see no benefit“, but I saw I can’t do it anymore in that page, maybe due to problem with my internet. I was the person who claim a possible demonstration of problem “P versus NP” in my paper “P versus UP” which is published in IEEE, http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6272487 I’m working in that problem as a hobby since I graduated. I sent a preprint version in Arxiv with the intention of find some kind of collaboration. I also tried to find help in another blogs. But, finally I decided to sent to a journal of IEEE a new version which differs of the preprints in Arxiv that I withdrew because it had some errors. Then, I waited and after a revision in a IEEE journal I was published in 17th August 2012. However, I wrote this paper in Spanish and I had the same version in English. So, I decided to sent again to Arxiv, but they denied me that possibility, therefore, I used a pseudonymous. I also uploaded other papers with that name which are not so serious but reflect the work that I’m doing right now as a hobby too. I love Computer Science and Math. I’m working right now in a project so important as P versus NP, but I do all this as a way of doing the things that I like most although my environment doesn’t allow me at all. I also tried to work with other scientists which have invited me to work with them since I published my paper in IEEE.
Indeed, I don’t want to be annoying with my comments, I just searching the interchange with another people who have the capacity to understand my work, that’s all. Good Luck

Frank’s website is here: http://the-point-of-view-of-frank.blogspot.com/

# Similarity, difference and compression

Perl (Photo credit: Wikipedia)

I am in York this week, being a student and preparing for the literature review seminar I am due to give on Friday – the first staging post on the PhD route, at which I have to persuade the department I have been serious about reading around my subject.

Today I went to a departmental seminar, presented by Professor Ulrike Hahne of Birkbeck College (and latterly of Cardiff University). She spoke on the nature of “similarity” – as is the nature of these things it was a quick rattle through a complex subject and if the summary that follows is inaccurate, then I am to blame and not Professor Hahne.

Professor Hahne is a psychologist but she has worked with computer scientists and so her seminar did cut into computer science issues. She began by stating that it was fair to say that all things are equally the same (or different) – in the sense that one can find an infinite number of things by which two things can be categorised in the same way (object A weighs less than 1kg, object B weighs less than 1kg, they both weigh less than 2kgs and so on). I am not sure I accept this argument in its entirety – in what way is an object different from itself? But that’s a side issue, because her real point was that similarity and difference is a product of human cognition, which I can broadly accept.

So how do we measure similarity and difference? Well the “simplest” way is to measure the “distance” between two stimuli in the standard geometric way – this is how we measure the difference between colours in a colour space (about which more later) ie., the root of the sum of the squares of the distances. This concept has even been developed into the “universal law of generalisation”. This idea has achieved much but has major deficiencies.

Professor Hahne outlined some of the alternatives before describing her interest in (and defence of) the idea that the key to difference was the number of mental transformations required to change one thing into another – for instance, how different is a square from a triangle? Two transformations are required, first to think of the triangle and then to replace the square with the triangle and so on. In a more sophisticated way, the issue is the Kolmogorov complexity of the transformation. The shorter the program we can write to make the transformation, the more similar the objects are.

This, it strikes me, has an important application in computer science, or at least it could have. To go back to the colour space issue again – when I wrote the Perl module Image::Pngslimmer I had to write a lot of code that computed geometrical distances between colour points – a task that Perl is very poor at, maths is slow there. This was to implement the so-called “median cut” algorithm (pleased to say that the Wikipedia article on the median cut algorithm cites my code as an example, and it wasn’t even me who edited it to that, at least as far as I can remember!) where colours are quantised to those at the centre of “median cut boxes” in the colour space. Perhaps there is a much simpler way to make this transformation and so more quickly compress the PNG?
I asked Professor Hahne about this and she confirmed that her collaborator Professor Nick Chater of Warwick University is interested in this very question. When I have got this week out the way I may have a look at his published papers and see if there is anything interesting there.

# Venn diagrams for 11 sets

Ask most people about set theory and you will get a blank look, but ask them about a Venn diagram and they are much more likely to understand: indeed Venn diagrams are so well grasped that Mitt Romney’s campaign for the US Presidency recently attempted to make use of them (though I am not sure it was much of a success, but that’s another story…)

So-called 2-Venn (two circles) and 3-Venn diagrams are very familiar. But higher dimension Venn diagrams that are (relatively) easy to grasp (I’ll explain what I mean by that below) are actually difficult to produce – and until last month nobody had managed to get beyond 7.

So, let’s state a few basic properties of any Venn diagram (here is a good general survey of Venn diagrams) – firstly, each region (face) is unique – there is only one region where the bottom curve intersects with the right curve alone, and only one where it intersects with the left curve alone and only one where all three curves intersect (the grey region) and so on. This image (taken from that survey, apologies) shows a series of set intersections which are not a Venn diagram:

For instance, we can see the two shaded areas both represent intersections of the ‘blue’ and ‘red’ sets.

A second point is that there is a finite number of intersections. In other words segments of curves cannot lie on top of one another (in fact this rule means the intersections must be in the form of Eulerian points of zero length – as, following on from the last post about Aristotle’s Wheel Paradox, any segment of a curve is continuous and has an uncountably infinite number of points).

The 3-Venn example above illustrates some of the key points about easier to understand Venn diagrams – firstly it is simple: no intersection is of more than two curves and secondly it is symmetric. In fact, if we are willing to ignore these points we can draw Venn diagrams of any number of sets, each with less intelligibility than the last. Drawing higher number simple and symmetric Venn diagrams is exceptionally difficult and it has been proved that such $n$-Venn diagrams only exist when $n$ is a prime.

So we have 2-Venns and 3-Venns, and mathematicians have managed 5-Venns: And 7-Venns:

But, until now, simple symmetric 11-Venns have been elusive. Certainly 11-Venns have been around – as the example below shows: This example is symmetric but it is not simple.

Now, though, a breakthrough has been made. Named newroz – the Kurdish name for the new year – the first simple, symmetric 11-Venn has come from Khalegh Mamakani and Frank Ruskey, both of the Department of Computer Science at the University of Victoria, Canada. And it is beautiful:

That said, I don’t think it will be featuring in any presidential campaigns just yet – by definition there are $2^{11} - 1 = 2047$ intersecting regions, probably a bit more than even the keenest voter would care for.

Update: hello to students from F. W. Buchholz High School, hope you find the page useful.

# OLPC “fails in Peru”: Economist

Image via CrunchBase

The One Laptop Per Child (OLPC) project has failed to live up to its sponsors’ expectations in Peru, reports The Economist.
Part of the problem is that students learn faster than many of their teachers, according to Lily Miranda, who runs a computer lab at a state school in San Borja, a middle-class area of Lima. Sandro Marcone, who is in charge of educational technologies at the ministry, agrees. “If teachers are telling kids to turn on computers and copy what is being written on the blackboard, then we have invested in expensive notebooks,” he said.

It certainly looks like that. I was working for the Labour Party when, in his 1995 conference speech, Tony Blair made a pledge to deliver laptops to kids in schools (the exact details escape my memory over this distance but it was not quite at the OLPC level of provision). Even then I was a bit dubious – a computer needs to be for something – but the pledge was also extremely popular.

The problem in Britain – and I suspect in Peru also – was that computers were handed out to people to write documents, spreadsheets and presentations. But if you cannot write good English, or understand percentages, then having a new wordprocessor or spreadsheet is not going to help. In Britain we have created a culture where computer science has been neglected in favour of teaching children how to use (as in type in) wordprocessors. It bores kids of all abilities and no wonder. Computers need to be used as educational tools aligned with the core curriculum subjects if they are going to make a difference. This is why teaching some programming would be far more useful than how to manipulate the last-but-one version of Microsoft Powerpoint.

I sent off my order for the Raspberry Pi – the device which many hope will lead to a revival of computer science (as opposed to ECDL type teaching) in British schools today – I registered for it close to three months ago but was only given the option to “pre-order” it this week, so huge has the demand been. Reminds me of the “28 days” of the Sinclair era – though I am sure Raspberry Pi’s makers are not making their money from cashing money in the bank, given today’s interest rates.

# Computer science in English schools: the debate rages on

World cup England (Photo credit: @Doug88888)

In recent months a new consensus has emerged about teaching ICT (information and communications technology) in England’s schools: namely that it has been sent up a blind alley where kids are taught little more than how to manipulate Microsoft’s “Office” products. That recognition is a good thing, though the way in which the government were finally roused into action – by a speech from a Google bigwig – was not so edifying. If the previous Labour government had a distressing and disappointing attitude of worshipping the ground Bill Gates trod upon, the Conservative wing of the coalition seems mesmerised by Google (not least because of some very strong personal and financial ties between Google and leading Conservatives).

But recognising there is a problem and fixing it are two very different things. The proposals from the Education Secretary, Michael Gove, seem contradictory at best: on the one hand he’s said we need a new curriculum, on the other he’s seemingly refused to do anything to establish one. The revelation last week that he’s axed the bit of his department that might create such a curriculum did not inspire confidence. But the pressure for change is still mounting.
In tomorrow’s Observer John Naughton, author of the celebrated A Brief History of the Future: Origins of the Internet – launches his manifesto for ICT (as it’s a manifesto I have copied it in full, but you should really also read his article here): 1. We welcome the clear signs that the government is alert to the deficiencies in the teaching of information and communications technology (ICT) in the national curriculum, and the indications you and your ministerial colleagues have made that it will be withdrawn and reviewed. We welcome your willingness to institute a public consultation on this matter and the various responses you have already made to submissions from a wide spectrum of interested parties. 2. However, we are concerned that the various rationales currently being offered for radical overhaul of the ICT curriculum are short-sighted and limited. They give too much emphasis to the special pleading of particular institutions and industries (universities and software companies, for example), or frame the need for better teaching in purely economic terms as being good for “UK plc”. These are significant reasons, but they are not the most important justification, which is that in a world shaped and dependent on networking technology, an understanding of computing is essential for informed citizenship. 3. We believe every child should have the opportunity to learn computer science, from primary school up to and including further education. We teach elementary physics to every child, not primarily to train physicists but because each of them lives in a world governed by physical systems. In the same way, every child should learn some computer science from an early age because they live in a world in which computation is ubiquitous. A crucial minority will go on to become the engineers and entrepreneurs who drive the digital economy, so there is a complementary economic motivation for transforming the curriculum. 4. Our emphasis on computer science implies a recognition that this is a serious academic discipline in its own right and not (as many people mistakenly believe) merely acquiring skills in the use of constantly outdated information appliances and shrink-wrapped software. Your BETT speech makes this point clearly, but the message has not yet been received by many headteachers. 5. We welcome your declaration that the Department for Education will henceforth not attempt to “micro-manage” curricula from Whitehall but instead will encourage universities and other institutions to develop high-quality qualifications and curricula in this area. 6. We believe the proper role of government in this context is to frame high-level policy goals in such a way that a wide variety of providers and concerned institutions are incentivised to do what is in the long-term interests of our children and the society they will inherit. An excellent precedent for this has in fact been set by your department in the preface to the National Plan for Music Education, which states: “High-quality music education enables lifelong participation in, and enjoyment of, music, as well as underpinning excellence and professionalism for those who choose not to pursue a career in music. 
Children from all backgrounds and every part of the UK should have the opportunity to learn a musical instrument; to make music with others; to learn to sing; and to have the opportunity to progress to the next level of excellence.” Substituting “computing” for “music” in this declaration would provide a good illustration of what we have in mind as a goal for transforming the teaching of computing in schools. Without clear leadership of this sort, there is a danger schools will see the withdrawal of the programme of study for ICT in England as a reason for their school to withdraw from the subject in favour of English baccalaureate subjects. 7. Like you, we are encouraged by the astonishing level of public interest in the Raspberry Pi project, which can bring affordable, programmable computers within the reach of every child. But understanding how an individual machine works is only part of the story. We are rapidly moving from a world where the PC was the computer to one where “the network is the computer”. The evolution of “cloud computing” means that the world wide web is morphing into the “world wide computer” and the teaching of computer science needs to take that on board. 8. In considering how the transformation of the curriculum can be achieved, we urge you to harness a resource that has hitherto been relatively under-utilised – school governors. It would be very helpful if you could put the government’s weight behind the strategic information pack on Teaching Computer Science in Schools prepared by the Computing at School group, which has been sent to every head teacher of a state-maintained secondary school in England to ensure that this document is shared with the governors of these schools. 9. We recognise that a key obstacle to achieving the necessary transformation of the computing curriculum is the shortage of skilled and enthusiastic teachers. The government has already recognised an analogous problem with regard to mathematics teachers and we recommend similar initiatives be undertaken with respect to computer science. We need to a) encourage more qualified professionals to become ICT teachers and b) offer a national programme of continuing professional development (CPD) to enhance the teachers’ skills. It is unreasonable to expect a national CPD programme to appear out of thin air from “the community”: your department must have a role in resourcing it. 10. We recognise that teaching of computer science will inevitably start from a very low base in most UK schools. To incentivise them to adopt a rigorous discipline, computer science GCSEs must be added to the English baccalaureate. Without such incentives, take-up of a new subject whose GCSE grades will be more maths-like than ICT-like will be low. Like it or not, headteachers are driven by the measures that you create. 11. In summary, we have a once-in-a-lifetime opportunity to prepare our children to play a full part in the world they will inherit. Doing so will yield economic and social benefits – and ensure they will be on the right side of the “program or be programmed” choice that faces every citizen in a networked world. One thing has occupied my free time more than anything else these last few days – Francis Spufford‘s marvellous work of history and imagination, Red Plenty. The book is a marvel in joining linear programming, economics, mathematics, cybernetics, computing, chemistry, textiles, politics, sociology, popular music, genetics and history all in one long fabric. 
The book is not quite a novel but nor is it history; the author himself calls it a “fairy tale”. The ground on which it works is the Soviet Union between Stalin’s death in 1953 and what might be considered as the cementing in of what was later called the “era of stagnation” in 1970. The main characters are the scientists and engineers who saw, in that time, a new hope for the USSR in Khrushchev’s claims that “this generation will know communism” – with a 1980 deadline – and who was willing to indulge their hopes of a rational, mathematical reshaping of the Soviet system.

Novelisations of actual events and the actions of real and fictional people are interwoven with passages of historical and scientific commentary and the effect is that we can sympathise with the hopes and dreams of the scientists but also know that they are destined for heart-breaking (for many at least) failure as the essential gangsterised and cynical nature of the state created by Stalin crushes their hopes which, in any case, were always naive at best.

But along the way we get to understand why the Soviet Union excelled at maths (and to a lesser extent computing science) – as it was both free of the pollutant of Marxism-Leninism but also valued by the Marxist-Leninists – scared the west with epic economic growth in the 1950s and so failed its citizens economically – nobody lost their job or reputation by failing the consumer, but if you failed to deliver a capital good you risked both.

We also get a portrait of a society that is much more granulated than the simple riffs of anti-communism would let us believe. At the top we see Khrushchev was a fool who had done many evil things but he also hoped to make amends, Brezhnev and Kosygin the champions of a new wave of repression and stultification but also men frightened by how earlier reforms led to massacres and desperate not to see that return.

But most of all we see the scientists and their hopes get ground down. They all begin as believers and have only three choices in the end: to rebel and lose every physical thing, to compromise and lose hope or to opt out of the real world and choose only science. The bitterness of their defeat, and that of all those who hoped for a better world after Stalin, is summed up in the words of a (real) song, sung in the book by Alexander Galich, a writer of popular songs turned underground critic and – after the shock of the public performance, recounted here, of his satirical works – exile.

We’ve called ourselves adults for ages
We don’t try to pretend we’re still young
We’ve given up digging for treasure
Far away in the storybook sun.

(As a companion work I’d also recommend Khrushchev: The Man and His Era)

# Computer scientists’ lousy citation style

Image via Wikipedia

I am reading this book: Soft Real-Time Systems: Predictability vs. Efficiency, and I am struck, once again, by the truly lousy style of publication reference that seems to be preferred by so many computer scientists. The style used in the book appears to be that favoured by the American Mathematical Society – the so-called “authorship trigraph” – with references made up of letters from the author’s name followed by the last two figures of the year of original publication eg., [Bak91] which references in the bibliography:

[Bak91] T.P. Baker. Stack-based scheduling of real-time processes. Journal of Real Time Systems, 3, 1991.

Now it is possible, if I were an expert in the field that I might recognise this reference, but it is far from helpful.
When referencing papers written by multiple authors the system is hopeless – using the first letters of the first three authors and ignoring the rest, eg., $[DGK^+02]$ is a real reference in the book to a paper with eight authors. I really doubt many people would get that straight away. But at least this reference system contains slightly more than the IEEE’s citation system, which demands papers are merely referenced by a bracketed number in the text, eg., [1].

These reference systems are so widely used that I worried that my own use of the Chicago system – which specifies author and year, eg., (Denning, 1970), would be frowned upon in Birkbeck – but a re-read of the regulations showed their demand was for a consistent and well-recognised system. The ACM, too, promote a sensible citation format eg., [Denning 1970].

Does this matter? Yes. I am sure many readers of these books and papers are students who are conducting literature reviews or similar exercises. Reading the original reference may often be important and having to flick back and forth to a bibliography to check the meaning of an incomprehensible reference is not calculated to add to the sum of human happiness.

(I don’t have any real complaints about the book though – except that the translation is plainly a bit stilted – for instance, the first sentence of the book refers to real time systems being investigated “in the last years” – a fairly common mistake in syntax from non-English speakers and one that the editors really ought to have corrected. But the occasional infelicity of language does not detract from the book’s overall appeal.)

# Working set heuristics and the Linux kernel: my MSc report

My MSc project was titled “Applying Working Set Heuristics to the Linux Kernel” and my aim was to test some local page replacement policies in Linux, which uses a global page replacement algorithm, based on the “2Q” principle. There is a precedent for this: the so-called “swap token” is a local page replacement policy that has been used in the Linux kernel for some years. My aim was to see if a local replacement policy graft could help tackle “thrashing” (when a computer spends so much time trying to manage memory resources – generally swapping pages back and forth to disk – it makes little or no progress with the task itself). The full report (uncorrected – the typos have made me shudder all the same) is linked at the end; what follows is a relatively brief and simplified summary.

Fundamentally I tried two approaches: acting on large processes when the number of free pages fell to one of the watermark levels used in the kernel and acting on the process last run or most likely to run next. For the first my thinking – backed by some empirical evidence – was that the largest process tended to consume much more memory than even the second largest. For the second the thought was that making the process next to run more memory efficient would make the system as a whole run faster and that, in any case, the process next to run was also quite likely (and again some empirical evidence supported this) to be the biggest consumer of memory in the system.

To begin I reviewed the theory that underlies the claims for the superiority of the working set approach to memory management – particularly that it can run optimally with lower resource use than an LRU (least recently used) policy.
Peter Denning, the discoverer of the “working set” method and its chief promoter, argued that programs in execution do not smoothly and slowly change their fields of locality, but transition from region to region rapidly and frequently. The evidence I collected – using the Valgrind program and some software I wrote to interpret its output – showed that Denning’s arguments appear valid for today’s programs. Here, for instance, is the memory access pattern of Mozilla Firefox:

Working set size can therefore vary rapidly, as this graph shows:

It can be seen that peaks of working set size often occur at the point of phase transition – as the process will be accessing memory from the two phases at the same time or in rapid succession.

Denning’s argument is that the local policy suggested by the working set method allows for this rapid change of locality – as the memory space allocated to a given program is free to go up and down (subject to the overall constraint on resources, of course). He also argued that the working set method will – at least in theory – deliver a better space time product (a measure of overall memory use) than a local LRU policy. Again my results confirmed his earlier findings in that they showed that, for a given average size of a set of pages in memory, the working set method will ensure longer times between page faults, compared to a local LRU policy – as shown in this graph:

Here the red line marks the theoretical performance of a working set replacement policy and the blue line that of a local LRU policy. The y-axis marks the average number of instructions executed between page faults, the x-axis the average resident set size. The working set method clearly outperforms the LRU policy at low resident set values. The ‘knee’ in either plot where $\frac{dy}{dx}$ is maximised is also the point of lowest space time product – and this occurs at a much lower value for the working set method than for local LRU.

So, if Denning’s claims for the working set method are valid, why is it that no mainstream operating system uses it? VMS and Windows NT (which share a common heritage) use a local page replacement policy, but both are closer to the page-fault-frequency replacement algorithm – which varies fixed allocations based on fault counts – than a true working set-based replacement policy. The working set method is just too difficult to implement – pages need to be marked for the time they are used and, to really secure the space-time product benefit claimed, they also need to be evicted from memory at a specified time. Doing any of that would require specialised hardware or complex software or both, so approximations must be used.

“Clock pressure”

For my experiments I concentrated on manipulating the “CLOCK” element of the page replacement algorithm: this removes or downgrades pages if they have not been accessed in the time between alternate sweeps of an imaginary second hand of an equally imaginary clock. “Clock pressure” could be increased – ie., pages made more vulnerable to eviction – by systematically marking them as unaccessed, while pages could be preserved in memory by marking them all as having been accessed.

The test environment was compiling the Linux kernel – and I showed that the time taken for this was highly dependent on the memory available in a system:

The red line suggests that, for all but the lowest memory, the compile time is proportional to $M^{-4}$ where $M$ is the system memory.
I don’t claim this is a fundamental relationship, merely what was observed in this particular set up (I have a gut feeling it is related to the number of active threads – this kernel was built using the -j3 switch and at the low memory end the swapper was probably more active than the build, but again I have not explored this).

Watermarks

The first set of patches I tried were based on waiting for free memory in the system to sink to one of the “watermarks” the kernel uses to trigger page replacement. My patches looked for the largest process then either looked to increase clock pressure – ie., make the pages from this large process more likely to be removed – or to decrease it, ie., to make it more likely these pages would be preserved in memory.

In fact the result in either case was similar – at higher memory values there seemed to be a small but noticeable decline in performance but at low memory values performance declined sharply – possibly because moving pages from one of the “queues” of cached pages involves locking (though, as later results showed, also likely because the process simply is not optimal in its interaction with the existing mechanisms to keep or evict pages). The graph below shows a typical result of an attempt to increase clock pressure – patched times are marked with a blue cross.

The second approach was to interact with the “completely fair scheduler” (CFS) and increase or decrease clock pressure on the process least likely to run or most likely to run. The CFS orders processes in a red-black tree (a semi-balanced tree) and the rightmost node is the process least likely to run next and the leftmost the process most likely to run next (as it has run for the shortest amount of virtual time). As before the idea was to either free memory (increase clock pressure) or hold needed pages in memory (decrease clock pressure). The flowchart below illustrates the mechanism used for the leftmost process (and decreasing clock pressure):

But again the results were generally similar – a general decline, and a sharp decline at low memory values. (In fact, locking in memory of the leftmost process actually had little effect – as shown below:) But when the same approach was taken to the rightmost process – ie the process that has run for the longest time (and presumably may also run for a long time in the future), the result was a catastrophic decline in performance at small memory values:

And what is behind the slowdown? Using profiling tools the biggest reason seems to be that the wrong pages are being pushed out of the caches and need to be fetched back in. At 40MB of free memory both patched and unpatched kernels show similar profiles with most time spent scheduling and waiting for I/O requests – but the slowness of the patched kernel shows that this has to be done many more times there.

There is much more in the report itself – including an examination of Denning’s formulation of the space-time product – I conclude his is flawed (update: in fairness to Peter Denning, who has pointed this out to me, this is as regards his approximation of the space-time product: Denning’s modelling in the 70s also accounted for the additional time that was required to manage the working set) as it disregards the time required to handle page replacement – and the above is all a (necessary) simplification of what is in the report – so if you are interested please read that.
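As an aside (my own illustration, not code from the report or from the kernel patches), the difference between a fixed-allocation LRU policy and a working-set policy shows up even in a toy simulation of a reference string whose locality changes size between phases:

```python
# Toy illustration (mine, not from the MSc report): page faults under a
# fixed-allocation LRU policy versus a working-set policy with window tau,
# on a reference string whose locality changes size between phases.
import random
from collections import OrderedDict

random.seed(1)
refs = []
for base, span, length in [(0, 8, 4000), (100, 25, 4000), (200, 8, 4000)]:
    refs += [base + random.randrange(span) for _ in range(length)]

def lru(refs, frames):
    """LRU with a fixed number of frames; returns (faults, resident size)."""
    mem, faults = OrderedDict(), 0
    for p in refs:
        if p in mem:
            mem.move_to_end(p)
        else:
            faults += 1
            if len(mem) >= frames:
                mem.popitem(last=False)   # evict the least recently used page
            mem[p] = None
    return faults, frames

def working_set(refs, tau):
    """Keep exactly the pages referenced in the last tau references."""
    last_use, faults, resident = {}, 0, 0
    for t, p in enumerate(refs):
        ws = {q for q, u in last_use.items() if t - u <= tau}
        if p not in ws:
            faults += 1
        last_use[p] = t
        resident += len(ws | {p})
    return faults, resident / len(refs)

print("fixed LRU, 10 frames  :", lru(refs, 10))
print("working set, tau = 50 :", working_set(refs, 50))
```

With these made-up numbers the fixed allocation thrashes in the middle phase, while the working-set policy grows and shrinks its resident set with the locality – the adaptivity that Denning’s argument relies on.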
Applying working set heuristics to the Linux kernel

# Master of Science

I am very pleased to report that I have been awarded the degree of Master of Science in Computer Science – with distinction. So, a PhD next? Maybe, need to explore the options. I think I will post up my project report in a few days for those of you who might be interested in Linux memory management and page reclaim.
2013-06-19 20:36:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 7, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39348867535591125, "perplexity": 1270.0940962192574}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709135115/warc/CC-MAIN-20130516125855-00051-ip-10-60-113-184.ec2.internal.warc.gz"}
https://people.maths.bris.ac.uk/~matyd/GroupNames/480/S3xC4.Dic5.html
Copied to clipboard ## G = S3×C4.Dic5order 480 = 25·3·5 ### Direct product of S3 and C4.Dic5 Series: Derived Chief Lower central Upper central Derived series C1 — C30 — S3×C4.Dic5 Chief series C1 — C5 — C15 — C30 — C60 — C3×C5⋊2C8 — S3×C5⋊2C8 — S3×C4.Dic5 Lower central C15 — C30 — S3×C4.Dic5 Upper central C1 — C4 — C2×C4 Generators and relations for S3×C4.Dic5 G = < a,b,c,d,e | a3=b2=c4=1, d10=c2, e2=d5, bab=a-1, ac=ca, ad=da, ae=ea, bc=cb, bd=db, be=eb, cd=dc, ece-1=c-1, ede-1=d9 > Subgroups: 412 in 136 conjugacy classes, 64 normal (50 characteristic) C1, C2, C2, C3, C4, C4, C22, C22, C5, S3, S3, C6, C6, C8, C2×C4, C2×C4, C23, C10, C10, Dic3, C12, D6, D6, C2×C6, C15, C2×C8, M4(2), C22×C4, C20, C20, C2×C10, C2×C10, C3⋊C8, C24, C4×S3, C2×Dic3, C2×C12, C22×S3, C5×S3, C5×S3, C30, C30, C2×M4(2), C52C8, C52C8, C2×C20, C2×C20, C22×C10, S3×C8, C8⋊S3, C4.Dic3, C3×M4(2), S3×C2×C4, C5×Dic3, C60, S3×C10, S3×C10, C2×C30, C2×C52C8, C4.Dic5, C4.Dic5, C22×C20, S3×M4(2), C3×C52C8, C153C8, S3×C20, C10×Dic3, C2×C60, S3×C2×C10, C2×C4.Dic5, S3×C52C8, D6.Dic5, C3×C4.Dic5, C60.7C4, S3×C2×C20, S3×C4.Dic5 Quotients: C1, C2, C4, C22, S3, C2×C4, C23, D5, D6, M4(2), C22×C4, Dic5, D10, C4×S3, C22×S3, C2×M4(2), C2×Dic5, C22×D5, S3×C2×C4, S3×D5, C4.Dic5, C22×Dic5, S3×M4(2), S3×Dic5, C2×S3×D5, C2×C4.Dic5, C2×S3×Dic5, S3×C4.Dic5 Smallest permutation representation of S3×C4.Dic5 On 120 points Generators in S120 (1 103 41)(2 104 42)(3 105 43)(4 106 44)(5 107 45)(6 108 46)(7 109 47)(8 110 48)(9 111 49)(10 112 50)(11 113 51)(12 114 52)(13 115 53)(14 116 54)(15 117 55)(16 118 56)(17 119 57)(18 120 58)(19 101 59)(20 102 60)(21 62 90)(22 63 91)(23 64 92)(24 65 93)(25 66 94)(26 67 95)(27 68 96)(28 69 97)(29 70 98)(30 71 99)(31 72 100)(32 73 81)(33 74 82)(34 75 83)(35 76 84)(36 77 85)(37 78 86)(38 79 87)(39 80 88)(40 61 89) (21 90)(22 91)(23 92)(24 93)(25 94)(26 95)(27 96)(28 97)(29 98)(30 99)(31 100)(32 81)(33 82)(34 83)(35 84)(36 85)(37 86)(38 87)(39 88)(40 89)(41 103)(42 104)(43 105)(44 106)(45 107)(46 108)(47 109)(48 110)(49 111)(50 112)(51 113)(52 114)(53 115)(54 116)(55 117)(56 118)(57 119)(58 120)(59 101)(60 102) (1 6 11 16)(2 7 12 17)(3 8 13 18)(4 9 14 19)(5 10 15 20)(21 36 31 26)(22 37 32 27)(23 38 33 28)(24 39 34 29)(25 40 35 30)(41 46 51 56)(42 47 52 57)(43 48 53 58)(44 49 54 59)(45 50 55 60)(61 76 71 66)(62 77 72 67)(63 78 73 68)(64 79 74 69)(65 80 75 70)(81 96 91 86)(82 97 92 87)(83 98 93 88)(84 99 94 89)(85 100 95 90)(101 106 111 116)(102 107 112 117)(103 108 113 118)(104 109 114 119)(105 110 115 120) (1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20)(21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40)(41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60)(61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80)(81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100)(101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120) (1 74 6 79 11 64 16 69)(2 63 7 68 12 73 17 78)(3 72 8 77 13 62 18 67)(4 61 9 66 14 71 19 76)(5 70 10 75 15 80 20 65)(21 58 26 43 31 48 36 53)(22 47 27 52 32 57 37 42)(23 56 28 41 33 46 38 51)(24 45 29 50 34 55 39 60)(25 54 30 59 35 44 40 49)(81 119 86 104 91 109 96 114)(82 108 87 113 92 118 97 103)(83 117 88 102 93 107 98 112)(84 106 89 111 94 116 99 101)(85 115 90 120 95 105 100 110) G:=sub<Sym(120)| 
(1,103,41)(2,104,42)(3,105,43)(4,106,44)(5,107,45)(6,108,46)(7,109,47)(8,110,48)(9,111,49)(10,112,50)(11,113,51)(12,114,52)(13,115,53)(14,116,54)(15,117,55)(16,118,56)(17,119,57)(18,120,58)(19,101,59)(20,102,60)(21,62,90)(22,63,91)(23,64,92)(24,65,93)(25,66,94)(26,67,95)(27,68,96)(28,69,97)(29,70,98)(30,71,99)(31,72,100)(32,73,81)(33,74,82)(34,75,83)(35,76,84)(36,77,85)(37,78,86)(38,79,87)(39,80,88)(40,61,89), (21,90)(22,91)(23,92)(24,93)(25,94)(26,95)(27,96)(28,97)(29,98)(30,99)(31,100)(32,81)(33,82)(34,83)(35,84)(36,85)(37,86)(38,87)(39,88)(40,89)(41,103)(42,104)(43,105)(44,106)(45,107)(46,108)(47,109)(48,110)(49,111)(50,112)(51,113)(52,114)(53,115)(54,116)(55,117)(56,118)(57,119)(58,120)(59,101)(60,102), (1,6,11,16)(2,7,12,17)(3,8,13,18)(4,9,14,19)(5,10,15,20)(21,36,31,26)(22,37,32,27)(23,38,33,28)(24,39,34,29)(25,40,35,30)(41,46,51,56)(42,47,52,57)(43,48,53,58)(44,49,54,59)(45,50,55,60)(61,76,71,66)(62,77,72,67)(63,78,73,68)(64,79,74,69)(65,80,75,70)(81,96,91,86)(82,97,92,87)(83,98,93,88)(84,99,94,89)(85,100,95,90)(101,106,111,116)(102,107,112,117)(103,108,113,118)(104,109,114,119)(105,110,115,120), (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20)(21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40)(41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60)(61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80)(81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100)(101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120), (1,74,6,79,11,64,16,69)(2,63,7,68,12,73,17,78)(3,72,8,77,13,62,18,67)(4,61,9,66,14,71,19,76)(5,70,10,75,15,80,20,65)(21,58,26,43,31,48,36,53)(22,47,27,52,32,57,37,42)(23,56,28,41,33,46,38,51)(24,45,29,50,34,55,39,60)(25,54,30,59,35,44,40,49)(81,119,86,104,91,109,96,114)(82,108,87,113,92,118,97,103)(83,117,88,102,93,107,98,112)(84,106,89,111,94,116,99,101)(85,115,90,120,95,105,100,110)>; G:=Group( (1,103,41)(2,104,42)(3,105,43)(4,106,44)(5,107,45)(6,108,46)(7,109,47)(8,110,48)(9,111,49)(10,112,50)(11,113,51)(12,114,52)(13,115,53)(14,116,54)(15,117,55)(16,118,56)(17,119,57)(18,120,58)(19,101,59)(20,102,60)(21,62,90)(22,63,91)(23,64,92)(24,65,93)(25,66,94)(26,67,95)(27,68,96)(28,69,97)(29,70,98)(30,71,99)(31,72,100)(32,73,81)(33,74,82)(34,75,83)(35,76,84)(36,77,85)(37,78,86)(38,79,87)(39,80,88)(40,61,89), (21,90)(22,91)(23,92)(24,93)(25,94)(26,95)(27,96)(28,97)(29,98)(30,99)(31,100)(32,81)(33,82)(34,83)(35,84)(36,85)(37,86)(38,87)(39,88)(40,89)(41,103)(42,104)(43,105)(44,106)(45,107)(46,108)(47,109)(48,110)(49,111)(50,112)(51,113)(52,114)(53,115)(54,116)(55,117)(56,118)(57,119)(58,120)(59,101)(60,102), (1,6,11,16)(2,7,12,17)(3,8,13,18)(4,9,14,19)(5,10,15,20)(21,36,31,26)(22,37,32,27)(23,38,33,28)(24,39,34,29)(25,40,35,30)(41,46,51,56)(42,47,52,57)(43,48,53,58)(44,49,54,59)(45,50,55,60)(61,76,71,66)(62,77,72,67)(63,78,73,68)(64,79,74,69)(65,80,75,70)(81,96,91,86)(82,97,92,87)(83,98,93,88)(84,99,94,89)(85,100,95,90)(101,106,111,116)(102,107,112,117)(103,108,113,118)(104,109,114,119)(105,110,115,120), (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20)(21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40)(41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60)(61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80)(81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100)(101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120), 
(1,74,6,79,11,64,16,69)(2,63,7,68,12,73,17,78)(3,72,8,77,13,62,18,67)(4,61,9,66,14,71,19,76)(5,70,10,75,15,80,20,65)(21,58,26,43,31,48,36,53)(22,47,27,52,32,57,37,42)(23,56,28,41,33,46,38,51)(24,45,29,50,34,55,39,60)(25,54,30,59,35,44,40,49)(81,119,86,104,91,109,96,114)(82,108,87,113,92,118,97,103)(83,117,88,102,93,107,98,112)(84,106,89,111,94,116,99,101)(85,115,90,120,95,105,100,110) ); G=PermutationGroup([[(1,103,41),(2,104,42),(3,105,43),(4,106,44),(5,107,45),(6,108,46),(7,109,47),(8,110,48),(9,111,49),(10,112,50),(11,113,51),(12,114,52),(13,115,53),(14,116,54),(15,117,55),(16,118,56),(17,119,57),(18,120,58),(19,101,59),(20,102,60),(21,62,90),(22,63,91),(23,64,92),(24,65,93),(25,66,94),(26,67,95),(27,68,96),(28,69,97),(29,70,98),(30,71,99),(31,72,100),(32,73,81),(33,74,82),(34,75,83),(35,76,84),(36,77,85),(37,78,86),(38,79,87),(39,80,88),(40,61,89)], [(21,90),(22,91),(23,92),(24,93),(25,94),(26,95),(27,96),(28,97),(29,98),(30,99),(31,100),(32,81),(33,82),(34,83),(35,84),(36,85),(37,86),(38,87),(39,88),(40,89),(41,103),(42,104),(43,105),(44,106),(45,107),(46,108),(47,109),(48,110),(49,111),(50,112),(51,113),(52,114),(53,115),(54,116),(55,117),(56,118),(57,119),(58,120),(59,101),(60,102)], [(1,6,11,16),(2,7,12,17),(3,8,13,18),(4,9,14,19),(5,10,15,20),(21,36,31,26),(22,37,32,27),(23,38,33,28),(24,39,34,29),(25,40,35,30),(41,46,51,56),(42,47,52,57),(43,48,53,58),(44,49,54,59),(45,50,55,60),(61,76,71,66),(62,77,72,67),(63,78,73,68),(64,79,74,69),(65,80,75,70),(81,96,91,86),(82,97,92,87),(83,98,93,88),(84,99,94,89),(85,100,95,90),(101,106,111,116),(102,107,112,117),(103,108,113,118),(104,109,114,119),(105,110,115,120)], [(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20),(21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40),(41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60),(61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80),(81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100),(101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120)], [(1,74,6,79,11,64,16,69),(2,63,7,68,12,73,17,78),(3,72,8,77,13,62,18,67),(4,61,9,66,14,71,19,76),(5,70,10,75,15,80,20,65),(21,58,26,43,31,48,36,53),(22,47,27,52,32,57,37,42),(23,56,28,41,33,46,38,51),(24,45,29,50,34,55,39,60),(25,54,30,59,35,44,40,49),(81,119,86,104,91,109,96,114),(82,108,87,113,92,118,97,103),(83,117,88,102,93,107,98,112),(84,106,89,111,94,116,99,101),(85,115,90,120,95,105,100,110)]]) 78 conjugacy classes class 1 2A 2B 2C 2D 2E 3 4A 4B 4C 4D 4E 4F 5A 5B 6A 6B 8A 8B 8C 8D 8E 8F 8G 8H 10A ··· 10F 10G ··· 10N 12A 12B 12C 15A 15B 20A ··· 20H 20I ··· 20P 24A 24B 24C 24D 30A ··· 30F 60A ··· 60H order 1 2 2 2 2 2 3 4 4 4 4 4 4 5 5 6 6 8 8 8 8 8 8 8 8 10 ··· 10 10 ··· 10 12 12 12 15 15 20 ··· 20 20 ··· 20 24 24 24 24 30 ··· 30 60 ··· 60 size 1 1 2 3 3 6 2 1 1 2 3 3 6 2 2 2 4 10 10 10 10 30 30 30 30 2 ··· 2 6 ··· 6 2 2 4 4 4 2 ··· 2 6 ··· 6 20 20 20 20 4 ··· 4 4 ··· 4 78 irreducible representations dim 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 4 4 4 4 4 4 type + + + + + + + + + + - + - + - + - + - image C1 C2 C2 C2 C2 C2 C4 C4 C4 S3 D5 D6 D6 M4(2) Dic5 D10 Dic5 D10 Dic5 C4×S3 C4×S3 C4.Dic5 S3×D5 S3×M4(2) S3×Dic5 C2×S3×D5 S3×Dic5 S3×C4.Dic5 kernel S3×C4.Dic5 S3×C5⋊2C8 D6.Dic5 C3×C4.Dic5 C60.7C4 S3×C2×C20 S3×C20 C10×Dic3 S3×C2×C10 C4.Dic5 S3×C2×C4 C5⋊2C8 C2×C20 C5×S3 C4×S3 C4×S3 C2×Dic3 C2×C12 C22×S3 C20 C2×C10 S3 C2×C4 C5 C4 C4 C22 C1 # reps 1 2 2 1 1 1 4 2 2 1 2 2 1 4 4 4 2 2 2 2 2 16 2 2 2 2 2 8 Matrix representation of S3×C4.Dic5 in GL4(𝔽241) generated by 1 0 0 0 0 1 0 0 0 0 1 213 0 0 
112 239 ,
240 0 0 0 0 240 0 0 0 0 240 28 0 0 0 1 ,
64 0 0 0 0 177 0 0 0 0 1 0 0 0 0 1 ,
6 0 0 0 0 40 0 0 0 0 1 0 0 0 0 1 ,
0 1 0 0 64 0 0 0 0 0 1 0 0 0 0 1

G:=sub<GL(4,GF(241))| [1,0,0,0,0,1,0,0,0,0,1,112,0,0,213,239],[240,0,0,0,0,240,0,0,0,0,240,0,0,0,28,1],[64,0,0,0,0,177,0,0,0,0,1,0,0,0,0,1],[6,0,0,0,0,40,0,0,0,0,1,0,0,0,0,1],[0,64,0,0,1,0,0,0,0,0,1,0,0,0,0,1] >;

S3×C4.Dic5 in GAP, Magma, Sage, TeX

S_3\times C_4.{\rm Dic}_5 % in TeX
G:=Group("S3xC4.Dic5"); // GroupNames label
G:=SmallGroup(480,363); // by ID
G=gap.SmallGroup(480,363); # by ID
G:=PCGroup([7,-2,-2,-2,-2,-2,-3,-5,56,422,80,1356,18822]); // Polycyclic
G:=Group<a,b,c,d,e|a^3=b^2=c^4=1,d^10=c^2,e^2=d^5,b*a*b=a^-1,a*c=c*a,a*d=d*a,a*e=e*a,b*c=c*b,b*d=d*b,b*e=e*b,c*d=d*c,e*c*e^-1=c^-1,e*d*e^-1=d^9>; // generators/relations
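Not part of the original page: a minimal GAP session sketching how one might load this group and sanity-check a few of the invariants listed above. It assumes only the small-group ID SmallGroup(480,363) quoted above and standard GAP library functions.

```gap
# Load the group by the small-group ID given above and check
# a few of the invariants reported on this page.
G := SmallGroup(480, 363);;           # S3 x C4.Dic5, by ID
Print(Order(G), "\n");                # expected: 480
Print(NrConjugacyClasses(G), "\n");   # expected: 78, matching the class table
Print(StructureDescription(G), "\n"); # human-readable structure string
```

As the page itself notes, the same ID can be loaded from Sage via gap.SmallGroup(480,363) or from Magma via SmallGroup(480,363).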
2021-12-05 19:44:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9972947835922241, "perplexity": 3187.461446143131}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363216.90/warc/CC-MAIN-20211205191620-20211205221620-00135.warc.gz"}
https://brilliant.org/problems/there-must-be-odd/
# There Must Be Odd

Let $$T$$ be the number of nonempty subsets of $$\{1,2,\ldots,100\}$$ containing at least one odd integer. Determine the remainder when $$T$$ is divided by $$1000$$.
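Not part of the original problem page: a worked sketch of one standard way to set up the count, using the complement argument (worth double-checking independently).

```latex
% Subsets with no odd element are exactly the subsets of the 50 even
% numbers, so counting the complement gives T directly.
\begin{align*}
T &= 2^{100} - 2^{50},\\
2^{10} &\equiv 24,\quad 2^{20} \equiv 576,\quad 2^{40} \equiv 776,\quad 2^{50} \equiv 624 \pmod{1000},\\
2^{80} &\equiv 176,\quad 2^{100} \equiv 176\cdot 576 \equiv 376 \pmod{1000},\\
T &\equiv 376 - 624 \equiv 752 \pmod{1000}.
\end{align*}
```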
2017-10-22 03:14:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8169299960136414, "perplexity": 336.0075745315491}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825057.91/warc/CC-MAIN-20171022022540-20171022042540-00144.warc.gz"}