https://www.physicsforums.com/threads/is-this-the-correct-approach.568793/
# Is this the correct approach?

1. Jan 18, 2012

### Jamin2112

1. The problem statement, all variables and given/known data

2. Relevant equations

Fourier transform, Gaussian filter

3. The attempt at a solution

First of all, someone in the class did it correctly and here's what they said: Second of all, let me make sure I have a correct understanding of this. We have a 20 x 262144 matrix (note that 262144 = 64³); each row i, column j entry is the amplitude of the signal. We're supposed to FFT each row and sum the rows before dividing by 20, to get the average power at each frequency. Further, if you look at the code at the bottom you see that the 262144 columns are actually frequencies meant to be reshaped into a 64 x 64 x 64 array. So we have a 64 x 64 x 64 array, each index being a frequency and each containing an average power value. Find the location of the maximum power value. Put a Gaussian filter around the frequency we found and do an inverse FFT. Then look at the plot of the average signal with the filter. Basically correct?
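The steps described above can be sketched in NumPy. This is only an illustration: the actual 20 x 262144 data matrix isn't available here, so random noise stands in for it, and the filter width `tau` is an arbitrary choice.

```python
import numpy as np

n = 64
data = np.random.randn(20, n**3)  # stand-in for the 20 measurements

# FFT each realization (reshaped to 64x64x64), then average the spectra.
avg = np.zeros((n, n, n), dtype=complex)
for row in data:
    avg += np.fft.fftn(row.reshape(n, n, n))
avg /= 20

# Location of the dominant frequency in the averaged spectrum.
kx, ky, kz = np.unravel_index(np.argmax(np.abs(avg)), avg.shape)

# Gaussian filter centered on that frequency, applied to one realization,
# then inverse FFT back to the spatial domain.
X, Y, Z = np.meshgrid(np.arange(n), np.arange(n), np.arange(n), indexing="ij")
tau = 0.001  # filter width; arbitrary for this sketch
filt = np.exp(-tau * ((X - kx)**2 + (Y - ky)**2 + (Z - kz)**2))
signal = np.fft.ifftn(filt * np.fft.fftn(data[0].reshape(n, n, n)))
```

With real (noisy) data the averaging suppresses the zero-mean noise, so the argmax picks out the signal's center frequency before filtering.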
https://math.hecker.org/2014/03/30/linear-algebra-and-its-applications-exercise-3-1-18/
## Linear Algebra and Its Applications, Exercise 3.1.18 Exercise 3.1.18. Suppose that $S = \{0\}$ is the subspace of $\mathbb{R}^4$ containing only the origin. What is the orthogonal complement of $S$ ($S^\perp$)? What is $S^\perp$ if $S$ is the subspace of $\mathbb{R}^4$ spanned by the vector $(0, 0, 0, 1)$? Answer: Every vector is orthogonal to the zero vector. (In other words, $v^T 0 = 0$ for all $v$.) So the orthogonal complement of $S = \{0\}$ is the entire vector space, or in this case $S^\perp = \mathbb{R}^4$. If $S$ is spanned by the vector $(0, 0, 0, 1)$ then all vectors in $S$ are of the form $(0, 0, 0, d)$. Any vector whose last entry is zero is orthogonal to vectors in $S$. In other words, for vectors of the form $(a, b, c, 0)$ the inner product with a vector in $S$ is $a \cdot 0 + b \cdot 0 + c \cdot 0 + 0 \cdot d = 0+0+0+0 = 0$. The space of vectors of the form $(a, b, c, 0)$ is spanned by the vectors $(1, 0, 0, 0)$, $(0, 1, 0, 0)$, and $(0, 0, 1, 0)$, which are linearly independent and form a basis for the subspace. Thus $S^\perp$ is the subspace of $\mathbb{R}^4$ with basis vectors $(1, 0, 0, 0)$, $(0, 1, 0, 0)$, and $(0, 0, 1, 0)$. NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang. If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.
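As a quick numerical cross-check of the second part, here is a NumPy sketch that computes the orthogonal complement as a null space via the SVD. Note the basis NumPy returns is orthonormal but need not be the standard vectors $(1,0,0,0)$, $(0,1,0,0)$, $(0,0,1,0)$ themselves.

```python
import numpy as np

# S is spanned by (0, 0, 0, 1); its orthogonal complement is the
# null space of the 1x4 matrix whose row is that spanning vector.
A = np.array([[0.0, 0.0, 0.0, 1.0]])

# Rows of Vt beyond rank(A) = 1 form an orthonormal basis of S-perp.
_, _, Vt = np.linalg.svd(A)
null_basis = Vt[1:]

# Every basis vector is orthogonal to S, i.e. its last entry is 0.
print(null_basis.shape)  # 3 basis vectors in R^4
```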
https://tug.org/pipermail/texhax/2008-December/011404.html
# [texhax] apacite and url

Alan Litchfield alan at alphabyte.co.nz
Thu Dec 4 20:18:55 CET 2008

Hi Christian,

I can't answer your second question, but as to the first, the use of URLs is handled extensively in the document apacite.pdf. If you have TeX Live you already have this, or you can get it from your local CTAN mirror. Basically you can use either url = {...} as a bib field, or you can use \url{...} within a field, say howpublished = {...\url{...}...} for example.

HIH
Alan

Christian Deindl wrote:
> hi,
>
> I have two questions.
>
> I'm using apacite as bibliography style. Two of my bibtex entries need
> to be cited with a URL. Unfortunately this produces an error message
> "missing $ inserted". Is there a way to cite a URL? Perhaps I need an
> additional package?
>
> The second question is somewhat related to the first. Since my paper is
> in German, is there a good German style, which is also
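For concreteness, here is a sketch of the two options Alan describes. The entry keys, authors, titles, and URL below are placeholders, not from the thread, and \url{...} requires \usepackage{url} (or a package that loads it) in the document.

```bibtex
% Option 1: a url field, as documented in apacite.pdf
@misc{placeholder-a,
  author = {A. Author},
  title  = {Placeholder Title},
  year   = {2008},
  url    = {http://example.org/paper}
}

% Option 2: \url{...} inside another field
@misc{placeholder-b,
  author = {A. Author},
  title  = {Placeholder Title},
  year   = {2008},
  howpublished = {\url{http://example.org/paper}}
}
```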
http://physics.stackexchange.com/users/7053/pancake?tab=summary
# pancake

Software engineer and iOS/Android developer. Location: Amsterdam, Netherlands. Age 27; member for 3 years, 4 months; last seen Aug 22 '13 at 14:59.

# 1 Question

Riddle: can you swim faster upstream than downstream (with respect to the water)?

# 111 Reputation

This user has not answered any questions.

# 3 Tags

flow, water, relative-motion

# 14 Accounts

Stack Overflow 638 rep, Ask Different 150 rep, Server Fault 141 rep, TeX - LaTeX 131 rep, Bitcoin 118 rep
http://www.chegg.com/homework-help/questions-and-answers/proton-projected-positive-x-direction-region-uniform-electric-field-e-600-105-i-n-c-t-0-pr-q2678805
## Proton in a field A proton is projected in the positive x direction into a region of a uniform electric field E = -6.00 × 10⁵ î N/C at t = 0. The proton travels 7.00 cm as it comes to rest. (9 points) Determine the acceleration of the proton, its initial speed, and the time interval over which the proton comes to rest.
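A quick numeric sketch of the standard constant-acceleration solution (the values are computed here, not taken from an answer key; the constants are CODATA approximations):

```python
# Kinematics for the problem above: constant deceleration, so
# v^2 = v0^2 + 2*a*d (with final v = 0) and v = v0 + a*t.
e  = 1.602176634e-19   # proton charge, C
mp = 1.67262192e-27    # proton mass, kg

E = -6.00e5            # field x-component, N/C
d = 0.07               # stopping distance, m

a  = e * E / mp                  # acceleration (negative: opposes the motion)
v0 = (2 * abs(a) * d) ** 0.5     # initial speed from v^2 = v0^2 + 2*a*d
t  = v0 / abs(a)                 # stopping time from v = v0 + a*t

print(f"a  = {a:.3e} m/s^2")
print(f"v0 = {v0:.3e} m/s")
print(f"t  = {t:.3e} s")
```

This gives a deceleration on the order of 10¹³ m/s², an initial speed of a few million m/s, and a stopping time of tens of nanoseconds.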
https://www.gamedev.net/forums/topic/319741-deleting-a-file-in-c/
# Deleting a file in C

## Recommended Posts

Umm, this is kind of embarrassing, but so far, in my 3 years of C I never needed to delete a file. I think that the function is unlink() but I am not 100% sure (from what I read, it does some fancy things, such as decrementing the reference count and such). So, is unlink() the [only/right] way to delete a file? P.S. I need a generic C function, not Windows or something.

##### Share on other sites

unlink is a POSIX function that generally isn't available on Windows (although VC++ does emulate it). I don't know if there is a general delete-file function.

##### Share on other sites

Well, DevC++ (gcc) seems to know about it. This is for an MMORPG server and it will run on POSIX (FreeBSD), and on Windows for local testing. If VC can also emulate it, the better (one developer has VC).

##### Share on other sites

remove

Quote: int remove(const char *filename); Delete a file. Deletes the file specified by filename. It is compiled as a call to the system function for deleting files (unlink, erase or del). Parameters: filename is the path and name of the file to be removed. Return value: if the file is successfully deleted, a 0 value is returned. On error a non-zero value is returned and the errno variable is set with the corresponding error code, which can be printed with a call to perror.

Thanks :)

##### Share on other sites

remove() is not standard. unlink() is the POSIX standard -- the talk about "reference counts" is what "delete a file" really means on UNIX systems. Usually, it just removes the file (unless there are multiple hard links).

##### Share on other sites

But it said that it compiles to whatever delete function the system has.

##### Share on other sites

nprz is right. remove is an ANSI function (I just looked it up). That'll be safe to use.
##### Share on other sites

Quote: Original post by hplus0603: remove() is not standard. unlink() is the POSIX standard -- the talk about "reference counts" is what "delete a file" really means on UNIX systems. Usually, it just removes the file (unless there are multiple hard links).

Now where'd you learn that? I have multiple sources that say remove is standard:

Quote: From subpages of http://www.unix.org/version3/apis.html, previously linked in this thread:

Interface   POSIX P96   XSI Base   U98   U95   P92   C99   C89   SVID3   BSD
remove()    m           m          m     m     m     m     m     m       .
unlink()    m           m          m     m     m     .     .     m       m

(m indicates that the interface is defined as mandatory; . indicates that the interface is not specified.)

Quote: From panda@industry:~$ man 3 remove:

NAME
    remove - delete a name and possibly the file it refers to
SYNOPSIS
    #include <stdio.h>
    int remove(const char *pathname);
[...]
CONFORMING TO
    ANSI C, SVID, AT&T, POSIX, X/OPEN, BSD 4.3
http://www.msad49moodle.org/course/index.php?categoryid=54
### Technology Curriculum
https://mathematica.stackexchange.com/questions/175112/how-to-add-an-image-to-the-bottom-of-a-listpointplot3d/175113
# How to add an image to the bottom of a ListPointPlot3D?

I am trying to plot the points of a virus outbreak over time for 3 areas. I already have the data1, which has 3 columns: x-coordinate, y-coordinate and time. Then I have a 3D plot as below:

ListPointPlot3D[data1]

I have an image of the map of the area concerned, and want to attach it to the bottom of the box for better illustration. Is there a way to do this? Many thanks!

You can combine 2 graphics with Show[]. Since you provided neither the data nor the image required, I used a meme I had saved as an example: data = RandomInteger[{0, 10}, {10, 3}];

• Thanks for your reply and it really helps. May I also ask how I can raise up the image from z=0 to z=5? I tried to assign pts = {{0, 0, 5}, {0, 10, 5}, {10, 10, 5}, {10, 0, 5}}; and then run Show[{ListPointPlot3D[data, PlotStyle -> {Red}], Plot3D[0, {x, 0, 10}, {y, 0, 10}, PlotRange -> {{0, 10}, {0, 10}, {0, 10}}, PlotStyle -> {Texture[yourfile.jpg], Polygon[pts, VertexTextureCoordinates -> pts]}]}], but it doesn't do anything. – H42 Jun 12 '18 at 2:36
https://www.physicsforums.com/threads/help-setting-up-equation-to-find-curl-of-navier-stokes-equation.625948/
# Help Setting Up Equation To Find Curl of Navier-Stokes Equation

1. Aug 6, 2012

### AKBob

1. The problem statement, all variables and given/known data
I'm having trouble using equation 2.1 or 2.2 in the article to find the curl of the Navier-Stokes equation. I understand how to find curl, but can't make sense of the explanation/steps in the document provided by the professor.

2. Relevant equations
All relevant equations are included in the two attachments.

3. The attempt at a solution
I'm really having trouble getting started. The document provided by my professor says to "First evaluate the '(Beta)yk X v' term, substitute that in (v is the vector discussed at the top)," but the equation at the top looks like a general equation, and I'm starting to get frustrated. Any help/ideas/suggestions would really be appreciated.

File size: 4.3 KB Views: 53
File size: 47.6 KB Views: 81

2. Aug 6, 2012

### voko

All the equations in the attachments are broken beyond repair.

3. Aug 6, 2012

### Muphrid

Yeah, sorry, use some LaTeX please. It'll be easier on everyone (well, except maybe you). Example: $$\rho \left(\frac{\partial v}{\partial t} + v \cdot \nabla v \right) = - \nabla p + f + \overline T(\nabla)$$ is given by

Code (Text): $$\rho \left(\frac{\partial v}{\partial t} + v \cdot \nabla v \right) = - \nabla p + f + \overline T(\nabla)$$
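Since the attachments are unreadable, here is one standard way the curl of the constant-density Navier-Stokes equation works out. This is a generic sketch; the professor's '(Beta)yk X v' rotating-frame term is not included, so the assignment's version may differ. Taking the curl of

$$\frac{\partial v}{\partial t} + (v \cdot \nabla) v = -\frac{1}{\rho}\nabla p + \nu \nabla^2 v$$

and using $\nabla \times \nabla p = 0$ (constant $\rho$) together with the identity $(v \cdot \nabla) v = \nabla(\tfrac{1}{2}|v|^2) - v \times \omega$ gives, for incompressible flow ($\nabla \cdot v = 0$), the vorticity equation

$$\frac{\partial \omega}{\partial t} + (v \cdot \nabla)\omega - (\omega \cdot \nabla) v = \nu \nabla^2 \omega, \qquad \omega = \nabla \times v.$$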
http://tex.stackexchange.com/questions/16500/passing-shape-argument-to-titleformat-as-a-command
# Passing shape argument to titleformat as a command

I aim at using two types of chapter headings in my document: for the preamble and appendix, simple underlined upper case, while for the main chapters, I want to have a full page with "chapter n / Title" on two different lines… I (almost) managed to do it by defining a new environment "chapterpage" and by passing arguments to \titleformat (from the titlesec package) as macros (that are not defined in the same way in the global and chapterpage environments). However, I do not manage to pass the first optional argument to \titleformat this way :( I need this as I want to use a hang shape in the former layout and a display one in the latter. I suspect it may be solved using the appropriate \protect commands, but I did not manage to fix it! Here is a minimal example. If it worked, "Chapter 2" and "Bar" should be printed on different lines. Thank you for your help.

\documentclass[11pt]{scrreprt}
\usepackage{lipsum}
\usepackage{titlesec}
\usepackage{setspace}
\newcommand{\mychapterShape}{hang} % hang is for aligning the number with the title
\newcommand{\mychapterFormat}{\relax}
\newcommand{\mychapterBefore}{\raggedright}
\newcommand{\mychapterAfter}{\normalsize\vspace*{.8\baselineskip}\titlerule}
\newcommand{\mychapterLabel}{\thechapter} %{\relax}
\newenvironment{chapterpage}
{
\renewcommand{\mychapterShape}{display}
\renewcommand{\mychapterFormat}{
\vspace{\stretch{7}}\normalfont\onehalfspacing\centering\large
}
\renewcommand{\mychapterBefore}{
\thispagestyle{empty}%
\vspace{0.5em}%
}
\renewcommand{\mychapterAfter}{
\vspace{\stretch{10}}\cleardoublepage\thispagestyle{empty}\singlespacing
}
\renewcommand{\mychapterLabel}{\chaptertitlename~\thechapter}
}
{} % end chapterpage envt
\titleformat{\chapter}[\mychapterShape]%
{\mychapterFormat}{\mychapterLabel}{1em}%
{\mychapterBefore}[\mychapterAfter]%
\begin{document}
\chapter{Foo}
\lipsum[1-3]
\begin{chapterpage}
\chapter{Bar}
\end{chapterpage}
\lipsum[1-3]
\end{document}

- I
would define two chapter styles:

\documentclass[11pt]{scrreprt}
\usepackage{lipsum}
\usepackage{titlesec}
\usepackage{setspace}% for \onehalfspacing and \singlespacing
\newcommand{\mainchapterstyle}{%
  \titleformat{\chapter}[display]
    {\vspace{\stretch{7}}\normalfont\onehalfspacing\centering\large}
    {\chaptertitlename~\thechapter}
    {1em}
    {\thispagestyle{empty}\vspace{0.5em}}
    [\vspace{\stretch{10}}\cleardoublepage\thispagestyle{empty}\singlespacing]}
\newcommand{\appchapterstyle}{%
  \titleformat{\chapter}[hang]
    {}
    {\thechapter}
    {1em}
    {\raggedright}
    [\normalsize\vspace*{.8\baselineskip}\titlerule]}
\begin{document}
\mainchapterstyle
\chapter{Bar}
\lipsum[1-3]
\appendix
\appchapterstyle
\chapter{Foo}
\lipsum[1-3]
\end{document}

-
That works… thanks! – Thomas Julou Apr 24 '11 at 21:00
@Thomas: it's common to mark the answer as accepted, or, if you want to wait for possible other answers, to at least mark it as useful. – egreg Apr 24 '11 at 21:06
Done! Please accept my newbie apologies :) – Thomas Julou Apr 25 '11 at 8:56
https://www.philipzucker.com/a-touch-of-topological-computation-3-categorical-interlude/
Welcome back, friend. In the last two posts, I described the basics of how to build and manipulate the Fibonacci anyon vector space in Haskell. As a personal anecdote, trying to understand the category theory behind the theory of anyons is one of the reasons I started learning Haskell. These spaces are typically described using the terminology of category theory. I found it very frustrating that anyons were described in an abstract and confusing terminology, and I really wondered if people were just making things harder than they have to be. I think Haskell is a perfect playground to clarify these constructions. While the category theory stuff isn't strictly necessary, it is interesting and useful once you get past the frustration, and I hope everyone can get something out of it. Give it a shot if you're interested, and don't sweat the details.

#### The Aroma of Categories

I think Steve Awodey gives an excellent nutshell of category theory in the introductory section to his book: "What is category theory? As a first approximation, one could say that category theory is the mathematical study of (abstract) algebras of functions. Just as group theory is the abstraction of the idea of a system of permutations of a set or symmetries of a geometric object, so category theory arises from the idea of a system of functions among some objects."

For my intuition, a category is any collection of "things" that plug together. The "in" of a thing has to match the "out" of another thing in order to hook them together. In other words, the requirement for something to be a category is having a notion of composition. The things you plug together are called the morphisms of the category and the matching ports are the objects of the category. The additional requirement of always having an identity morphism (a do-nothing connection wire) is usually there once you have composition, although it is good to take especial note of it.
Category theory is an elegant framework for how to think about these composing things in a mathematical way. In my experience, thinking in these terms leads to good abstractions, and useful analogies between disparate things. It is helpful for any abstract concept to list some examples to expose the threads that connect them. Category theory in particular has a ton of examples connecting to many other fields because it is a science of analogy. These are the examples of categories I usually reach for. Which one feels the most comfortable to you will depend on your background.

• Hask. Objects are types. Morphisms are functions between those types.
• Vect. Objects are vector spaces, morphisms are linear maps (roughly matrices).
• Preorders. Objects are values. Morphisms are the inequalities between those values.
• Sets. Objects are sets. Morphisms are functions between sets.
• Cat. Objects are categories, morphisms are functors. This is a pretty cool one, although complete categorical narcissism.
• Systems and processes.
• The free category of a directed graph. Objects are vertices. Morphisms are paths between vertices.

#### Generic Programming and Typeclasses

The goal of generic programming is to write programs once and run them in many ways. There are many ways to approach this goal, but one way it is achieved in Haskell is by using typeclasses. Typeclasses allow you to overload names, so that they mean different things based upon the types involved. Adding a vector is different than adding a float or int, but there are programs that can reasonably be written to apply in both situations. Writing your program in a way that applies to disparate objects requires abstract ways of talking about things. Mathematics is an excellent place to mine for good abstractions. In particular, the category theory abstraction has demonstrated itself to be a very useful unified vocabulary for mathematical topics.
I, and others, find it also to be a beautiful aesthetic by which to structure programs. The Haskell base library defines a Category typeclass. In order to use it, you need to import the Prelude in an unusual way.

    {-# LANGUAGE NoImplicitPrelude #-}
    import Prelude hiding ((.), id)

The Category typeclass is defined on the type that corresponds to the morphisms of the category. This type has a slot for the input type and a slot for the output type. In order for something to be a category, it has to have an identity morphism and a notion of composition.

    class Category cat where
      id :: cat a a
      (.) :: cat b c -> cat a b -> cat a c

The most obvious example of this Category typeclass is the instance for the ordinary Haskell function type (->). The identity corresponds to the standard Haskell identity function, and composition to ordinary Haskell function composition.

    instance Category (->) where
      id = \x -> x
      f . g = \x -> f (g x)

Another example of a category that we've already encountered is that of linear operators, which we'll call LinOp. LinOp is an example of a Kleisli arrow, a category built using monadic composition rather than regular function composition. In this case, the monad Q from my first post takes care of the linear pipework that happens between every application of a LinOp. The fish operator <=< is monadic composition from Control.Monad.

    newtype LinOp a b = LinOp {runLin :: a -> Q b}

    instance Category LinOp where
      id = LinOp pure
      (LinOp f) . (LinOp g) = LinOp (f <=< g)

A related category is the FibOp category. This is the category of operations on Fibonacci anyons, which are also linear operations. It is LinOp specialized to the Fibonacci anyon space. All the operations we've previously discussed (F-moves, braiding) are in this category.

    newtype FibOp a b = FibOp {runFib :: (forall c. FibTree c a -> Q (FibTree c b))}

    instance Category FibOp where
      id = FibOp pure
      (FibOp f) . (FibOp g) = FibOp (f <=< g)

The "feel" of category theory takes focus away from the objects and tries to place focus on the morphisms. There is a style of functional programming called "point-free" where you avoid ever giving variables explicit names and instead use pipe-work combinators like (.), fst, snd, or (***). This also has a feel of de-emphasizing objects. Many of the combinators that get used in this style have categorical analogs. In order to generically use categorical typeclasses, you have to write your program in this point-free style. It is possible for a program written in the categorical style to be reinterpreted as a program, a linear algebra operation, a circuit, or a diagram, all without changing the actual text of the program. For more on this, I highly recommend Conal Elliott's compiling to categories, which also puts forth a methodology to avoid the somewhat unpleasant point-free style using a compiler plug-in. This might be an interesting place to mine for a good quantum programming language. YMMV.

### Monoidal Categories

Putting two processes in parallel can be considered a kind of product. A category is monoidal if it has a product of this flavor, and has isomorphisms for reassociating objects and producing or consuming a unit object. This will make more sense when you see the examples. We can sketch out this monoidal category concept as a typeclass, where we use () as the unit object.

    class Category k => Monoidal k where
      parC :: k a c -> k b d -> k (a,b) (c,d)
      assoc :: k ((a,b),c) (a,(b,c))
      assoc' :: k (a,(b,c)) ((a,b),c)
      leftUnitor :: k ((),a) a
      leftUnitor' :: k a ((),a)
      rightUnitor :: k (a,()) a
      rightUnitor' :: k a (a,())

#### Instances

In Haskell, the standard monoidal product for regular Haskell functions is (***) from Control.Arrow. It takes two functions and turns them into a function that does the same stuff, but on a tuple of the original inputs. The associators and unitors are fairly straightforward.
We can freely dump the unit () and get it back because there is only one possible value for it.

```haskell
(***) :: (a -> c) -> (b -> d) -> ((a,b) -> (c,d))
f *** g = \(x,y) -> (f x, g y)

instance Monoidal (->) where
  parC f g = f *** g
  assoc ((x,y),z) = (x,(y,z))
  assoc' (x,(y,z)) = ((x,y),z)
  leftUnitor (_, x) = x
  leftUnitor' x = ((),x)
  rightUnitor (x, _) = x
  rightUnitor' x = (x,())
```

The monoidal product we'll choose for LinOp is the tensor/outer/Kronecker product.

```haskell
kron :: Num b => W b a -> W b c -> W b (a,c)
kron (W x) (W y) = W [((a,c), r1 * r2) | (a,r1) <- x, (c,r2) <- y]
```

Otherwise, LinOp is basically a monadically lifted version of (->). The one-dimensional vector space Q () is completely isomorphic to just a number. Taking the Kronecker product with it is basically the same thing as scalar multiplication (up to some shuffling).

```haskell
instance Monoidal LinOp where
  parC (LinOp f) (LinOp g) = LinOp $ \(a,b) -> kron (f a) (g b)
  assoc = LinOp (pure . assoc)
  assoc' = LinOp (pure . unassoc)
  leftUnitor = LinOp (pure . leftUnitor)
  leftUnitor' = LinOp (pure . leftUnitor')
  rightUnitor = LinOp (pure . rightUnitor)
  rightUnitor' = LinOp (pure . rightUnitor')
```

Now for a confession. I made a misstep in my first post. In order to make our Fibonacci anyons jive nicely with our current definitions, I should have defined our identity particle using type Id = () rather than data Id. We'll do that now. In addition, we need some new primitive operations for absorbing and emitting identity particles that did not feel relevant at that time.
```haskell
rightUnit :: FibTree e (a,Id) -> Q (FibTree e a)
rightUnit (TTI t _) = pure t
rightUnit (III t _) = pure t

rightUnit' :: FibTree e a -> Q (FibTree e (a,Id))
rightUnit' t@(TTT _ _) = pure (TTI t ILeaf)
rightUnit' t@(TTI _ _) = pure (TTI t ILeaf)
rightUnit' t@(TIT _ _) = pure (TTI t ILeaf)
rightUnit' t@(III _ _) = pure (III t ILeaf)
rightUnit' t@(ITT _ _) = pure (III t ILeaf)
rightUnit' t@(ILeaf)   = pure (III t ILeaf)
rightUnit' t@(TLeaf)   = pure (TTI t ILeaf)

leftUnit :: FibTree e (Id,a) -> Q (FibTree e a)
leftUnit = rightUnit <=< braid -- braid vs braid' doesn't matter here, but it has a nice symmetry.

leftUnit' :: FibTree e a -> Q (FibTree e (Id,a))
leftUnit' = braid' <=< rightUnit'
```

With these in place, we can define a Monoidal instance for FibOp. The extremely important and intriguing F-move operations are the assoc operators for this category. While other categories have assocs that feel nearly trivial, these F-moves don't feel trivial at all.

```haskell
instance Monoidal FibOp where
  parC (FibOp f) (FibOp g) = (FibOp (lmap f)) . (FibOp (rmap g))
  assoc = FibOp fmove'
  assoc' = FibOp fmove
  leftUnitor = FibOp leftUnit
  leftUnitor' = FibOp leftUnit'
  rightUnitor = FibOp rightUnit
  rightUnitor' = FibOp rightUnit'
```

#### This is actually useful

The parC operation is extremely useful to note explicitly in a program. It is an opportunity for optimization. It is possible to inefficiently implement parC in terms of other primitives, but it is very worthwhile to implement it as a new primitive (although I haven't here). In the case of (->), parC is an explicit location where actual computational parallelism is available. Once you perform parC, it is no longer obviously apparent that the left and right sides of the tuple share no data during the computation. In the case of LinOp and FibOp, parC is a location where you can perform factored linear computations. The matrix-vector product $(A \otimes B)(v \otimes w)$ can be performed on the individual factors as $(Av) \otimes (Bw)$.
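This factorization is easy to check numerically. Here is a quick illustration in Python/NumPy (mine, not part of the Haskell development) showing that multiplying by the densified Kronecker product agrees with acting on each factor separately:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 40))
B = rng.standard_normal((50, 50))
v = rng.standard_normal(40)
w = rng.standard_normal(50)

# Densify A (x) B and multiply: O((N_A * N_B)^2) work.
dense = np.kron(A, B) @ np.kron(v, w)

# Act on each factor separately: O(N_A^2 + N_B^2) work.
factored = np.kron(A @ v, B @ w)

print(np.allclose(dense, factored))
```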
In the first case, where we densify $A \otimes B$ and then perform the multiplication, it costs $O((N_A N_B)^2)$ time, whereas performing them individually on the factors costs $O(N_A^2 + N_B^2)$ time, a significant savings. Applied category theory indeed.

#### Laws

Like many typeclasses, these monoidal morphisms are assumed to follow certain laws. Here is a sketch (for a more thorough discussion check out the Wikipedia page):

- Functions with a tick at the end like assoc' should be the inverses of the functions without the tick like assoc, e.g. assoc . assoc' = id.
- The parC operation is (bi)functorial, meaning it obeys the commutation law parC (f . f') (g . g') = (parC f g) . (parC f' g'), i.e. it doesn't matter if we perform composition before or after the parC.
- The pentagon law for assoc: applying leftbottom is the same as applying topright.

```haskell
leftbottom :: (((a,b),c),d) -> (a,(b,(c,d)))
leftbottom = assoc . assoc

topright :: (((a,b),c),d) -> (a,(b,(c,d)))
topright = (id *** assoc) . assoc . (assoc *** id)
```

- The triangle law for the unitors: topright' should equal leftside.

```haskell
topright' :: ((a,()),b) -> (a,b)
topright' = (id *** leftUnitor) . assoc

leftside :: ((a,()),b) -> (a,b)
leftside = rightUnitor *** id
```

#### String Diagrams

String diagrams are a diagrammatic notation for monoidal categories. Morphisms are represented by boxes with lines. Composition g . f is drawn by connecting lines. The identity id is a raw arrow. The monoidal product of morphisms $f \otimes g$ is represented by placing lines next to each other. The diagrammatic notation is so powerful because the laws of monoidal categories are built so deeply into it that they can go unnoticed. Identities can be put in or taken away. Association doesn't even appear in the diagram. The boxes in the notation can naturally be pushed around and commuted past each other.
This corresponds to the property $(id \otimes g) \circ (f \otimes id) = (f \otimes id) \circ (id \otimes g)$.

What expression does the following diagram represent? Is it $(f \circ f') \otimes (g \circ g')$ (in Haskell notation, parC (f . f') (g . g'))? Or is it $(f \otimes g) \circ (f' \otimes g')$ (in Haskell notation, (parC f g) . (parC f' g'))? Answer: it doesn't matter, because the functoriality requirement on parC means the two expressions are identical.

There are a number of notations you might meet in the world that can be interpreted as string diagrams. Two that seem particularly pertinent are:

- Quantum circuits
- Anyon diagrams!

#### Braided and Symmetric Monoidal Categories: Categories That Braid and Swap

Some monoidal categories have a notion of being able to braid morphisms. If so, it is called a braided monoidal category (go figure).

```haskell
class Monoidal k => Braided k where
  over :: k (a,b) (b,a)
  under :: k (a,b) (b,a)
```

The over and under morphisms are inverses of each other: over . under = id. The over morphism pulls the left over the right, whereas under pulls the left under the right. The diagram definitely helps in understanding this definition.

These over and under morphisms need to play nicely with the associator of the monoidal category. These are laws that a valid instance of the typeclass should follow. We actually already met them in the very first post.

If the over and under of the braiding are the same, the category is a symmetric monoidal category. This typeclass needs no extra functions, but it is now intended that the law over . over = id is obeyed.

```haskell
class Braided k => Symmetric k where
```

When we draw a braid in a symmetric monoidal category, we don't have to be careful about which strand is over and which is under, because they are the same thing. The examples that come soonest to mind have this symmetric property; for example, (->) is a symmetric monoidal category.
```haskell
swap :: (a, b) -> (b, a)
swap (x,y) = (y,x)

instance Braided (->) where
  over = swap
  under = swap

instance Symmetric (->)
```

Similarly, LinOp has a notion of swapping that is just a lifting of swap.

```haskell
instance Braided LinOp where
  over = LinOp (pure . swap)
  under = LinOp (pure . swap)

instance Symmetric LinOp
```

However, FibOp is not symmetric! This is perhaps at the core of what makes FibOp so interesting.

```haskell
instance Braided FibOp where
  over = FibOp braid
  under = FibOp braid'
```

#### Automating Association

Last time, we spent a lot of time doing weird type-level programming to automate away the pain of manual association moves. We can do something quite similar to make the categorical reassociation less painful, and more like the carefree ideal of the string diagram, if we replace composition (.) with a slightly different operator:

```haskell
(...) :: ReAssoc b b' => FibOp b' c -> FibOp a b -> FibOp a c
(FibOp f) ... (FibOp g) = FibOp $ f <=< reassoc <=< g
```

Before defining reassoc, let's define a helper typeclass, LeftCollect. Given a type-level integer n, it reassociates the tree using a binary-search procedure to make sure the left branch l at the root has Count l = n.

```haskell
leftcollect :: forall n gte l r o e. (gte ~ CmpNat n (Count l), LeftCollect n gte (l,r) o) =>
               FibTree e (l,r) -> Q (FibTree e o)
leftcollect x = leftcollect' @n @gte x

class LeftCollect n gte a b | n gte a -> b where
  leftcollect' :: FibTree e a -> Q (FibTree e b)

-- The process is like a binary search.
-- LeftCollect pulls n leaves into the left branch of the tuple.
-- If n is greater than the size of l, we recurse into the right branch with a new
-- number of leaves to collect, then do a final reshuffle to put them all into the left tree.
```
```haskell
instance (k ~ Count l,
          r ~ (l',r'),
          n' ~ (n - k),
          gte ~ CmpNat n' (Count l'),
          LeftCollect n' gte r (l'',r'')) => LeftCollect n 'GT (l,r) ((l,l''),r'') where
  leftcollect' x = do
    x' <- rmap (leftcollect @n') x -- (l,(l'',r'')) -- l'' is of size n - k
    fmove x'                       -- ((l,l''),r'') -- size of (l,l'') = k + (n-k) = n

instance (l ~ (l',r'),
          gte ~ CmpNat n (Count l'),
          LeftCollect n gte l (l'',r'')) => LeftCollect n 'LT (l,r) (l'',(r'',r)) where
  leftcollect' x = do
    x' <- lmap (leftcollect @n) x -- ((l'',r''),r) -- l'' is of size n
    fmove' x'                     -- (l'',(r'',r))

instance LeftCollect n 'EQ (l,r) (l,r) where
  leftcollect' = pure
```

Once we have LeftCollect, the typeclass ReAssoc is relatively simple to define. Given a pattern tree, we can count the elements in its left branch and LeftCollect the source tree to match that number. Then we recursively apply reassoc in the left and right branches of the tree. This means that every node has the same number of children in both trees, hence the trees end up in an identical shape (modulo me mucking something up).

```haskell
class ReAssoc a b where
  reassoc :: FibTree e a -> Q (FibTree e b)

instance (n ~ Count l',
          gte ~ CmpNat n (Count l),
          LeftCollect n gte (l,r) (l'',r''),
          ReAssoc l'' l',
          ReAssoc r'' r') => ReAssoc (l,r) (l',r') where
  reassoc x = do
    x' <- leftcollect @n x
    x'' <- rmap reassoc x'
    lmap reassoc x''

--instance {-# OVERLAPS #-} ReAssoc a a where
--  reassoc = pure

instance ReAssoc Tau Tau where
  reassoc = pure

instance ReAssoc Id Id where
  reassoc = pure
```

It seems likely that one could write equivalent instances that would work for an arbitrary monoidal category with a bit more work. We are aided somewhat by the fact that FibOp has a finite universe of possible leaf types to work with.

### Closing Thoughts

While our categorical typeclasses are helpful and nice, I should point out that they are not going to cover all the things that can be described as categories, even in Haskell.
Just like the Functor typeclass does not describe all the conceptual functors you might meet. One beautiful monoidal category is that of Haskell Functors under the monoidal product of functor composition. More on this to come, I think.

https://parametricity.com/posts/2015-07-18-braids.html

We never even touched the dot product in this post. It corresponds to another doodle in a string diagram, and another power to add to your category. It is somewhat trickier to work with cleanly in familiar Haskell terms, I think because (->) is at least not super obviously a dagger category.

You can find a hopefully compiling version of all my snippets and more in my chaotic mutating GitHub repo https://github.com/philzook58/fib-anyon

See you next time.

#### References

The Rosetta Stone paper by Baez and Stay is probably the conceptual daddy of this entire post (and more).

Bartosz Milewski's Category Theory for Programmers blog (an online book, really) and YouTube series are where I learned most of what I know about category theory. I highly recommend them (huge Bartosz fanboy).

https://www.math3ma.com/blog/what-is-category-theory-anyway

There are fancier embeddings of category theory and monoidal categories than I've shown here. Often you want constrained categories and the ability to choose unit objects. I took a rather simplistic approach here.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6887943148612976, "perplexity": 2561.2202397138844}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711077.50/warc/CC-MAIN-20221206092907-20221206122907-00397.warc.gz"}
https://eccc.weizmann.ac.il/keyword/15391/
Under the auspices of the Computational Complexity Foundation (CCF)

Reports tagged with keyword "cutting plane":

TR03-012 | 21st January 2003
Edward Hirsch, Arist Kojevnikov

#### Several notes on the power of Gomory-Chvatal cuts

We prove that the Cutting Plane proof system based on Gomory-Chvatal cuts polynomially simulates the lift-and-project system with integer coefficients written in unary. The restriction on coefficients can be omitted when using Krajicek's cut-free Gentzen-style extension of both systems. We also prove that Tseitin tautologies have short proofs in ...

TR05-006 | 28th December 2004
Edward Hirsch, Sergey I. Nikolenko

#### Simulating Cutting Plane proofs with restricted degree of falsity by Resolution

Goerdt (1991) considered a weakened version of the Cutting Plane proof system with a restriction on the degree of falsity of intermediate inequalities. (The degree of falsity of an inequality written in the form $\sum a_i x_i + \sum b_i (1-x_i) \ge c,\ a_i, b_i \ge 0$ is its constant term $c$.) He proved a superpolynomial lower bound ...

TR16-202 | 19th December 2016
Dmitry Sokolov

#### Dag-like Communication and Its Applications

Revisions: 1

In 1990 Karchmer and Wigderson considered the following communication problem $Bit$: Alice and Bob know a function $f: \{0, 1\}^n \to \{0, 1\}$, Alice receives a point $x \in f^{-1}(1)$, Bob receives $y \in f^{-1}(0)$, and their goal is to find a position $i$ such that $x_i \neq y_i$. Karchmer ...

ISSN 1433-8092 | Imprint
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6389515995979309, "perplexity": 3990.1402329745847}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628001089.83/warc/CC-MAIN-20190627095649-20190627121649-00281.warc.gz"}
http://mathhelpforum.com/calculus/15356-laplace-transform.html
Math Help - Laplace transform

1. Laplace transform

Could someone show me how to do the following Laplace transform?

f(t) = e^(2t) if 0 < t < 8
       1 if t > 8

F(s) = ?

I'm pretty lost. Thanks for any help.

2. Originally Posted by PvtBillPilgrim

Well, we can do this using just the definition of the Laplace transform:

$(\mathcal{L} f)(s) = \int_0^{\infty} f(t) e^{-st}\,dt = \int_0^8 e^{2t}e^{-st}\,dt + \int_8^{\infty} e^{-st}\,dt = \int_0^8 e^{t(2-s)}\,dt + \int_8^{\infty} e^{-st}\,dt$

$= \frac{1}{s-2}\left[1-e^{8(2-s)}\right] + \frac{e^{-8s}}{s}$

RonL

3. Makes sense. Thank you very much.
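As a sanity check (this check is mine, not part of the original thread), the closed form can be compared against a trapezoidal approximation of the defining integral at a sample point, say s = 3:

```python
import numpy as np

def F_closed(s):
    # Closed form derived above: (1 - e^{8(2-s)})/(s-2) + e^{-8s}/s, for s > 2.
    return (1 - np.exp(8 * (2 - s))) / (s - 2) + np.exp(-8 * s) / s

def F_numeric(s, T=60.0, n=1_200_001):
    # Trapezoidal rule on the truncated integral of f(t) * exp(-s t).
    t = np.linspace(0.0, T, n)
    f = np.where(t < 8, np.exp(2 * t), 1.0)
    y = f * np.exp(-s * t)
    h = t[1] - t[0]
    return (y.sum() - 0.5 * (y[0] + y[-1])) * h

s = 3.0
print(abs(F_closed(s) - F_numeric(s)))  # very small, on the order of 1e-8
```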
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9870262145996094, "perplexity": 1094.0793464909727}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645176794.50/warc/CC-MAIN-20150827031256-00098-ip-10-171-96-226.ec2.internal.warc.gz"}
https://brilliant.org/practice/permutations-with-restriction/
Discrete Mathematics — Permutations With Restriction

1. How many ways can the letters of the word BOTTLES be arranged such that both of the vowels are at the ends?

   Details and assumptions: the vowels in the word BOTTLES are O and E.

2. Among 5 girls in a group, exactly two of them are wearing red shirts. How many ways are there to seat all 5 girls in a row such that the two girls wearing red shirts are not sitting adjacent to each other?

   Hint: treat the two girls as one person. This helps find the number of arrangements that have the girls seated together; then subtract that number from 5!, the total number of arrangements.

3. 10 people including A, B and C are waiting in a line. How many distinct line-ups are there such that A, B, and C are not all adjacent?

   Details and assumptions: A, B and C may be in any order, as long as all three are not adjacent.

4. 3 boys and 2 girls are about to be seated at a round table. If the 2 girls want to sit next to each other, find the number of ways of seating these boys and girls. (Note: since the table is round, we consider two seating arrangements equivalent if they can be matched just by rotating.)

5. Mary has enrolled in 6 courses: Chemistry, Physics, Math, English, French and Biology. She has one textbook for each course and wants to place them on a shelf. How many ways can she arrange the textbooks so that the English textbook is placed at any position to the left of the French textbook?

   Hint: there are just as many permutations where the English textbook is to the left of the French textbook as there are permutations where the French textbook is to the left of the English textbook.
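For the first two problems, a brute-force enumeration (my addition, not part of the original page; it reads "at the end" as one vowel at each end of the word) confirms the counting arguments:

```python
from itertools import permutations

# Problem 1: distinct arrangements of BOTTLES with a vowel at each end.
# The set comprehension deduplicates arrangements that only swap the two T's.
vowels = set("OE")
count1 = len({p for p in permutations("BOTTLES")
              if p[0] in vowels and p[-1] in vowels})

# Problem 2: seat 5 distinct girls so that girls 0 and 1 (the red shirts)
# are not in adjacent seats.
count2 = sum(1 for p in permutations(range(5))
             if abs(p.index(0) - p.index(1)) > 1)

print(count1, count2)  # 120 72
```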
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44048449397087097, "perplexity": 262.13017947115895}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202525.25/warc/CC-MAIN-20190321132523-20190321154523-00492.warc.gz"}
https://www.zora.uzh.ch/id/eprint/143117/
# Jupiter’s formation and its primordial internal structure

Lozovsky, Michael; Helled, Ravit; Rosenberg, Eric D; Bodenheimer, Peter (2017). Jupiter’s formation and its primordial internal structure. The Astrophysical Journal, 836(2):227.

## Abstract

The composition of Jupiter and the primordial distribution of the heavy elements are determined by its formation history. As a result, in order to constrain the primordial internal structure of Jupiter, the growth of the core and the deposition and settling of accreted planetesimals must be followed in detail. In this paper we determine the distribution of the heavy elements in proto-Jupiter and determine the mass and composition of the core. We find that while the outer envelope of proto-Jupiter is typically convective and has a homogeneous composition, the innermost regions have compositional gradients. In addition, the existence of heavy elements in the envelope leads to much higher internal temperatures (several times $10^4$ K) than in the case of a hydrogen–helium envelope. The derived core mass depends on the actual definition of the core: if the core is defined as the region in which the heavy-element mass fraction is above some limit (say, 0.5), then it can be much more massive (~15 $M_{\oplus}$) and more extended (10% of the planet's radius) than in the case where the core is just the region with 100% heavy elements. In the former case Jupiter's core also consists of hydrogen and helium. Our results should be taken into account when constructing internal structure models of Jupiter and when interpreting the upcoming data from the Juno (NASA) mission.

Item Type: Journal Article, refereed, original work
Date: February 2017
Publisher: IOP Publishing
ISSN: 1538-4357
DOI: https://doi.org/10.3847/1538-4357/836/2/227
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.840344250202179, "perplexity": 1113.868920556465}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400191780.21/warc/CC-MAIN-20200919110805-20200919140805-00153.warc.gz"}
https://edurev.in/course/quiz/attempt/7813_Test-Signal-Systems/6126487f-c86a-4f53-af90-01cd9394778b
# Test: Signal & Systems

## 25 Questions MCQ Test GATE ECE (Electronics) 2023 Mock Test Series | Test: Signal & Systems

Description: Attempt Test: Signal & Systems | 25 questions in 75 minutes | Mock test for GATE preparation | Free important MCQs to study GATE ECE (Electronics) 2023 Mock Test Series for the GATE Exam | Download free PDF with solutions

QUESTION 1: The even part of the signal x(n) = u(n)

Solution: x(n) = u(n), so x(-n) = u(-n). The even part of x(n) is x_e(n) = [u(n) + u(-n)]/2 = 1/2 + δ(n)/2.

QUESTION 2: The value of a discrete-time signal at a non-integer is:

Solution: The value of a discrete-time signal at non-integers is zero.

QUESTION 3: The ROC of the signal x(t) = 4

Solution: Since there is no common ROC, the Laplace transform of x(t) does not exist.

QUESTION 4: The input-output relationship of a system is described as y(t) = ax + b. This system is linear

Solution: For a linear system, equations 3 and 4 must be equal, so b = 0, while a may take any value.

QUESTION 5: Which of the following signals is an example of an anti-causal signal?

Solution: For an anti-causal signal, the value of the signal must be zero for t > 0, so the signal is e^(-at) u(-t).

QUESTION 6: The maximum phase shift provided by the given system is

Solution:

QUESTION 7: Determine the fundamental period of x(n) =

Solution:

QUESTION 8: Consider a signal x(t) = 4 rect(t/6) and its Fourier transform X(ω). Determine the area under the curve in the ω-domain.

Solution: From the inverse Fourier transform evaluated at t = 0, the area is ∫X(ω) dω = 2π x(0) = 8π.

QUESTION 9: If the Fourier transform of x(t) is X(ω), then determine the Fourier transform of x(at - b).

Solution:

QUESTION 10: A pole-zero pattern of a certain filter is shown in the figure.
The filter must be

Solution: Since the locations of the poles and zeros are mirror images of each other about the vertical axis, it is an all-pass filter.

QUESTION 11: Determine the Laplace transform of the signal x(t) shown in the figure.

Solution:

QUESTION 12: Determine the total energy of x(t) = 12 sinc(6t).

Solution: x(t) = 12 sinc(6t), so X(f) = 2 rect(f/6). According to Parseval's theorem, E = ∫|X(f)|² df = 2² × 6 = 24.

QUESTION 13: Determine the bandwidth of the signal x(t) = e^(-at) u(t) such that it contains 90% of its total energy.

Solution:

QUESTION 14: Determine the output y(t) for an input x(t) = e^(-2t) u(t), if the step response of the system is t u(t).

Solution:

QUESTION 15: Simplify the following expression: δ(-t) * u(t) * e^(-t/2) u(t)

Solution:

QUESTION 16: Consider the analog signal x(t) = 3 cos(100πt). This signal is sampled at a frequency of 75 samples per second, and at the reconstruction side an ideal LPF with cut-off frequency 30 Hz is used. Determine the frequency component at the output of the filter.

Solution: The signal frequency is 50 Hz. Sampling at 75 Hz aliases it to 75 − 50 = 25 Hz, which passes through the 30 Hz LPF, so the output component is at 25 Hz.

QUESTION 17: The input-output relationship of a system is given as,

Solution: It is the input-output relationship of an accumulator. For any value of n, the output y(n) depends only on previous and present inputs, so it is causal. It is also linear. It is an unstable system, because the output is not bounded for a bounded input.

QUESTION 18: A signal x(t) = 2(1 − cos 2πt) is sampled with a sampling frequency of 10 Hz. Determine the z-transform of the sampled signal.

Solution:

QUESTION 19: Given, determine the ROC of its z-transform.

Solution: ROC: |z| > 1/3

QUESTION 20: If a signal f(t) has energy E, the energy of the signal f(2t) is equal to

Solution: Compressing time by a factor of 2 halves the energy: E' = E/2.

QUESTION 21: The correlation between two signals x1(t) and x2(t) is 6. The average power of x1(t) is 10 and that of x2(t) is 8.
Then determine the power of x1(t) + x2(t).

Solution: The total power is given as P = P1 + P2 + 2Rx(τ) = 10 + 8 + 2 × 6 = 30.

QUESTION 22: The coefficient of the term e^(-t) in f(t) will be:

Solution:

QUESTION 23: S1: δ(n) is an energy signal with energy 1. S2: u(n) is a power signal with power 1/2. Which of the above is/are correct?

Solution: The energy of a signal is given as E = Σ|x(n)|². For δ(n) this gives E = 1, and the average power of u(n) is 1/2, so both statements are correct.

QUESTION 24: The input-output relationship of a system is given as, The system is:

Solution:

QUESTION 25: Determine the impulse response of the inverse system, if the impulse response of the system is u(n).

Solution: A system with impulse response u(n) is an accumulator; its inverse is the first-difference system, h_inv(n) = δ(n) − δ(n − 1).
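The Parseval computation in Question 12 can be checked numerically (this check is my addition, not part of the test; note that np.sinc is the normalized sinc, sin(πx)/(πx)):

```python
import numpy as np

# Frequency domain: X(f) = 2 rect(f/6), so E = 2^2 * 6 = 24.
E_freq = 2**2 * 6

# Time domain: integrate |12 sinc(6t)|^2 over a long window.
# The truncation of the 1/t^2 tail costs only a few thousandths.
t = np.linspace(-200.0, 200.0, 1_600_001)
x = 12 * np.sinc(6 * t)
E_time = np.sum(x**2) * (t[1] - t[0])

print(E_freq, E_time)  # both approximately 24
```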
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8693585395812988, "perplexity": 4843.193669360612}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662525507.54/warc/CC-MAIN-20220519042059-20220519072059-00183.warc.gz"}
https://socratic.org/questions/what-is-pascal-s-triangle#138674
Precalculus Topics

What is Pascal's triangle?

Apr 17, 2015

One of the most interesting number patterns is Pascal's Triangle, named after Blaise Pascal. To build the triangle, always start with "1" at the top, then continue placing numbers below it in a triangular pattern. Each number is the sum of the two numbers above it (except for the edges, which are all "1").

The interesting part is this: the first diagonal is just "1"s, and the next diagonal has the counting numbers. The third diagonal has the triangular numbers. The fourth diagonal has the tetrahedral numbers.
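The construction rule translates directly into a few lines of Python (a sketch of mine, not part of the original answer):

```python
def pascal(rows):
    """Build the first `rows` rows of Pascal's triangle."""
    triangle = [[1]]
    for _ in range(rows - 1):
        prev = triangle[-1]
        # Each interior entry is the sum of the two entries above it;
        # the edges are always 1.
        triangle.append([1] + [a + b for a, b in zip(prev, prev[1:])] + [1])
    return triangle

tri = pascal(6)
for row in tri:
    print(row)

# Second diagonal: the counting numbers.
print([tri[n][1] for n in range(1, 6)])  # [1, 2, 3, 4, 5]
# Third diagonal: the triangular numbers.
print([tri[n][2] for n in range(2, 6)])  # [1, 3, 6, 10]
```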
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9191370010375977, "perplexity": 1266.6216714942193}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303956.14/warc/CC-MAIN-20220123015212-20220123045212-00442.warc.gz"}
https://math.stackexchange.com/questions/2976983/strong-induction-fibonacci-numbers
# Strong induction. Fibonacci numbers

Using strong induction, prove that the (n + 3)-rd Fibonacci number can be computed as 1 plus the sum of the first n + 1 Fibonacci numbers (remember n includes 0). It has to be proven by strong induction, with the inductive hypothesis applied twice, because of the nature of the two-term recurrence.

Here I'm confused. I know how to use simple induction, but here I have to use strong induction. The inductive hypothesis will be like "for arbitrary k ∈ N: ∀j ∈ N, 1 ≤ j ≤ k, S(j)" and the inductive step should be like (∀j ∈ N, 1 ≤ j ≤ k, S(j)) → S(k + 1). Then what?

For example, with n = 5: the sum of the first 5 Fibonacci numbers is 7, and 7 + 1 = 8, and there is a Fibonacci number 8 in position 5 + 3. The (n+3)-rd Fibonacci number is F(n + 2) when counting starts from F(0).

• The specific result you are referring to can be proven using regular induction. You could of course phrase it using strong induction, but you probably won't use anything more than $S(k)$ to prove $S(k+1)$. As for a hint, remember that $f_{n}+f_{n+1}=f_{n+2}$. – JMoravitz Oct 30 '18 at 1:43
• This tutorial explains how to typeset mathematics on this site. – N. F. Taussig Oct 30 '18 at 10:48
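Under the indexing suggested by "remember n includes 0" (the 1st Fibonacci number is F(0) = 0), the claim reads F(n+2) = 1 + (F(0) + F(1) + ... + F(n)), which is easy to check numerically before attempting the induction (a sketch only; the indexing convention is my reading of the problem statement):

```python
def fib(n):
    """F(0) = 0, F(1) = 1, F(k) = F(k-1) + F(k-2)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# The claimed identity: the (n+3)-rd Fibonacci number, i.e. F(n+2) when
# the 1st is F(0), equals 1 plus the sum of the first n+1 numbers F(0)..F(n).
for n in range(10):
    assert fib(n + 2) == 1 + sum(fib(i) for i in range(n + 1))
print("identity holds for n = 0..9")
```

Seeing why it holds for each next n (the hint $f_n + f_{n+1} = f_{n+2}$) is exactly the inductive step.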
https://ratnuu.wordpress.com/tag/technical/
While trying to understand the Luby transform (LT) code, I stumbled upon the well-known coupon collector's problem. It took a while for me to figure out the connection, but as it turned out, there is a strong connection between the two. In LT parlance: if we were to use only degree-one packets (that is, packets sent and collected as they are), what is the expected number of packets to be collected (randomly, one at a time) such that all the required packets are collected at least once?

For illustration, say we have $n$ information packets at the transmitter, and the receiver collects these one by one at random. How many packets, on average, must it collect until it has all $n$ different information packets? Remember we are collecting the packets randomly; if we could collect them deterministically, without replacement, we would need exactly $n$ packets.

In coupon terms: assume there are $n$ distinct coupon types, with a large pool of these coupons at our disposal. Every time you visit the shop you collect a coupon picked uniformly at random from the pool, so the picked coupon has equal probability of being any of the $n$ types. Naturally, some of the coupons collected over multiple visits may be of the same type. The question asked is this: how many coupons (visits) does the collector need until he possesses all $n$ distinct types of coupons?

Suppose that at some stage the collector has $r$ different types of coupons. The probability that his next visit fetches a new type (not among the $r$ types he already has in the kitty) is $p_r=\frac{n-r}{n}$, so the expected number of visits needed to fetch a new type is $E\left[N_r\right]=\frac{1}{p_r}=\frac{n}{n-r}$. Summing over the stages $r = 0, 1, \ldots, n-1$, the expected number of coupons to be collected (i.e., visits to the shop!) so that he has all $n$ different types is:

$E[N]=\displaystyle \sum_{r=0}^{n-1} {\frac{n}{n-r}}=n\sum_{i=1}^{n}{\frac{1}{i}}=nH_n=n\log(n)+O(n)$

So, what is the trouble? This number $n\log(n)$ is prohibitively high to be acceptable (a decoding time of $n\log (n)$ is significantly higher than the wishful linear time $n$!). So, simply using degree $1$ is not a good idea. This is why Luby went ahead and identified smarter degree distributions such as the soliton distribution (and its variants proposed later on, such as the robust soliton, and then the more recent raptor codes by Amin Shokrollahi).
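The $nH_n$ figure is easy to confirm by Monte Carlo. A minimal sketch (the choice of $n=64$ and the trial count are arbitrary):

```python
import random

def visits_to_collect_all(n, rng):
    """One run of the coupon collector: visits until all n types are seen."""
    seen, visits = set(), 0
    while len(seen) < n:
        seen.add(rng.randrange(n))  # each visit fetches a uniformly random type
        visits += 1
    return visits

n, trials = 64, 2000
rng = random.Random(0)
avg = sum(visits_to_collect_all(n, rng) for _ in range(trials)) / trials
harmonic = sum(1 / i for i in range(1, n + 1))
print(f"simulated average: {avg:.1f}, n*H_n = {n * harmonic:.1f}")
```

For $n = 64$ the theoretical value $nH_n$ is roughly 304 visits, and the simulated average lands close to it.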
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Statistical_Mechanics/Advanced_Statistical_Mechanics/The_Grand_Canonical_Ensemble/Particle_number_fluctuations
Particle number fluctuations

In the grand canonical ensemble, the particle number $$N$$ is not constant. It is, therefore, instructive to calculate the fluctuation in this quantity.
As usual, this is defined to be

$\Delta N = \sqrt{\langle N^2 \rangle - \langle N \rangle^2}$

Note that

$\zeta{\partial \over \partial \zeta}\zeta {\partial \over \partial \zeta}\ln {\cal Z}(\zeta,V,T) = {1 \over {\cal Z}}\sum_{N=0}^{\infty}N^2 \zeta^N Q(N,V,T) -{1 \over {\cal Z}^2} \left[\sum_{N=0}^{\infty} N \zeta^N Q(N,V,T)\right]^2 = \langle N^2 \rangle - \langle N \rangle^2$

Thus,

$\left(\Delta N\right)^2 =\zeta{\partial \over \partial \zeta} \zeta {\partial \over \partial \zeta} \ln {\cal Z} (\zeta, V, T) = (kT)^2{\partial^2 \over \partial \mu^2}\ln {\cal Z}(\mu,V,T) = kTV{\partial^2 P \over \partial \mu^2}$

In order to calculate this derivative, it is useful to introduce the Helmholtz free energy per particle, defined as follows:

$a(v,T) = {1 \over N}A(N,V,T)$

where $$v={V \over N} = {1 \over \rho}$$ is the volume per particle. The chemical potential is then

$\mu = {\partial A \over \partial N} = a(v,T) + N{\partial a \over \partial v}{\partial v \over \partial N} = a(v,T) - v{\partial a \over \partial v}$

Similarly, the pressure is given by

$P = -{\partial A \over \partial V} = -N{\partial a \over \partial v}{\partial v \over \partial V} = -{\partial a \over \partial v}$

Also,

${\partial \mu \over \partial v} = -v{\partial^2 a \over \partial v^2}$

Therefore,

${\partial P \over \partial \mu} = {\partial P \over \partial v}{\partial v \over \partial \mu} = {\partial^2 a \over \partial v^2} \left[v{\partial^2 a \over \partial v^2}\right]^{-1} = {1 \over v}$

and

${\partial^2 P \over \partial \mu^2} = {\partial \over \partial v}\left({\partial P \over \partial \mu}\right){\partial v \over \partial \mu} = {1 \over v^2} \left[ v {\partial^2 a \over \partial v^2}\right]^{-1}= -{1 \over v^3\, \partial P/\partial v}$

But recall the definition of the isothermal compressibility:

$\kappa_T = -{1 \over V}{\partial V \over \partial P}=-{1 \over v\, \partial P/\partial v}$

Thus,

${\partial^2 P \over \partial \mu^2} = {1 \over v^2}\kappa_T$

and

$\Delta N = \sqrt{\frac{\langle N \rangle kT \kappa_T}{v}}$

and the relative fluctuation is given by

${\Delta N \over \langle N \rangle} = {1 \over \langle N \rangle}\sqrt{\frac{\langle N \rangle kT \kappa_T}{v}} \sim {1 \over \sqrt {\langle N \rangle }}\rightarrow 0\;\;\text{as}\;\;\langle N \rangle \rightarrow \infty$

Therefore, in the thermodynamic limit, the particle number fluctuations vanish, and the grand canonical ensemble is equivalent to the canonical ensemble.
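These relations can be spot-checked on a concrete equation of state. The sketch below uses the classical ideal gas (my own choice for illustration: $$P = kT/v$$ and $$\mu = -kT\ln v$$ up to a $$v$$-independent constant) and finite differences to confirm $$\partial P/\partial\mu = 1/v$$ and $$\partial^2 P/\partial\mu^2 = \kappa_T/v^2$$:

```python
import math

kT = 1.0
P  = lambda v: kT / v                 # ideal-gas equation of state
mu = lambda v: -kT * math.log(v)      # chemical potential up to a constant

def d_dmu(f, v, h=1e-5):
    """Derivative of f with respect to mu, via the chain rule through v."""
    df_dv  = (f(v + h) - f(v - h)) / (2 * h)
    dmu_dv = (mu(v + h) - mu(v - h)) / (2 * h)
    return df_dv / dmu_dv

v = 0.7
dP_dmu   = d_dmu(P, v)
d2P_dmu2 = d_dmu(lambda u: d_dmu(P, u), v)
kappa_T  = v / kT                     # ideal gas: kappa_T = 1/P = v/kT

assert abs(dP_dmu - 1 / v) < 1e-5
assert abs(d2P_dmu2 - kappa_T / v**2) < 1e-3
print("dP/dmu = 1/v and d2P/dmu2 = kappa_T/v^2 hold for the ideal gas")
```

For the ideal gas this also reproduces the familiar result $$\Delta N/\langle N\rangle = 1/\sqrt{\langle N\rangle}$$, since $$kT\kappa_T/v = 1$$.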
http://www.maa.org/publications/maa-reviews/quadratic-programming-and-affine-variational-inequalities-a-qualitative-study?device=mobile
Quadratic Programming and Affine Variational Inequalities: A Qualitative Study

Gue Myung Lee, Nguyen Nang Tam and Nguyen Dong Yen

Publisher: Springer Verlag
Series: Nonconvex Optimization and Its Applications 78
Publication Date: 2005
Number of Pages: 346
Price: 129.00
ISBN: 0-387-24277-5
Format: Hardcover
Category: Monograph

Table of Contents

Preface
Notations and Abbreviations
2. Existence Theorems for Quadratic Programs
3. Necessary and Sufficient Optimality Conditions for Quadratic Programs
4. Properties of the Solution Sets of Quadratic Programs
5. Affine Variational Inequalities
6. Solution Existence for Affine Variational Inequalities
7. Upper-Lipschitz Continuity of the Solution Map in Affine Variational Inequalities
8. Linear Fractional Vector Optimization Problems
9. The Traffic Equilibrium Problem
10. Upper Semicontinuity of the KKT Point Set Mapping
11. Lower Semicontinuity of the KKT Point Set Mapping
12. Continuity of the Solution Map in Quadratic Programming
13. Continuity of the Optimal Value Function in Quadratic Programming
14. Directional Differentiability of the Optimal Value Function
15. Quadratic Programming Under Linear Perturbations: I. Continuity of the Solution Maps
16. Quadratic Programming Under Linear Perturbations: II. Properties of the Optimal Value Function
17. Quadratic Programming Under Linear Perturbations: III. The Convex Case
18. Continuity of the Solution Map in Affine Variational Inequalities
References
Index

Posted: Tuesday, March 1, 2005. Modified: Sunday, August 14, 2005.
https://crypto.stackexchange.com/questions/8931/quantum-resistance-of-lamport-signatures
# Quantum resistance of Lamport signatures

The Lamport-Diffie signature scheme is said to be quantum-resistant. Why is that? What would a quantum attempt to attack this signature scheme look like, and how does it fail?

• Security of Lamport signatures reduces to the pre-image resistance of the underlying hash function. The best generic quantum algorithm to find pre-images is Grover's algorithm, with cost $2^{n/2}$. – CodesInChaos Jun 29 '13 at 15:52
• @CodesInChaos Where will I be able to find this reduction? – juaninf Jun 30 '13 at 2:30
• @CodesInChaos I read that the minimal requirement to build a secure signature scheme is to use a one-way function. In this context, why is Lamport-Diffie quantum-resistant? – juaninf Jun 30 '13 at 3:01
• There are two ways to capture the quantum attack. One allows you to query a classical signing oracle. The other allows you to query a quantum signing oracle. Which one do you consider? – xagawa Jul 2 '13 at 14:57
• Returning to the original question, the direct answer is 1) the original reduction is applicable to the quantum setting (if the signing oracle is classical) and 2) there might be a family of quantum-resilient one-way functions. If you consider the case where the signing oracle is quantum, then the answer is in Boneh and Zhandry (CRYPTO 2013). – xagawa Jul 6 '13 at 15:17

Assume you want to invert the one-way function $f$ for an image $y=f(x)$, given a forger for LD-OTS. Then you generate a valid LD key pair using $f$, sample a random position $i$ in the key pair and a bit $b$, replace $pk_{i,b}$ by $y$, and run the adversary on the modified $pk$. If the adversary's query has bit $b$ in position $i$, you abort and restart the procedure. Otherwise, you can answer the query. Now you hope that the adversary's forgery has bit $b$ at position $i$. If this is the case, the $i$th element of the signature is a preimage of $y$ under $f$. The reduction works with only a slight loss in success probability, i.e. you lose something as you abort in some cases. However, this loss can be bounded by $1/(2m)$ for $m$-bit messages. You can find the details in the post-quantum cryptography book by Bernstein et al.
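The scheme under discussion is simple enough to sketch. Below is a toy Lamport one-time signature in Python (an illustrative sketch only: SHA-256 stands in for the one-way function $f$, the message is hashed to 256 bits, and real parameter choices would account for Grover's $2^{n/2}$ preimage search):

```python
import hashlib
import secrets

H = lambda b: hashlib.sha256(b).digest()   # stands in for the one-way function f

def keygen(bits=256):
    # Secret key: two random preimages per message bit; public key: their images.
    sk = [[secrets.token_bytes(32), secrets.token_bytes(32)] for _ in range(bits)]
    pk = [[H(x0), H(x1)] for x0, x1 in sk]
    return sk, pk

def sign(sk, msg):
    d = H(msg)
    bits = [(d[i // 8] >> (i % 8)) & 1 for i in range(len(sk))]
    # Reveal one preimage per message bit -- this is why the key is strictly one-time.
    return [sk[i][b] for i, b in enumerate(bits)]

def verify(pk, msg, sig):
    d = H(msg)
    bits = [(d[i // 8] >> (i % 8)) & 1 for i in range(len(pk))]
    return all(H(s) == pk[i][b] for i, (s, b) in enumerate(zip(sig, bits)))

sk, pk = keygen()
sig = sign(sk, b"hello")
print(verify(pk, b"hello", sig), verify(pk, b"goodbye", sig))   # -> True False
```

Forging a signature on a new message requires producing a preimage of some unrevealed $pk_{i,b}$, which is exactly the reduction to preimage resistance described in the answer.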
http://math.stackexchange.com/questions/105978/can-an-element-be-a-quadratic-residue-and-a-generator-mod-p
# Can an element be a quadratic residue and a generator (mod p)?

That is, is it possible for

• $g$ to be a generator $\mod{p}$, and
• $g \equiv x^2 \mod{p}$ for some $x$?

I'm guessing not, as I think $x$ can't be expressed as a power of $g$, contradicting $g$ being a generator?

Dear malikyo_o: Your argument is almost complete! You should try harder! –  Pierre-Yves Gaillard Feb 5 '12 at 14:15

If $p$ is an odd prime, then no. $\varphi(p)=p-1$ is even and the multiplicative group $\!\!\!\pmod{p}$ has order $\varphi(p)$. If $g=x^2$, then $g^{(p-1)/2}=x^{p-1}\equiv 1\pmod{p}$. But if $g$ is a generator of the multiplicative group $\!\!\!\pmod{p}$, then $g^k\not\equiv 1\pmod{p}$ for $0<k<p-1$.

In fact, the only $p$ for which $\varphi(p)$ is odd is $p=2$, and $1=1^2$ is a generator for the multiplicative group $\!\!\!\pmod{2}$. For all other $p$, $g=x^2$ cannot be a generator of the multiplicative group $\!\!\!\pmod{p}$.

If you consider $p=2$ to be a prime number (and I wouldn't know how you could defend not doing so) then you will see that $1$ is both a quadratic residue and a generator mod $2$ (admittedly not very spectacular, but true nonetheless). This is the only case, as explained in the answer by robjohn.

I have added that $p$ is an odd prime in the first paragraph. Thanks for keeping me on my toes. –  robjohn Feb 5 '12 at 14:53
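The argument is easy to confirm by brute force for a small odd prime (a sketch; $p = 23$ is an arbitrary choice):

```python
def is_generator(g, p):
    """Check whether g generates the multiplicative group mod p (p prime)."""
    seen, x = set(), 1
    for _ in range(p - 1):
        x = x * g % p
        seen.add(x)
    return len(seen) == p - 1   # g is a primitive root iff its powers hit everything

p = 23
squares = {x * x % p for x in range(1, p)}   # the quadratic residues mod p

# No quadratic residue mod an odd prime is a generator...
assert not any(is_generator(g, p) for g in squares)
# ...but generators do exist among the non-residues (5 is one mod 23).
assert any(is_generator(g, p) for g in range(1, p) if g not in squares)
print("mod", p, ": no quadratic residue is a generator")
```

Each residue $g = x^2$ satisfies $g^{(p-1)/2} \equiv 1$, so its powers cycle before covering the whole group, exactly as the answer argues.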
https://usa.cheenta.com/complex-numbers-and-prime-aime-i-2012-question-6/
# Complex Numbers and prime | AIME I, 2012 | Question 6

Try this beautiful problem from the American Invitational Mathematics Examination I (AIME I), 2012, based on complex numbers and primes.

## Complex Numbers and primes – AIME 2012

The complex numbers z and w satisfy $z^{13} = w$ and $w^{11} = z$, and the imaginary part of z is $\sin{\frac{m\pi}{n}}$ for relatively prime positive integers m and n with m<n. Find n.

• is 107
• is 71
• is 840
• cannot be determined from the given information

### Key Concepts

Complex Numbers, Algebra, Number Theory

AIME I, 2012, Question 6

Complex Numbers from A to Z by Titu Andreescu

## Try with Hints

Combining the two given equations, $(z^{13})^{11} = z$, so $z^{143} = z$ and hence $z^{142} = 1$. Then by De Moivre's theorem, the imaginary part of z is of the form $\sin{\frac{2k\pi}{142}} = \sin{\frac{k\pi}{71}}$ where $k \in \{1, 2, \ldots, 70\}$. Since 71 is prime, n = 71.
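The conclusion can be sanity-checked numerically: any 142nd root of unity satisfies both given equations, and its imaginary part has the claimed form (a sketch; the choice $k = 1$ is arbitrary):

```python
import cmath
import math

k = 1
z = cmath.exp(2j * math.pi * k / 142)   # a 142nd root of unity, angle k*pi/71
w = z ** 13

# w^11 = z^143 = z * z^142 = z, so the pair (z, w) satisfies both equations.
assert abs(w ** 11 - z) < 1e-9
# Its imaginary part is sin(k*pi/71), matching the stated form with n = 71.
assert abs(z.imag - math.sin(math.pi * k / 71)) < 1e-12
print("Im(z) = sin(k*pi/71), so n = 71")
```

The same check passes for every $k$ from 1 to 70, in line with the De Moivre argument above.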
http://www.slipstick.com/outlook/rules/send-a-new-message-when-a-message-arrives/
# Run a Script Rule: Send a new message when a message arrives

Last reviewed on March 14, 2014

How to use a Run a Script rule to have Microsoft Outlook automatically send a new email message, using a template, to a new email address when a message meeting specific conditions arrives. I also have a version of the script that sends a new message containing the body of the message that triggered the rule to another person (to avoid including the Reply header in the body).

To use, open the VBA Editor and paste the code into ThisOutlookSession. Create a Run a Script rule, selecting this script. When you create the template, do not include a signature; Outlook will add the signature when the message is sent.

To test the rule without sending messages, change objMsg.Send to objMsg.Display. This will open the message form instead of sending it.

## New message using a template

Sub SendNew(Item As Outlook.MailItem)
    Dim objMsg As MailItem
    Set objMsg = Application.CreateItemFromTemplate("C:\path\to\test-rule.oft")
    ' If the address you want to send to is not saved in the template,
    ' add it to objMsg.Recipients before sending.
    objMsg.Send
End Sub

## To reply to the sender using a template

This script could be used for an Out of Office style reply. Using this method will send a reply with every message that meets the condition of the rule.

Sub SendNew(Item As Outlook.MailItem)
    Dim objMsg As MailItem
    Set objMsg = Application.CreateItemFromTemplate("C:\path\to\test-rule.oft")
    ' Copy the original message subject
    objMsg.Subject = "Attn: " & Item.Subject
    objMsg.Send
End Sub

## To forward the message body to another address

This version of the script sends a new message containing the body of the received message. (If you want to forward the complete message, use a Forward rule.) Hyperlinks will be "opened". To avoid this, and if you don't mind converting the messages to HTML, you can use objMsg.HTMLBody = Item.HTMLBody instead (it works with plain text and RTF messages), or create an If Then statement to check for incoming body types and use the correct format. As always, test it with messages to yourself or using objMsg.Display before using it to send messages to others.

Sub SendNew(Item As Outlook.MailItem)
    Dim objMsg As MailItem
    Set objMsg = Application.CreateItem(olMailItem)
    objMsg.Body = Item.Body
    objMsg.Subject = "FW: " & Item.Subject
    ' The original listing was truncated here; add the recipient
    ' (e.g. via objMsg.Recipients.Add), then send.
    objMsg.Send
End Sub
https://math.libretexts.org/TextMaps/Analysis/Book%3A_Real_Analysis_(Boman_and_Rogers)/7%3A_Intermediate_and_Extreme_Values/7.4%3A_The_Supremum_and_the_Extreme_Value_Theorem
# 7.4: The Supremum and the Extreme Value Theorem

Skills to Develop

• Explain supremum and the extreme value theorem

Theorem 7.3.1 says that a continuous function on a closed, bounded interval must be bounded. Boundedness, in and of itself, does not ensure the existence of a maximum or minimum. We must also have a closed, bounded interval. To illustrate this, consider the continuous function $$f(x) =\tan^{-1}x$$ defined on the (unbounded) interval $$(-∞,∞)$$.

Figure $$\PageIndex{1}$$: Graph of $$f(x) =\tan^{-1}x$$.

This function is bounded between $$-\frac{π}{2}$$ and $$\frac{π}{2}$$, but it does not attain a maximum or minimum as the lines $$y = ±\frac{π}{2}$$ are horizontal asymptotes. Notice that if we restricted the domain to a closed, bounded interval then it would attain its extreme values on that interval (as guaranteed by the EVT).

To find a maximum we need to find the smallest possible upper bound for the range of the function. This prompts the following definitions.

Definition: $$\PageIndex{1}$$

Let $$S ⊆ R$$ and let $$b$$ be a real number. We say that $$b$$ is an upper bound of $$S$$ provided $$b ≥ x$$ for all $$x ∈ S$$.

For example, if $$S = (0,1)$$, then any $$b$$ with $$b ≥ 1$$ would be an upper bound of $$S$$. Furthermore, the fact that $$b$$ is not an element of the set $$S$$ is immaterial. Indeed, if $$T = [0,1]$$, then any $$b$$ with $$b ≥ 1$$ would still be an upper bound of $$T$$.
Notice that, in general, if a set has an upper bound, then it has infinitely many, since any number larger than that upper bound would also be an upper bound. However, there is something special about the smallest upper bound.

Definition: $$\PageIndex{2}$$

Let $$S ⊆ R$$ and let $$b$$ be a real number. We say that $$b$$ is the least upper bound of $$S$$ provided

1. $$b ≥ x$$ for all $$x ∈ S$$. ($$b$$ is an upper bound of $$S$$)
2. If $$c ≥ x$$ for all $$x ∈ S$$, then $$c ≥ b$$. (Any upper bound of $$S$$ is at least as big as $$b$$)

In this case, we also say that $$b$$ is the supremum of $$S$$ and we write

$b = \sup(S)$

Notice that the definition really says that $$b$$ is the smallest upper bound of $$S$$. Also notice that the second condition can be replaced by its contrapositive, so we can say that $$b = \sup S$$ if and only if

1. $$b ≥ x$$ for all $$x ∈ S$$
2. If $$c < b$$ then there exists $$x ∈ S$$ such that $$c < x$$

The second condition says that if a number $$c$$ is less than $$b$$, then it can't be an upper bound, so that $$b$$ really is the smallest upper bound. Also notice that the supremum of the set may or may not be in the set itself. This is illustrated by the examples above, as in both cases $$1 = \sup (0,1)$$ and $$1 = \sup [0,1]$$.

Obviously, a set which is not bounded above, such as $$N = \{1, 2, 3, ...\}$$, cannot have a supremum. However, for non-empty sets which are bounded above, we have the following.

Theorem $$\PageIndex{1}$$: The Least Upper Bound Property (LUBP)

Let $$S$$ be a non-empty subset of $$R$$ which is bounded above. Then $$S$$ has a supremum.

Sketch of Proof

Since $$S \neq \varnothing$$, there exists $$s ∈ S$$. Since $$S$$ is bounded above, it has an upper bound, say $$b$$. We will set ourselves up to use the Nested Interval Property. With this in mind, let $$x_1 = s$$ and $$y_1 = b$$ and notice that $$∃ x ∈ S$$ such that $$x ≥ x_1$$ (namely, $$x_1$$ itself) and $$∀ x ∈ S$$, $$y_1 ≥ x$$.
You probably guessed what's coming next: let $$m_1$$ be the midpoint of $$[x_1,y_1]$$. Notice that either $$m_1 ≥ x$$, $$∀x ∈ S$$, or $$∃ x ∈ S$$ such that $$x ≥ m_1$$. In the former case, we relabel, letting $$x_2 = x_1$$ and $$y_2 = m_1$$. In the latter case, we let $$x_2 = m_1$$ and $$y_2 = y_1$$. In either case, we end up with $$x_1 ≤ x_2 ≤ y_2 ≤ y_1$$, $$y_2 - x_2 = \frac{1}{2} (y_1 - x_1)$$, and $$∃ x ∈ S$$ such that $$x ≥ x_2$$ and $$∀x ∈ S$$, $$y_2 ≥ x$$. If we continue this process, we end up with two sequences, ($$x_n$$) and ($$y_n$$), satisfying the following conditions:

1. $$x_1 ≤ x_2 ≤ x_3 ≤ ...$$
2. $$y_1 ≥ y_2 ≥ y_3 ≥ ...$$
3. $$∀ n, x_n ≤ y_n$$
4. $$\lim_{n \to \infty } (y_n - x_n) = \lim_{n \to \infty }\frac{1}{2^{n-1}} (y_1 - x_1) = 0$$
5. $$∀ n,∃ x ∈ S$$ such that $$x ≥ x_n$$ and $$∀x ∈ S, y_n ≥ x$$

By properties 1-5 and the NIP there exists $$c$$ such that $$x_n ≤ c ≤ y_n, ∀ n$$. We will leave it to you to use property 5 to show that $$c = \sup S$$.

Exercise $$\PageIndex{1}$$

Complete the above ideas to provide a formal proof of Theorem $$\PageIndex{1}$$.

Notice that we really used the fact that $$S$$ was non-empty and bounded above in the proof of Theorem $$\PageIndex{1}$$. This makes sense, since a set which is not bounded above cannot possibly have a least upper bound. In fact, any real number is an upper bound of the empty set, so the empty set would not have a least upper bound.

The following corollary to Theorem $$\PageIndex{1}$$ can be very useful.

Corollary $$\PageIndex{1}$$

Let ($$x_n$$) be a bounded, increasing sequence of real numbers. That is, $$x_1 ≤ x_2 ≤ x_3 ≤···$$. Then ($$x_n$$) converges to some real number $$c$$.

Exercise $$\PageIndex{2}$$

Prove Corollary $$\PageIndex{1}$$.

Hint

Let $$c = \sup \{x_n \,|\, n = 1,2,3,...\}$$. To show that $$\lim_{n \to \infty } x_n = c$$, let $$\epsilon > 0$$. Note that $$c - \epsilon$$ is not an upper bound. You take it from here!
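The interval-halving construction in the proof sketch above can be imitated numerically. This is an illustrative sketch only: the set $$S = \{x : x^2 < 2\}$$ (whose supremum is $$\sqrt{2}$$), the membership test, and the starting interval are my own choices.

```python
def approx_sup(has_element_geq, lo, hi, iters=60):
    """Bisection as in the LUBP proof sketch: lo is exceeded by some element
    of S, hi is an upper bound of S; halve the interval keeping that invariant."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if has_element_geq(mid):   # some x in S with x >= mid: sup lies in [mid, hi]
            lo = mid
        else:                      # mid is an upper bound of S: sup lies in [lo, mid]
            hi = mid
    return hi

# S = {x : x*x < 2}; on [1, 2], "there is x in S with x >= m" holds iff m*m < 2.
sup_S = approx_sup(lambda m: m * m < 2, lo=1.0, hi=2.0)
print(round(sup_S, 9))   # -> 1.414213562
```

The NIP supplies the real number that this shrinking family of intervals pins down; the code only approximates it.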
Exercise $$\PageIndex{3}$$

Consider the following curious expression: $$\sqrt{2+\sqrt{2+\sqrt{2+\sqrt{\cdots}}}}$$. We will use Corollary $$\PageIndex{1}$$ to show that this actually converges to some real number. After we know it converges, we can actually compute what it is. Of course, to do so we need to define things a bit more precisely. With this in mind, consider the sequence $$(x_n)$$ defined as follows:

$x_1 = \sqrt{2}$

$x_{n+1} = \sqrt{2+x_n}$

1. Use induction to show that $$x_n < 2$$ for $$n = 1, 2, 3, \ldots$$
2. Use the result from part (a) to show that $$x_n < x_{n+1}$$ for $$n = 1, 2, 3, \ldots$$
3. From Corollary $$\PageIndex{1}$$, we have that $$(x_n)$$ must converge to some number $$c$$. Use the fact that $$(x_{n+1})$$ must converge to $$c$$ as well to compute what $$c$$ must be.

We now have all the tools we need to tackle the Extreme Value Theorem.

Theorem $$\PageIndex{2}$$: Extreme Value Theorem (EVT)

Suppose $$f$$ is continuous on $$[a,b]$$. Then there exist $$c, d \in [a,b]$$ such that $$f(d) \leq f(x) \leq f(c)$$ for all $$x \in [a,b]$$.

Sketch of Proof

We will first show that $$f$$ attains its maximum. To this end, recall that Theorem 7.3.1 tells us that $$f[a,b] = \{f(x) \mid x \in [a,b]\}$$ is a bounded set. By the LUBP, $$f[a,b]$$ must have a least upper bound, which we will label $$s$$, so that $$s = \sup f[a,b]$$. This says that $$s \geq f(x)$$ for all $$x \in [a,b]$$. All we need to do now is find a $$c \in [a,b]$$ with $$f(c) = s$$. With this in mind, notice that since $$s = \sup f[a,b]$$, for any positive integer $$n$$, $$s - \frac{1}{n}$$ is not an upper bound of $$f[a,b]$$. Thus there exists $$x_n \in [a,b]$$ with $$s - \frac{1}{n} < f(x_n) \leq s$$. Now, by the Bolzano-Weierstrass Theorem, $$(x_n)$$ has a convergent subsequence $$(x_{n_k})$$ converging to some $$c \in [a,b]$$. Using the continuity of $$f$$ at $$c$$, you should be able to show that $$f(c) = s$$. To find the minimum of $$f$$, find the maximum of $$-f$$.
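As a quick numerical sanity check of the nested radical in Exercise $$\PageIndex{3}$$ (a sketch, not part of the text), one can iterate the recursion directly and watch the terms climb toward the limit predicted in part (3):

```python
import math

# Iterate x_{n+1} = sqrt(2 + x_n) from x_1 = sqrt(2).  If the limit c
# exists, then c = sqrt(2 + c), i.e. c^2 - c - 2 = 0, whose positive
# root is c = 2 -- matching what part (3) of the exercise asks for.
x = math.sqrt(2)
for _ in range(20):
    x = math.sqrt(2 + x)
print(x)  # very close to, and still strictly below, 2.0
```

The terms stay strictly below 2 at every step, consistent with part (a), while increasing toward 2, consistent with part (b) and Corollary $$\PageIndex{1}$$.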
Exercise $$\PageIndex{4}$$

Formalize the above ideas into a proof of Theorem $$\PageIndex{2}$$.

Notice that we used the NIP to prove both the Bolzano-Weierstrass Theorem and the LUBP. This is really unavoidable, as it turns out that all of those statements are equivalent, in the sense that any one of them can be taken as the completeness axiom for the real number system and the others proved as theorems. This is not uncommon in mathematics, as people tend to gravitate toward ideas that suit the particular problem they are working on. In this case, people realized at some point that they needed some sort of completeness property for the real number system to prove various theorems. Each individual's formulation of completeness fit in with his understanding of the problem at hand. Only in hindsight do we see that they were really talking about the same concept: the completeness of the real number system. In point of fact, most modern textbooks use the LUBP as the axiom of completeness and prove all other formulations as theorems.

We will finish this section by showing that either the Bolzano-Weierstrass Theorem or the LUBP can be used to prove the NIP. This says that they are all equivalent, and that any one of them could be taken as the completeness axiom.

Exercise $$\PageIndex{5}$$

Use the Bolzano-Weierstrass Theorem to prove the NIP. That is, assume that the Bolzano-Weierstrass Theorem holds and suppose we have two sequences of real numbers, $$(x_n)$$ and $$(y_n)$$, satisfying:

1. $$x_1 \leq x_2 \leq x_3 \leq \cdots$$
2. $$y_1 \geq y_2 \geq y_3 \geq \cdots$$
3. $$\forall n,\ x_n \leq y_n$$
4. $$\lim_{n \to \infty}(y_n - x_n) = 0$$

Prove that there is a real number $$c$$ such that $$x_n \leq c \leq y_n$$ for all $$n$$.

Since the Bolzano-Weierstrass Theorem and the Nested Interval Property are equivalent, it follows that the Bolzano-Weierstrass Theorem will not work for the rational number system.
Exercise $$\PageIndex{6}$$

Find a bounded sequence of rational numbers such that no subsequence of it converges to a rational number.

Exercise $$\PageIndex{7}$$

Use the Least Upper Bound Property to prove the Nested Interval Property. That is, assume that every non-empty subset of the real numbers which is bounded above has a least upper bound, and suppose that we have two sequences of real numbers $$(x_n)$$ and $$(y_n)$$ satisfying:

1. $$x_1 \leq x_2 \leq x_3 \leq \cdots$$
2. $$y_1 \geq y_2 \geq y_3 \geq \cdots$$
3. $$\forall n,\ x_n \leq y_n$$
4. $$\lim_{n \to \infty}(y_n - x_n) = 0$$

Prove that there exists a real number $$c$$ such that $$x_n \leq c \leq y_n$$ for all $$n$$. (Again, the $$c$$ will, of necessity, be unique, but don't worry about that.)

Hint: Corollary $$\PageIndex{1}$$ might work well here.

Exercise $$\PageIndex{8}$$

Since the LUBP is equivalent to the NIP, it does not hold for the rational number system. Demonstrate this by finding a non-empty set of rational numbers which is bounded above but whose supremum is an irrational number.

We have the machinery in place to clean up a matter that was introduced in Chapter 1. If you recall (or look back), we introduced the Archimedean Property of the real number system. This property says that given any two positive real numbers $$a, b$$, there exists a positive integer $$n$$ with $$na > b$$. As we mentioned in Chapter 1, this was taken to be intuitively obvious; the analogy we used there was to emptying an ocean $$b$$ with a teaspoon $$a$$, provided we are willing to use it enough times $$n$$. The completeness of the real number system allows us to prove it as a formal theorem.

Theorem $$\PageIndex{3}$$: Archimedean Property of $$\mathbb{R}$$

Given any positive real numbers $$a$$ and $$b$$, there exists a positive integer $$n$$ such that $$na > b$$.

Exercise $$\PageIndex{9}$$

Prove Theorem $$\PageIndex{3}$$.

Hint: Assume that there are positive real numbers $$a$$ and $$b$$ such that $$na \leq b$$, $$\forall n \in \mathbb{N}$$.
Then $$\mathbb{N}$$ would be bounded above by $$b/a$$. Let $$s = \sup(\mathbb{N})$$ and consider $$s - 1$$.

Given what we've been doing, one might ask if the Archimedean Property is equivalent to the LUBP (and thus could be taken as an axiom). The answer lies in the following problem.

Exercise $$\PageIndex{10}$$

Does $$\mathbb{Q}$$ satisfy the Archimedean Property, and what does this have to do with the question of taking the Archimedean Property as an axiom of completeness?

### Contributor

• Eugene Boman (Pennsylvania State University) and Robert Rogers (SUNY Fredonia)
https://www.physicsforums.com/threads/2-questions-one-wave-one-delta-function.89725/
# 2 questions one wave one delta function

1. Sep 19, 2005

### Phymath

1st question: what the heck does a "minimum" mean when talking about interference in waves? I got a question where

$$y = 1.19(1 + 2\cos p)\sin(kx - wt + p)$$

is the superposition of three waves, one of which is $$p$$ out of phase with the first, and another of which is $$p$$ out of phase with the second wave. What value of $$p$$ gives the minimum? I have no idea what that means. I'm guessing it's when the amplitude is 0, or when $$\pi/2 - kx + wt = p$$, but how do I find that?

2nd question: I have the function

$$\int^{\infty}_{-\infty} (6-5x^5)\delta(x)\, dx$$

Now, by the definition of the delta function, because 0 is contained within the limits (as are all numbers), should it not = 0? Thanks, anyone.

2. Sep 19, 2005

### StNowhere

2nd question. Delta function:

$$\int^{\infty}_{-\infty} f(x)\delta(x-a)\, dx = f(a)$$

Using that, it looks to me like your value is 6. I'll look at the first question a little more before I hazard a guess on it.

3. Sep 19, 2005

### Phymath

How is it 6 when $$f(x) = 6-5x^4$$, and $$\delta(x) = \delta(x-0)$$?

4. Sep 19, 2005

### HallsofIvy

Because the "definition" of the delta function that you refer to requires that $$\int_{-\infty}^{\infty}f(x)\delta(x)\,dx = f(0)$$!
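The sifting property quoted in the replies can also be checked numerically. The sketch below (not part of the thread) replaces the delta function with a narrow normalized Gaussian and integrates against it; as the width shrinks, the integral of $$6-5x^5$$ against the Gaussian approaches $$f(0) = 6$$:

```python
import math

# Model the delta function as a narrow normalized Gaussian ("mollifier")
# and check that int f(x) delta(x) dx tends to f(0) = 6 as the width -> 0.
def f(x):
    return 6 - 5 * x**5

def gaussian(x, eps):
    return math.exp(-x * x / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))

def integral(eps, n=200001, lo=-1.0, hi=1.0):
    # simple Riemann sum; the integrand is negligible outside [-1, 1]
    h = (hi - lo) / (n - 1)
    return sum(f(lo + i * h) * gaussian(lo + i * h, eps) * h for i in range(n))

for eps in (0.1, 0.01, 0.001):
    print(eps, integral(eps))  # each value is close to f(0) = 6
```

The odd term $$-5x^5$$ contributes nothing against the symmetric spike, which is why the answer is $$f(0) = 6$$ rather than 0.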
http://tex.stackexchange.com/questions/31734/mix-marginnotes-with-marginpars-without-overlap
# mix marginnotes with marginpars without overlap

We are trying to put the figure captions as well as the footnotes in the margin, and got very good results with the tufte-latex package. However, we have to manually adjust the offset of each sidenote or caption so that they do not overlap. Please see the following part of a page; note how reference 64 was manually moved up so it is not too close to the caption.

Now we have to find something fully automatic, and I found the marginfix package. It works great and can move \marginpars around to get very pleasing results. Please see this article for how it works. I tried to use the sidenote package to combine the look with the automatic feature. Nevertheless, it is not useful to move a figure caption away from the adjacent figure. Therefore, the caption is placed with a \marginnote from the marginnote package. If the margin gets very crowded, the \marginpars start to overlap with the \marginnotes.

It seems that this was addressed in the first version of marginfix: I could 'block' some part of the margin with \blockmargin. Does anyone know why \blockmargin was dropped in the rewrite of marginfix? I asked the author, but got no response. The repository of marginfix is here. Or is there another way to 'forbid' some area of the margin for \marginpar?

Here is an MWE to demonstrate the problem in general. Please replace the filename in the \includegraphics macro.

```latex
\documentclass[]{article}
\usepackage{lipsum}
\usepackage{graphicx}
\usepackage{marginnote}
\usepackage[paperwidth=170mm, paperheight=240mm, left=40pt, top=40pt,
            textwidth=260pt, marginparsep=20pt, marginparwidth=100pt,
            textheight=560pt, footskip=40pt]{geometry}

\begin{document}
Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Ut purus elit,
vestibulum ut, placerat ac, adipiscing vitae, felis. Curabitur dictum
gravida mauris. Nam arcu libero, nonummy eget, consectetuer id,
vulputate a, magna.
\marginpar{This could be a couple of references and other sidenotes.}
Donec vehicula augue eu neque.
\marginpar{This could be a couple of references and other sidenotes.}
Pellentesque habitant morbi tristique senectus et netus et malesuada
fames ac turpis egestas. Mauris ut leo. Cras viverra metus rhoncus sem.
Nulla et lectus vestibulum urna fringilla ultrices.
\marginpar{This could be a couple of references and other sidenotes.}
Phasellus eu tellus sit amet tortor gravida placerat. Integer sapien
est, iaculis in, pretium quis, viverra ac, nunc.
\begin{figure}[h]
\marginnote{This would be where the caption should be.}
\includegraphics{broken_loop}
\end{figure}
\lipsum[3]
\end{document}
```

- Welcome to TeX.sx! A tip: You can use backticks to mark your inline code as I did in my edit. – doncherry Oct 16 '11 at 16:59
- Please provide version numbers and dates. I wrote to S. Hicks at the end of July 2010; some time later he published a first version of marginfix, there were bug reports (I wrote one at the end of August 2010), and then he published version 0.9.1. To which rewrite do you refer? – Keks Dose Oct 21 '11 at 14:10
- I never used the old version with \blockmargin, but I found the command in the TUGboat article (tug.org/TUGboat/tb31-2/tb98hicks.pdf). In the github repository (github.com/shicks/marginfix/commits/master) it is present in the initial commit (Jun 23, 2010) and was dropped in the next one (Aug 8, 2010). I use marginfix extensively and it works great. Now I want to mix it with figure captions that require \marginnote; those obviously overlap with the \marginpars. \blockmargin would be very helpful. – Andy Oct 21 '11 at 14:26
- I suggest that you edit your question: draft a minimal working example showing your need for a command blocking the margin, refer to the version of marginfix including \blockmargin, and ask for somebody to code it. BUT from my own experience with marginfix I doubt that this will work if not done with care. Marginfix moves marginnotes vertically.
How should that be done in case of \blockmargin on the page? Probably you either have to cease using marginfix or captions in the margin. – Keks Dose Oct 21 '11 at 15:09
- @KeksDose: I did edit, and we do indeed use captions under the figure instead of in the margin for now. – Andy Oct 23 '11 at 18:24

If the figure is a float, my first inclination is to let the figure float to a better place than "here", which is saturated with margin notes. When the image must be just "here", a manual option is to adjust the caption alignment, for example using the margincap environment of the mcaption package, or using the SCfigure environment of the sidecap package with a normal caption (which can be aligned with \sidecaptionvpos).

But for a general solution that prevents any possible overlap of the side caption with preceding \marginpar notes (with some extra separation), simply convert the caption into a \marginpar note using \figcaption (from the captdef package), placed outside of the float (which you can safely omit in this case).

```latex
\documentclass{article}
\usepackage{lipsum}
\usepackage{xcolor}
\usepackage{graphicx}
\usepackage{marginnote}
\usepackage[paperwidth=170mm, paperheight=240mm, left=40pt, top=40pt,
            textwidth=260pt, marginparsep=20pt, marginparwidth=100pt,
            textheight=560pt, footskip=40pt]{geometry}
\usepackage{captdef}

\begin{document}
Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Ut purus elit,
vestibulum ut, placerat ac, adipiscing vitae, felis. Curabitur dictum
gravida mauris. Nam arcu libero, nonummy eget, consectetuer id,
vulputate a, magna.
\marginpar{This could be a couple of references and other sidenotes.}
Donec vehicula augue eu neque.
\marginpar{This could be a couple of references and other sidenotes.}
Pellentesque habitant morbi tristique senectus et netus et malesuada
fames ac turpis egestas. Mauris ut leo. Cras viverra metus rhoncus sem.
Nulla et lectus vestibulum urna fringilla ultrices.
\marginpar{This could be a couple of references and other sidenotes.}
Phasellus eu tellus sit amet tortor gravida placerat. Integer sapien
est, iaculis in, pretium quis, viverra ac, nunc.
\marginpar{\textcolor{red}{\figcaption{This would be where the caption
should be.}}}
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{broken_loop}
\end{figure}
% or simply
% \marginpar{\figcaption{...}}
% \noindent\includegraphics{broken_loop}
\lipsum[3]
\end{document}
```
https://www.cheenta.com/acute-angled-triangle-prmo-ii-2019-question-29/?mode=grid
Try this beautiful problem from the Pre-RMO II, 2019, Question 29, based on an acute-angled triangle.

## Acute angled triangle – Problem 29

Let ABC be an acute-angled triangle with AB = 15 and BC = 8. Let D be a point on AB such that BD = BC. Consider points E on AC such that $\angle DEB = \angle BEC$. If $\alpha$ denotes the product of all possible values of AE, find $[\alpha]$, the integer part of $\alpha$.

- is 107
- is 68
- is 840
- cannot be determined from the given information

Key topics: Equations, Algebra, Integers

## Check the Answer

But try the problem first…

Answer: is 68.

Source: PRMO II, 2019, Question 29. Suggested reading: Higher Algebra by Hall and Knight.

## Try with Hints

First hint:

Two points $E_1$, $E_2$ satisfy the condition: $E_1$ is the intersection of the circle through C, B, D with AC, and $E_2$ is the intersection of the angle bisector from B with AC. For $E_2$ we have $\angle DE_2B = \angle CE_2B$, and for $E_1$, $\angle BE_1C = \angle BDC = \angle BCD = \angle BE_1D$.

By the power of the point A, $AE_1 \cdot AC = AD \cdot AB = 7 \times 15$, and $\frac{AE_2}{AC} = \frac{XY}{XC}$ (where Y is the midpoint of DC and X is the foot of the altitude from A to CD).

Second hint:

$\frac{XD}{DY} = \frac{7}{8}$ and $DY = YC$, so $\frac{XY}{XC} = \frac{XD + DY}{XD + DY + YC} = \frac{7+8}{7+8+8} = \frac{15}{23}$.

Hence $\frac{AE_2}{AC} = \frac{15}{23}$, and therefore $AE_1 \cdot AE_2 = \frac{15}{23}(7 \cdot 15) = \frac{225 \times 7}{23}$.

Final step:

$\left[\frac{225 \times 7}{23}\right] = \left[\frac{1575}{23}\right] = 68$.
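The final arithmetic from the hints can be checked with exact rational arithmetic; the snippet below is only a sketch that assumes the product formula derived in the hints:

```python
from fractions import Fraction
import math

# Per the hint chain: AE1 * AC = 7 * 15 and AE2 / AC = 15/23,
# so AE1 * AE2 = (15/23) * (7 * 15).  Fractions keep this exact.
alpha = Fraction(15, 23) * (7 * 15)
print(alpha)              # 1575/23
print(math.floor(alpha))  # 68, the integer part [alpha]
```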
https://planetmath.org/InequalityOfLogarithmicAndAsymptoticDensity
# inequality of logarithmic and asymptotic density

For any $A \subseteq \mathbb{N}$ we denote $A(n) := |A \cap \{1, 2, \ldots, n\}|$ and $S(n) := \sum_{k=1}^{n} \frac{1}{k}$.

Recall that the values

$\underline{d}(A)=\liminf_{n\to\infty}\frac{A(n)}{n}\qquad\overline{d}(A)=\limsup_{n\to\infty}\frac{A(n)}{n}$

are called lower and upper asymptotic density of $A$. The values

$\underline{\delta}(A)=\liminf_{n\to\infty}\frac{\sum\limits_{k\in A;\,k\leq n}\frac{1}{k}}{S(n)}\qquad\overline{\delta}(A)=\limsup_{n\to\infty}\frac{\sum\limits_{k\in A;\,k\leq n}\frac{1}{k}}{S(n)}$

are called lower and upper logarithmic density of $A$.

We have $S(n)\sim\ln n$ (we use the Landau notation). This follows from the fact that $\lim\limits_{n\to\infty}S(n)-\ln n=\gamma$ is Euler's constant. Therefore we can use $\ln n$ instead of $S(n)$ in the definition of logarithmic density as well. The sum in the definition of logarithmic density can be rewritten using Iverson's convention as $\sum_{k=1}^{n}\frac{1}{k}[k\in A]$. (This means that we only add elements fulfilling the condition $k\in A$. This notation is introduced in [1, p.24].)

###### Theorem 1.

For any subset $A\subseteq\mathbb{N}$,

$\underline{d}(A)\leq\underline{\delta}(A)\leq\overline{\delta}(A)\leq\overline{d}(A)$

holds.

###### Proof.

We first observe that

$\frac{1}{k}[k\in A]=\frac{A(k)-A(k-1)}{k},$

$D(n):=\sum_{k=1}^{n}\frac{1}{k}[k\in A]=\frac{A(n)}{n}+\sum_{k=1}^{n-1}\frac{A(k)}{k(k+1)}$

Let $\varepsilon>0$ be arbitrary. There exists an $n_{0}\in\mathbb{N}$ such that for each $n\geq n_{0}$ it holds that $\underline{d}(A)-\varepsilon\leq\frac{A(n)}{n}\leq\overline{d}(A)+\varepsilon$. We denote $C:=1+S(n_{0})$.
For $n\geq n_{0}$ we get

$D(n)\leq C+\sum_{k=n_{0}}^{n-1}\frac{A(k)}{k}\cdot\frac{1}{k+1}\leq C+(\overline{d}(A)+\varepsilon)\sum_{k=n_{0}}^{n-1}\frac{1}{k+1}\sim(\overline{d}(A)+\varepsilon)\ln n,$

$\overline{\delta}(A)=\limsup_{n\to\infty}\frac{D(n)}{\ln n}\leq\overline{d}(A)+\varepsilon.$

This inequality holds for any $\varepsilon>0$, thus $\overline{\delta}(A)\leq\overline{d}(A)$.

For the proof of the inequality for the lower densities we put $C^{\prime}:=\sum_{k=1}^{n_{0}-1}\frac{A(k)}{k(k+1)}-(\underline{d}(A)-\varepsilon)S(n_{0})$. We get

$D(n)\geq C^{\prime}+(\underline{d}(A)-\varepsilon)S(n_{0})+(\underline{d}(A)-\varepsilon)\sum_{k=n_{0}}^{n}\frac{1}{k+1}=C^{\prime}+(\underline{d}(A)-\varepsilon)S(n)\sim(\underline{d}(A)-\varepsilon)\ln n$

and this implies $\underline{\delta}(A)\geq\underline{d}(A)$. ∎

For the proof using Abel's partial summation see [4] or [5].

###### Corollary 1.

If a set has asymptotic density, then it has logarithmic density, too.

A well-known example of a set having logarithmic density but not having asymptotic density is the set of all numbers whose first digit is equal to 1. It can moreover be proved that for any real numbers $0\leq\underline{\alpha}\leq\underline{\beta}\leq\overline{\beta}\leq\overline{\alpha}\leq 1$ there exists a set $A\subseteq\mathbb{N}$ such that $\underline{d}(A)=\underline{\alpha}$, $\underline{\delta}(A)=\underline{\beta}$, $\overline{\delta}(A)=\overline{\beta}$ and $\overline{d}(A)=\overline{\alpha}$ (see [2]).

## References

• 1 R. L. Graham, D. E. Knuth, and O. Patashnik. Concrete Mathematics: A Foundation for Computer Science. Addison-Wesley, 1989.
• 2 L. Mišík. Sets of positive integers with prescribed values of densities. Mathematica Slovaca, 52(3):289–296, 2002.
• 3 H. H. Ostmann. Additive Zahlentheorie I. Springer-Verlag, Berlin-Göttingen-Heidelberg, 1956.
• 4 J. Steuding.
Probabilistic Number Theory. Available at http://www.math.uni-frankfurt.de/~steuding/steuding/prob.pdf.
• 5 G. Tenenbaum. Introduction to Analytic and Probabilistic Number Theory. Cambridge Univ. Press, Cambridge, 1995.

Title: inequality of logarithmic and asymptotic density
Canonical name: InequalityOfLogarithmicAndAsymptoticDensity
Date of creation: 2014-03-24 9:16:11
Last modified on: 2014-03-24 9:16:11
Owner: kompik (10588)
Last modified by: kompik (10588)
Numerical id: 8
Author: kompik (10588)
Entry type: Theorem
Classification: msc 11B05
Related: AsymptoticDensity, LogarithmicDensity
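The leading-digit-1 example mentioned above can be illustrated numerically. The sketch below (not part of the original entry) compares $A(n)/n$ with the logarithmic average at sample points $n = 2\cdot 10^j$, where the asymptotic ratio sits near its upper extreme (about $5/9$), while the logarithmic average drifts slowly toward $\log_{10} 2 \approx 0.301$:

```python
import math

def leading_digit_one(k):
    # membership test for A = {numbers whose first digit is 1}
    return str(k)[0] == '1'

# At n = 2*10^j the ordinary density A(n)/n stays near 0.5556, while the
# logarithmic average (sum of 1/k over A, divided by the harmonic sum)
# slowly approaches log10(2) ~ 0.301, illustrating why the asymptotic
# density of this set does not exist but its logarithmic density does.
for n in (2 * 10**j for j in range(2, 6)):
    count, logsum, harmonic = 0, 0.0, 0.0
    for k in range(1, n + 1):
        harmonic += 1.0 / k
        if leading_digit_one(k):
            count += 1
            logsum += 1.0 / k
    print(n, round(count / n, 4), round(logsum / harmonic, 4))
```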
http://mathhelpforum.com/algebra/67121-inequalities-help-word-problem.html
# Math Help - Inequalities help with word problem

1. ## Inequalities help with word problem

Suppose you want to cover the back yard with decorative rock and plants and some trees. I need 30 tons of rock to cover the area. If each ton costs $60 and each tree is $84, what is the maximum number of trees you can buy with a budget for rocks and trees of $2,500? Write an inequality that illustrates the problem and solve it. Express your answer as an inequality and explain how you arrived at your answer.

Now, I can do most of this by doing basic math. I know that 30 tons of rock at $60 a ton is $1800 total. I subtracted the $1800 from the $2,500 and I have $700 left for trees. Trees being $84 each, I know that I can only buy 8 trees, for a total cost of $672. But I am so lost as to writing this as an inequality. I thought inequality symbols were greater than, less than, greater than or equal to, or less than or equal to. I'm a bit confused as to how to write this problem as an inequality. I'm asked if 5 would be the solution to the inequality, and I know that it won't, because 5 does not get me close to the $700 mark for trees. I'm just a bit lost here.

2. Originally Posted by n0thx
[the problem statement and attempt quoted above]

Suppose the maximum number of trees you can buy is x. Then x must be a nonnegative integer (you can't buy or use half a tree), so the inequality is

84x + 30·60 ≤ 2500 (x a nonnegative integer)

3. Ok, so I just want to make sure I have this correct, so I hope I say it correctly. The inequality is < because once the problem is solved it does not equal 2500, correct? The total amount of money used in this problem is $2472.00, making it less than $2500.00, right? Now to solve the problem I need to find out what x is, right? Then in the last part of the problem, when I am asked if 5 is a solution, the answer is going to be no, because it actually would be 8 trees that the person could buy with the money left over from purchasing the rocks. Gahh... math is so hard. Thank you for showing me how to write it as an inequality. I feel a bit better. I just am having trouble trying to figure out when to use x as the substitution in the equation, or in any equation for that matter.

4. Originally Posted by n0thx
"The inequality is < because once the problem is solved it does not equal 2500, correct? The total amount of money used in this problem is $2472.00, making it less than $2500.00, right?"

You are all right.

"Now to solve the problem I need to find out what x is, right?"

Right.

"Then in the last part of the problem, when I am asked if 5 is a solution, the answer is going to be no, because it actually would be 8 trees that the person could buy with the money left over from purchasing the rocks. Gahh... math is so hard."
"Thank you for showing me how to write it as an inequality. I feel a bit better. I just am having trouble trying to figure out when to use x as the substitution in the equation, or in any equation for that matter."

x is just a name for the unknown quantity.
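The arithmetic in the thread can be sketched in a few lines; this just re-derives the numbers discussed above (8 trees, $2472 spent), it is not part of the original posts:

```python
# Budget check: the rock cost is fixed, trees fill the remaining budget,
# and integer division enforces "whole trees only" from the inequality
# 84x + 30*60 <= 2500.
rock_cost = 30 * 60                       # 30 tons at $60/ton -> $1800
budget = 2500
max_trees = (budget - rock_cost) // 84    # largest x with 84x <= 700
total_spent = rock_cost + 84 * max_trees

print(max_trees)    # 8
print(total_spent)  # 2472, which is indeed <= 2500
```

Buying one more tree would cost $2556 and break the budget, which is exactly why 8 is the maximum.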
https://chemistry.stackexchange.com/questions/21860/a-misunderstanding-about-the-energy-profile-of-reactions-with-a-catalyst-involve/42391
A misunderstanding about the energy profile of reactions with a catalyst involved

All of us are aware of the importance of catalysts in biochemistry. For a high-school learner like me, catalysts, and therefore enzymes, play a bridge-like role that connects high-school biology to high-school chemistry. Yet I got baffled by the two common energy profiles drawn for a generic reaction in the presence of a catalyst. The question may seem rudimentary, I agree, but the possible answers I found were too technical for me to understand.

One of the energy profiles has several curves along the pathway for $E$, and the other one has only one curve in the pathway where the catalyst should have affected the reaction. My guess is that the first energy profile is about a net change in $E_\mathrm{a}$, but the second one demonstrates what really happens when an exothermic reaction occurs. Now, here are the questions:

- Why are the two energy profiles different? Please explain as if you're teaching thermochemistry to a little kid!
- Why do those curves occur in the latter energy profile, and is there a way to know how many of them will occur in the presence of a catalyst?

Image credits: chemwiki.ucdavis.edu (page here) and wiki (page here).

The upper graph is just the simplest way to visualize the effect of a catalyst on a reaction $\ce{S -> P}$: the activation energy is lowered. The activation energy for the reaction in that direction is the difference between the energies of the starting material $S$ and a transition state $TS^\#$. Since it is the same starting material in the presence or absence of the catalyst, the energy of the transition state is what differs. Can the same transition state have two different energies, just through the remote magical action of a catalyst located somewhere? Probably not!
It is much more plausible that - in the absence and presence of a catalyst - two completely different (different in structure and different in energy) transition states are involved. Exactly this situation is described in the second graph! The catalyst "reacts" with the starting material, either by forming a covalent bond, by hydrogen bonding, etc., and thus opens a different reaction channel with different intermediates and a transition state not found in the non-catalyzed reaction. In the end, the same product $P$ is formed and the catalyst is regenerated, but this doesn't mean that the catalyst wasn't heavily involved in the reaction going on.

#1: Curve 1 is just for a simple case, showing the comparison between a theoretical pathway for a catalyzed and a non-catalyzed reaction, but it has another significance which I will discuss in part two.

#2: How many of these curves can occur: theoretically, infinitely many, but in a practical case you have to consider all types of possible elementary reactions in your model. An elementary reaction is a reaction which can occur in one step, i.e. the reactant, product and TS have well-defined positions on the potential energy surface. For example, if you have a simple ketone hydrogenation reaction, you can get an overall energy profile for the gas-phase reaction, but on a catalytic surface there are lots of possibilities. At first the ketone and hydrogen will adsorb on the surface, and then the first hydrogen addition can occur either on the C or on the O of the ketone group. These are intuitively the main reaction pathways, but there can be many more. In the above image I was working on LA hydrogenation, and you can see there are two different pathways for the hydrogenation; there might be still other pathways. And if you want to capture the real chemistry you have to make sure you are including all important intermediates.
So your energy profile may look like this. Here the picture becomes quite complicated, with lots of elementary reactions. To determine the overall activation energy of a catalyzed reaction, scientists follow two methods.

Method 1: Energetic span theory, given by Shaik et al. I am adding the link to that paper here: http://pubs.acs.org/doi/ipdf/10.1021/ar1000956

Method 2: Apparent activation barrier. This is an age-old method: the sensitivity of the turnover frequency (TOF) to a change of temperature is measured and defined as the apparent activation barrier, $$E_\mathrm{a} = -R\,\frac{\partial \ln(\mathrm{TOF})}{\partial (1/T)}$$ Now you can plot the first energy profile with your gas-phase activation energy and the apparent activation energy of the catalyzed reaction.

A reaction involving more than one elementary step has one or more intermediates being formed, which, in turn, means there is more than one energy barrier to overcome. In other words, there is more than one transition state lying on the reaction pathway. As it is intuitive that pushing over an energy barrier, i.e. passing through a transition-state peak, entails the highest energy, it becomes clear that this would be the slowest step in a reaction pathway. However, when more than one such barrier is to be crossed, it becomes important to recognize the highest barrier, which will determine the rate of the reaction. This step, whose rate determines the overall rate of reaction, is known as the rate-determining step or rate-limiting step. The height of an energy barrier is always measured relative to the energy of the reactant or starting material. Different possibilities have been shown in figure 6.

From the link you gave on energy profiles I found this. Basically there is no "net change" in $E_\mathrm{a}$. In the first graph in the question, the curve at the top represents how the reaction proceeds without a catalyst, and the one below is for the reaction with a catalyst.
In the second graph there are 4 intermediate reactions/transition states before the product is formed. Hence you have 4 peaks of different activation energy, as shown. A catalyst basically provides alternate routes for a reaction by forming the transition/metastable/intermediate state at a lower energy than in the original reaction. You will study later that this has to do with basic collision theory and molecular dynamics. In the given graphs a catalyst can take the reaction R → P through just one route where only one transition happens (as in graph 1) or through 4 (or even more) transition states (as in graph 2). For one set of conditions (P, T, V, etc.) I am guessing the number of these peaks will be unique. It is also possible that for a certain catalyst and reactant the number of peaks/intermediate states is constant.
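As a side note not from the thread: the apparent activation barrier defined above, $E_\mathrm{a} = -R\,\partial\ln(\mathrm{TOF})/\partial(1/T)$, amounts to fitting $\ln(\mathrm{TOF})$ against $1/T$ and reading off the slope. A minimal sketch with synthetic Arrhenius data (the barrier, prefactor, and temperatures are made up for illustration):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Synthetic TOF data obeying an Arrhenius law with a known barrier
Ea_true = 50_000.0                    # J/mol (assumed for illustration)
A = 1.0e6                             # pre-exponential factor, 1/s (assumed)
T = np.linspace(450.0, 550.0, 6)      # temperatures, K
tof = A * np.exp(-Ea_true / (R * T))  # turnover frequencies, 1/s

# E_a = -R * d ln(TOF) / d(1/T): fit ln(TOF) against 1/T, take the slope
slope, _ = np.polyfit(1.0 / T, np.log(tof), 1)
Ea_apparent = -R * slope              # J/mol

print(Ea_apparent / 1000.0)           # recovers ~50 kJ/mol
```

With real kinetic data the fit would of course carry scatter, but the extraction of the apparent barrier works the same way.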
http://crypto.stackexchange.com/questions/3723/randomized-stream-cipher-using-multivariant-quadratic-equations
# Randomized stream cipher using multivariate quadratic equations

This is an idea I had for a cipher that I thought might reduce to a known hard problem. It is efficient (compared to something like BBS) in terms of time but not in terms of space. Here's the algorithm:

1. Produce 128 truly random streams equal in length to the plain-text. (These do not need to be secret; they can be published with the cipher-text.)
2. Let each bit of the 128 bit-streams define the terms of a multivariate quadratic polynomial in 128 variables over GF(2). One way to do this would be to label each stream as defining a specific term to multiply.
3. Split the secret key into 128 one-bit variables and substitute them into the equation taken from step 2.
4. The resultant bit is then XORed with the plain-text to produce cipher-text.
5. Repeat for all bits in the plain-text and transmit the cipher-text plus the 128 key-streams.

My security argument is that solving systems of multivariate quadratic equations in a lot of variables is a known hard problem. In the average case, it is meant to be NP-hard. All I've done is randomize the selection of the specific instance of the problem. Of course, very quickly the system would become over-determined, which might aid solution of the problem. My question is whether I've made a mistake somewhere or misunderstood something?

- 1) Most proofs are asymptotic and don't tell you anything about concrete sizes of the problem. BBS has concrete proofs too, and those require much bigger modulus sizes than most people expect. 2) You need a proof that a problem chosen in the manner you describe is hard too. I believe some knapsack-based cryptosystems fell to this. –  CodesInChaos Sep 5 '12 at 10:50

Yeah, step two is a bit handwavey and is probably where the security is made or lost. You'd need to prove that step 2 really defines a difficult set of equations in the average case.
My question is whether I've generally misunderstood the underlying problem or whether I've broadly got it right and this would actually work if all the details were ironed out? –  Simon Johnson Sep 5 '12 at 11:00 @Thomas - no, sorry, the streams can be published publicly with the cipher-text. I try to depend on MQ to make sure that even with the streams being public knowledge, the key cannot be computed. –  Simon Johnson Sep 5 '12 at 11:59 @SimonJohnson Oh, right, I see, my bad - I will delete my previous comment as it serves no purpose. –  Thomas Sep 5 '12 at 12:01

Summary. This scheme is insecure. It can be cryptanalyzed using standard methods from the cryptanalytic literature. It also has poor performance.

Your algorithm. To summarize your scheme: a one-bit message $m \in GF(2)$ is encrypted by picking a random quadratic polynomial $p(x_1,\dots,x_{128})$ in $GF(2)[x_1,\dots,x_{128}]$, setting $c = m \oplus p(k)$, and transmitting $p(x_1,\dots,x_{128})$ and $c$. Here $k \in GF(2)^{128}$ is the key, and $k$ remains fixed for all messages (while in contrast $p$ is chosen afresh for each message).

Performance. This scheme expands the length of the message by a huge amount: by a factor of 16384 or so. That's an enormous overhead. Existing schemes don't have that problem. Also, I expect that it would be very slow (compared to state-of-the-art stream ciphers), since for each bit encrypted, you have to pick 16384 pseudorandom bits and then evaluate the polynomial.

Algorithmic background: solving multivariate equations. There has been a lot of work in the cryptographic literature on the hardness of solving a system of multivariate equations. One of the fundamental techniques is relinearization. If you have a system of $n^2$ equations, where each equation is of degree $\le 2$ (i.e., a multivariate quadratic equation) and where you have $n$ unknowns, then relinearization can solve the system of equations in polynomial time (approx.
$O(n^6)$ time or less). Actually, you don't even need relinearization to solve this problem: simple linearization is sufficient. In particular, if $x_1,\dots,x_n$ denote the unknowns, then for each pair $x_i,x_j$ of unknowns, you introduce a new variable $y_{i,j} = x_i x_j$. Now we treat the $y_{i,j}$'s as additional unknowns. With these additional unknowns, each equation is now a linear equation (in the $x_i$'s and $y_{i,j}$'s). How many unknowns are there now? Well, there are about $n^2/2$ of the $y_{i,j}$'s, and another $n$ of the $x_i$'s. Consequently, we have $n^2$ linear equations in $\approx 0.5 n^2$ unknowns. Since there are more equations than unknowns, we can use standard linear algebra to solve these linear equations. The answer provides a solution to our original system of quadratic equations. Relinearization is a generalization of this idea that works with a smaller number of equations, at some cost in running time. For more on this subject, see the following research paper: Cryptanalysis. With this background, it then becomes easy to see how to cryptanalyze your system. Each bit of ciphertext reveals one quadratic equation on 128 unknowns, where the unknowns are the bits of the key $k$. If we're given 16384 bits of known plaintext, then we have $16384 = 128^2$ quadratic equations in 128 unknowns. Now we can apply linearization to recover the key. (With relinearization, we could reduce the amount of known-plaintext required, but 16384 bits of known plaintext is already a very modest requirement, so the simple linearization attack is already devastating.) Therefore, this algorithm falls to a simple known-plaintext attack. For that reason, it does not meet the standard security requirements and is not suitable for use. - Why are $n^2$ plaintext bits required instead of $n^2/2$? –  Antimony Dec 14 '13 at 21:53 @Antimony, yeah, $n^2/2$ should suffice. I wasn't trying to optimize the constant factors (just laziness). Thank you. –  D.W. Dec 15 '13 at 3:24
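The linearization attack in the answer is easy to demonstrate at toy scale. Below is a sketch of mine (not the answer author's code), with an 8-bit key instead of 128 and a hand-rolled GF(2) solver: each known keystream bit gives one linear equation in the monomials $x_i$ and $x_i x_j$, and plain Gauss-Jordan elimination mod 2 recovers the key.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 8                                   # toy key size (the post uses 128)
key = rng.integers(0, 2, n)             # the secret bits we will recover

# Monomials after linearization: x_i, then y_ij = x_i * x_j for i < j
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
N = n + len(pairs)                      # number of linearized unknowns

def monomial_vector(x):
    """Evaluate every monomial at the bit-vector x over GF(2)."""
    return np.array(list(x) + [x[i] & x[j] for i, j in pairs], dtype=np.uint8)

# Known-plaintext scenario: each ciphertext bit yields one linear equation
# A[row] . z = b[row] (mod 2), where z stacks the x_i and y_ij values.
m = 100                                 # known keystream bits (m >> N suffices)
A = rng.integers(0, 2, (m, N)).astype(np.uint8)  # public random polynomials
b = (A @ monomial_vector(key)) % 2               # observed keystream bits

def solve_gf2(A, b):
    """Gauss-Jordan elimination over GF(2); returns one solution."""
    A, b = A.copy(), b.copy()
    rows, cols = A.shape
    pivots, r = {}, 0
    for c in range(cols):
        hit = next((i for i in range(r, rows) if A[i, c]), None)
        if hit is None:
            continue                     # free column
        A[[r, hit]], b[[r, hit]] = A[[hit, r]], b[[hit, r]]
        for i in range(rows):
            if i != r and A[i, c]:
                A[i] ^= A[r]
                b[i] ^= b[r]
        pivots[c] = r
        r += 1
    x = np.zeros(cols, dtype=np.uint8)
    for c, row in pivots.items():        # free variables stay 0
        x[c] = b[row]
    return x

z = solve_gf2(A, b)
recovered = z[:n]                        # the linear monomials are the key bits
print(recovered.tolist(), key.tolist())  # the two lists should match
```

With the full-size parameters this is the same attack, just with roughly $128^2/2$ linearized unknowns, which is why on the order of $128^2$ known keystream bits suffice.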
https://www.physicsforums.com/threads/buoancy-volume-question.893658/
# Homework Help: Buoyancy volume question

1. Nov 16, 2016

### NY152

1. The problem statement, all variables and given/known data
A 69.5 kg man floats in freshwater with 2.95% of his volume above water when his lungs are empty, and 4.75% of his volume above water when his lungs are full. Calculate the volume of air, in liters, that he inhales (this is called his lung capacity). Neglect the weight of air in his lungs.

2. Relevant equations
d = m/v; freshwater: d = 1000 kg/m^3

3. The attempt at a solution
From the given information, there's a 1.8% increase in volume. I'm just not sure where to start given the information above.

2. Nov 16, 2016

### Staff: Mentor

You should be able to determine the man's volume for both cases. Start by considering the volume of water he displaces in order to float.

3. Nov 16, 2016

### NY152

Okay so I did ((69.5 kg/1000 kg/m^3)*(4.75/100)) - ((69.5 kg/1000 kg/m^3)*(2.95/100)) = .00125 m^3, then converted that to liters, which is 1.25 liters. The hw system says "it looks like you may have confused the denominator and the numerator, check your algebra". Not sure where I'm going wrong here though...

4. Nov 16, 2016

### Staff: Mentor

You've found the difference between 4.75% of the displaced water and 2.95% of the displaced water. That's not what you're looking for. The displaced water volume is constant and smaller than the man's volume. You need to find the two volumes of the man. By the way, save yourself a heap of typing and just assign a variable name to repeated quantities. Call the volume of water displaced vw, for example.

5. Nov 16, 2016

### NY152

Oh okay, so I'd do the same sort of math setup but use 95.25% and 97.05% instead?

6. Nov 16, 2016

### Staff: Mentor

No. Let's concentrate on one of the volumes for the man first. You've correctly determined that he displaces a volume of water $V_w = M/ρ$, where M is his mass and ρ the density of water. That is also the amount of his volume that is below water (since it's displacing the water).
Let's call the man's total volume for the first case (the 2.95% above water case) $V_o$. What would be the volume above water (in symbols, no numbers yet)?
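To close the loop on the mentor's hints (a sketch added here, not part of the thread): the submerged volume $V_w = M/\rho$ is 97.05% of the man's total volume with empty lungs and 95.25% with full lungs, so the lung capacity is the difference of the two total volumes.

```python
M = 69.5        # mass of the man, kg
rho = 1000.0    # density of fresh water, kg/m^3

Vw = M / rho    # displaced (submerged) volume, the same in both cases

# Total body volume in each case: the submerged part is (100% - %above) of it
V_empty = Vw / (1 - 0.0295)   # lungs empty: 2.95% of his volume above water
V_full = Vw / (1 - 0.0475)    # lungs full: 4.75% of his volume above water

lung_capacity_L = (V_full - V_empty) * 1000.0   # m^3 -> liters
print(round(lung_capacity_L, 2))                # about 1.35 L
```

Note how this differs from the student's attempt, which took percentages of the fixed displaced volume instead of dividing by the submerged fraction.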
http://pwntestprep.com/wp/2011/04/weekend-challenge-question/
Pretty easy question here, so I'll attach a prize of fairly small value, but that might still be fun: the first person to comment with the answer gets to name a character in a future word problem on this site, and what they do/sell/wear/eat. For example: "Rita is the diaper changer at a daycare center that feeds the kids nothing but corn." Of course, I have no way of contacting an anonymous commenter, so you're going to have to identify yourself. Cool? Let's goooo.

Four kids are in a room; their average age is 8 years old. Then an adult enters the room, and the average age becomes 16. A tense conversation quickly escalates, culminating in one of the children screaming "You're not even my real dad!" and leaving the room, but the average age in the room stays the same. A few minutes later, another kid leaves the room in search of a sandwich. If the last kid to leave was 4 years old, then the adult is how much older than the average age of the people remaining in the room, awkwardly staring at each other?

MOAR UPDATEZ and SOLUTION: Congratulations to "Amy" who answered correctly, albeit with some degree of uncertainty. Good enough, kid. Email me to claim your prize. Solution after the jump.

##### Solution

We're going to solve this bad boy with the average table. Because it is awesome and if you don't love it then you probably are incapable of love. Let's start by finding the age of the adult. Here's what the problem tells us:

| Number of Values | × Average of Values | = Sum of Values |
|---|---|---|
| 4 people | 8 years/person | |
| +1 person | | |
| 5 people | 16 years/person | |

In order to find the adult's age, just fill in the sums of the ages in the first and third rows, and figure out the difference. That is to say: if we know the sum of the ages when it was just the 4 kids in the room was 32, and then the sum of ages became 80 when the adult walked in, how old must the adult have been?

| Number of Values | × Average of Values | = Sum of Values |
|---|---|---|
| 4 people | 8 years/person | 32 years |
| +1 person | 48 years/person | +48 years |
| 5 people | 16 years/person | 80 years |

OK, so if the adult is 48 years old, we have half of what we need. Now let's figure out what happened after the adult walked in. We know two people left the room, and we know enough about each to figure out the average age in the room when the dust eventually settles. Picking up from where we left off, let's deal with the first kid having a little hissy fit and leaving:

| Number of Values | × Average of Values | = Sum of Values |
|---|---|---|
| 5 people | 16 years/person | 80 years |
| -1 person | | |
| 4 people | 16 years/person | |

This looks a lot like the table above did, so we'll fill it in the same way. Note that I filled in the age of the kid who left, although it doesn't actually matter at all to the solution of the problem.

| Number of Values | × Average of Values | = Sum of Values |
|---|---|---|
| 5 people | 16 years/person | 80 years |
| -1 person | 16 years/person | -16 years |
| 4 people | 16 years/person | 64 years |

Ok, so since the average age stayed the same after that kid left, the sum of the ages in the room is now 64. When one more kid (age 4) leaves, we're left with 3 people in the room, aged a total of 60 years. What's the average age in the room? 20 years:

| Number of Values | × Average of Values | = Sum of Values |
|---|---|---|
| 4 people | 16 years/person | 64 years |
| -1 person | 4 years/person | -4 years |
| 3 people | 20 years/person | 60 years |

The adult is how much older than the average age in the room? 48 – 20 = 28. Sweet.
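The whole table-chase is just "count × average = sum" applied repeatedly, which you can sanity-check in a few lines (an aside, not part of the original post):

```python
# sum of ages = (number of people) * (average age) at every stage
kids_sum = 4 * 8                  # four kids, average age 8
with_adult_sum = 5 * 16           # adult walks in, average jumps to 16
adult_age = with_adult_sum - kids_sum     # 80 - 32 = 48

after_first_exit = 4 * 16         # average unchanged, so the kid who left was 16
after_second_exit = after_first_exit - 4  # the 4-year-old leaves
remaining_avg = after_second_exit / 3     # 60 / 3 = 20

print(adult_age - remaining_avg)  # 28.0
```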
https://genealogy.math.ndsu.nodak.edu/id.php?id=82609
Mashhoor Ibrahim Mohammed Al Ali Ph.D. University of Birmingham 1987 Dissertation: On the Characters of the Maximal Subgroups of the Projective Symplectic Group $PSp_4(q)$ ($q$ odd) Mathematics Subject Classification: 20—Group theory and generalizations
https://quantiki.org/wiki/Superfidelity
Superfidelity

Superfidelity is a measure of similarity between density operators. It is defined as $G(\rho,\sigma) = \mathrm{tr}\rho\sigma + \sqrt{1-\mathrm{tr}(\rho^2)}\sqrt{1-\mathrm{tr}(\sigma^2)},$ where σ and ρ are density matrices. Superfidelity was introduced in [miszczak09sub] as an upper bound for fidelity. Properties Superfidelity also has properties which make it useful for quantifying distance between quantum states. In particular we have: • Bounds: 0 ≤ G(ρ1, ρ2) ≤ 1. • Symmetry:  G(ρ1, ρ2) = G(ρ2, ρ1). • Unitary invariance: for any unitary operator U, we have  G(ρ1, ρ2) = G(Uρ1U † , Uρ2U † ). • Concavity: G(ρ1, αρ2 + (1 − α)ρ3) ≥ αG(ρ1, ρ2) + (1 − α)G(ρ1, ρ3) for any ρ1, ρ2, ρ3 ∈ ΩN and α ∈ [0, 1]. • Supermultiplicativity: for ρ1, ρ2, ρ3, ρ4 ∈ ΩN we have G(ρ1 ⊗ ρ2, ρ3 ⊗ ρ4) ≥ G(ρ1, ρ3)G(ρ2, ρ4).
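A direct numerical sketch of the definition (my addition, not part of the original page), using NumPy on a pair of single-qubit states:

```python
import numpy as np

def superfidelity(rho, sigma):
    """G(rho, sigma) = tr(rho sigma) + sqrt(1 - tr rho^2) * sqrt(1 - tr sigma^2)."""
    overlap = np.real(np.trace(rho @ sigma))
    purity_rho = np.real(np.trace(rho @ rho))
    purity_sigma = np.real(np.trace(sigma @ sigma))
    # clamp against tiny negative values from floating-point round-off
    return overlap + np.sqrt(max(1.0 - purity_rho, 0.0)) * np.sqrt(max(1.0 - purity_sigma, 0.0))

# A pure qubit state and the maximally mixed qubit state
pure = np.array([[1, 0], [0, 0]], dtype=complex)
mixed = np.eye(2, dtype=complex) / 2

print(superfidelity(pure, pure))    # 1.0
print(superfidelity(mixed, mixed))  # 0.5 + 0.5 = 1.0
print(superfidelity(pure, mixed))   # 0.5; G upper-bounds the fidelity
```

Note that G(ρ, ρ) = tr ρ² + (1 − tr ρ²) = 1 for any state, which the mixed-state line illustrates.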
https://physics.stackexchange.com/questions/746057/free-boson-twisted-boundary-condition-and-t2-partition-function
Free boson twisted boundary condition and $T^2$ partition function Many CFT textbooks discuss free boson theory and free fermion theories on the torus. The partition function for the boson theory (without compactification and orbifold) is obtained by summing over the Verma modules from all possible highest weight states $$|\alpha \rangle$$ coming from the vertex operators $$e^{i \alpha X}$$. The result reads $$\sqrt{\frac{1}{\operatorname{Im}\tau}} \frac{1}{\eta(\tau) \overline{\eta(\tau)}} \ ,$$ where the $$\operatorname{Im}\tau$$ factor comes from integrating over all possible "momenta" $$\alpha$$, while the $$\eta(\tau)$$ comes from summing over Virasoro descendants. For the free fermion theory, the torus partition function includes all four sectors, (R, R), (R, NS), (NS, R), (NS, NS), corresponding to different boundary conditions of the field $$\psi$$. The uncompactified free boson theory also has a twisted sector, with an anti-periodic boundary condition $$X(e^{2\pi i}z) = - X(z)$$. In the big yellow book, this situation is discussed, where the two-point function of $$X$$ and the stress tensor is computed. But what is left untouched is the representation theory in this sector. I wonder what the $$T^2$$ partition function of the uncompactified theory is if we also consider the anti-periodic boundary condition? • Perhaps I'm misunderstanding the question, but we do take into account the twisted sector while compactifying on an orbifold, whereupon the "real" Hilbert space only consists of $\mathbb Z_2$-invariant states (you can do this with a projection operator in the trace) Jan 18 at 16:07 • @NiharKarve You are right, when $\mathbb{Z}_2$ orbifolding, people do consider the twisted sector. But what about before orbifolding and compactification, just the vanilla free boson theory? Jan 18 at 17:10 • @NiharKarve Ok I see. In the textbooks at hand, people always impose the twisted condition together with the compactification $X \sim X + 2\pi R$.
I guess what I want to ask is whether the compactification is optional (can one just do the orbifold?), and if so, what is the Hilbert space. I'm guessing one simply removes the $\mathbb{Z}_2$ non-invariant states from the original boson theory? Jan 19 at 1:22 • @NiharKarve Asked differently, what is the $T^2$ partition function if we also consider the anti-periodic boundary condition? For the periodic condition, the partition function is as written in the post. Jan 19 at 1:50 • 1) Only the condition $X\sim X + 2\pi R$ is compactification on the circle; orbifold compactification needs the additional $\mathbb Z_2$ symmetry. 2) Naïvely all you have to do is start with the circle-compactified theory and then remove the non-$\mathbb Z_2$-invariant states, but modular invariance of the partition function forces inclusion of states generated by half-integer moded oscillators: that's the twisted sector. So the actual partition function is $\frac12 Z_\text{circ}+\left|\frac\eta{\vartheta_2}\right|+\left|\frac\eta{\vartheta_3}\right|+\left|\frac\eta{\vartheta_4}\right|$. Jan 19 at 9:31 $$\renewcommand{\Im}{\operatorname{Im}}$$Let's dissect the question a bit. First, the periodic BCs on the free boson on a torus correspond to (R,R). But of course, no one stops you from imposing (R,NS), (NS,R) and (NS,NS) BCs and computing the partition function. The physical meaning of this is having turned on a background $\mathbb{Z}_2$ gauge field along either or both cycles of the torus.
So to rephrase your question, you essentially ask the following: We know that $$Z_{\text{(R,R)}}[\tau] = \frac{1}{\sqrt{\Im(\tau)}}\frac{1}{\left|\eta(\tau)\right|^2}.\tag{0.1}\label{1}$$ What is $$Z_{(\bullet,\circ)}[\tau]$$ with $$(\bullet,\circ)\in\{\text{(R,NS), (NS,R), (NS,NS)}\}?$$ There are two ways to answer, and both of them use the following fact of life: The partition function of the non-compact scalar on a torus $$\mathbb{T}^2_\tau:=\mathbb{C}/\left(\mathbb{Z}\oplus\tau\mathbb{Z}\right)$$, with $$\tau\in\mathbb{H}$$ (the upper half plane), can be read from the partition function of the compact scalar by sending the compactification radius to infinity, $$R\to\infty$$. Way 1 The simplest way to obtain the answer is as follows. Well, it suffices to go look at the compactified case and stare at the orbifold computation. For example, stare at equation (8.24) in Ginsparg's notes. You will see that it is only the (R,R) sector that contributes to the $$R$$ dependence. Therefore, the (R,NS), (NS,R) and (NS,NS) are identical in the uncompactified case (when you send $$R\to\infty$$). So we have \begin{align} Z_\text{(R,NS)}[\tau] &= \left|\frac{2\eta(\tau)}{\vartheta_2(\tau)}\right| \tag{1.1}\label{1.1} \\ Z_\text{(NS,R)}[\tau] &= \left|\frac{\eta(\tau)}{\vartheta_4(\tau)}\right| \tag{1.2}\label{1.2}\\ Z_\text{(NS,NS)}[\tau] &= \left|\frac{\eta(\tau)}{\vartheta_3(\tau)}\right|. \tag{1.3}\label{1.3} \end{align} Way 2 Another way is to do the computation from scratch.
Namely, go back to the path integral and compute, say in the (R,R) case $$Z_\text{(R,R)}[\tau] = \frac{\operatorname{vol}(\text{zero-modes})}{\sqrt{\operatorname{det}_\text{(R,R)}'\!\big(\partial\bar\partial\big)}}.\tag{2.1}\label{2.1}$$ Up to factors of $$2$$ and $$\pi$$, you can then see the following: $$\operatorname{vol}(\text{zero-modes}) = \sqrt{\Im(\tau)}\tag{2.2}\label{2.2}$$ and the non-zero eigenvalues of $$\partial\bar\partial$$ on a torus with (R,R) BCs are simply $$\lambda^\text{(R,R)}_{n,m} = \frac{1}{\Im(\tau)^2}\left|n+\tau m\right|^2, \qquad (n,m)\in\mathbb{Z}^2\setminus(0,0),$$ giving $$\operatorname{det}'_\text{(R,R)}(\partial\bar\partial) = \Im(\tau)^2\left|\eta(\tau)\right|^4.\tag{2.3}\label{2.3}$$ Altogether, plugging \eqref{2.2} and \eqref{2.3} into \eqref{2.1} gives \eqref{1}. Now, for the other boundary conditions, all you have to do is observe that an NS BC along either cycle shifts either (or both) $$n$$ or $$m$$ by $$\frac{1}{2}$$, therefore, e.g. for (R,NS) BCs you have $$\lambda^\text{(R,NS)}_{n,m} = \frac{1}{\Im(\tau)^2}\left| n+\tau\left(m+\frac{1}{2}\right) \right|^2, \qquad (n,m)\in\mathbb{Z}^2.$$ Note that now you don't have a zero-mode anymore. So all you have to do now is compute the determinant of $$\partial\bar\partial$$ with these boundary conditions and find $$Z_{(\bullet,\circ)}[\tau] = \frac{1}{\sqrt{\det_{(\bullet,\circ)}(\partial\bar\partial)}}.$$ Doing this should land you on \eqref{1.1}-\eqref{1.3}. • Thanks for your answer! I didn't realize the $R$-independence of the other sectors previously. Nice observation. So from your answer, somehow the Hilbert space of the "twisted sector" in the $R \to +\infty$ limit is still like some direct sum of $|m, n\rangle$ subsectors? How to understand this at the level of allowed primaries? Jan 19 at 13:44 • As a comparison, for the uncompactified boson theory, the untwisted sector would consist of $|\alpha\rangle$ sectors with continuous $\alpha$.
Jan 19 at 13:46 • Actually, the RR sector partition function of the $\mathbb{Z}_2$ orbifold theory is computed in Ginsparg, eq (8.6). It's not obvious to me that the $R \to +\infty$ limit of the double infinite sum reproduces the expected factor $1/\sqrt{\operatorname{Im}\tau}$. Could you clarify a bit? Jan 20 at 5:58 • In (8.6) you can't directly take the limit $R\to\infty$ because the exponent contains both a $\propto R$ piece and a $\propto 1/R$ piece. To take the limit you must first Poisson resum one of the two sums so that you end up with something only $\propto R$; then only the $(0,0)$ term contributes. The Poisson resummation also spits out a factor of $1/\sqrt{\mathrm{Im}(\tau)}$ from the Fourier transform of the exponential. Jan 20 at 12:45 • I see, I guess you are referring to Ginsparg's eq (8.3), where the sum is the Poisson resummation of eq (8.6). Indeed, each term (except when $n, n' = 0$) goes to zero individually as $R \to \infty$. Thanks! Jan 21 at 3:23
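An aside not from the original thread: eq. (0.1) can be sanity-checked numerically. A sketch using the $q$-product $\eta(\tau) = q^{1/24}\prod_{n\ge 1}(1-q^n)$ with $q = e^{2\pi i\tau}$; modular invariance of $Z_\text{(R,R)}$ follows from $\eta(\tau+1) = e^{i\pi/12}\eta(\tau)$ and $\eta(-1/\tau) = \sqrt{-i\tau}\,\eta(\tau)$ combined with $\operatorname{Im}(-1/\tau) = \operatorname{Im}\tau/|\tau|^2$:

```python
import cmath, math

def dedekind_eta(tau, nmax=200):
    """Dedekind eta via its q-product: eta = q^(1/24) * prod(1 - q^n)."""
    q = cmath.exp(2j * cmath.pi * tau)
    prod = 1.0 + 0j
    for k in range(1, nmax + 1):
        prod *= 1 - q**k
    return cmath.exp(2j * cmath.pi * tau / 24) * prod

def Z_RR(tau):
    """Non-compact free boson torus partition function, as in eq. (0.1)."""
    return 1.0 / (math.sqrt(tau.imag) * abs(dedekind_eta(tau)) ** 2)

tau = 0.3 + 1.1j
# Modular invariance: T: tau -> tau + 1 and S: tau -> -1/tau leave Z unchanged
print(Z_RR(tau), Z_RR(tau + 1), Z_RR(-1 / tau))
```

The three printed values agree to numerical precision, confirming that the combination $\sqrt{\operatorname{Im}\tau}\,|\eta(\tau)|^2$ is modular invariant.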
https://chemistry.stackexchange.com/questions/18967/validity-of-troutons-rule?noredirect=1
Validity of Trouton's Rule I would like to know under what circumstances Trouton's rule is obeyed by liquids, and why certain systems, i.e. substances, may deviate from it. By the way, I know what Trouton's rule is, and I have a vague idea that it is sometimes disobeyed due to the presence of hydrogen bonding in liquids such as water, but I would like to know more about other situations in which the rule is invalid. The rule says that for many liquids the entropy of vaporization is almost the same, around $85~\mathrm{J\,K^{-1}\,mol^{-1}}$. It's true that the rule states that the entropy of vaporization is $85$–$88~\mathrm{J\,K^{-1}\,mol^{-1}}$. However, the rule hardly works for highly ordered substances exhibiting hydrogen bonding: the extra order in the liquid means vaporization destroys more structure, so the entropy of vaporization comes out larger than the Trouton value.
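The rule is easy to check numerically from $\Delta S_\text{vap} = \Delta H_\text{vap}/T_b$. A short Python sketch using approximate literature values for $\Delta H_\text{vap}$ (kJ/mol) at the normal boiling point $T_b$ (K); the acceptance band of roughly ±10% around 87 J K⁻¹ mol⁻¹ is an illustrative choice of mine, not a standard definition:

```python
# Trouton's rule: Delta_S_vap = Delta_H_vap / T_b ~ 85-88 J K^-1 mol^-1.
# Approximate (rounded) literature values; treat as illustrative.
substances = {
    "benzene":  (30.72, 353.2),    # non-associated liquid -> obeys rule
    "hexane":   (28.85, 341.9),    # non-associated liquid -> obeys rule
    "water":    (40.65, 373.15),   # hydrogen bonded -> deviates upward
    "methanol": (35.3,  337.8),    # hydrogen bonded -> deviates upward
}

entropy_of_vaporization = {
    name: 1000.0 * dh / tb          # convert kJ -> J; units J K^-1 mol^-1
    for name, (dh, tb) in substances.items()
}

for name, ds in entropy_of_vaporization.items():
    obeys = 78.0 <= ds <= 96.0      # loose +-10% band around 87
    print(f"{name:9s} {ds:6.1f} J/(K mol)  Trouton: {'yes' if obeys else 'no'}")
```

The hydrogen-bonded liquids come out well above 100 J K⁻¹ mol⁻¹, which is exactly the deviation the answer describes.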
https://link.springer.com/article/10.1007/s00181-015-0947-6
Empirical Economics, Volume 50, Issue 3, pp 1091–1109

# Spatial dependence in stock returns: local normalization and VaR forecasts

Thilo A. Schmitt · Rudi Schäfer · Dominik Wied · Thomas Guhr

## Abstract

We analyze a recently proposed spatial autoregressive model for stock returns and compare it to a one-factor model and the sample covariance matrix. The influence of refinements to these covariance estimation methods is studied. We employ power mapping and the shrinkage estimator as noise reduction techniques for the correlations. Further, we address the empirically observed time-varying trends and volatilities of stock returns. Local normalization strips the time series of changing trends and fluctuating volatilities. As an alternative method, we consider a GARCH fit. In the context of portfolio optimization, we find that the spatial model and the shrinkage estimator have the best match between the estimated and realized risk measures.

### Keywords

GARCH · One-factor model · Power mapping · Spatial autoregressive model

## Authors and Affiliations

Thilo A. Schmitt, Rudi Schäfer, Thomas Guhr: Fakultät für Physik, Universität Duisburg-Essen, Duisburg, Germany. Dominik Wied: Fakultät Statistik, Dortmund, Germany.
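The shrinkage idea compared in the abstract can be illustrated in a few lines: linearly combine the sample covariance matrix $S$ with a structured target, here a scaled identity, as $S^* = (1-\delta)S + \delta\mu I$ with $\mu = \operatorname{tr}(S)/p$. The sketch below uses a fixed shrinkage intensity $\delta$ purely for illustration, whereas the Ledoit–Wolf estimator the paper employs chooses the optimal $\delta$ from the data; the toy return matrix is invented:

```python
def sample_covariance(returns):
    """Sample covariance of a T x p list of return vectors."""
    T, p = len(returns), len(returns[0])
    means = [sum(row[j] for row in returns) / T for j in range(p)]
    cov = [[0.0] * p for _ in range(p)]
    for row in returns:
        for i in range(p):
            for j in range(p):
                cov[i][j] += (row[i] - means[i]) * (row[j] - means[j])
    return [[cov[i][j] / (T - 1) for j in range(p)] for i in range(p)]

def shrink_to_identity(S, delta):
    """Linear shrinkage S* = (1 - delta) * S + delta * mu * I with
    mu = trace(S)/p: pulls eigenvalues toward their mean while
    preserving the total variance (the trace)."""
    p = len(S)
    mu = sum(S[i][i] for i in range(p)) / p
    return [[(1 - delta) * S[i][j] + (delta * mu if i == j else 0.0)
             for j in range(p)] for i in range(p)]

# Toy example: 4 observations of 2 asset returns (invented numbers).
returns = [[0.01, 0.02], [-0.02, -0.01], [0.015, 0.005], [-0.005, 0.01]]
S = sample_covariance(returns)
S_star = shrink_to_identity(S, delta=0.5)
```

Off-diagonal entries (the noisy correlations) are damped by the factor $1-\delta$, which is exactly the noise-reduction effect the paper benchmarks against power mapping.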
https://ask.libreoffice.org/en/question/179747/base-nm-many-to-many-relationships-between-two-tables-using-intermediate-table-how-to-enter-display-modify-the-data-using-forms/?comment=179792
# Base n:m many to many relationships between two tables using intermediate table, how to enter, display & modify the data using forms?

The uploaded file is a simple example of 2 tables in a many to many relationship using an intermediate table. The Items table represents pieces of metal art. The Colors table represents paint colors that can be applied to the various Items. All colors are NOT applied to all Items. There is relevant data in all 3 tables. I've completed the Tools/Relationships chart accurately. The goals:

1) When entering a new Item, be able to see and select the paint color(s) - by color name (not just the ID).
2) When adding a new Color, be able to see and select the Item(s) by Item Description (not just the ID).
3) Be able to modify any Item record adding or removing colors.
4) When viewing Item records, see the color names displayed on the form.

I suppose I'm no longer a newbie to LO Base but I'm no expert either. Any advice would be greatly appreciated.

Hello, Have previously looked at your web site & so have a tiny bit of understanding as to your project here. Having a difficult time placing what you want into the overall picture. For example, when Item 0 (Bird - large) is ordered, a) is that available in only red, blue and orange?; b) how is it known what color is wanted - seems no tie to order process? What am I missing? These three tables seem isolated from all else with the exception of an item (of what color is unknown) to be ordered. ( 2019-01-17 19:37:36 +0100 )

Hi - I appreciate you checking out our web-site though it is not the eCommerce site we want it to be. I hope you followed the link to our Etsy store where you can better understand the breadth of our metal art inventory. The example file I uploaded is just a microcosm of the larger system, in my attempt to not "muddy the water" with a lot of other superfluous minutiae. Your questions are good.
The system is not part of a customer ordering system. It is part of a Sales Order Tracking, Inventory and Costing system. I'm trying to build the Sales Order Tracking piece first, since that is the urgent need. The Item file is the "heart" file (in my mind) of the system. Each piece of metal art has attributes associated with it that are used to analyze sales and forecast production. The many to many relationship ... ( 2019-01-17 21:24:32 +0100 )

Part 2 - My Wife and I are the only users of the system I'm creating. We are constantly thinking of new metal art to make and offer. Those new pieces (new Items, in the example uploaded file) would be assigned various "colors" (think "tags") that would then be used for sorting, grouping, filtering, etc. for future reports (that I have yet to begin creating). We occasionally add new tags (colors) but often add new Items (and their related "tags"). Today, we track everything in an incredibly complex (my opinion) Calc spreadsheet that is cumbersome to maintain and VERY inflexible. Let me know if you have more questions, PLEASE! ( 2019-01-17 21:30:58 +0100 )

Part 3 - I haven't specifically answered your questions - I'm sorry. Here goes. a) Item 0 (Bird - large) would only be available in Red, Blue and Orange - Correct. b) The system would not be interacting with potential customers to be concerned about a different color request. I hope this helps. ( 2019-01-17 21:36:08 +0100 )

One quick question. I know you've tagged questions with HSQLDB. Are you using the default embedded HSQLDB? If so, before you go too far, consider changing. It is not for production use as there can be potential data loss. Backup! ( 2019-01-17 21:36:37 +0100 )

Thank you for bringing up HSQLDB. I'm using the internal HSQLDB as a development platform and for learning. I have a MySQL server sitting across the room that I'll move to for multi-user access (Wife & I) once I get the system working well enough to use.
I see you've posted an answer, I'm about to look at it now. ( 2019-01-17 21:46:40 +0100 )

Consider moving to MySQL now (I personally find PostgreSQL and Firebird to be better). You can place MySQL server software on any machine and migrate to another machine later. There are multiple reasons for changing NOW. Already mentioned embedded problem with data loss. Also, that embedded version is VERY old. Moving to MySQL or another DB allows you to take advantage of newer versions of SQL and capabilities. Later conversion may require you to re-write SQL you may already have as syntax may differ. Field types in tables can differ also. Well this is some. Hopefully you get the point. ( 2019-01-17 21:56:12 +0100 )

Okay, you've made an impression. Will I have to add a connector to my workstation Base install to connect to, say, a Firebird instance also installed on my workstation? ( 2019-01-17 22:16:06 +0100 )

Well, first thing is to determine which DB you are to end up with. I believe I mentioned planning before. Consider in this choice the future and how e-commerce may affect this, as hosting companies may allow for different DB's (they host but you interface with) and this may be a factor in choosing. Migration and connection will depend upon the from and to and is best in a new question. ( 2019-01-17 22:26:51 +0100 )

Not going to migrate this system outside our house. ( 2019-01-17 22:33:37 +0100 )

Hello, Notwithstanding the open questions in the comments, have modified your 'Item' form. It is simply adding a sub form here. A table control was used and the Color_ID field was changed to a List box to display the color (from the color table) rather than the ID. Sample: Many_To_Many_Question.odb

A few other points. Have seen you have macros (did not look at closely - yet). Since there are still some open bugs regarding form sizing, you can set the size and position of the form using a macro.
See: Base: How to define a form's exact size? and on the Dutch forum (has sample files): are the properties width, heigth, xpos, ypos of a form in base accessible via basic? The other point is the button use for changing forms. Don't see that as necessary here. You can have multiple internal main forms on a Base form. You can also hide/reveal controls, if wanted, when using different forms. See: Hide combobox in starbasic with Macro; Libreoffice Base - Display Form based on Group Box Option selected (be sure to see link in comment under answer); Tabbed forms within a Main Form

Edit 2019-01-17: OK. Here is the modified form in edit view. Have used the Form Navigator and selected the SubForm to show its properties. Selecting the ellipsis to the far right of either Link property brings up this dialog. By using the drop down (actually a list of fields in each table) you can relate what is to be viewed in the sub form. Now although your intermediate table contains entries for many items, this linking process will only show those items in the intermediate table which match the selected field in the displayed Items (master) table. Hope this helps.

I've looked at the file you uploaded and the ItemsModified form. I see the table shows the Item_ID and Color columns. I just realized I omitted something that is significant. I don't see a way to "select" colors associated with a new Item that would then be saved as new record(s) in the intermediate rel_Item_Color table. I'll now look at the other links you provided... ( 2019-01-17 21:57:01 +0100 )

Maybe I don't see something. On this modified form, if you enter a new item the sub form will be empty. You can add records to this sub form - this is the association and thus the intermediate table is updated. ( 2019-01-17 22:02:11 +0100 )

Then I must be missing something. When I add a new Item record I don't see how to scroll or how to select color choices....
( 2019-01-17 22:07:02 +0100 )

I've added 2 new items and there are no associated intermediate table records for the new Items.... ( 2019-01-17 22:11:12 +0100 )

Many things to learn yet. When you enter a new item, the record is not in the database until it is saved. Until it is saved there is no connection in the sub form. Two ways to save on the existing form: 1) go to a different record (new or otherwise) and back to this one, which essentially saves this record, or 2) on the Navigation bar under the Item, click on the Save Record icon, which will display the new record & allow entry in the sub form. ( 2019-01-17 22:14:36 +0100 )

This empty sub form now has the Item ID already filled in. To add an item, select the Color field & use the drop down to select the color wanted, or simply start typing it. It is a list box as discussed in another question. ( 2019-01-17 22:18:03 +0100 )

Okay, I see it working now. But I don't see how you did it.... When I "edit" the ItemsModified form, I can bring up the Properties of the Navigation Bar, nothing to see there. When I bring up the Properties of the Table Control I don't see anything there either....??? ( 2019-01-17 22:22:22 +0100 )

Sorry, should have mentioned in the answer. This is how you typically use a sub form (see LO docs for more info if needed). You limit what is seen in a sub form by linking it to the main form. This limits records in the sub form to what is associated with it in the main form. Look at the sub form properties on the Data tab under Link master/slave fields. In this manner you can go even further with sub sub forms, sub sub sub forms etc. ( 2019-01-17 22:32:41 +0100 )

My newbie-ism has me in a fog, I'm not understanding... ( 2019-01-17 22:35:54 +0100 )

Okay, found the sub-form properties. ( 2019-01-17 22:50:34 +0100 )
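The Items/Colors design discussed above is the standard junction-table pattern. A minimal, self-contained sketch using Python's built-in sqlite3 module (table and column names follow the question - Items, Colors, rel_Item_Color; the sample rows are taken from the Bird - large example, and the equivalent HSQLDB/MySQL DDL would be analogous):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Two entity tables plus the intermediate (junction) table that
# realizes the n:m relationship; its composite primary key prevents
# assigning the same color to the same item twice.
cur.executescript("""
CREATE TABLE Items  (Item_ID  INTEGER PRIMARY KEY, Description TEXT);
CREATE TABLE Colors (Color_ID INTEGER PRIMARY KEY, Color_Name  TEXT);
CREATE TABLE rel_Item_Color (
    Item_ID  INTEGER REFERENCES Items(Item_ID),
    Color_ID INTEGER REFERENCES Colors(Color_ID),
    PRIMARY KEY (Item_ID, Color_ID)
);
""")

cur.execute("INSERT INTO Items VALUES (0, 'Bird - large')")
cur.executemany("INSERT INTO Colors VALUES (?, ?)",
                [(1, 'Red'), (2, 'Blue'), (3, 'Orange'), (4, 'Green')])
# Associate 'Bird - large' with three of the four colors.
cur.executemany("INSERT INTO rel_Item_Color VALUES (0, ?)",
                [(1,), (2,), (3,)])

# Goal 4: show color *names* (not IDs) for an item -- the same join the
# form's list box performs behind the scenes.
cur.execute("""
SELECT c.Color_Name
FROM rel_Item_Color r JOIN Colors c ON c.Color_ID = r.Color_ID
WHERE r.Item_ID = 0 ORDER BY c.Color_Name
""")
colors = [row[0] for row in cur.fetchall()]
print(colors)  # -> ['Blue', 'Orange', 'Red']
```

The subform link (Link master/slave fields) in the answer corresponds to the `WHERE r.Item_ID = ?` filter here: the subform only shows junction rows matching the item currently displayed on the main form.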
https://dfrieds.com/python/math-operations.html
Python Beginner Concepts Tutorial

# Math Operations

In [1]: stuff = "Perform basic math operations, store logic and results in Python"

In [2]: len(stuff)
Out[2]: 64

Being a Mac guy, often times I'll jump to Spotlight (press Cmd + spacebar) or the Google search bar for math calculations. It feels great to use my keyboard to write out operations fast and have a wide screen to include many numbers. But I run into an issue. Have you ever done one off-the-cuff calculation, erased it, done another calculation, and then realized you wanted to compare your two answers? More than just your answers, you should probably compare your logic to see what's wrong. Please don't make the same mistake I always did. I recommend putting numbers in a data structure, designing a function to save your logic, and storing results for easy access. Let me show you how.

### Software Sales Example

Python is fantastic because you can easily do commonplace operations such as addition, subtraction, multiplication, division and more! This example will run you through operations in a real-world setting. Perhaps you're in a meeting and somebody asks you to write down the sales amounts for the past 3 months and see your % growth month over month. To solve this fast, you write out the numbers 24200, 26800, 19500 and then do the operations one by one.

In [3]: (26800 - 24200)/24200*100
Out[3]: 10.743801652892563

In [4]: (19500-26800)/26800*100
Out[4]: -27.238805970149254

However, doing math this way in Python you're prone to errors: you may misspell a large number, you haven't saved your output to compare in calculations, and your math is not modular enough to extend these calculations to the next few months. Here's what I'd recommend to fix this:

#### Store amounts in a data structure

Now we don't need to re-write these long numbers over and over again. Instead, we can index a number in a list and use auto-complete to write out the list name a second time.
This data structure can be particularly useful because we can easily modify amounts in place and append additional amounts.

In [5]: sales_by_month = [24200, 26800, 19500]

#### Design a function to calculate month over month growth

Our function can store complex logic and allow us to easily perform calculations given any amounts moving forward.

In [6]: def period_growth(amount_one, amount_two):
            return (amount_two - amount_one)/amount_one*100

In [7]: period_growth(sales_by_month[0], sales_by_month[1])
Out[7]: 10.743801652892563

This growth percentage is the same as our off-the-cuff calculation above, so our function uses the correct logic.

#### Store the results

If we do an operation now, we can easily reference it later to share with our boss.

In [8]: growth_in_february = period_growth(sales_by_month[0], sales_by_month[1])

In [9]: growth_in_march = period_growth(sales_by_month[1], sales_by_month[2])

If we're asked "What was growth from January to February?", we can just print out our variable.

In [10]: growth_in_february
Out[10]: 10.743801652892563

### Summary of Operations in Python

#### Addition

In [11]: 8 + 2
Out[11]: 10

#### Subtraction

In [12]: 6 - 4
Out[12]: 2

#### Multiplication

In [13]: 5 * 4
Out[13]: 20

#### Division

In [14]: 10 / 5
Out[14]: 2.0

You're probably wondering why the answer is 2.0 and not simply 2. In Python 3, the / operator does floating point division, so you're returned a float value of 2.0.

#### Exponentiation

The operator ** acts to raise a number to a power.

In [15]: 5 ** 2
Out[15]: 25

#### Floor division

The floor division operator, //, divides two numbers and rounds the result down to an integer. For example, let's say a movie is 80 minutes long and you want to read off its duration in hours and minutes.

In [16]: 80 // 60
Out[16]: 1

Let's use the modulo operator to find the minutes part of the 80 minute movie.

#### Modulo

The modulo operator, %, is used to divide two numbers and return the remainder.
In [17]: 80 % 60
Out[17]: 20

Based on the use of these two operations, we know the movie is 1 hour and 20 minutes. The modulo operator is also useful to detect whether numbers are even or odd.

In [18]: 8 % 2
Out[18]: 0

8 is evenly divisible by 2, so we're left with a remainder of 0.

In [19]: 7 % 2
Out[19]: 1

7 is not evenly divisible by 2, so we're left with a remainder of 1.
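Building on the period_growth function above, the whole month-over-month series can be produced in one pass with zip (variable names follow the tutorial):

```python
def period_growth(amount_one, amount_two):
    # Percent change from amount_one to amount_two.
    return (amount_two - amount_one) / amount_one * 100

sales_by_month = [24200, 26800, 19500]

# Pair each month with the next one and compute growth for every pair.
growth_series = [
    period_growth(prev, curr)
    for prev, curr in zip(sales_by_month, sales_by_month[1:])
]
print(growth_series)  # -> [10.743801652892563, -27.238805970149254]
```

When a fourth month's sales figure is appended to sales_by_month, the list comprehension automatically produces the extra growth value — the modularity the tutorial is arguing for.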
https://socratic.org/questions/how-do-you-solve-6-x-4-8-x-2#257878
Algebra Topics

# How do you solve 6(x+4)=8(x-2)?

Apr 23, 2016

$6 \left(x + 4\right) = 8 \left(x - 2\right)$
$6 x + 24 = 8 x - 16$
$40 = 2 x$
$20 = x$

#### Explanation:

To solve this equation, we first must expand both sets of brackets. To do this, multiply the number or letter on the outside of the brackets by the numbers or letters on the inside.

$6 \left(x + 4\right)$ becomes $6 x + 24$: $6$ multiplied by $x$ gives $6 x$, and $6$ multiplied by $4$ gives $24$. So the equation currently looks like this:

$6 x + 24 = 8 \left(x - 2\right)$

Now we must expand the brackets on the right side of the equals sign. $8 \left(x - 2\right)$ becomes $8 x - 16$: $8$ multiplied by $x$ gives $8 x$, and $8$ multiplied by $- 2$ gives $- 16$. So now the equation looks a little more approachable:

$6 x + 24 = 8 x - 16$

To finish, we have to get the like terms on the same side of the equals sign. In other words, we have to get the $x$'s on one side and the numbers on the other. First, cancel out the $- 16$ by adding $16$ to both sides. This gives

$6 x + 40 = 8 x$

Then cancel out the $6 x$ by taking $6 x$ off both sides. This gives

$40 = 2 x$

Finally, to fully simplify the answer, divide both sides by two to get a single $x$, giving:

$20 = x$

Hope this helped!
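The algebra above is easy to verify numerically (no computer algebra system needed): for a linear equation $ax + b = cx + d$ the solution is $x = (d - b)/(a - c)$, and here $a=6$, $b=24$, $c=8$, $d=-16$. A short sketch:

```python
def lhs(x):
    # Left-hand side: 6(x + 4)
    return 6 * (x + 4)

def rhs(x):
    # Right-hand side: 8(x - 2)
    return 8 * (x - 2)

# For a*x + b = c*x + d, x = (d - b) / (a - c):
x = (-16 - 24) / (6 - 8)    # = (-40) / (-2) = 20
print(x, lhs(x) == rhs(x))  # -> 20.0 True (both sides equal 144)
```

Plugging back in: $6(20+4)=144$ and $8(20-2)=144$, confirming $x=20$.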
http://www.fractalfield.com/mathematicsoffusion/
ORIGIN of NEGENTROPY: Compressions, The Hydrogen Atom, and Phase Conjugation

Prolog to Mathematics of Fusion - May 17, 2012 - from the William Donavan, Martin Jones, Dan Winter article below

## Compressions, The Hydrogen Atom*, and Phase Conjugation

New Golden Mathematics of Fusion/Implosion: Restoring Centripetal Forces

"This way of turning compression into (charge) acceleration is hypothesized to be the core wave mechanism of phase conjugation (apparent self-organization) and the centripetal forces of gravity, life force, color, and perception."

Dan Winter's Compelling new Book - A Scientific Tour de Force - Implosion Group Publishing - see: www.fractalfield.com/fractalspacetime

## Fractal Space Time

Dan Winter's new book presents the most compelling and systematic scientific evidence to date – that fractality in space and time is the specific mechanism and cause of gravity, biologic negentropy, life force, perception, and human bliss. Although several hugely famous authors have already presented their books on fractality in time – by comparison, this book shows they were functionally clueless about the physics. Winter quotes many scientists who have already speculated that fractality is the cause of gravity – BUT Winter is the first to very specifically define the frequency, geometry, and wave mechanic optimization of that gravity-causing fractality. "Everything Centripetal, Negentropic, Gravity Producing, or ALIVE - becomes so - by BEING a (perfect wave fractal) PHASE CONJUGATE PUMP WAVE!" Winter's work (with his mathematics team, including Martin Jones) was the first to prove the wave equations showing golden ratio solves the problem of max constructive interference, compression, and phase conjugation. SO phase conjugation – well known and proven in optics for time reversal, self-organization and negentropy – now becomes a candidate for proving negentropy in general (many physicists agree that gravity is itself negentropic / self-organizing).
Originally this was just Dan's theory – that golden ratio phase conjugate multiples of Planck length and time – the universal key signature of the vacuum – would optimize negentropy. Winter has now gone on to prove it. He was famous for proving that hydrogen radii are exact golden ratio phase conjugate multiples of Planck. In his new book Winter has assembled, with equation graphics, overwhelming further evidence that golden ratio exact multiples of Planck predict:

* photosynthesis frequencies,
* Schumann resonance frequencies,
* brainwave frequencies causing peak perception/bliss,
* (new) a table showing the sacral cranial tidal frequencies are phase conjugate,
* a new table showing the ear ringing 'sound current' frequencies heard by meditators are phase conjugate!,
* the almost exact duration of Earth AND Venus orbital years,

- the list goes on ... (dodeca/icosa conjugate geometry of d,f electron shell noble gases and platinum group metals – physics of alchemic implosive charge collapse – new science of transmutation...).

These are all the structures which produce life and negentropy! They fit the pattern far too well for this to be arbitrary. Everyone agrees that fractality is infinite compression. Fractal mathematics teaches the mathematics of infinite compression, but until Winter no one knew what a fractal FIELD was. Golden Ratio is unquestionably self-similarity optimized – as precisely is fractality. Einstein argued that gravity was infinite charge compression – but never learned what a fractal was. Winter now presents overwhelming evidence that golden ratio phase conjugation IS fractality incarnate in all wave mechanics. IF Winter is right with his exact frequency signatures for fractality in space and time – he clearly presents how to repair all wave systems to emerge from chaos: fixing environments for peak perception and amplifying the Schumann phase conjugate pump wave so the Gaia climate emerges from chaos are just the beginning.
Fractality in time is now quantized and predictable - you can set your calendar. Synchronicity is nothing more than the charge coupling produced by conjugate embedding in the (fractal) charge rotation intervals called time. How to emerge from chaos starts with knowing why objects fall to the ground - now for the first time both questions are answered in pure fractal, highly accurate wave mechanics: HOW negentropy originates!

Predictably with Winter, there is also a deeply spiritual aspect to his new book. He claims that ALL concepts of 'sacred space' (bioactive fields - the SHEM), collective unconscious, communion of saints, and living plasma surviving death (NDE) are specifically explained and can be created and optimized intelligently with the teachable science of ubiquitous phase conjugate dielectrics, optics, magnetics etc. (Elizabeth Rauscher has acknowledged Winter for inventing phase conjugate magnetics). He insists spiritual traditions are ONLY completed with a new and powerful fractal geometry wave physics - eliminating the need for disempowering miracle worship, personality worship, AND religion wars. (The Fractal Space-Time equations here form the basis of a developing technology of Life Force - Plasma Rejuvenation Field - re-invented PRIORE device - see new PICTURES of Priore hardware at the link for the book: www.fractalfield.com/fractalspacetime )

The book Fractal Space Time is available in three editions: full-color PDF, Amazon Kindle ( www.amazon.com/dp/B00OP4IFN6 ), and a 192-page printed softbound version from Implosion Group Publishing.

Mathematics to accompany the original: goldenmean.info/coincidence

Application to Biologic Architecture: all living structures' life force is defined by their ability to attract charge (become electrically centripetal) - 3 ways to measure life force electrically: goldenmean.info/architecture

Fractal Phase Conjugate Nature of Self-Organization / LIFE in TIME: revealed by Dan Winter's new equation for implosive phase conjugation - precise golden ratio exponents times PLANCK TIME.

Immediately above: the exact frequency signature (frequencies labelled in BLUE) and wave shape of the low-frequency, longitudinal-wave-generating phase conjugate pump wave:
+ which causes GAIA - the Earth grid (see the Schumann series there) - to emerge from chaos and become self-organizing,
+ which causes DNA to implosively braid / recursively embed, to set up NEGENTROPY and SELF-ORGANIZATION in the blood (ensoulment),
+ which then predicts the EXACT frequency series used to reduce pain magnetically by Elizabeth Rauscher (she did not know the equation),
+ which then predicts the EXACT frequency series used to heal thousands of cancers by the PRIORE SYSTEM (who also did not know the equation).
-- COHERE THE VACUUM - Fission vs Fusion: Measure the Local Binding Energy and PROVE That Fission Unravels the Very Nest of Matter - Versus TRUE FUSION - Restored Centripetal Force - which Actually Heals the Local Grid (Binding Energy). Blockbuster new release - hard-hitting hi-tech: the engineers introduce radical new thresholds in ZERO POINT ENERGY RESEARCH, based on a new, deep understanding of the phase conjugate nature of fusion / implosion forces.

New Aug 28, 2013: New Realities - NY radio interview with Alan Steinfeld, interviewing Dan Winter and Bill Donavan: Breakthroughs in Zero Point Energy Research with Phase Conjugates / Implosion - the latest! Download the 98-minute MP3 audio directly here: www.fractalfield.com/mathematicsoffusion/NewRealitiesAlanDanBill.mp3 or access the show on New Realities RADIO - BBSRADIO - here: http://www.bbsradio.com/archive_display.php?showname=New_Realities

goldenmean.info/selforganization - also compare: 15 years ago Winter predicted a new physics based on Golden Ratio: goldenmean.info/predictions - including that if golden ratio is the cause of gravity, then phase velocities faster than light (time travel etc.) will be measured at golden ratio exponents times C light speed (which Prof. Raymond Chiao may have measured: 4.23 x C is golden ratio cubed).

*Dan Winter's extended new equation for the structure of the hydrogen atom: goldenmean.info/goldenproof - Planck Length x precise exponents of golden ratio = multiple hydrogen radii, shown in this article to be the precise wave mechanism of golden ratio implosive collapse / phase conjugation / fusion - which would account for why 'fractal' hydrogen is the central bond of water, the 'zipper' ensouling DNA codon center bond, and solar fusion. Perfect FLAME IN THE MIND!
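Both numeric claims above are easy to check with a few lines of arithmetic. The sketch below is ours, not from the article: the Bohr radius is our choice of 'hydrogen radius' test case, using standard CODATA values; the specific radii and exponent pairings Winter uses are the ones developed at goldenmean.info/goldenproof.

```python
import math

phi = (1 + 5 ** 0.5) / 2

# Chiao-related claim: a measured phase velocity of ~4.23c matches phi cubed.
print(f"phi^3 = {phi ** 3:.4f}")   # 4.2361
assert abs(phi ** 3 - 4.23) < 0.01

# Hydrogen-radius claim: radius = Planck length x an exact power of phi.
# CODATA values (our choice of test case, not Winter's own pairing):
a0 = 5.29177210903e-11    # Bohr radius, metres
lP = 1.616255e-35         # Planck length, metres
n = math.log(a0 / lP) / math.log(phi)
print(f"Bohr radius / Planck length = phi^{n:.2f}")   # exponent near 117.3
```

Note the exponent for this particular radius is not an exact integer; whether the residual matters depends on which hydrogen radius and exponent one adopts from the goldenproof page.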
- fractalfield.com/conjugatemind - Application to the centripetal electrical wave mechanism of consciousness itself: new March 2013 Barcelona Conference lecture series with Dan Winter.

Nov 2012: Major Research Updates: Implosion Group / Fractal Field & new 2012 Dan Winter media interviews - RESEARCH PROJECTS updates.

Related articles from Dan Winter: PREDICTIONS FOR A NEW PHYSICS OF GRAVITY & AWARENESS BASED ON RECURSION 2001; BrainPhire? Study Finds Fractal Golden Ratio Harmonics in BrainWaves during Euphoria / Active Visualization / Bliss? 10/2/2001; heterofi, Heterodyning and Powers of Phi - review of Dan's theory by Rick Anderson.

PHI(llotaxis) / GOLDEN RATIO IS FRACTALITY / RECURSION / COMPRESSION / EMBEDDING PERFECTED!

Q (Sirac / Spencer Brown): The problem of self reference, or the problem of the observer: What is the nature of the world such that it can give rise to the question: What is the nature of the world? How is it that the world can see itself?

A (Winter): Self Reference / Self Embedding Perfected: (pure wave principle of) Golden Mean as ratio.

The mathematics of fusion (below) predicts a plasma symmetry from which fusion is an energy source ( www.greentechinfo.eu ), which not only restores centripetal force but, like the ball-lightning self-organization (depicted also there), is itself a profoundly healing and bioactive field! What part of the carbon powder DOES the fusion? Nanotube / FULLERENES - their symmetry - see below (dodec stellation - golden ratio!). Such plasma containment reduces local fission and increases the local binding energy (unlike around fission plants, whose employees go home with their bones measurably falling apart - a phenomenon that stops when they move away).

For those who would like a friendly, enthusiastic, personal, highly graphic VIDEO intro to this concept of how golden ratio creates implosive compression by perfecting constructive wave interference,
we suggest (one of over 300,000 videos of Dan Winter teaching on the web) the Melbourne lectures pt 1 of 8 with Dan Winter: consistently voted 'the best of Dan Winter videos' - Starseed Gardens Presentations: Understanding Fractal Fields: Origins of Self-Organizing / Centripetal Forces.

New article on naturally occurring FRACTALITY in cell growth - Science Magazine. We hypothesize: LIFE FORCE in plants REQUIRES implosive charge compression from ELECTRICAL fractality.

== In conjunction with Dan Winter's participation in NY 2014 (above): Dan wrote a new abstract on THE GEOMETRY OF SUCCESSFUL DEATH - link: www.goldenmean.info/immortality

"The amount of coherence in your aura's charge field or plasma is called your KA. The portion of that coherence which IS projectable and makes it to faster than light speed is called the BA - like a seed pushed out of a husk or shell. The process of pushing out the aura's central charge seed (KA BA) during death or coherent lucid dream involves a highly specific and known sequence of geometric phase transformations, documented as the 'Kluver Form Constants' (images below). What is profoundly instructive is that these transformations, specifically known to be the geometric sequences generally seen at death, are precisely the recursive braid mechanics also present in DNA - AND follow the physics of phase conjugation. In other words, the so-called pure intention / phase coherence of entering the world of the collective unconscious - or communion of saints - requires specifically charge-distribution-optimized geometry: the so-called 'phase conjugate dielectric / magnetic'. This then suggests a very specific electrical environment for successful death/birth - the perfect squeeze - enabling plasma distribution: the bed, the place, the magnetic map. As the Hopi and the Cherokee knew, the graveyard must be a plasma map - library card to survival-critical ancestral memory as sustainable field effect.
Korotkov pioneered how to measure the fractal-charge-distributed AIR where phone calls to ancestors are enabled. Hints from the 'altar' or shem at Machu Picchu. (Generally the opposite of the obscene electrosmog of a modern hospital.) This further explains, as Ray Moody has so nicely documented from professional medical reports, that the Near Death Experience (like the Kundalini) is an electrically contagious plasma or charge field, with critical mass and a propagation geometry. This Bio-PLASMA PROJECTION process explains in part the precise alignment of the tubes pointing from the Great Pyramid to Sirius, and the 'fractal' star mapping of dolmen structures. In fact the ancient SHEM unto the Lord - Biblically mistranslated as ALTAR in church - is a phase conjugate dielectric bioactive-field plasma projector stone or crystal, not only a door to electrically successful death/birth. It is no coincidence that the so-called Sacred Ark could non-destructively contain radioactive material - which we now know is an electrical quality of perfected conjugate compression, EXACTLY like the same measurable quality of focused human attention. ALL of which is part of understanding the fractal / conjugate / implosive and negentropic ORIGIN in general of all centripetal forces (gravity, life force, perception), which include the wave mechanics of successful death."

--- Just Released: Dan Winter - New Aug 3, 2012 - radio interview: Lucid Dreaming and Physics of Death - Electrical Nature of Fear vs Fusion - and more - at TRUTHFREQUENCYRADIO.com and TRUTHFREQUENCY.com

Dan Winter - Index of Interviews: goldenmean.info/implosionmovies

Guest clip below - for more see the source: fractal-recursions.com

Above, note that in a FRACTAL FIELD the recursive (inPHI-knit / "Gordian") SLIP KNOT goes on FOREVER!
(Golden Ratio perfects this 'Phase Conjugate Field' principle.) This is the IN-PHI-KNIT Connectedness (Collective Mind / Communion of Saints) Golden Ratio Principle at the core of DNA - entry upon test for PURE INTENTION - the perfectly SHAREABLE wave!

See measured: Is Golden Ratio optimized phase conjugation the way multiple compression waves converge constructively, creating the origin of spin (vorticity) and in the process the origin of all centripetal forces, including gravity? Thus revealing HOW golden ratio phase conjugation defines WHICH frequencies create life! Fractal photosynthesis (charge-distribution-perfected wave mechanics / perfected EMBEDDING IS life and mind) - ALL modelled beautifully by the Star Mother Kit (below).

New MAY 2013: Star Mother Kit updated.

=== Excerpt from a recent magazine article by Dan Winter: The key to (phase conjugate) cohering the energy of the vacuum is the understanding that the only way inertia is stored and distributed IN the vacuum is in this (phase conjugate) symmetry, which allows the wave inertia to be distributed infinitely with zero (resistance) destructive interference (animation below - note the phase conjugate opposing 'pine cones kissing noses' pairs): grail anim below.

Images of phase conjugation from Dan Winter's original TIMESTAR project: goldenmean.info/TIMESTAR/TIMESTAR.html

Below: as this dodecahedron stellation undergoes expansion / collapse by precisely 2.618 (Phi^2), try to visualize how the recursive constructive wave interference function looks undergoing phase conjugation implosion / expansion (this original animation used with permission, courtesy of Paul Nylander http://bugman123.com ).

SO to COHERE THE ENERGY of the vacuum - to reach into the vacuum and extract the (charge) inertia - the symmetry of your (phase conjugate) array (optical / dielectric / magnetic / phonon)
MUST invite that implosion of inertia into the perfect cone - BY BEING AS FRACTAL / CONJUGATE as the inertia of the vacuum is itself!

In the pent, dodec/icos symmetry, we can see how the "concrescence" of waves can produce centering of constructive pressure agreement, producing the possibility of converging a virtually infinite number of nesting spin symmetries. This is nature's PHI-lotactic PERFECT embedding. Not only does it have everything to do with magnetism's ONE-WAY WIND of inward implosion called gravity, it also is probably the essential geometry of COMPASSION itself. Being the solution to "constructive self re-entry" of wave mechanics, and the definition of 'optimized translation of vorticity' in hydrodynamics, the golden mean spiral provides the definition of:

- self-awareness (ability to self-refer): meaning the feedback system which ultimately produces awareness (perfect fractality / self-similarity / recursion / embedding)
- recycling perfected to the point of ultimately defining sustainability and (conjugate) self-organization
- the ultimate path from matter (rotational inertia) to energy
- the ultimate wave mechanic to extract the inertia of space / the vacuum: provides the implosion path for 'Zero Point' vacuum coherence energy
- the KHEM in AlKhemy

-- "Phase conjugation: when pairs of pine cones learn to kiss noses"

New MAY 2013: Star Mother Kit updated: www.goldenmean.info/kit

Proof this is the correct model of hydrogen radii: goldenmean.info/goldenproof

Mathematic evidence this is the actual field effect symmetry which causes gravity and centripetal forces (phase conjugation): fractalfield.com/mathematicsoffusion

Golden Ratio optimized conjugation of phases (wave fronts and wave phase velocities add and multiply - or heterodyne - recursively and
constructively)... simply is the best wave geometry for opposing vortex pairs to converge constructively... as we say, it is "the only way for pine cones to kiss noses". It is our strong hypothesis that this equation for golden ratio generated implosive / centripetal force is the geometric wave mechanic of phase conjugate optics, dielectrics AND (now we see examples of) magnetics!

This is the precise animation of Dan Winter's new equation for the radii of hydrogen in relation to Planck (length AND time): ref goldenmean.info/goldenproof

It is also the proven precise geometry of photon angles for the primary colors, which is the reason color AND photosynthesis exist (phase conjugation): goldenmean.info/fractalcolor

This is also precisely the geometry of the relationship of the proton to Planck to black holes (ref: Nassim): see goldenmean.info/selforganization

It is also the proven geometry of DNA, Earth's magnetic lines, and the arrangement of masses in the UNIVERSE!

It is also the basic geometry of (golden ratio based) E8 - Unified Field by Lisi.

It is also the precise geometric relationship of brainwaves during PEAK PERCEPTION / BLISS (the reason attention is electrically CENTRIPETAL) - my BLISS TUNER invention (at link) makes this teachable.

In summary, this PHASE CONJUGATION optimized by golden ratio is the CAUSE and MECHANISM of:
- GRAVITY
- all FUSION / black holes / all centripetal forces / all self-organization
- all LIFE / PERCEPTION / BLISS
- and the CAUSE of color and photosynthesis (goldenmean.info/fractalcolor)

Note also, in line with how Bill Tiller measured consciousness essentially as how much your focused attention CAUSES electric fields to compress, how John McGovern discovered that the origin of all SHAMAN ALPHABETS' petroglyphs is generally PLASMA RESIDUE, with a universal translation.
He started after finding Hebrew on Australian Aboriginal cave petroglyphs, finding that he could essentially read any rock petroglyph carved by a shaman anywhere in the world based only on PLASMA RESIDUE symmetry. Note how this has now been embraced by TONY PERATT - Los Alamos PLASMA PHYSICS - who now publishes this work. Discussion, literature examples, pictures (lower part of): goldenmean.info/whaledreamers

A shadow of a plasma residue / phosphene flare / or spiral on the donut IS alphabet - because putting these in symmetry (sequence), called ALPHABET, is how you MAKE the plasma of your mind implosive and charge compressing (golem making) - which IS (the only definition of) consciousness!

Phase Conjugation - How Double Cones (Pine Cones Kissing Noses) are the Fractal / Golden Ratio Origin of Alphabets (continued here: goldenmean.info/dnaring - with excerpt from ORIGIN OF SLAVIC / CYRILLIC / RUSSIAN). Because the centripetal plasma force this creates in your aura is the cause and mechanism of consciousness. This animation is an excerpt of http://www.youtube.com/watch?v=R83iRcKgTZM&feature=player_embedded# which makes reference to: http://peshera.org/

Here we see that the two spiral cone pairs (PHASE CONJUGATION) are also the origin of Slavic / Cyrillic / Russian (we show elsewhere the same vortex-pair origin of HEBREW / SANSKRIT / OPHANIM). Note the REASON opposing vortices as PLASMA RESIDUE / PHOSPHENE FLARE geometry are alphabet: because only assembling the elements of that symmetry initiates the cone implosion compression (of charge) which is the cause and definition of how consciousness is created by the plasma-bending hemispheres of the brain!
-- Below: excerpt from www.goldenmean.info/dnaring - Dan Winter's equation to map the self-organizing golden mean spiral on the self-organizing torus (shape of all field effect), then, by animating the symmetry views of that - his new equation - proving the origin of Hebrew and Sanskrit. Simply (the psychokinesis of true alphabets): the ability to visualize symmetry of field sequences, and thus the compression of charge sequences, which ARE creation mechanics (see link).

Remember, this self-organizing golden spiral mapped on the self-organizing torus donut domain is not changing shape - only your point of view. Consume the perspective - e pluribus unum - from many: one - leap out of flatland - get up off your cross - and follow me - to symbolize IS to embed. The pure self-organizing symmetry of charge / light is the only alphabet which ever was immortal OR psychokinetic.

Dan Winter: Book 1, Book 2, Book 3, Book 4

The 'point' is, as Stanford physicist Bill Tiller's book ("Conscious Acts of Creation") proved unmistakably: FOCUSED HUMAN ATTENTION CAUSES ELECTRIC FIELDS TO COMPRESS! Dan Winter shows how, properly used, the alphabet allows the mind to trigger charge compression (golden spiral on toroid field) in symmetry sequence (alphabet) to create the phenomenon of MINDFULNESS, which is literally (electrically) a (golden spiral based) FOCUSing device for charge. That implosion of focus (electric charge) is at once the origin of MIND / consciousness AND the very (electrically centripetal) NATURE of LIFE FORCE (see 3 ways to measure life force, as architecture of life, at goldenmean.info/architecture ). Using the alphabet correctly, we set the plasma field of the optical cortex (visual image field) into the correct sequence of implosive donut (toroid) fields to do exactly what physics knows is creation mechanics, in the mind (sequence field effects into the symmetry which makes them centripetal - golden ratio - AND nest in tetra/cubic arrays).
The FUSION Symmetry Solution: Mathematics of FUSION - excerpt from the original article: goldenmean.info/selforganization

Golden Ratio SOLVES the problem of constructive wave interference and therefore of compression: THE PERFECTED CHARGE COLLAPSE WHICH IS THE CAUSE OF GRAVITY, and therefore of the perfected charge distribution causing life and mind.

Download Dan Winter's original paper from the Proceedings of the First (in the series) of UNIFIED FIELD PHYSICISTS - Budapest 05 (& 08). Excerpt from the First Conference Proceedings: Dan Winter, in 05 & 06, first to hypothesize FRACTALITY as CAUSE of GRAVITY - the first to predict: gravity's electric CAUSE is Golden Ratio FRACTALITY! Paper: "Is Fractality: The Electrical Mechanism of Gravity, (and Perception and Color Discrimination)" - pdf from the CD / Proceedings (w/ color graphics, 5.1 Meg): Ch19ADanWinter.pdf ; conference links (w. Nassim, Eliz Rauscher, Richard Amoroso & many others): goldenmean.info/budapest08/physicsoverview.html

Note how our SUPERIMPLODER APPLIES the EXACT 10-spiral cone geometry, scaled precisely, to new hydrogen fusion physics - and successfully serves AGRICULTURE / PLANT GROWTH, with RESTORED (LIFE-GIVING) CENTRIPETAL FORCES (in water AND magnetics combined). Dramatic new results for TheImploder.com for agriculture - see the latest Imploder research results page: fractalfield.com/implodermagneticresearch

This article on the web: fractalfield.com/mathematicsoffusion - from Dan Winter, Implosion Group and Friends, May 17, 2012. Main Index: goldenmean.info

Dan Winter's books: 1. Alphabet of the Heart, 2. EartHeart, 3. Implosion's Grand Attractor, 4. Implosion: Secret Science of Ecstasy & Immortality.

-- The technical article --

## Compressions, The Hydrogen Atom, and Phase Conjugation

New Golden Mathematics of Fusion/Implosion: Restoring Centripetal Forces

Abstract: It has always been self-evident that virtually every living structure, leaf symmetry, and the pent geometry of almost every living protein uses the golden ratio for an electrical reason. Golden Ratio defines beauty in general for reasons which are also fundamental to physics.
It is also self-evident, by inspection, that golden ratio solves the problem of recursive wave interference, because it is the only solution to both adding and multiplying (recursive wave interference precisely turns recursive wave addition into multiplication). The authors present their new evidence of the golden ratio structure of hydrogen as one more piece of evidence of the fundamental solution it poses to constructive wave interference, which IS constructive compression. Earlier they noted software emulation which showed golden ratio as the general solution to constructive interference. Here they show the wave-equation-mathematics proof. The first set shows golden ratio is the solution to constructive compression (implosion / fusion) in a line. Then they show how that precise line of golden ratio points extends out each axis of symmetry of the pent dodeca, which is the proven shape of hydrogen, DNA, the Earth Grid, and the UNIVERSE. Further, a model is presented for how this golden ratio causes gravity electrically: as a portion of the inertia encounters recursive constructive wave interference by golden ratio, in addition to the wave lengths, the wave phase velocities are hypothesized to heterodyne constructively and recursively. This way of turning compression into (charge) acceleration is hypothesized to be the core wave mechanism of phase conjugation (apparent self-organization) and the centripetal forces of gravity, life force, color, and perception.

### 1. Introduction

Is Golden Ratio-optimized phase conjugation the way that multiple compression waves converge constructively, and thus the origin of spin (vorticity), and in the process the origin of all centripetal forces, including gravity?

The question of how to COHERE THE VACUUM is not unlike Don Quixote tilting his sword as he attacks the WINDMILL. Actually, the unified field appears to be made of a compressible unified substance which behaves like a fluid in the wind.
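The 'only solution to both adding and multiplying' property appealed to in the abstract can be illustrated in a few lines. This sketch is ours, not from the paper: phi is the positive root of x^2 = x + 1, so multiplying any power of phi by phi equals adding the two preceding powers.

```python
# phi is the positive root of x^2 = x + 1, so its powers form a series in
# which multiplication (phi^n * phi) and addition (phi^n + phi^(n-1)) coincide.
phi = (1 + 5 ** 0.5) / 2

assert abs(phi * phi - (phi + 1)) < 1e-12            # multiplying = adding
for n in range(1, 20):
    assert abs(phi ** (n + 1) - (phi ** n + phi ** (n - 1))) < 1e-6

# the difference (beat / heterodyne) of adjacent powers also stays in the series:
# phi - 1 = 1/phi, i.e. phi^1 - phi^0 = phi^(-1)
assert abs(phi - 1 - 1 / phi) < 1e-12
```

This closure under both sum and difference is what the authors mean by recursive heterodynes remaining constructive.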
It matters little whether you call it aether, ether, or ‘the space time continuum of curved space’ or, as we choose to call it, the compression and rarefaction of the vacuum as really particle/waves of CHARGE itself. What IS important is to constantly imagine yourself part of a truly UNIFIED FIELD. This avoids the religious war-generating schizophrenia of those who consider their physics and their field theory quite and truly SEPARATE from their spirituality and life essence. We think that such a dichotomy is a fatal mistake. The reason that Don Quixote holds the clue here is that the unified field, and therefore the huge inertia which is clearly present in the vacuum, IS literally like a WIND. So, tilting at windmills with the right approach angle to transform the wind power to a life-giving-energizing advantage and not be blown away by it IS the appropriate way to gain the power of nature. Consider the pine cone or the chicken egg (or DNA proteins ) for example. Along the lines of the windmill analogy, clearly they arrange themselves into the perfect windmill-like configuration to catch the charge in the wind of gravity (the vacuum). That perfect windmill to catch the voltage, the energy - is clearly pine cone (fractal) shaped. Elsewhere we discuss ways to measure the millivolts called LIFE FORCE which pine cones and eggs clearly extract from the vacuum: http://goldenmean.info/biophoton , http://goldenmean.info/architecture The REASON this fractality catches the implosive collapse WIND of charge called gravity is the same reason that it CAUSES gravity. There are many technologies implied by these understandings like Schaeffer’s famous implosive collapse cavitation steam generator. The principle of implosion and fusion forces is always the same: gaining energy / inertia during implosive collapse. The authors suggest that our collective survival, like the way pine cones and seeds survive, may lie in discovering WHAT MAKES COLLAPSE IMPLOSIVE? 
The pine cone is like an implosive cavitation impeller: it holds its capacitance (its seeds) in the ambient wind of charge (the gravity field, the vacuum) in such a way as to make a windmill that works. By attracting charge into implosion, it extracts the voltage from gravity which motorizes all of life. Similarly, the voltage measurement from the top to the bottom of a chicken egg indicates the principle of life force: goldenmean.info/biophoton - It really is a pity your biology teacher never told you the reason WHY almost every biological structure, from DNA to cells to tree branching, is mostly based on golden ratio. The reason is strictly electrical. To implode voltage from gravity is HOW one makes life force. This is another example of restored centripetal forces.

The KEY is to recognize how the WINDMILL of charge catches the wind of gravity / the vacuum, AND in the process helps us to stop irresponsibly calling this energy '"FREE" ENERGY', because clearly it is not (we call the so-called FREE energy movement 'The ATLANTIS Mistake!'). This inertia in the vacuum is part of the binding energy which holds gravity and everything together. Did you know that around nuclear fission, the bonding (binding) energy of all atoms in the neighborhood measurably weakens? The world literally begins to disintegrate at the nuclear bond level. This is the background BINDING energy, and it is NOT free. Nuclear fusion reactions would have the opposite effect. THIS is not just the difference between fission and fusion: it is the difference between EXPLOSION versus IMPLOSION energy. It is the difference between world building versus world unravelling. Bill explains that the measurable 'embrittlement' (becoming more brittle) in the bones of the workers at nuclear plants reverses when they move away! This is an example of the decreased binding energy of the whole area (tearing apart the fabric of your own survival net) in the presence of nuclear fission.
Fusion is the opposite: centripetal forces RESTORE the binding energy.

In this paper, we support the claims made above with a hard mathematical theory that accords with empirical data and equations developed by Winter and others, especially relating to the symmetry of the vacuum, the inherent symmetry of the matter we find 'in' the vacuum, and how the Golden Proportion is a profound number found in the structure of the space-time fractal of the universe.

### 2. The Klein-Gordon Equation, (Equation One in a Unified Field Theory of Compressions)

The Klein-Gordon equation for a free particle is:

$$\nabla^2 \Psi - \frac{m^2 c^2}{\hbar^2}\,\Psi = \frac{1}{c^2}\frac{\partial^2 \Psi}{\partial t^2} \qquad (1)$$

Wikipedia says, "It is the equation of motion of a quantum scalar or pseudoscalar field, a field whose quanta are spinless particles. It cannot be straightforwardly interpreted as a Schroedinger equation for a quantum state, because it is second order in time and because it does not admit a positive definite conserved probability density. Still, with the appropriate interpretation, it does describe the quantum amplitude for finding a point particle in various places, the relativistic wavefunction, but the particle propagates both forwards and backwards in time. Any solution to the Dirac equation is automatically a solution to the Klein-Gordon equation, but the converse is not true."

A solution to the Klein-Gordon equation can be made up of a linear combination of the following solutions:

$$A_n \, e^{\,i\left(\frac{1}{\hbar}\, p_n \cdot x \;-\; \varphi^n \omega_o t\right)} \qquad (2)$$

where $\varphi$ is the Golden Ratio.
Plugging Equation 2 into Equation 1, and for convenience taking the free particle to be moving in the x direction ($p_n \cdot x = p_n x$), we get

$$p_n^2 c^2 + m^2 c^4 = \hbar^2 \varphi^{2n} \omega_o^2 \qquad (3)$$

So the interference pattern, or compression, is then

$$\Psi = \sum_{n=0}^{\infty} A_n \, e^{\,i\left(\left[\left(\frac{\varphi^n \omega_o}{c}\right)^2 - \left(\frac{mc}{\hbar}\right)^2\right]^{1/2} x \;-\; \varphi^n \omega_o t\right)} \qquad (4)$$

$A_n$ can be determined by normalizing $\Psi$.

### 3. Compressions can be solutions to the Klein-Gordon equation for a free particle

We start with a definition for the concept of a compression, a term coined by Winter:

Definition 1: a superposition of waves or quantum wave-states added together in a fashion such as to synchronize their respective position and time.

Take the following equation for a sum of waves, where $\Psi$ is a solution to the Klein-Gordon equation and is a compression:

$$\frac{\partial \Psi}{\partial \varphi} = \sum_{n=0}^{3} A_n \, n \, \varphi^{n-1} \, e^{\,i\left(\frac{p_n}{\hbar} x - \varphi^n \omega_o t\right)} = 0 \qquad (5)$$

Let $A_n = \left[C, -1, -\frac{1}{2}, \frac{1}{3}\right]$; that is, $A_0 = C$, $A_n = -1/n$ for $0 < n < 3$, and $A_3 = 1/3$. With these values, Equation 5 becomes

$$-e^{\,i\left(\frac{p_1}{\hbar} x - \varphi \omega_o t\right)} - \varphi \, e^{\,i\left(\frac{p_2}{\hbar} x - \varphi^2 \omega_o t\right)} + \varphi^2 \, e^{\,i\left(\frac{p_3}{\hbar} x - \varphi^3 \omega_o t\right)} = 0 \qquad (6)$$

Now, if we look at the maximum at the point $(x = 0, t = 0)$, our equation turns into

$$\varphi^2 - \varphi - 1 = 0 \qquad (7)$$

The positive solution to this equation is the golden ratio. This is evidence ('proof') for golden ratio maximizing constructive interference at the point $(0, 0)$.

#### 3.1.
Extension to infinite sum of waves over time at a point

Consider Equation 5 summed from zero to infinity, at the point $x=0$:

$\frac{\partial \Psi }{\partial \varphi }=\sum _{n=0}^{\infty }{A}_{n}n{\varphi }^{n-1}\left(cos\left({\varphi }^{n}{\omega }_{o}t\right)+{B}_{n}sin\left({\varphi }^{n}{\omega }_{o}t\right)\right)=0$ (8)

We only consider the wave when ${\varphi }^{n}{\omega }_{o}t=n\pi$, so Equation 8 becomes,

$\frac{\partial \Psi }{\partial \varphi }=\sum _{n=0}^{\infty }{A}_{n}n{\varphi }^{n-1}=0$ (9)

If we let ${A}_{0}=C$, ${A}_{1}=-1$, ${A}_{2}=-1$, ${A}_{n}=-\frac{1}{n}$ (for $n=3$,...,$\infty -2$), $\underset{n\to \infty -1}{lim}{A}_{n}=0$, and $\underset{n\to \infty }{lim}{A}_{n}=\frac{1}{n}$, then the equation becomes a polynomial of which the golden ratio is a solution. QED

Therefore, at times ${t}_{n}=\frac{n\pi }{{\varphi }^{n}{\omega }_{o}}$, an infinite sum of waves whose frequencies stand in ratios of powers of the golden ratio attain maximum values and therefore interfere constructively. We have designed a compression of wave-states that, firstly, solves the Klein-Gordon equation for a free particle and, secondly, is optimized for maximum constructive interference at precise times and positions. It is found, for a particular set of coefficients, that when the frequencies of the individual wave-states are related by ratios of powers of a number, the number that maximizes constructive interference at well-defined positions and times of the compression is phi, the Golden Ratio. This means that an infinite number of states of one or more free particles is compressed maximally when the ratio between wave-state frequencies is a power of the Golden Proportion. The set of coefficients chosen to discover this scenario of maximum compression must be of great importance and should be studied. The sum of the squares of the coefficients equals 1, so that the compression is normalized to unity.
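As a quick numerical sanity check of Equations 5-7 (an editorial sketch in Python, not part of the original derivation), we can confirm that with the coefficients ${A}_{n}=\left[C,-1,-1/2,1/3\right]$ the derivative sum vanishes at the origin exactly when $\varphi$ is the golden ratio:

```python
import math

# Positive root of x^2 - x - 1 = 0 (Equation 7)
phi = (1 + math.sqrt(5)) / 2

# Coefficients A_n = [C, -1, -1/2, 1/3] from Section 3; C multiplies the
# n = 0 term of Equation 5, which drops out of the derivative sum, so its
# value is irrelevant here.
C = 1.0
A = [C, -1.0, -0.5, 1.0 / 3.0]

# Equation 5 evaluated at the maximum point (x = 0, t = 0): every
# exponential equals 1, leaving sum_n A_n * n * phi^(n-1), i.e.
# -1 - phi + phi^2 (Equation 6 at the origin), which must vanish.
residual = sum(A[n] * n * phi ** (n - 1) for n in range(4))
print(residual)  # ~0 up to floating-point rounding, confirming Equation 7
```

The residual vanishes to machine precision, which is exactly the statement that $\varphi$ solves ${\varphi }^{2}-\varphi -1=0$.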
We have already shown, considering the origin only, that there are precise times when maximum constructive interference occurs in the compression. We can generalize this further by setting the entire exponent of the exponential term in the equation to $n\pi$, similarly to what we did for time. This gives new equations for the quantized times and positions of maximum constructive interference of compressed waves. A figure due to other collaborators of Winter shows the results of a software program that numerically adds waves with frequencies whose ratios are powers of $\varphi$; note the peaks of constructive interference. We show later that the ratio between the distances from the origin of any two peaks in a one-dimensional perfected compression is approximately a power of the Golden Ratio.

### 4. Modelling with a One-Dimensional Compression (Free Particles All Moving Along a Line)

Compressions are unique because they embody the idea that 'the whole is greater than the sum of its parts'. We have found that there are 'nodes' of maximum interference of the individual wave-states, where the nodes can be interpreted as quasi-particles or centers of particle systems. The node numbers correspond to the quantum numbers of the ordered wave-states in the compression: node number n corresponds to the frequency ${\varphi }^{n}{\omega }_{o}$ of wave-state n. Modelling with compressions also makes some assumptions about the system being modelled, one of them being that the momenta of the constituent waves do not depend on $\varphi$, while the frequencies do. This may be remedied by the knowledge that $\varphi$ is inherently a constant; in the previous derivations showing the Golden Proportion as the maximizing ratio for perfected compressions, it was treated as a variable in order to prove maximum constructive interference at specific times (we will soon show the positions as well).
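The wave-addition experiment described above is easy to reproduce. The sketch below (ours; units and the number of wave-states are arbitrary choices) superposes cosines whose frequencies are powers of $\varphi$ times a characteristic frequency and locates the peak of constructive interference:

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2
omega0 = 1.0        # characteristic frequency, arbitrary units
N = 6               # wave-states n = 0 .. 5

t = np.linspace(-10.0, 10.0, 20001)   # grid includes t = 0
# Superpose cosines whose frequencies are powers of the golden ratio,
# as in the collaborators' wave-addition program described above.
psi = sum(np.cos(phi**n * omega0 * t) for n in range(N))

i = np.argmax(psi)
print(t[i], psi[i])   # global peak of constructive interference at t = 0
```

At $t=0$ every cosine equals 1, so the superposition attains its global maximum value N there; away from $t=0$ the incommensurate golden-ratio frequencies only partially re-align, producing the secondary peaks visible in the referenced figure.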
That mathematical trick to show maximum interference may be just that: a trick, and our assumption about the momenta of the system may not be necessary.

#### 4.1. A New Heisenberg Relation

For now, we retain the assumption about momenta and refer back to Equation 5. The maximizing ratio for maximum interference will still be the Golden Proportion if we now set the exponent equal to $n\pi$, so that we have,

$\frac{{p}_{n}}{\hslash }{x}_{n}-{\varphi }^{n}{\omega }_{o}{t}_{n}=n\pi$ (10)

Plugging in the maximizing times at the origin found earlier, ${t}_{n}=\frac{n\pi }{{\varphi }^{n}{\omega }_{o}}$, we get an interesting formula that resembles the Heisenberg relation for position and momentum:

${x}_{n}{p}_{n}=2n\pi \hslash$ (11)

So the position and momentum of each node at the times of maximizing interference, considered at the origin, are thus derived.

#### 4.2. General positions of the nodes in a 1D compression

We will now derive formulae for the positions and times of the nodes of maximum interference in a 1D compression. To start, we define two points, ${x}_{n}$ and ${x}_{q}$, that both obey Equation 10, while ${x}_{q}$ alone obeys Equation 11. For now, consider the two points as distinct from one another. We shall later derive equations for these two points that hold for all nodes in the compression.
$\left(\begin{array}{ccc}\hfill \frac{{p}_{n}}{\hslash }\hfill & \hfill 0\hfill & \hfill -{\varphi }^{n}{\omega }_{o}\hfill \\ \hfill 0\hfill & \hfill \frac{{p}_{q}}{\hslash }\hfill & \hfill -{\varphi }^{q}{\omega }_{o}\hfill \\ \hfill 0\hfill & \hfill \frac{{p}_{q}}{2q\pi \hslash }\hfill & \hfill 0\hfill \end{array}\right)\left(\begin{array}{c}\hfill {x}_{n}\hfill \\ \hfill {x}_{q}\hfill \\ \hfill t\hfill \end{array}\right)=\left(\begin{array}{c}\hfill n\pi \hfill \\ \hfill q\pi \hfill \\ \hfill 1\hfill \end{array}\right)$ Solving this system of equations one obtains the following formulae for positions and times of maximum interference at the nodes in the compression: ${x}_{n}{p}_{n}=n\pi \hslash +q\pi \hslash {\varphi }^{n-q}$ (12) ${x}_{q}{p}_{q}=2q\pi \hslash$ (13) $t=\frac{q\pi }{{\varphi }^{q}{\omega }_{o}}$ (14) So there are quantized times that maximum interference occurs at the nodes with quantum time number q. Also, Equation 12 is equal to Equation 13 when node number n is equal to quantum time number q. So the most interesting result is Equation 12 that depends on both time and node quantum numbers because it gives the positions of every node in the compression at the particular times of maximum interference, hence perfect compression. #### 4.3. An aside on the times of maximum interference An analysis of Equation 14 shows that for all positive time quantum numbers q there is a maximum time during which all possible maximum interference at the nodes occur. This fact implies that processes that are modelled with 1D compressions are only perfected within a finite lifespan. This gives a lifetime of those processes at ${t}_{max}=\frac{2\pi }{{\varphi }^{2}{\omega }_{o}}$. A notable consequence of this formula is that the lifetime of processes only depend on $\pi$, $\varphi$, and ${\omega }_{o}$ which we are interpreting as the characteristic frequency of the system. ##### 4.3.1. 
Fractality of Time Evidence that time is ‘fractal’ and phase conjugate - in order to produce efficient charge distribution (time connection / ‘coincidence’): We present a mathematical defense of the original papers: (http://goldenmean.info/coincidence, and http://fractalfield.com/fractalphotosynthesis): We understand that while space is quantized like the grain structure of the quantum foam by the planck length, that time is quantized by the planck time. This means essentially that every wave that physics has ever measured fits evenly into these units. (We would also hypothesize that physics has never measured anything but waves). For our way of conceiving this wave mechanism of quantum function, it seems that the compressibility of the quantum foam or vacuum (‘ether’) behaves like a fluid in which the wave propagation stores inertia when it rotates; that inertia then becomes our (only?) definition of mass. It is also likely that we could describe the compression and rarefaction within this compressible ‘liquid’ media as the (only?) definition / origin of plus and minus charge. Further, then the rotation of these waves of charge then originates the concept of the period of this spin, which becomes our (only?) definition of time. Further, we hypothesize in this paper how constructive interference optimized by golden ratio, then is the origin of spin in general (shown later). So, the very existence of vorticity, wormholes, strings, and the toroid tornados in this ‘fluid’ is due to wave interference optimized toward non-destructive interference (PERFECTED compressions) by the recursion-perfected, non-destructive self re-entry called the golden mean spiral. In hydrodynamics, it then makes sense that this spiral is called optimized translation of vorticity because this corrected path from linear inertia (energy) to rotational inertia (mass) becomes effectively the only self-organizing path from energy to mass. 
Note how later, after we measured golden ratio in brainwaves correlating to peak perception (http://goldenmean.info/clinicalintro), it makes sense to define this phase conjugation by golden ratio as the origin of perception. The ability to self-reenter ('know thyself') does appear to be the charge wave mechanic of consciousness ("with turning inside-out ness"). This also then accounts for how focused human attention so measurably causes charge fields to compress (Bill Tiller), and radioactivity to be reduced (Uri Geller - measurements not confirmed). We wish to properly introduce the mathematics of fractality in time. IF we are correct, then coincidence is optimized by charge rotations which phase conjugate (become fractal) and thus exchange charge rotation inertia in both space and time. This would predict both when and where coincidence is more possible. It would also explain, as Bruce Cathie showed, how even the critical mass of nuclear reactions depends on correct placement in the (fractal / phase conjugate) grid of space and time. We note how significant it is that time reversal in phase conjugate optics, for example, can ONLY return a wave system to its previous state in time IF that wave state is more ordered. Here we would say that the order of wave systems climaxes, or reaches its greatest coherence, in the state of phase conjugation. This model of phase conjugation producing time 'travel' (moving between nodes of great conjugate coherence) further predicts that the time connection (phase conjugate charge rotation coupling) always requires wave phase velocities that are superluminal (faster than light) by GOLDEN RATIO MULTIPLES OF THE SPEED OF LIGHT.
This hypothesis arises out of the model of gravity implied, which requires that gravity originates because the charge waves constructively add and multiply their phase velocities using golden ratio, producing acceleration of charge (gravity) when the compression of charge is golden ratio (as we show here to be the essential structure of hydrogen and hydrogen fusion). Further, this hypothesis suggests that gravity cannot originate or exist except when charge waves meet in golden ratio symmetry. Note how clearly the literature confirms that not only hydrogen, but DNA, the Earth grid, and the Universe are essentially of penta-/dodeca- symmetry, precisely because the dodeca- stellation produces infinite nodes where every vertex is only a golden ratio exponent distance from center (http://goldenmean.info/selforganization). We have made the point graphically (http://goldenmean.info/creation) that gravity then seems to exist in atoms only to the extent that their inside (nucleus) is self-similar or 'fractal' to their outside (electrons). There you see how the 'platonic' symmetry distribution of the nuclear hadrons does appear to mirror the same 'platonic' distribution of electrons. THE EVIDENCE: After Winter discovered that whole-number golden ratio exponents of the planck length modelled the hydrogen radii with amazing accuracy, he explored the correlating frequency signature for hydrogen.
Multiplying whole number exponents times the planck TIME, Winter discovered: • significant prediction of the frequency (Mhz - insert equation here) for the radio frequency John Kanzius so famously used to burn water • other frequencies famously associated with hydrolysis - like Keeley’s - near 10,000 hertz ( see evidence from our fractal hydrolysis project: fractalfield.com/hydrogen • the duration of the Earth year (or at least very significant approximation) • the duration of the Venus year • the duration/ frequency virtually exactly of the 2 frequencies which motorize photosynthesis (fractalfield.com/fractalphotosynthesis) Also, it has been posited that a multiplication operation between two compressions could model aspects of spin of a system because two fractal (or vector) numbers multiplied produce an axial vector number which is spin. Winter has also posited that time can only be defined through charge rotation. So, by extension, we say that one compression perfected by Golden Ratio operating on another compression perfected by Golden Ratio leads to a model of spinning charge. With Winter’s supposition about the nature of time, we then say that if charged particles can be modelled as perfect compressions, then time also has a fractal representation through the two compressions. Two phase conjugated compressions should lead to negative time, a phase conjugated compression operated on a regular compression should give new physics. Seeing the profound implication we hypothesize here: that all biology only exists to the extent which it participates efficiently in the charge distribution perfected by phase conjugate golden ratio (See three separate ways ‘sacred’ or fertile space is electrically measured at the top of http://goldenmean.info/architecture). And, we strongly suggest that all living architecture must be defined and optimized to embed in this rose like structure at both micro and macro levels. 
We have translated this into an already world-famous curriculum for biological architecture at http://goldenmean.info/architecture. We give two examples here: seed germination requires a phase conjugate centripetal field (only those seeds which suck charge are alive). This explains why Stonehenge is measurably an electrical seed germination device (book: Seeds of Knowledge; discussion - initiation defined: http://goldenmean.info/malta). How to build bioactive centripetal fields: http://pyraphi.com. This is also the reason containers made of biologic materials cause growth, whereas containers made of (non-phase-conjugate) steel and aluminum (buildings) effectively make capacitance poisonous to every living thing. As we explain in our architecture curriculum, this is measurable as the same harmonic inclusiveness (charge breathing made efficient) in Heart Rate Variability, which medical doctors use to measure whether or not you have an immune system (discussion: http://goldenmean.info/holarchy). So, embedability optimized by this golden ratio phase conjugation seems to define the wave system of all life, mind, self-organization, and coincidence, because this is the only way centripetal wave forces like gravity, life, mind, and perception originate.

#### 4.4. Healing, Pain Reduction, Growth Accelerant: As Restored Centripetal Forces

A further implication is that since all self-organization requires this centripetal phase conjugate field symmetry, as does growth acceleration, healing and pain reduction will be produced by such centripetal fields. Prioré produced phase conjugate fields which eliminated cancer; Tom Bearden describes this (http://goldenmean.info/phaseconjugate). Extending the principle, we see why Elizabeth Rauscher's magnetic harmonics (below) produced documented pain reduction AND healing acceleration.
Winter later coined the term PHASE CONJUGATE MAGNETICS for this centripetal magnetic principle (which Elizabeth appeared to embrace) when he discovered that the magnetic frequencies she used largely fit his equation for phase conjugation (planck time multiplied by exact golden ratio exponents).

#### 4.5. Ratios of the nodal positions

Using Equation 12 and taking the ratio of two different nodes, we arrive at the following ratio:

$\frac{{x}_{n}{p}_{n}}{{x}_{m}{p}_{m}}=\frac{n{\varphi }^{q}+q{\varphi }^{n}}{m{\varphi }^{q}+q{\varphi }^{m}}$ (15)

For large values of n and m, this ratio approximates simply to powers of the Golden Proportion. This can be shown with a model of the hydrogen atom, taken as a compression or distribution of free, compressed waves; we introduce such a model in the section on the hydrogen atom. In the penta-, dodeca-/icosa- symmetry, we can see how the "concrescence" of waves can produce centering of constructive pressure agreement, producing the possibility of converging a virtually infinite number of nesting spin symmetries. This is nature's tactic of PERFECT embedding. Not only does it have everything to do with magnetism's ONE-WAY WIND of inward implosion called gravity, it is also probably the essential geometry of COMPASSION itself. Our paper shows the mathematical evidence that Golden Ratio in one line (linear, one dimension) perfects compression. We present here that what we have shown mathematically as the solution to compression in one dimension IS the solution in 3 dimensions (and more), because in the dodeca-/icosa-/dodeca- infinite nest, EACH vertex x,y,z coordinate is a simple whole exponent of golden ratio, meaning (as our top animation shows) that the distance to center in this nodal array - from every node, infinitely - is a simple multiple of golden ratio (wave compression).
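Equations 12-15 can be verified directly. The sketch below (our own numerical check, with illustrative natural-unit values for $\hslash$, ${\omega }_{o}$, and the momenta) solves the linear system of Section 4.2 and confirms Equations 12-14, then checks that the nodal ratio of Equation 15 tends to a power of $\varphi$ for large quantum numbers:

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2
hbar, omega0 = 1.0, 1.0     # natural units; values illustrative
n, q = 5, 3                 # example node / time quantum numbers
p_n, p_q = 2.0, 3.0         # example momenta (arbitrary choices)

# The linear system of Section 4.2, written as A @ (x_n, x_q, t) = b
A = np.array([
    [p_n / hbar, 0.0,                          -phi**n * omega0],
    [0.0,        p_q / hbar,                   -phi**q * omega0],
    [0.0,        p_q / (2 * q * np.pi * hbar),  0.0],
])
b = np.array([n * np.pi, q * np.pi, 1.0])
x_n, x_q, t = np.linalg.solve(A, b)

# Equations 12, 13, 14 follow for any choice of momenta:
assert np.isclose(x_n * p_n, n * np.pi * hbar + q * np.pi * hbar * phi**(n - q))
assert np.isclose(x_q * p_q, 2 * q * np.pi * hbar)
assert np.isclose(t, q * np.pi / (phi**q * omega0))

# Equation 15: for large n, m the q*phi^n terms dominate, so the
# nodal ratio tends to phi**(n - m).
ratio = lambda n, m, q: (n * phi**q + q * phi**n) / (m * phi**q + q * phi**m)
print(ratio(40, 37, 3), phi**3)   # nearly equal
```

The assertions pass for any (nonzero) choice of the momenta, since Equations 12-14 are exact consequences of the system; only the large-n limit of Equation 15 is approximate.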
This paper presents significant evidence for the hypothesis that this symmetry (golden ratio to planck) is the wave mechanics (cause and mechanism) of:

• the structure of hydrogen, and fusion and implosive collapse in general

• the electrical wave-geometric cause of gravity

• the wave-geometric phase conjugate origin of color (see yellow and blue as dodeca photon phase angles here), of self-organization in phase conjugation in general (optical, dielectric, phonon, and magnetic), and of centripetal forces (like electronegativity)

• the origin of perception (as evidenced by golden ratio in EEG during peak perception, and the fact, as Tiller proved, that focused attention compresses charge fields)

Note how the x,y,z values of each of the vertices of all the nodes in the infinite dodeca-/icosa- golden ratio stellation we call the "Star Mother" (goldenmean.info/kit) are simple exponents (whole-number multiples) of the Golden Mean: .618, 1.0, 1.618, 2.618. This is further mathematical evidence: since EACH POINT IS ALWAYS A MULTIPLE OF GOLDEN RATIO, ALSO IN DISTANCE TO CENTER POINT (meaning perfect compression by golden ratio is present on ALL SYMMETRY LINES TO CENTER - the 'mathematics of fusion / superconductivity / implosive collapse'). This has recently been proven in the structure of hydrogen, and is how this model effectively shows (quoting mathematician El Naschie) how the "GOLDEN QUANTUM FIELD THEORY is EVIDENCE THAT FRACTALITY IS THE CAUSE OF GRAVITY". So, we reprint here the list as Dan Winter published it in his first book, "ONE CRYSTAL'S DANCE", over 30 years ago. The Golden Coordinates of the Star Mother: This set assumes the cube edge = 2 units. So, the tetra- edge would be the square root of 2 times 2, the dodeca- edge = .618 times 2, the icosa- edge = 1.618 times 2, and the NEXT outer dodeca- stellation edge = 2.618 times 2.
• Vertices of the octahedron: (6) – (0,1,0), (1,0,0), (0,-1,0), (-1,0,0), (0,0,1), (0,0,-1) • Vertices of the cube (8) – (1,1,1), (1,-1,1), (-1,-1,1), (-1,1,1), (1,1,-1), (1,-1,-1), (-1,-1,-1), (-1,1,-1) • Vertices of the dodecahedron (20) – The dodeca vertex are composed exactly of the above 8 vertex of the cube- PLUS-these additional 12: (-.618, 1.618, 0), (.618, 1.618, 0), (1.618, 0, .618), (1.618, 0, -.618), (-.618, -1.618, 0), (.618, -1.618, 0), (-1.618, 0, .618), (-1.618, 0, -.618), (0, .618, 1.618), (0, -.618, 1.618), (0, .618, -1.618), (0, -.618, -1.618) • Vertices of the icosahedron (12) – (2.618, 0, 1.618), (2.618, 0, -1.618), (-2.618, 0, 1.618), (-2.618, 0, -1.618), (-1.618, 2.618, 0), (1.618, 2.618, 0), (-1.618, -2.618, 0), (1.618, -2.618, 0), (0, 1.618, 2.618), (0, -1.618, 2.618), (0, 1.618, -2.618), (0, -1.618, -2.618) Quoting further from “One Crystal’s Dance” by Dan Winter - (in his 20’s): “Note how simple it is to continue infinitely.. - simply extend every icosahedron edge length straight out by ratio Golden Mean, to make another Dodeca, then extend that dodeca edge straight out again by Golden Ratio longer to make another icosa etc. Alternating (interdigitating) infinitely. Each succeeding dodeca or icosa can be plotted digitally by simply multiplying by Golden Mean squared (2.618) to the vertex coordinates of the previous! Use vision to understand the physical (phi cycle) significance of this. The distance from every node to every axis of symmetry, AND to the core (center point) is ALWAYS a power of the GOLDEN MEAN. See 12 golden mean, spiral cones, in this required pyramid like angular relation, making our STAR MOTHER, and indirectly the dodecahedron of DNA. See concentric spheres as wave bubbles. The wave length must divide evenly into the RADIUS AS A POWER OF THE GOLDEN MEAN. 
This fulfills the requirement that waves colliding toward the center of gravity (mass), in order to conserve momentum (order, memory, mind), must not interfere with each other. The harmonics make ONLY constructive interference as they nest in this way." Thirty years later this seems precisely predictive of El Naschie's GOLDEN QUANTUM FIELD THEORY mathematics, agreeing that this FRACTALITY IS THE CAUSE OF GRAVITY. (See golden ratio proven in the structure of hydrogen at http://goldenmean.info/poleshift - bottom; see this nest become the atomic table: http://goldenmean.info/creation.)

### 5. The Hydrogen Atom

We know from work done by Winter that at least three hydrogen orbital radii can be calculated from the planck length multiplied by integer powers of phi, $\varphi$, the Golden Ratio. Here we write these three radii as ${h}_{l}{\varphi }^{115+n}$ for $n=1,2,3$ and derive some formulas for the probability density of the hydrogen state, $|{\psi }_{n00}{|}^{2}$.

#### 5.1. First way

It is known how to calculate the average radius of hydrogen. The formula is ${〈r〉}_{n}={\int }_{V}r|{\psi }_{n00}{|}^{2}dV$, where V is all space. We would like this formula to give Winter's orbital radii at each n, so we define a function $\Phi ={\int }_{0}^{r}{X}_{n}\left(r\right)dr$ such that

${h}_{l}{\varphi }^{115+n}{\int }_{0}^{r}{X}_{n}\left(r\right)dr=4\pi {\int }_{0}^{r}{r}^{3}|{\psi }_{n00}{|}^{2}dr$ (16)

and

$\underset{r\to \infty }{lim}{\int }_{0}^{r}{X}_{n}\left(r\right)dr=1$ (17)

so that

${h}_{l}{\varphi }^{115+n}={〈r〉}_{n}$ (18)

So, we have a new function, with a constraint, with which we can write the hydrogen probability density. There are other constraints that we could impose later to include more physics, but for those there may be no solution for ${X}_{n}$, so we start with the one simple constraint thus derived.
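For concreteness, the three radii ${h}_{l}{\varphi }^{115+n}$ can be tabulated numerically. This is an editorial sketch: the CODATA value of the Planck length is assumed, and no claim beyond evaluating Winter's formula is made here.

```python
import math

phi = (1 + math.sqrt(5)) / 2
h_l = 1.616255e-35   # Planck length in metres (CODATA 2018 value, assumed)

# Winter's three hydrogen radii h_l * phi^(115+n), for n = 1, 2, 3
radii = [h_l * phi ** (115 + n) for n in (1, 2, 3)]
for n, r in zip((1, 2, 3), radii):
    print(n, r)      # metres; successive radii differ by a factor of phi
```

By construction, each radius is exactly $\varphi$ times the previous one, and all three fall in the tens-of-picometres range characteristic of atomic length scales.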
From Equation 16 we take the derivative with respect to r of both sides and rearrange to get

$|{\psi }_{n00}{|}^{2}=\frac{{h}_{l}{\varphi }^{115+n}}{4\pi {r}^{3}}{X}_{n}\left(r\right)$ (19)

Because we know that ${\psi }_{n00}$ is a real function, we may take the square root of both sides of the equation to get ${\psi }_{n00}=\sqrt{\frac{{h}_{l}{\varphi }^{115+n}}{4\pi {r}^{3}}}{\left({X}_{n}\left(r\right)\right)}^{1/2}$. With this new wavefunction for the hydrogen atom, one may make some predictions. Simply choose the operator for which one would like to make the prediction, say the Hamiltonian, $\stackrel{̄}{H}$, and find the average total energy of the system:

${〈H〉}_{n}={\int }_{V}{\psi }_{n00}\stackrel{̄}{H}{\psi }_{n00}^{\ast }dV$ (20)

All that is needed is to find an appropriate ${X}_{n}$ that meets the criteria outlined above in Equation 17, preferably something that resembles the radial hydrogen equations. Equation 16 is a definition that leads to an approximation of the hydrogen-atom solution of the Schrödinger equation whose average radii are exactly Winter's results, so the form of ${X}_{n}$ is not critical, due to the theorems (i.e., the Variational Theorem) already proven for ground-state approximations of the wavefunction. Looking again at Equation 20, if we assume that our approximate wavefunction is an exact form of the original hydrogen wavefunction, we arrive at

${〈H〉}_{n}={h}_{l}{\varphi }^{115+n}{E}_{n}{\int }_{0}^{\infty }\frac{{X}_{n}\left(r\right)}{r}dr$ (21)

where ${E}_{n}$ is the well-known energy of the electron in the hydrogen atom at each orbital, numbered by $n$. Thus we obtain an approximation equation for the energy of the hydrogen atom in terms of our function ${X}_{n}$ and Winter's orbital radii. If the wavefunctions thus obtained are normalized to unity, then Equation 21 would simply give ${E}_{n}$.
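To make the construction concrete, here is a minimal numerical sketch with one trial function of our own choosing (the paper does not fix ${X}_{n}$; the form below is purely illustrative): ${X}_{n}\left(r\right)=\left(r/{R}^{2}\right){e}^{-r/R}$ with $R={h}_{l}{\varphi }^{115+n}$. It satisfies the normalization constraint of Equation 17, and plugged into Equation 21 it returns ${〈H〉}_{n}={E}_{n}$ exactly.

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2
h_l = 1.616255e-35               # Planck length in metres (assumed value)
R = h_l * phi**117               # Winter's radius h_l * phi^(115+n), n = 2

# Illustrative trial function (our choice, not from the paper):
#   X_n(r) = (r / R**2) * exp(-r / R)
r = np.linspace(1e-6 * R, 60.0 * R, 600001)
X = (r / R**2) * np.exp(-r / R)

dr = r[1] - r[0]
trap = lambda f: float(np.sum((f[1:] + f[:-1]) * 0.5) * dr)  # trapezoid rule

norm_X = trap(X)            # Equation 17: integrates to ~1
factor = R * trap(X / r)    # Equation 21 prefactor: <H> = factor * E_n
print(norm_X, factor)       # both ~1, so <H> ~ E_n for this X_n
```

Any other normalized ${X}_{n}$ would give a different energy prefactor, which is exactly the freedom (and the caveat) discussed in the next paragraph.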
But because we have derived conditions for an approximate result, it may not be as simple as assuming that $\stackrel{̄}{H}{\psi }_{n00}={E}_{n}{\psi }_{n00}$ for our approximation (where ${E}_{n}$ is the known value mentioned previously), so that Equation 21 remains only a mathematical exercise. To obtain the appropriate approximation, we must begin at Equation 16. The energy eigenvalues for the approximate eigenfunctions we have derived will most likely differ from ${E}_{n}$.

#### 5.2. Second way

Another way to introduce Winter's equations is similar to the First Way, but here we make use of the definite integral ${\int }_{0}^{\infty }{e}^{-ar}dr=\frac{1}{a}$ (for $a>0$). We strategically choose $a$ such that the definite integral is equal to Winter's radii ${h}_{l}{\varphi }^{115+n}$, so we have,

${\int }_{0}^{r}{e}^{-\frac{r}{{h}_{l}{\varphi }^{115+n}}}dr=4\pi {\int }_{0}^{r}{r}^{3}|{\psi }_{n00}{|}^{2}dr$ (22)

So,

$|{\psi }_{n00}{|}^{2}=\frac{1}{4\pi {r}^{3}}{e}^{-\frac{r}{{h}_{l}{\varphi }^{115+n}}}$ (23)

#### 5.3. Modelling the hydrogen atom as a 1D compression

Consider the hydrogen atom as a perfected compression. This gives the lifetime of PERFECTED processes in the atom as finite, and assumes that there are nodes where maximum constructive interference occurs: where the nucleic constituents are, and where the electronic constituents are. We must then resolve to find the nodes that correspond to the states of hydrogen with which we are familiar. If we set those positions equal to Winter's radii, then we obtain a formula for the momenta of the nodes, corresponding to electronic constituents, for quantum number n of the hydrogen atom.

### 6. Phase Conjugation

Simply put: perfected compression is perfected phase conjugation. The proof is quite simple. The phase conjugated solution of the Klein-Gordon equation that we have called a compression has its exponent multiplied by $-1$.
This is the mathematical definition of a phase conjugated wave. Carrying through the negative 1 in the calculations gives all the same results of Equations (12, 13, 14) except the right hand side of those equations is multiplied by negative one. This then opens up the possibilities of negative time and direction. Remember there is a minimum time for perfected interference now in negative time for the phase conjugated perfected compression. So, more precisely, perfected phase conjugation IS perfected compression in negative time and direction. ### 7. A Perfect Compression, By Analogy, is Evidence That The Infinite ‘Compression’ of Symmetry in a Fractal is Perfected by the Golden Ratio In the future, we may be able to make a more precise relation between physical compressions, mathematical compressions, and even metaphysical compressions, bridging the gap between physics-creating-math and math-creating-physics. For now, it is possible to draw the logical inference that since the Golden Proportion creates the ideal compression with nodes of maximum constructive interference that have a fractal distribution, then the infinite ‘compression’ of symmetry in a fractal also has a mathematical representation as a compression whereby the maximizing ratio for PERFECT fractality is also the Golden Proportion. It has been shown by Winter that the Golden Ratio does in fact show up in fractals such as the Mandelbrot set, so with our analysis we say that those fractals are not PERFECT, but close to perfect, or in other words, those fractals contain nodal points having significant constructive interference. ### 8. Origin of Spin It is intuitively evident that if 2 compressing wave fronts approach each other- the only thing that remains after they have interfered is in fact vorticity/spin. 
Forming little tornado-like vortex structures is clearly the primary occupation of the universe, as it is clearly the root of string and wormhole theory, as well as of the notion that every standing wave stored in the universe is essentially toroidal: a donut with 2 vortices. Yet science has often asked the question 'what is the origin of spin?' without clearly conceiving a concise answer. Here we suggest explicitly that, just as constructive wave compression is the result of golden ratio interference, so also is it the origin of spin. The only way constructive interference can in effect store its inertia in the fluid medium of the ether/vacuum is in fact vorticity/spin. It has come to the attention of the authors, through the mathematician Robert Powell, Sr., that our theory, working towards a new theory of unification, can have added to it an equation that defines spin and, in regard to Winter's and others' (Frank van den Bovenkamp, etc.) work on the origin of color, an explanation for the phenomenon of color implementing spin of a vortex-like structure of space-time, the interaction of two compressions. The equation is:

$S=\hslash \left({\Psi }_{n}×{\Psi }_{m}\right)$ (24)

We must throw aside most notions of the current understanding of number to grok the importance of Equation 24. Due to Powell's work in The Rest of Euclid, all known number theory is a subset of the Vector Numbers. So, we can claim cross-product multiplication between two supposedly scalar quantities because they have a representation in the Euclidean plane as quantities having magnitude AND direction. This makes it possible to find the magnitude of S by introducing an angle between the two compressions and making the assumption that the spin of the vortex is concentrated mostly in the zeroth-order wave-state of the compression, understanding that it is possible that the spin may be concentrated in other components of the compression as well.
Using the definition of the coefficients for a perfect compression, we arrive at

$S=\hslash |{C}_{n}||{C}_{m}|sin\theta$ (25)

So it is possible to predict the spin of any particle modelled as two merging compressions, due to the freedom of the coefficients ${C}_{n}$ and ${C}_{m}$, which are the coefficients of the zeroth-order wave-state in the two compressions. Going back to Winter's work on color, we are able to interpret the angle $\theta$ for two compressions modelling a photon as cube-dodecahedron face angles that correspond to the spin of the photon and the different colors associated with those angles that have been discovered.

Origin of Color: We further present the symmetry evidence that this phase conjugation by golden ratio is the origin of color at the link fractalfield.com/fractalphotosynthesis. That argument depends on the hypothesis that the octave of visible wavelengths can be converted, in perfect linearity, to a zero to 180 degree phase angle, or tilt, of the photon (thought to be a torus). Amazingly, this results in a simple platonic series of angles which emerge 'magically' exactly at the wavelengths/frequencies of the primary colors. For example, green is a color defined by a photon which has rotated through exactly a cubic 90 degrees. We hypothesize that this angle produces exactly a wave interference 'shadow' which is opposite to phase conjugate, and is why every living plant (photosynthesis) spits out / does not eat the green photon (graphic at link). The only 2 photon angles for Yellow and Blue - 63 and 117 degrees of tilt or phase angle - which are not cube angles, are in fact precisely the face-to-center angles that bring the photon into the DODECA. This means that all the angles of the primary-color photons are the result of cube / dodeca face-to-center symmetry. THAT symmetry is precisely what is required to get the photons to phase conjugate by golden ratio.
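The magnitude relation between Equations 24 and 25 is ordinary vector algebra and can be checked with a small numerical sketch (the coefficient magnitudes below are our illustrative choices, not values from the paper; the 63-degree angle is the yellow-photon phase angle cited above):

```python
import numpy as np

hbar = 1.054571817e-34   # J*s (CODATA value, assumed)

# Treat the zeroth-order coefficients of the two compressions as plane
# vectors separated by angle theta, following Powell's vector-number
# picture. Magnitudes are illustrative.
C_n, C_m = 0.8, 0.5
theta = np.deg2rad(63)   # yellow-photon phase angle from the text

a = C_n * np.array([1.0, 0.0, 0.0])
b = C_m * np.array([np.cos(theta), np.sin(theta), 0.0])

S = hbar * np.cross(a, b)                 # Equation 24
print(np.linalg.norm(S))                  # equals Equation 25:
print(hbar * C_n * C_m * np.sin(theta))   # hbar * |C_n| |C_m| sin(theta)
```

The two printed values agree, since $|a×b|=|a||b|sin\theta$ holds for any pair of vectors; the physical content of the model lies entirely in the interpretation of $\theta$ as a photon phase angle.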
That means that color is the result of golden ratio phase-conjugation sorting in the quantum liquid core of the rainbow (discussion at link above; see: 3 pairs of primary colors fit the hex view of dodec/cube). Note how this PHASE CONJUGATION hypothesis for the cause and origin of color predicts that rainbows are more likely where the air is fractal/conjugate/charge dense (measurement examples at goldenmean.info/architecture); it would also cause the famous Green Flash, and would explain why the contribution to phase-conjugate plasma compression caused when a Tibetan saint dies produces rainbows. Dan Winter's discovery: Planck length times golden ratio to exactly the 136th and 137th powers EXACTLY predicts the 2 wavelengths motorizing photosynthesis! This proves that phase conjugation (by golden ratio) down to the Planck scale dictates the CHARGE DISTRIBUTION EFFICIENCY which defines LIFE FORCE. The evidence is clear: the reason every living plant can absorb any photon BUT green is that the green photon, being precisely a 90-degree phase angle or tilt, is exactly OPPOSITE to phase conjugate. And so, being the opposite of constructive interference (green/right angles create maximum DESTRUCTIVE interference), the GREEN photon is the opposite of LIFE! Life is defined by embedability in perfected (phase-conjugate) charge distribution. Note also how it is now possible to define evil simply as that charge which cannot be embedded in the perfected (phase-conjugate) distribution of charge called LIFE!

### 9. Conclusions

The authors feel strongly that a new form of physics is needed, one which knows why and how centripetal forces (like fusion, gravity, and life force) are generated with specific electrical symmetry, and that this symmetry is essential to survival in general.
Knowing how the same golden ratio stellated-dodeca symmetry group is the essential wave mechanic of hydrogen, gravity, fusion, life force and perception could in fact go a long way toward restoring centripetal forces in our planet sphere, now teetering toward the opposite of self-organization, which is chaos. If we know that gravity is stronger when centers of mass and magnetism are arranged dodecahedrally, might we not prevent our highway engineers from cutting major magnetic lines of the planetary dodeca grid, once we realize that when fractality bleeds, so also does gravity, and with it our atmosphere? It is no mere coincidence that the exact same dodeca-stellated geometry, scaled to Planck and golden ratio (phase-conjugate magnetics), in addition to healing planets, also demonstrably reduces pain and accelerates healing in humans. Restoring compression where the fractal bleeds from loss of self-organization is more than a poetic metaphor. It is the only way out of chaos.

Appendix: Application to Centripetal / 'Time Biased' / Implosive / 'Conjugate' / HEALING / Magnetics

### A. A Different Twist: Another Perspective on Mechanisms of Magnetic Field Formation and Related Phenomena

#### A.1. Introduction

There are many areas within magnetics that require a theoretical update, a more comprehensive unified understanding. For example, a strong magnetic field affects the gravity of test objects: there is a gravitational component to the magnetic field. Also, magnetic healers have long known that one side of a magnet is better for pain relief (centrifugal) than the other side, which is better at accelerating healing (centripetal). In this paper Bill suggests that the centripetal versus centrifugal (North versus South) side of a magnet is charge-accelerating, essentially implosion- versus explosion-dominant. Because time is that acceleration of charge (charge rotation is our only definition of time), that means magnetic lines are 'time biased'...
and ALSO affect gravity measurably. Bill Donavan's paper provides the framework in mathematical physics to allow us to develop, for bioactive use among other things, centripetal or phase-conjugate magnetics, restorative of the implosive compression of charge which identifies healing and self-organization. We spoke here of the frequency signature of phase-conjugate (restored compression) magnetics, but this also provides the physical framework for us to understand how phase-conjugate arrays of permanent magnets restore compression, accelerate healing and growth, and affect time. We have known that there is a high-frequency component even to what seems like a DC magnetic flux line. Bill's new helical model of the braided electric lines which make up magnetic flux lines helps us to understand the associated frequency (the width of the helix predicts its frequency). Also, now we can understand that inside itself, a simple magnetic line is an inside-out helix of what is an electric field line! Ultimately, the restoration of centripetal or self-organizing (and phase-conjugate) forces in general is going to require a more comprehensive physics of magnetics which predicts implosive (gravity-making and bioactive) magnetics. When Bill speaks of negative and positive magnetics as forward and reverse time-biased, consider that, like all wave fields, they self-organize in a torus shape. This means that the vortex on one side will generally be more centripetal than the other. This is WHY one side of a magnet is better to heal and reduce bleeding, while the other (centrifugal) side is better to reduce pain (from a toothache, for example). The centripetal side of a magnet (see chart) has a component of its flux lines which phase conjugates.
This means that charge compression BECOMES acceleration (gravity) because in conjugate (golden ratio) wave crossings phase velocities can add and multiply constructively (producing acceleration from compression, the cause of gravity and time travel). The component of phase velocity which goes through the speed of light $c$ by multiples of golden ratio is time-penetrating. We make the point elsewhere that this precise difference applies to the torus shape of the photon to define the origin of color. The red side is centripetal (Hebrew for red: ATOM, is to MAKE HARD). The blue side is centrifugal. Note that the same points we make about phase conjugation being the origin of color apply to magnetics. Compare this to the film here with Tom Bearden explaining how phase-conjugate dielectrics heal (http://goldenmean.info/phaseconjugate).

#### A.2. Paper

The purpose of this appendix is to expand the concept originally put forth in a previous work titled "In Front of Us". It addresses both the concerns of CPT invariance and the concepts in Tom Bearden's talk "The Lost Unified Field Theory of James Clerk Maxwell".[1] First, let's look at the conventional equation for the B field: $B=\nabla \times A$ (1) In this form, $B$ is the magnetic field, $\nabla\times$ is the curl operator, and $A$ is the magnetic vector potential. It does not address the issue of the internal torsional component, which is present in the formation of the field, or of the field itself. In Magnetism and Its Effects on the Living System [2], Rawls mentions that the poles of a magnet have differing properties. They are not identical to one another. There is a problem when empirical evidence does not match the mathematical model.
When this occurs, the model does not approximate physical reality, and the result is either a lag behind the phenomenology, which is most often the case, or in the extreme, a form of schizophrenia, where the engineers and experimentalists use their own math models and throw out the politically accepted version as non-functional. So let us look at a few possibilities that fit the empirical data more closely.

Postulate 1

Time is not a direct observable. What this means is that we cannot directly measure time. We measure it in spatial displacement: the hands of a clock, the vibration of cesium atoms or quartz crystals. Therefore, if we speak of time reversal, that reversal is measured as spatial reversal: a backing up or reversal of spatial vectors. Let's look at the conventional magnetic field geometry: In this model, the field lines loop back, forming a continuous path around the magnet. This model is what we normally see when you put a magnet under a piece of paper and sprinkle iron filings on top of it. In fact, this is the only time we see this geometry. When Sparky (Floyd) Sweet placed his magnets on top of a cathode ray tube[3], he saw a figure-8 pattern. Davis and Rawls also diagram this in their work. Plasma anomalies also show this figure-8 pattern. So which one is right, the filings or the plasma? We have one instance of a continuous loop, and several of this other geometry. In the continuous loop, the delta remains positive through the whole process; in the other geometry, we have quite a different animal. In this diagram, we have the magnetic vector potential flowing outward at the North pole, interfering with itself. The second A is in continuous flow around each pole. This interference pattern produces the B field. This cross product also produces the B field. So what is missing here? What is the A potential, anyway? Is it real, or a convenient fiction?
In a previous paper, I showed that the E field may well be a modulation of the G potential. So what are the possibilities here? Can the magnetic vector potential be a simplified version of a more complex reality? If so, we would have two potentials reacting to one another: one a rotational form, and another a standing-wave form. One would have two sub-forms: one for north, and another for south.

Postulate 2

In keeping with CPT (charge, parity, time) invariance, the negative side of the magnet is the one that promotes increased entropy. The positive pole is the one that decreases local entropy, as it is time-reversed in its delta. In "Healing with Magnets", by Gary Null, pages 8-9[4], there is confusion between what is north or south, positive or negative. We see this in a great deal of the literature. It is quite easy to clear up the confusion. Let the positive side be the one with $-\nabla A$, and the negative $+\nabla A$. In quantum mechanics, when one reverses the charge, parity as well as time is also reversed. Therefore, one side of the magnet has a positive time bias, and the other side a negative time bias. One side promotes entropy and growth, and the other neg-entropy and healing. This is seen in the literature. But what about the rotational side? How do we incorporate that? Let's consider a tentative solution: $B=\nabla \times \left(\nabla \times G\right)$ (2) What is the supporting evidence for this? We see in Rawls's work that the poles of a magnet seem to have a torsion-field aspect. That twist cannot be ignored, as it is supported empirically. However, this is only a partial solution, as the other side has a temporal-dipole aspect. The alternative is that the opposite pole also has an opposite twist, and therefore would be $-\nabla \times \left(\nabla \times G\right)$. This is more elegant, as it preserves the symmetry of the model.
The negative delta of the curl of G would interact with the negative magnetic vector potential to preserve the negative temporal bias.

Postulate 3

Time is the engine that transforms potentials into fields. In a previous paper, "Riding a Beam of Light"[5], it was shown that all deltas are "frozen out" as one moves at light speed. That means that the E field, which was postulated as $\nabla G$, is now pure longitudinal gravitational potential, a ripple in space-time. The H field vector, which is a motional version of the B field, or the magnetic vector of the EM wave, collapses from $\nabla \times A$ to just a static magnetic vector potential. As speculated earlier, the A potential itself may be a composite of gravitational potentials with internal deltas. An alternative would be to use circular-polarization solutions for the magnetic field line and treat it as a standing wave. This means that each line would have a fixed energy, as both the rotational frequency and the velocity would be equal to $c$, with fixed wave crests and amplitudes. It could not decrease below a fixed amplitude. The energy would be extremely high, with a large frequency, to account for the one-dimensional attribute of the field line. This quantization implies that magnetic monopoles are unnecessary in the model. What this model implies is that the magnetic field line is an artifact, an interference between the torsion field and the magnetic vector potential. Is the magnetic vector potential itself a composite of gravitational potentials with internal deltas? It is an intriguing concept. This concept needs further work and validation.

#### A.3. Possible Extensions

One possible extension of the theory is that as one models the de Broglie wave of a particle, the other half of the wave would exhibit negative-time characteristics. The wave would have what would appear to be a negative DC offset, or a positive time bias.
The bias could be inversely proportional to mass, so the discrepancy between particle charges and masses could be due to differences in bias. For example, since an electron has a mass of +1 and a charge of -1, the wave is nearly all in positive time. The proton, with approximately 1836 times the mass of the electron but a charge of +1, has most of its mass in negative time. In this case, like an iceberg, all of the mass except for 1/1836 lies in negative time. If this theory is consistent, then the implication is that no Higgs boson is necessary: the mass of the particle depends on the DC offset of the wave function. Neutral particles would be internally conjugate pairs with dynamic infolded engines that sum to zero charge, but exist in positive or negative time, depending on whether it is a particle or an antiparticle. One would have to account for the internal structure of the neutral particle. But what about relativistic effects? As the delta G decreases, the charge of the particle would also decrease along with frame rotation. However, instead of a mass increase, we would see an increase in inertia, until at $c$ the mass would be zero and the inertia infinite. This then reconciles the problem with the energy discrepancy of relativistic effects. It becomes a smooth transformation of mass to energy. The surface charge becomes an observed charge vector of the wave in our frame, which is rotated relative to the moving particle. If delta G is zero on the moving wave, then when one "rides" the wave, one sees only potentials, and no fields. Furthermore, those potentials are rotated 90 degrees, so from the standpoint of the wave rider, the fundamental character is longitudinal, not transverse.

Postulate 4

Mass movement through time is the result of the DC offset. Is it possible, therefore, to produce a real "time machine" with this knowledge? If one could duplicate the "DC" bias offset and reverse it, then there might possibly be a reverse vector in time.
Is this how it is done in phase conjugation? In a later paper, we will investigate proofs for this. For now, consider it to be a partly baked concept. But real time travel? David Anderson seemed to think so, and a few years back he invented something called a "time warp field generator". It produced relativistic effects on the laboratory bench. Not only did it slow down time, it also accelerated it. This phenomenon of temporal acceleration would be expected if one were producing the offset in the other direction. It's the flip side of relativity. For true time travel, one would need to produce offsets that create massive relativistic effects, such that the delta that created the fields in the first place is so low that the potentials overshadow the fields. This effectively drives the particles into the Dirac plenum, quenching the fields that produce the illusion of mass. Therefore, the effective mass of the particles decreases below the level of detection, and they slip into the virtual world. The mass, and the particles that compose it, seem to dematerialize and cease to be part of the observable universe.

#### A.4. Dark Matter – Or Is It Phase Conjugate Matter?

What happens when large amounts of matter get pushed into the Dirac plenum? The mass is not totally gone, simply minimized. It may no longer be observable; however, it still has influence on the universe if one analyzes it collectively. If one has a trillion parts of something with a distribution of one part per million, then it accumulates to an amplitude of one million parts, and therefore has influence. The point here is that masses can approach an offset that gives them zero mass, but cannot actually get there. There is another possibility: what if the de Broglie waves themselves can combine out of phase? If that is true, then one can have "phase conjugate" matter waves. What would this look like? This might be the philosophical premise that J.G.
Bennett proposed in the "Regenerative Ratio": the eternal, matter having equal amounts of positive and negative time. In this case, the E and B fields making up the wave are cancelled, with the negative and positive deltas superimposed. If this is the case, then the internal engines are still active and producing an influence on the vacuum. This influence, the collective gravitational potential that is not self-cancelled, may be what we call "dark matter".

#### A.5. The CPT Photon

What about the model for the photon? In a previous paper, "In Front of Us", I showed that half of the wave is in positive time, and the other half is in negative time. What this means is that the model for the particle is similar to Sokolow's, as he proposes that the photon consists of a virtual particle and antiparticle locked together, moving at the speed of light.[6] So what are these particles? Given enough energy, the kinetic energy of the particles should exceed the binding energy, the "glue" holding them together. In astrophysics, there is something called the Chandrasekhar limit. It is the hottest that anything can become in the universe. Beyond this temperature, approximately 6 billion degrees Kelvin, there is observed in the cores of supernovae a sudden burst of neutrinos, with an implosion occurring shortly thereafter. The theory thus far is that shortly after this limit is reached, the core of the star drops from 6 billion degrees to slightly above absolute zero, forming a Bose-Einstein condensate. It is this state that contributes to the gravitational implosion of the star, and to the subsequent thermal spike on the imploding outer layers that causes a sudden fusion reaction, as temperatures and pressures rise to the point where fusion flashover occurs. The neutrino spike, which may be the decay of the photon as it hits this limit, may be the proof of the CPT photon. If the neutrino pair is "hooked" together serially, then it becomes a photon.
If it is conjugate, with both neutrino wave states in phase spatially but 180 degrees out of phase temporally, then we would have a wave packet that would give the appearance of a graviton. This would explain the gravity-wave spikes that occur coincident with supernova events. What this would amount to is a fraction of the photon decay events producing conjugate pairs of neutrinos. If more than a fraction of the neutrino pairs undergo this conjugation, I predict that we might have an un-nova, where an immense gravity spike causes a gravitational implosion of the star. The star, which for all intents and purposes should go through a supernova blast, just "winks out" of existence, becoming a black hole.[7]

### References

[1]   The Lost Unified Field Theory of James Clerk Maxwell by Tom Bearden, USPA 1986, and web site: http://www.cheniere.org/books/aids/ch4.htm The film below from Tom Bearden, LOST UNIFIED FIELD THEORY OF MAXWELL, shows in its first part that when Oliver Heaviside castrated Maxwell's equations by removing the quaternion (the ability to represent SPIN), it became impossible for those like Einstein who followed, hopelessly trying to reconcile electromagnetic SPIN with gravity, using equations MINUS the very element which described the necessary rotational component. Implications for PHASE CONJUGATION and self-organization; and in the last part: phase conjugation and HEALING ELECTRIC FIELDS. (Note: the film will not play in the email version of this article; use a web browser: fractalfield.com/mathematicsoffusion) Compare the Prieure PHASE CONJUGATE fields (centripetal) which heal to the documents on this, also from Bearden, at the bottom of goldenmean.info/phaseconjugate
[2]   Magnetism and Its Effects on the Living System by Albert Roy Davis and Walter C. Rawls.
Publication Date: April 1996 — ISBN-10: 0911311149 http://www.unexplainable.net/technology/magnetism_and_its_effects_on_the_living_system_1725.php
[3]   Video of Sparky Sweet's experiments, courtesy of Ken Macneil. It shows the "figure-8" structure of the field path of the electrons interacting with the magnetic flux. http://www.rexresearch.com/sweet/1nothing.htm Sweet's paper on a phase conjugate vacuum triode. Includes some interesting math relations.
[4]   Healing with Magnets, Gary Null, PhD. Copyright 1998. ISBN: 0-7867-0530-2. See pgs. 8-9.
[5]   Self-published. This contains the original Einstein gedanken (thought) experiment. What it means is that at $c$, all fields collapse into potentials, their true form without the delta operator.
[6]   A Dual Ether Universe, Leonid Sokolow, 1977. ISBN 0-682-48721-X. See pgs. 114-115.
[7]   http://www.cosmosmagazine.com/news/2205/looking-stars-vanish-sky This Cosmos Magazine article notes that in some computer models, some stars implode without a supernova explosion.
https://tivadardanka.com/blog/how-the-dot-product-measures-similarity/
# How the dot product measures similarity

In machine learning, we use the dot product every day. Yet its definition is far from revealing. For instance, what does the sum of coordinate products have to do with similarity? There is a beautiful geometric explanation behind it. The dot product is one of the most fundamental concepts in machine learning, making appearances almost everywhere. By definition, the dot product (or inner product) of two vectors is defined as the sum of coordinate products.

## The fundamental properties of the dot product

To peek behind the curtain, there are three key properties that we have to understand. First, the dot product is linear in both variables. This property is called bilinearity. Second, the dot product is zero if the vectors are orthogonal. (In fact, the dot product generalizes the concept of orthogonality beyond Euclidean spaces. But that's for another day :) ) Third, the dot product of a vector with itself equals the square of its magnitude.

## The geometric interpretation of the dot product

Now comes the interesting part. Given a vector $y$, we can decompose $x$ into the two components $x_o$ and $x_p$. One is parallel to $y$, while the other is orthogonal to it. In physics, we apply the same decomposition to various forces all the time. The vectors $x_o$ and $x_p$ are characterized by two properties: 1. $x_p$ is a scalar multiple of $y$, 2. and $x_o$ is orthogonal to $x_p$ (and thus to $y$). We are going to use these properties to find an explicit formula for $x_p$. Spoiler alert: it is related to the dot product. Due to $x_o$ being orthogonal to $y$, we can use the bilinearity of the dot product to express the $c$ in $x_p = c y$. By solving for $c$, we get that it is the ratio of the dot product to the squared magnitude of $y$. If both $x$ and $y$ are unit vectors, the dot product simply expresses the magnitude of the orthogonal projection!
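The decomposition can be sketched in a few lines of Python (the helper names `dot` and `decompose` are mine, not the article's):

```python
def dot(u, v):
    """Sum of coordinate products: the definition of the dot product."""
    return sum(a * b for a, b in zip(u, v))

def decompose(x, y):
    """Split x into x_p (parallel to y) and x_o (orthogonal to y)."""
    c = dot(x, y) / dot(y, y)            # c = <x, y> / ||y||^2
    x_p = [c * b for b in y]             # projection of x onto y
    x_o = [a - b for a, b in zip(x, x_p)]
    return x_p, x_o

x_p, x_o = decompose([3.0, 4.0], [1.0, 0.0])
print(x_p, x_o)              # [3.0, 0.0] [0.0, 4.0]
print(dot(x_o, [1.0, 0.0]))  # 0.0, confirming orthogonality
```

Note the squared magnitude $\|y\|^2 = \langle y, y \rangle$ in the denominator; when $y$ is a unit vector it drops out and $c$ is just the dot product.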
## Dot product as similarity

Do you recall how the famous trigonometric functions sine and cosine are defined? Let's say that the hypotenuse of our right triangle is a unit vector and one of the legs is on the $x$-axis. Then the trigonometric functions equal the magnitudes of the projections onto the axes. Using trigonometric functions, we see that the dot product of two unit vectors is the cosine of their enclosed angle $\alpha$! This is how the dot product relates to cosine. If $x$ and $y$ are not unit vectors, we can scale them and use our previous discovery to get the cosine of $\alpha$. The closer its value is to $1$, the more similar $x$ and $y$ are. (In a sense.) In machine learning, we call this quantity the cosine similarity. Now you understand why.
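As a minimal sketch (my code, not the article's), cosine similarity is just the dot product after scaling both vectors to unit length:

```python
import math

def dot(u, v):
    """Sum of coordinate products."""
    return sum(a * b for a, b in zip(u, v))

def cosine_similarity(x, y):
    """cos(alpha) = <x, y> / (||x|| * ||y||): equivalent to normalizing
    both vectors to unit length and then taking the dot product."""
    return dot(x, y) / (math.sqrt(dot(x, x)) * math.sqrt(dot(y, y)))

print(cosine_similarity([1.0, 0.0], [1.0, 1.0]))  # cos(45°) ≈ 0.7071
print(cosine_similarity([1.0, 2.0], [2.0, 4.0]))  # ≈ 1.0 for parallel vectors
```

Because the magnitudes are divided out, only the angle between the vectors matters: parallel vectors score $1$, orthogonal ones $0$, and opposite ones $-1$.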
https://sfxpt.wordpress.com/2012/12/12/wordpress-markdown-test/
# An exhibit of Markdown

[tags markdown] This note demonstrates some of what WordPress Markdown is capable of doing. [more]

## References

http://www.markitdown.net/markdown
http://en.support.wordpress.com/markdown-quick-reference/
https://guides.github.com/features/mastering-markdown/

## Markdown quick reference for WordPress

http://en.support.wordpress.com/markdown-quick-reference/

Special shortcodes can be embedded in email to configure the published post, e.g., [more], [delay +1 hour] etc.

Emphasize emphasize

Strong Strong

Footnotes: I have more 1 to say up here.

Lists

1. Item
2. Item
   • Mixed
   • Mixed
3. Item

• Unordered list can use asterisks
• Or minuses
• Or pluses

Preformatted

Begin each line with two spaces or more to make text look e x a c t l y like you type i t.

Code block

#button { border: none; }

Definition Lists

WordPress
A semantic personal publishing platform

Markdown
Text-to-HTML conversion tool

Abbreviations

Markdown converts text to HTML. Definitions can be anywhere in the document

## More Markdown Examples from Pandoc-Markdown

http://www.unexpected-vortices.com/sw/rippledoc/quick-markdown-example.html

Use 3 dashes for — an em-dash. Use 2 dashes for ranges (ex., “it’s all in chapters 12–14”). Three dots … will be converted to an ellipsis. Unicode is supported.

Inline math equations go in like so: $\omega = d\phi / dt$. Display math should get its own line and be put in double dollar signs: $$I = \int \rho R^{2} dV$$ And note that you can backslash-escape any punctuation characters which you wish to be displayed literally, ex.: foo, *bar*, etc.

Block quotes are written like so. They can span multiple paragraphs, if you like.

Here’s a “line block”:

| Line one
| Line too
| Line tree

Tables can look like this:

size   material      color
9      leather       brown
10     hemp canvas   natural
11     glass         transparent

Table: Shoes, their sizes, and what they’re made of

(The above is the caption for the table.)
[nextpage]

## Emphasis

Emphasis, aka italics, with asterisks or underscores. Strong emphasis, aka bold, with asterisks or underscores. Combined emphasis with asterisks and underscores. Strikethrough uses two tildes. ~~Scratch this~~.

## Lists

1. First ordered list item
2. Another item
   • Unordered sub-list.
3. Actual numbers don’t matter, just that it’s a number
   1. Ordered sub-list
4. And another item.

You can have properly indented paragraphs within list items. Notice the blank line above, and the leading spaces (at least one, but we’ll use three here to also align the raw Markdown). To have a line break without a paragraph, you will need to use two trailing spaces. Note that this line is separate, but within the same paragraph. (This is contrary to the typical GFM line break behaviour, where trailing spaces are not required.)

• Unordered list can use asterisks
• Or minuses
• Or pluses

## URLs

URLs can be made in a handful of ways:

• http://github.com – automatic!
• A named link to MarkItDown. The easiest way to do these is to select what you want to make a link and hit Ctrl+L.
• Another named link to MarkItDown
• Sometimes you just want a URL like .

### Internal links / named anchors

For Markdown’s support for internal links / named anchors, the obvious solution is to place your own anchor point in the page wherever you like, thus: before the line you want to ‘link’ to. Don’t forget the quotation marks around it. Then a markdown link like: anywhere in the document takes you there. It might be OK to put the anchor in the heading line you wish to link.

## Images

Here’s our logo (hover to see the title text):

Inline-style:

Reference-style:

## Code and Syntax Highlighting

Code blocks are part of the Markdown spec, but syntax highlighting isn’t. However, many renderers — like GitHub’s and Markdown Here — support syntax highlighting. Inline code has back-ticks around it.
### Code blocks

Code blocks are very useful for developers and other people who look at code or other things that are written in plain text. As you can see, it uses a fixed-width font.

Blocks of code are either fenced by lines with three back-ticks ```, or are indented with four spaces. I recommend only using the fenced code blocks — they’re easier and only they support syntax highlighting.

var s = "JavaScript syntax highlighting";

s = "Python syntax highlighting"
print s

No language indicated, so no syntax highlighting. But let's throw in a <b>tag</b>.

## Blockquotes

Blockquotes are very handy in email to emulate reply text. This line is part of the same quote.

Quote break. This is a very long line that will still be quoted properly when it wraps. Oh boy let’s keep writing to make sure this is long enough to actually wrap for everyone. Oh, you can put Markdown into a blockquote.

## Inline HTML

You can also use raw HTML in your Markdown, and it’ll mostly work pretty well.

Definition list
Is something people use sometimes.

Markdown in HTML
Does *not* work **very** well. Use HTML tags.

# Headings – H1, can also contain formatting

There are six levels of headings. They correspond with the six levels of HTML headings.

## H2
### H3
#### H4
##### H5
###### H6

[end] everything after this shortcode is ignored (i.e. signatures). Make sure it’s on its own line with a blank line above it.

1. To say down here.
http://math.stackexchange.com/questions/62754/converting-from-one-representation-of-a-field-element-to-another
# Converting From One Representation of a Field Element to Another

Let $A \subseteq B \subseteq C$ be fields and let $\alpha$, $\beta$, $\gamma$ be such that $A(\alpha) = B$, $B(\beta) = C$, $A(\gamma) = C$. Assume $B$ and $C$ have finite degree over $A$. Let $m(\alpha,A)$ be the minimal polynomial of $\alpha$ over $A$, let $m(\beta,B)$ be the minimal polynomial of $\beta$ over $B$, let $m(\beta,A)$ be the minimal polynomial of $\beta$ over $A$, and let $m(\gamma,A)$ be the minimal polynomial of $\gamma$ over $A$.

Let $c \in C$. We have, on the one hand,

$$c=\sum_{i = 1}^{[C:B]} b_i \beta^{i-1}, \quad b_i \in B$$
$$b_i=\sum_{j = 1}^{[B:A]} a_{ij} \alpha^{j-1}, \quad a_{ij} \in A$$

and on the other hand

$$c=\sum_{k = 1}^{[C:A]} c_k \gamma^{k-1}, \quad c_k \in A$$

We also have

$$\gamma = \sum_{i = 1}^{[C:B]} d_i \beta^{i-1}, \quad d_i \in B$$
$$d_i=\sum_{j = 1}^{[B:A]} e_{ij} \alpha^{j-1}, \quad e_{ij} \in A$$

If the polynomials $m(\alpha,A)$, $m(\beta,B)$, $m(\beta,A)$, $m(\gamma,A)$ and the numbers $a_{ij}$, $e_{ij}$ are known explicitly, how can I calculate the $c_i$?

ADDED: Now that Joriki and Jyriki have helped me to formulate the problem correctly, I see the solution is not so hard. Since

$$\gamma = \sum_{i = 1}^{[C:B]} \sum_{j = 1}^{[B:A]} e_{ij} \alpha^{j-1}\beta^{i-1}$$

we can find numbers $f_{ijk}$ such that

$$\gamma^{k-1} = \sum_{i = 1}^{[C:B]} \sum_{j = 1}^{[B:A]} f_{ijk} \alpha^{j-1}\beta^{i-1}$$

by using the polynomials $m(\alpha,A)$, $m(\beta,B)$ to express large powers of $\alpha$ and $\beta$ in terms of smaller powers.
Then

$$c = \sum_{i = 1}^{[C:B]} \sum_{j = 1}^{[B:A]} a_{ij} \alpha^{j-1}\beta^{i-1}$$

and also

$$c = \sum_{k=1}^{[C:A]} c_k \left( \sum_{i = 1}^{[C:B]} \sum_{j = 1}^{[B:A]} f_{ijk} \alpha^{j-1}\beta^{i-1} \right) = \sum_{i = 1}^{[C:B]} \sum_{j = 1}^{[B:A]} \left( \sum_{k=1}^{[C:A]} c_k f_{ijk} \right) \alpha^{j-1}\beta^{i-1}$$

Therefore

$$a_{ij} = \sum_{k=1}^{[C:A]} c_k f_{ijk}$$

So we are reduced to solving this system of $[C:A] = [C:B] \cdot [B:A]$ linear equations for the $c_k$.

- With some difficulty, I think. Take a fairly simple example. Let $A$ be the rationals, $\alpha=\sqrt2$, $\beta=\sqrt3$, $\gamma=\sqrt2+\sqrt3$. You have $c=a_{11}+a_{12}\sqrt2+a_{21}\sqrt3+a_{22}\sqrt6$, and you want $c=c_1+c_2(\sqrt2+\sqrt3)+c_3(\sqrt2+\sqrt3)^2+c_4(\sqrt2+\sqrt3)^3$. Expressing the $c_i$ in terms of the $a_{ij}$ (and the minimal polynomials) looks mildly unpleasant. – Gerry Myerson Sep 8 '11 at 6:30
- I would have thought that you can't, since there may be several values of $\gamma$ with the same $A(\gamma)$ and $m(\gamma,A)$ that will generally have different $c_i$? For example, consider $A=\mathbb Q$, $\alpha=1$, $\beta=\sqrt2$ and $\gamma=\pm\sqrt2$. Then $A(\gamma)=\mathbb Q(\sqrt2)$ and $m(\gamma,A)=x^2-2$ for both signs, but the sign of $c_2$ is flipped. – joriki Sep 8 '11 at 6:46
- @Gerry Unpleasant or not, I need to find an algorithm to do it. – maxpower Sep 8 '11 at 9:20
- @maxpower: I don't understand. How do the $a_{ij}$ help? They're the same for both signs in my example. – joriki Sep 8 '11 at 9:29
- @maxpower: I agree with joriki. The problem specification must include a way of identifying the element $\gamma$ in terms of $\alpha$ and $\beta$. Another example would be $\alpha=\sqrt2$, $\beta=i$, $\gamma$ any primitive eighth root of unity. All four sign combinations are possible in $\gamma=\pm\alpha(1\pm\beta)/2$ in that all those numbers share the same minimal polynomial $x^4+1$. Unless you know which combination is $\gamma$, the problem cannot be solved. – Jyrki Lahtonen Sep 8 '11 at 9:44

The $c_i$ are not determined by the given data. Different values of $\gamma$ can lead to identical $A(\gamma)$ and $m(\gamma,A)$ but different values of $c_i$, as in the examples given in comments by Jyrki and me.
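Once a specific $\gamma$ is fixed, the reduction above can be carried out mechanically with a computer algebra system. Here is a minimal sympy sketch for the concrete instance from Gerry's comment ($A=\mathbb Q$, $\alpha=\sqrt2$, $\beta=\sqrt3$, $\gamma=\sqrt2+\sqrt3$); the helper names are mine, not part of the question:

```python
from sympy import sqrt, expand, Matrix, Rational

# Worked instance: A = Q, alpha = sqrt(2), beta = sqrt(3),
# gamma = sqrt(2) + sqrt(3), so [C:A] = 4 with A-basis
# {1, sqrt(2), sqrt(3), sqrt(6)} = {alpha^(j-1) * beta^(i-1)}.
alpha, beta = sqrt(2), sqrt(3)
gamma = alpha + beta

def coords(expr):
    """Coordinates of expr in the basis {1, sqrt(2), sqrt(3), sqrt(6)} over Q."""
    e = expand(expr)
    c3 = e.coeff(sqrt(6))
    e = e - c3 * sqrt(6)
    c1 = e.coeff(sqrt(2))
    c2 = e.coeff(sqrt(3))
    c0 = e - c1 * sqrt(2) - c2 * sqrt(3)
    return [c0, c1, c2, c3]

# Columns of F are the coordinate vectors f_{ij,k} of gamma^(k-1).
F = Matrix([coords(gamma**k) for k in range(4)]).T

def to_gamma_powers(c):
    """Solve F * (c_1, ..., c_4)^T = a, i.e. a_{ij} = sum_k c_k f_{ijk}."""
    return list(F.solve(Matrix(coords(c))))

print(to_gamma_powers(sqrt(2)))  # [0, -9/2, 0, 1/2]
```

That is, $\sqrt2 = -\tfrac92\gamma + \tfrac12\gamma^3$, and solving $F\,c = a$ is exactly the linear system $a_{ij} = \sum_k c_k f_{ijk}$ from the question.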
http://pypla.net/en/latest/api/core_storage.html
# pyplanet.core.storage

exception pyplanet.core.storage.exceptions.StorageException[source]
    Base storage exception.

class pyplanet.core.storage.storage.Storage(instance, driver: pyplanet.core.storage.interface.StorageDriver, config)[source]
    The storage component manager manages storage access through drivers that can be customized.

    Warning: Some drivers are work in progress!

    driver
        Get the raw driver. Be careful with this!
        Returns: Driver instance (pyplanet.core.storage.interface.StorageDriver).

    open(file: str, mode: str = 'rb', **kwargs)[source]
        Open a file on the server. Use a path relative to the dedicated root. Use the other open methods to open relative to another base path.
        Parameters:
        - file – Filename/path, relative to the dedicated root path.
        - mode – Mode to open; see the Python open manual for supported modes.
        Returns: File handler.

    open_map(file: str, mode: str = 'rb', **kwargs)[source]
        Open a file on the server, relative to the Maps folder (UserData/Maps).
        Parameters:
        - file – Filename/path, relative to the dedicated maps folder.
        - mode – Mode to open; see the Python open manual for supported modes.
        Returns: File handler.

    open_match_settings(file: str, mode: str = 'r', **kwargs)[source]
        Open a file on the server, relative to the MatchSettings folder (UserData/Maps/MatchSettings).
        Parameters:
        - file – Filename/path, relative to the dedicated matchsettings folder.
        - mode – Mode to open; see the Python open manual for supported modes.
        Returns: File handler.

    remove_map(file: str)[source]
        Remove a map file with the given filename.
        Parameters:
        - file – Filename, relative to the Maps folder.

## pyplanet.core.storage.drivers

class pyplanet.core.storage.drivers.local.LocalDriver(instance, config: dict = None)[source]
    The local storage driver uses Python's built-in file access utilities to access a local storage-like system.
    Option BASE_PATH: Override the ManiaPlanet-given base path.
class pyplanet.core.storage.drivers.asyncssh.SFTPDriver(instance, config: dict = None)[source]
    The SFTP storage driver uses the asyncssh module to access storage that is situated remotely.

    Warning: This driver is not ready for production use!!

    Option HOST: Hostname of the destination server.
    Option PORT: Port of the destination server.
    Further options cover the username of the user account, the password of the user account (optional if you use public/private keys), the path to the Known Hosts file, an array with client private keys, the passphrase to unlock the private key(s), and any other options that will be passed to asyncssh.

    connect_sftp()[source]
        Get the SFTP client.
        Returns: SFTP client (asyncssh.SFTPClient).
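The Storage-delegates-to-driver design described above can be illustrated with a small self-contained sketch. These are hypothetical stand-in classes, not the real pyplanet classes — just the shape of the pattern:

```python
import os
import tempfile

class LocalDriver:
    """Stand-in driver: resolves paths against a base path and opens them locally."""
    def __init__(self, base_path):
        self.base_path = base_path

    def open(self, file, mode="rb"):
        return open(os.path.join(self.base_path, file), mode)

class Storage:
    """Stand-in manager: delegates all file access to a configurable driver."""
    MAP_FOLDER = "UserData/Maps"

    def __init__(self, driver):
        self.driver = driver

    def open(self, file, mode="rb"):
        # Relative to the dedicated root, like Storage.open above.
        return self.driver.open(file, mode)

    def open_map(self, file, mode="rb"):
        # Relative to the Maps folder, like Storage.open_map above.
        return self.driver.open(os.path.join(self.MAP_FOLDER, file), mode)

# Demo against a temporary directory standing in for the dedicated root.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "UserData/Maps"))
with open(os.path.join(root, "UserData/Maps/demo.Map.Gbx"), "w") as f:
    f.write("map data")

storage = Storage(LocalDriver(root))
with storage.open_map("demo.Map.Gbx", "r") as f:
    print(f.read())  # map data
```

Swapping `LocalDriver` for an SFTP-backed driver would leave the `Storage` call sites unchanged, which is the point of the driver abstraction.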
https://www.shaalaa.com/textbook-solutions/c/ncert-solutions-physics-textbook-class-11-part-2-chapter-10-mechanical-properties-of-fluids_158
# NCERT solutions Physics Class 11 Part 2 chapter 10 Mechanical Properties of Fluids

## Chapter 10 - Mechanical Properties of Fluids

#### Pages 268 - 271

Explain why the blood pressure in humans is greater at the feet than at the brain.
Q 1.1 | Page 268

Explain why atmospheric pressure at a height of about 6 km decreases to nearly half of its value at the sea level, though the height of the atmosphere is more than 100 km.
Q 1.2 | Page 268

Explain why hydrostatic pressure is a scalar quantity even though pressure is force divided by area.
Q 1.3 | Page 268

Explain why the angle of contact of mercury with glass is obtuse, while that of water with glass is acute.
Q 2.1 | Page 268

Explain why water on a clean glass surface tends to spread out while mercury on the same surface tends to form drops. (Put differently, water wets glass while mercury does not.)
Q 2.2 | Page 268

Explain why the surface tension of a liquid is independent of the area of the surface.
Q 2.3 | Page 268

Explain why water with detergent dissolved in it should have small angles of contact.
Q 2.4 | Page 268

Explain why a drop of liquid under no external forces is always spherical in shape.
Q 2.5 | Page 268

Fill in the blanks using the word(s) from the list appended with each statement: Surface tension of liquids generally . . . with temperature. (increases / decreases)
Q 3.1 | Page 268

Fill in the blanks using the word(s) from the list appended with each statement: Viscosity of gases . . . with temperature, whereas viscosity of liquids . . . with temperature. (increases / decreases)
Q 3.2 | Page 268

Fill in the blanks using the word(s) from the list appended with each statement: For solids with elastic modulus of rigidity, the shearing force is proportional to . . . , while for fluids it is proportional to . . . (shear strain / rate of shear strain)
Q 3.3 | Page 268

Fill in the blanks using the word(s) from the list appended with each statement: For a fluid in a steady flow, the increase in flow speed at a constriction follows . . . (conservation of mass / Bernoulli's principle)
Q 3.4 | Page 268

Fill in the blanks using the word(s) from the list appended with each statement: For the model of a plane in a wind tunnel, turbulence occurs at a . . . speed for turbulence for an actual plane. (greater / smaller)
Q 3.5 | Page 268

Explain why, to keep a piece of paper horizontal, you should blow over it, not under it.
Q 4.1 | Page 268

Explain why, when we try to close a water tap with our fingers, fast jets of water gush through the openings between our fingers.
Q 4.2 | Page 268

Explain why the size of the needle of a syringe controls flow rate better than the thumb pressure exerted by a doctor while administering an injection.
Q 4.3 | Page 268

Explain why a fluid flowing out of a small hole in a vessel results in a backward thrust on the vessel.
Q 4.4 | Page 268

Explain why a spinning cricket ball in air does not follow a parabolic trajectory.
Q 4.5 | Page 268

A 50 kg girl wearing high heel shoes balances on a single heel. The heel is circular with a diameter of 1.0 cm. What is the pressure exerted by the heel on the horizontal floor?
Q 5 | Page 268

Torricelli's barometer used mercury. Pascal duplicated it using French wine of density 984 kg m–3. Determine the height of the wine column for normal atmospheric pressure.
Q 6 | Page 269

A vertical off-shore structure is built to withstand a maximum stress of 10^9 Pa. Is the structure suitable for putting up on top of an oil well in the ocean? Take the depth of the ocean to be roughly 3 km, and ignore ocean currents.
Q 7 | Page 269

A hydraulic automobile lift is designed to lift cars with a maximum mass of 3000 kg. The area of cross-section of the piston carrying the load is 425 cm2. What maximum pressure would the smaller piston have to bear?
Q 8 | Page 269

A U-tube contains water and methylated spirit separated by mercury. The mercury columns in the two arms are in level with 10.0 cm of water in one arm and 12.5 cm of spirit in the other. What is the specific gravity of spirit?
Q 9 | Page 269

In problem 10.9, if 15.0 cm of water and spirit each are further poured into the respective arms of the tube, what is the difference in the levels of mercury in the two arms? (Specific gravity of mercury = 13.6)
Q 10 | Page 269

Can Bernoulli's equation be used to describe the flow of water through a rapid in a river? Explain.
Q 11 | Page 269

Does it matter if one uses gauge instead of absolute pressures in applying Bernoulli's equation? Explain.
Q 12 | Page 269

Glycerine flows steadily through a horizontal tube of length 1.5 m and radius 1.0 cm. If the amount of glycerine collected per second at one end is 4.0 × 10–3 kg s–1, what is the pressure difference between the two ends of the tube? (Density of glycerine = 1.3 × 103 kg m–3 and viscosity of glycerine = 0.83 Pa s.) [You may also like to check if the assumption of laminar flow in the tube is correct.]
Q 13 | Page 269

In a test experiment on a model aeroplane in a wind tunnel, the flow speeds on the upper and lower surfaces of the wing are 70 m s–1 and 63 m s–1 respectively. What is the lift on the wing if its area is 2.5 m2? Take the density of air to be 1.3 kg m–3.
Q 14 | Page 269

Figures (a) and (b) refer to the steady flow of a (non-viscous) liquid. Which of the two figures is incorrect? Why?
Q 15 | Page 269

The cylindrical tube of a spray pump has a cross-section of 8.0 cm2, one end of which has 40 fine holes each of diameter 1.0 mm. If the liquid flow inside the tube is 1.5 m min–1, what is the speed of ejection of the liquid through the holes?
Q 16 | Page 269

A U-shaped wire is dipped in a soap solution and removed. The thin soap film formed between the wire and the light slider supports a weight of 1.5 × 10–2 N (which includes the small weight of the slider). The length of the slider is 30 cm. What is the surface tension of the film?
Q 17 | Page 269

Figure (a) shows a thin liquid film supporting a small weight = 4.5 × 10–2 N. What is the weight supported by a film of the same liquid at the same temperature in Fig. (b) and (c)? Explain your answer physically.
Q 18 | Page 269

What is the pressure inside a drop of mercury of radius 3.00 mm at room temperature? The surface tension of mercury at that temperature (20°C) is 4.65 × 10–1 N m–1. The atmospheric pressure is 1.01 × 105 Pa. Also give the excess pressure inside the drop.
Q 19 | Page 270

What is the excess pressure inside a bubble of soap solution of radius 5.00 mm, given that the surface tension of soap solution at the temperature (20 °C) is 2.50 × 10–2 N m–1? If an air bubble of the same dimension were formed at a depth of 40.0 cm inside a container containing the soap solution (of relative density 1.20), what would be the pressure inside the bubble? (1 atmospheric pressure is 1.01 × 105 Pa.)
Q 20 | Page 270

A tank with a square base of area 1.0 m2 is divided by a vertical partition in the middle. The bottom of the partition has a small hinged door of area 20 cm2. The tank is filled with water in one compartment, and an acid (of relative density 1.7) in the other, both to a height of 4.0 m. Compute the force necessary to keep the door closed.
Q 21 | Page 270

A manometer reads the pressure of a gas in an enclosure as shown in Figure (a). When a pump removes some of the gas, the manometer reads as in Figure (b). The liquid used in the manometers is mercury and the atmospheric pressure is 76 cm of mercury. (a) Give the absolute and gauge pressure of the gas in the enclosure for cases (a) and (b), in units of cm of mercury. (b) How would the levels change in case (b) if 13.6 cm of water (immiscible with mercury) are poured into the right limb of the manometer? (Ignore the small change in the volume of the gas.)
Q 22 | Page 270

Two vessels have the same base area but different shapes. The first vessel takes twice the volume of water that the second vessel requires to fill up to a particular common height. Is the force exerted by the water on the base of the vessel the same in the two cases? If so, why do the vessels filled with water to that same height give different readings on a weighing scale?
Q 23 | Page 270

During blood transfusion the needle is inserted in a vein where the gauge pressure is 2000 Pa. At what height must the blood container be placed so that blood may just enter the vein? [Use the density of whole blood from Table 10.1.]
Q 24 | Page 271

In deriving Bernoulli's equation, we equated the work done on the fluid in the tube to its change in the potential and kinetic energy. (a) What is the largest average velocity of blood flow in an artery of diameter 2 × 10–3 m if the flow must remain laminar? (b) Do the dissipative forces become more important as the fluid velocity increases? Discuss qualitatively.
Q 25 | Page 271

(a) What is the largest average velocity of blood flow in an artery of radius 2 × 10–3 m if the flow must remain laminar? (b) What is the corresponding flow rate? (Take the viscosity of blood to be 2.084 × 10–3 Pa s.)
Q 26 | Page 271

A plane is in level flight at a constant speed and each of its two wings has an area of 25 m2. If the speed of the air is 180 km/h over the lower wing and 234 km/h over the upper wing surface, determine the plane's mass. (Take air density to be 1 kg m–3.)
Q 27 | Page 271

In Millikan's oil drop experiment, what is the terminal speed of an uncharged drop of radius 2.0 × 10–5 m and density 1.2 × 103 kg m–3? Take the viscosity of air at the temperature of the experiment to be 1.8 × 10–5 Pa s. How much is the viscous force on the drop at that speed? Neglect the buoyancy of the drop due to air.
Q 28 | Page 271

Mercury has an angle of contact equal to 140° with soda lime glass. A narrow tube of radius 1.00 mm made of this glass is dipped in a trough containing mercury. By what amount does the mercury dip down in the tube relative to the liquid surface outside? Surface tension of mercury at the temperature of the experiment is 0.465 N m–1. Density of mercury = 13.6 × 103 kg m–3.
Q 29 | Page 271

Two narrow bores of diameters 3.0 mm and 6.0 mm are joined together to form a U-tube open at both ends. If the U-tube contains water, what is the difference in its levels in the two limbs of the tube? Surface tension of water at the temperature of the experiment is 7.3 × 10–2 N m–1. Take the angle of contact to be zero and the density of water to be 1.0 × 103 kg m–3 (g = 9.8 m s–2).
Q 30 | Page 271

It is known that the density ρ of air decreases with height y as ρ = ρ_0 e^(–y/y_0), where ρ_0 = 1.25 kg m–3 is the density at sea level and y_0 is a constant. This density variation is called the law of atmospheres. Obtain this law assuming that the temperature of the atmosphere remains a constant (isothermal conditions). Also assume that the value of g remains constant.
Q 31.1 | Page 271

A large He balloon of volume 1425 m3 is used to lift a payload of 400 kg. Assume that the balloon maintains constant radius as it rises. How high does it rise? [Take y_0 = 8000 m and ρ_He = 0.18 kg m–3.]
Q 31.2 | Page 271
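As an illustration of the arithmetic involved, here is a short Python sketch for Q 29 above (capillary depression of mercury), using h = 2σ cos θ / (ρ g r); the variable names are mine:

```python
import math

# Q 29: capillary rise formula h = 2*sigma*cos(theta) / (rho * g * r).
# A negative h indicates a depression (mercury dips below the outside level).
sigma = 0.465              # surface tension of mercury, N/m
theta = math.radians(140)  # angle of contact with soda lime glass
rho = 13.6e3               # density of mercury, kg/m^3
g = 9.8                    # m/s^2
r = 1.00e-3                # tube radius, m

h = 2 * sigma * math.cos(theta) / (rho * g * r)
print(abs(h) * 1000)  # dip in mm, roughly 5.3 mm
```

The obtuse contact angle makes cos θ negative, which is why mercury is depressed rather than raised in the tube.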
https://orbital-mechanics.space/orbital-maneuvers/nonimpulsive-orbital-maneuvers.html
# Nonimpulsive Orbital Maneuvers#

Up to now, all the maneuvers we have considered have been impulsive. This means that they happen extremely quickly relative to the time scale of the overall maneuver or trajectory. Practically, this means that the position vector is constant during the impulse.

However, there are very useful maneuvers that can be performed by providing nonimpulsive thrust. For instance, providing a very low thrust over a sufficiently long time period may be more efficient, depending on the propellant source, than impulsive maneuvers. Nonimpulsive maneuvers typically use propulsion devices such as solar sails and ion engines.

Since the position is changing during the impulse, we must return to the equation of motion and add an additional force term to the right-hand side:

$\ddot{\vector{r}} = -\mu\frac{\vector{r}}{r^3} + \frac{\vector{F}}{m}$

Assuming that the force is a thrust provided in the same direction as the velocity, then:

$\vector{F} = T\frac{\vector{v}}{v}$

where $$T$$ is the magnitude of the thrust force and $$\vector{v} = \dot{\vector{r}}$$. If the thrust is provided in the opposite direction of the velocity, then a negative sign should be added to the previous equation. In either case, this results in three scalar equations of motion:

\begin{aligned}\ddot{x} &= -\mu\frac{x}{r^3} + \frac{T}{m}\frac{\dot{x}}{v} & \ddot{y} &= -\mu\frac{y}{r^3} + \frac{T}{m}\frac{\dot{y}}{v} & \ddot{z} &= -\mu\frac{z}{r^3} + \frac{T}{m}\frac{\dot{z}}{v}\end{aligned}

In addition, to provide the thrust $$T$$, the rocket motors must eject propellant overboard. This causes the mass of the spacecraft to decrease according to:

$\frac{dm}{dt} = -\frac{T}{I_{sp}g_0}$

where $$m$$ is the instantaneous mass of the spacecraft, $$I_{sp}$$ is the specific impulse of the engine/propellant combination, and $$g_0$$ is the sea-level acceleration of gravity. This set of differential equations does not have an analytical solution in general.
However, we can construct a numerical solution by writing the system of ODEs as the six components of the state vector (3 position and 3 velocity) plus the equation for the mass.
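A minimal numerical sketch of that seven-state system, using SciPy's `solve_ivp`. All numerical values here (Earth's μ, a 10 N thruster, a 2000 kg spacecraft in a 6778 km circular orbit for one hour) are illustrative assumptions, not values from the text:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed illustrative values (km-kg-s units; 1 kN/kg = 1 km/s^2).
mu = 398_600.0   # km^3/s^2, Earth
T = 10.0e-3      # thrust, kN (10 N), along the velocity vector
Isp = 300.0      # specific impulse, s
g0 = 9.807e-3    # sea-level gravity, km/s^2

def rates(t, state):
    """State = [x, y, z, vx, vy, vz, m]: gravity + tangential thrust + mass flow."""
    x, y, z, vx, vy, vz, m = state
    r = np.sqrt(x**2 + y**2 + z**2)
    v = np.sqrt(vx**2 + vy**2 + vz**2)
    a_t = T / m  # thrust acceleration magnitude, km/s^2
    return [vx, vy, vz,
            -mu * x / r**3 + a_t * vx / v,
            -mu * y / r**3 + a_t * vy / v,
            -mu * z / r**3 + a_t * vz / v,
            -T / (Isp * g0)]

# Start in a 6778 km circular orbit with a 2000 kg spacecraft.
r0 = 6778.0
v0 = np.sqrt(mu / r0)
sol = solve_ivp(rates, (0.0, 3600.0), [r0, 0, 0, 0, v0, 0, 2000.0],
                rtol=1e-9, atol=1e-9)

final_radius = np.linalg.norm(sol.y[:3, -1])
final_mass = sol.y[6, -1]
print(final_radius, final_mass)  # the orbit slowly spirals outward, mass drops ~12 kg
```

Because the thrust is always aligned with the velocity, the orbital energy increases monotonically and the trajectory spirals outward, which is the qualitative behavior of low-thrust raising maneuvers.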
https://tex.stackexchange.com/questions/297334/lyx-document-compiled-in-osx
# LyX document compiled in OSX?

LyX is a WYSIWYG LaTeX editor that I find useful for writing exercises quickly, instead of TextMate, which I use for long-term projects. So the below shows an exercise that I would like to print as a PDF; however, I am getting Undefined control sequence errors, which are very vague. I cannot see any \begin{document} or other LaTeX commands in LyX's view, which makes debugging harder than in text-based editors. So how can I compile a document in LyX?

• the fact that \mathbb is what's unidentified means that amsfonts (or if you need more symbols, amssymb, which loads amsfonts) isn't loaded. so try adding that to your job. (but i don't know how, not being a lyx user.) – barbara beeton Mar 4 '16 at 17:51
• @barbarabeeton thank you for the observation: I realized from that to search for packages and I found a solution, provided below for new LyX users, nice software :) – hhh Mar 4 '16 at 18:06
• What options do you have set in Document > Settings > Math Options? – scottkosty Mar 4 '16 at 18:35
• @scottkosty thank you for the observation, added the alternative method here. – hhh Mar 5 '16 at 16:20
• @hhh looks good. I would add that there is an advantage of doing it the LyX way because that way LyX knows about the packages that are being used. This way LyX can load them in the best order (sometimes this makes a difference in LaTeX, although in this case I doubt it). – scottkosty Mar 5 '16 at 16:57

Document > Settings > LaTeX Preamble

Go there and add your preamble, such as \usepackage{amsmath,amsfonts,amssymb,amsthm}, and then CMD+R to get the document compiled like in TextMate -- it works without writing any \begin{document}.

Document > Settings > Math Options

A more graphical method, as pointed out in the comments, is to toggle the settings in Math Options like the below, and in this case you don't need to add the mathematical packages to the LaTeX Preamble.
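For readers coming from plain LaTeX, the preamble method amounts to the following minimal sketch (amssymb alone would already suffice for \mathbb, since it loads amsfonts):

```latex
\documentclass{article}
% amssymb loads amsfonts, which defines \mathbb
\usepackage{amsmath,amsfonts,amssymb,amsthm}
\begin{document}
Let $x \in \mathbb{R}$. % compiles once amsfonts is loaded
\end{document}
```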
https://math.stackexchange.com/questions/386766/power-series-and-integral-test
# Power Series and Integral test

So I'm studying for my exams and there are a few questions that I don't completely understand. I need help with questions (b) and (d). For (b), I ended up getting divergence, because the limit is infinity? Also, I have no clue how to do (d).

So I also spent all night trying to learn power series and I need help with (d), (e) and (f). I ended up obtaining the right answer for (d), but I'm not sure if my method of working it out is correct. Basically, what I did first was the ratio test, and I ended up getting $|x^2| \lim_{k\to\infty} \frac{\log(1+k)}{\log(2+k)}$. Am I supposed to use L'Hopital's rule to find the limit? Because I used it, and the interval ended up being from $-1$ to $1$. For (e) I got $\lim_{k\to\infty} \frac{k^k}{(k+1)^k}$; what do I do from there? By the way, the answers to (b) and (d) for the integral test are convergent for both of them, and the answers to (d), (e) and (f) for the power series are 1, 2e and 0, respectively.

• Help is very much appreciated :) – George Randall May 9 '13 at 16:46

Under "Integral Test": A useful result all students should work out at least once in their "calculus lives" is $\int_1^{\infty} \frac{1}{x^p} \, dx \ ,$ to see for what values of $p$ this integral converges or diverges (you may have already seen this when you covered Type I improper integrals). It will help in spotting which "p-series" $\Sigma_{n=1}^{\infty} \frac{1}{n^p}$ converge. In your set, you'll need to use u-substitution in examining $\int_0^{\infty} \frac{1}{(x+1)^{\gamma}} \, dx \ ,$ and partial fraction decomposition for $\int_0^{\infty} \frac{1}{(x+1) \cdot (x+2)} \, dx$ (or use the hint). (And having re-read your last sentences, yes, (b) and (d) converge.)

Under "Power Series": I believe you are correct for (d), (e), and (f).
The Ratio Test for (d) produces $$\lim_{k \rightarrow \infty} \left| \ \frac{x^{2k+2}}{x^{2k}} \ \cdot \ \frac{\log (k+1)}{\log (k+2)} \ \right| \ = \ \lim_{k \rightarrow \infty} \left| \ x^2 \ \cdot \ \frac{\log (k+1)}{\log (k+2)} \ \right| ,$$ and I don't think you need a lot of justification to declare that the limit of the ratio of logarithms is $1$. (The ratio is equivalent to $\log_{k + 2} (k+1)$.)

As for (e) and (f), these both hinge on dealing with $k^k$ in some manner. For (e), the ratio includes the factors $$\frac{(k+1)! \cdot k^k}{k! \cdot (k+1)^{(k+1)}} \ ,$$ which reduce to $$(k+1) \ \cdot \ \frac{ k^k}{ (k+1)^{(k+1)}} \ = \ \left( \frac{k}{ k+1}\right)^k \ ,$$ for which the limit at infinity can be found by appropriate use of l'Hopital's Rule on "indeterminate powers", or familiarity with the behavior of this function (as many members of this forum handle it).

Here is one example, your first one. When you integrate that square-root term, you get $2\sqrt{x+1}$, and so this integral is divergent (WHY?). Therefore the series is divergent. For the third example, the antiderivative is an arctan, so can you show that this one will be convergent? For the first example of radius of convergence, with the ratio test (check your book for the definition!) you can find that the interval of convergence is between $-3$ and $3$. So what's the radius? Now you can try some more...
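Neither answer needs it, but the two limits invoked above are easy to sanity-check with a computer algebra system; a minimal sympy sketch:

```python
from sympy import symbols, limit, log, oo, exp

k = symbols('k', positive=True)

# Limit of the ratio of logarithms from the Ratio Test for (d): equals 1.
print(limit(log(k + 1) / log(k + 2), k, oo))  # 1

# Limit of (k/(k+1))^k from the discussion of (e): equals 1/e.
print(limit((k / (k + 1))**k, k, oo))  # exp(-1)
```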
https://planetmath.org/MatrixPnorm
# matrix p-norm

A class of matrix norms, denoted $\|\cdot\|_{p}$, is defined as

$\|\,A\,\|_{p}=\sup_{x\neq 0}\frac{\|\,Ax\,\|_{p}}{\|\,x\,\|_{p}},\qquad x\in\mathbb{R}^{n},\ A\in\mathbb{R}^{m\times n}.$

The matrix $p$-norms are defined in terms of the vector $p$-norms (http://planetmath.org/VectorPNorm). An alternate definition is

$\|\,A\,\|_{p}=\max_{\|\,x\,\|_{p}=1}\|\,Ax\,\|_{p}.$

As with vector $p$-norms, the most important are the 1, 2, and $\infty$ norms. The 1 and $\infty$ norms are very easy to calculate for an arbitrary matrix:

$\|\,A\,\|_{1}=\max_{1\leq j\leq n}\sum_{i=1}^{m}|a_{ij}|,\qquad \|\,A\,\|_{\infty}=\max_{1\leq i\leq m}\sum_{j=1}^{n}|a_{ij}|.$

It directly follows from this that $\|\,A\,\|_{1}=\|\,A^{T}\,\|_{\infty}$.

The calculation of the $2$-norm is more complicated. However, it can be shown that the 2-norm of $A$ is the square root of the largest eigenvalue of $A^{T}A$. There are also various inequalities that allow one to make estimates on the value of $\|\,A\,\|_{2}$:

$\frac{1}{\sqrt{n}}\|\,A\,\|_{\infty}\leq\|\,A\,\|_{2}\leq\sqrt{m}\,\|\,A\,\|_{\infty}.$

$\frac{1}{\sqrt{m}}\|\,A\,\|_{1}\leq\|\,A\,\|_{2}\leq\sqrt{n}\,\|\,A\,\|_{1}.$

$\|\,A\,\|_{2}^{2}\leq\|\,A\,\|_{\infty}\cdot\|\,A\,\|_{1}.$

$\|\,A\,\|_{2}\leq\|\,A\,\|_{F}\leq\sqrt{n}\,\|\,A\,\|_{2}.$

($\|\,A\,\|_{F}$ is the Frobenius matrix norm.)

Title: matrix p-norm
Canonical name: MatrixPnorm
Date of creation: 2013-03-22 11:43:22
Last modified on: 2013-03-22 11:43:22
Owner: mathcam (2727)
Last modified by: mathcam (2727)
Numerical id: 20
Author: mathcam (2727)
Entry type: Definition
Classification: msc 15A60
Classification: msc 00A69
Related topic: MatrixNorm
Related topic: VectorNorm
Related topic: FrobeniusMatrixNorm
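These formulas are easy to sanity-check numerically; a short NumPy sketch (not part of the original entry) verifies the column-sum/row-sum formulas, the eigenvalue characterization of the 2-norm, and two of the inequalities on a random matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))  # m = 3, n = 4

one_norm = np.abs(A).sum(axis=0).max()   # max absolute column sum
inf_norm = np.abs(A).sum(axis=1).max()   # max absolute row sum
two_norm = np.sqrt(np.linalg.eigvalsh(A.T @ A).max())  # sqrt of largest eigenvalue of A^T A

# Agree with NumPy's built-in induced norms.
assert np.isclose(one_norm, np.linalg.norm(A, 1))
assert np.isclose(inf_norm, np.linalg.norm(A, np.inf))
assert np.isclose(two_norm, np.linalg.norm(A, 2))

# ||A||_1 = ||A^T||_inf
assert np.isclose(one_norm, np.linalg.norm(A.T, np.inf))

# ||A||_2 <= ||A||_F <= sqrt(n) * ||A||_2
fro = np.linalg.norm(A, 'fro')
assert two_norm <= fro + 1e-12 and fro <= np.sqrt(A.shape[1]) * two_norm + 1e-12
print(one_norm, inf_norm, two_norm)
```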
https://huggingface.co/docs/evaluate/installation
# Installation

Before you start, you will need to set up your environment and install the appropriate packages. 🤗 Evaluate is tested on Python 3.7+.

## Virtual environment

You should install 🤗 Evaluate in a virtual environment to keep everything neat and tidy.

1. Create and navigate to your project directory:

       mkdir ~/my-project
       cd ~/my-project

2. Start a virtual environment inside the directory:

       python -m venv .env

3. Activate and deactivate the virtual environment with the following commands:

       # Activate the virtual environment
       source .env/bin/activate

       # Deactivate the virtual environment
       source .env/bin/deactivate

Once you have created your virtual environment, you can install 🤗 Evaluate in it.

## pip

The most straightforward way to install 🤗 Evaluate is with pip:

    pip install evaluate

Run the following command to check if 🤗 Evaluate has been properly installed:

    python -c "import evaluate; print(evaluate.load('exact_match').compute(references=['hello'], predictions=['hello']))"

This should return:

    {'exact_match': 1.0}

## source

Building 🤗 Evaluate from source lets you make changes to the code base. To install from source, clone the repository and install with the following commands:

    git clone https://github.com/huggingface/evaluate.git
    cd evaluate
    pip install -e .

Again, you can check if 🤗 Evaluate has been properly installed with:

    python -c "import evaluate; print(evaluate.load('exact_match').compute(references=['hello'], predictions=['hello']))"
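For intuition about what the verification command above computes: exact match is simply the fraction of predictions that are identical to their references. A rough standalone sketch of that idea (not the actual 🤗 Evaluate implementation):

```python
def exact_match(predictions, references):
    # Fraction of predictions that equal their reference string exactly.
    matches = sum(p == r for p, r in zip(predictions, references))
    return {"exact_match": matches / len(references)}

print(exact_match(predictions=["hello"], references=["hello"]))
# {'exact_match': 1.0}
```

The real metric supports extra options (case folding, regex stripping, and so on); this sketch only shows the core computation.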
http://www.dcc.fc.up.pt/~rvr/aulas/AC1314/AA1314/syllabus/
# Rogério Reis

## Err and err and err again, but less and less and less.

14-10-2013 Lecturer: Rogério Reis

• Formal languages.
• Operations on languages.
• Regular languages.
• Regular expressions.
• Deterministic finite automata (DFA).
• Non-deterministic finite automata (NFA).
• NFA to DFA conversion.
• Operations on NFAs.
• Regular expression equivalent to an NFA (Brzozowski state elimination algorithm).
• Regular expression to NFA conversion (the Thompson automaton).
• Uniqueness of the minimal DFA (state distinguishability).
• Minimisation algorithms: Moore, Hopcroft and Brzozowski.

Bibliography

John E. Hopcroft and Jeffrey D. Ullman. Introduction to Automata Theory, Languages, and Computation. Addison-Wesley, 1979. ISBN 0-201-02988. The subjects of this class can be found in Chapter 2 of the Hopcroft & Ullman book (pages 13 to 54).

Jacques Sakarovitch. Elements of Automata Theory. Cambridge University Press, 2009. ISBN 978-0-521-84425-3.

21-10-2013 Lecturer: Rogério Reis

• The Myhill-Nerode Theorem.
• The "Pumping Lemma" for regular languages and its applications.
• The product semi-automaton. Union, intersection and complementation operations on automata.
• The class of regular languages is closed under the proportional removal and cyclic shift operations.
• The notion of the derivative of a regular expression with respect to one symbol.
• The Brzozowski derivative DFA of a regular expression.
• The Glushkov nondeterministic automaton of a regular expression.
• The notion of the partial derivative of a regular expression with respect to a symbol.
• The Antimirov nondeterministic automaton of a regular expression.
• Antimirov's algorithm to test the equivalence of two regular expressions.
• Very brief introduction to Analytic Combinatorics and its application to Descriptional Complexity.

Bibliography

The Myhill-Nerode Theorem, the Pumping Lemma, and the product and complementary automata constructions can be studied in Hopcroft & Ullman's book.

Jeffrey Shallit.
A Second Course in Formal Languages and Automata Theory. Cambridge University Press, 2009. ISBN 978-0-521-86572-2. The closure of the class of regular languages under the proportional removal and cyclic shift operations can be seen in this book (pages 58-61).

A brief description of the Glushkov and Antimirov automata can be studied here and here. A description and discussion of Antimirov's algorithm to compare regular expressions can be studied here. A brief introduction to Analytic Combinatorics and its application to Descriptional Complexity can be seen here. A paper with the main results on the state complexity of finite automata is available here.

## Descriptional Complexity DataBase (DesCo)

28-10-2013 Lecturer: Nelma Moreira

• Relations on words.
• Operations on relations: boolean, inverse, composition.
• Regular relations on words.
• Transducers over two alphabets.
• Synchronous and literal transducers.
• Composition of transducers.
• Sequential transducers.
• Determinization and conditions for a transducer to be determinizable.
• Normalization of sequential transducers.
• Minimization of sequential transducers.
• Semirings. Complete semirings.
• Weighted automata and transducers.
• Regulated weighted transducers.
• Regular operations on transducers.
• Composition of transducers over complete semirings (and without epsilon-paths).
• Determinization of weighted automata over a weakly divisible semiring.
• Weight pushing.
• Minimization of weighted automata.

Bibliography

Handbook of Weighted Automata. Editors: Manfred Droste, Werner Kuich, Heiko Vogler. Springer, 2009. ISBN 978-3-642-01491-8. Chapter: Weighted Automata Algorithms, Mehryar Mohri.

04-11-2013 Lecturer: Sabine Broda

• Model checking.
• Linear temporal logic (LTL).
• Transition systems and Kripke models.
• LTL semantics.
• Properties specification in LTL.
• Semantic equivalence in LTL.
• Complete set of connectives.
• Model checking example: mutual exclusion. Specification of models and properties.
• Non-deterministic finite automata and Büchi automata.
• Alternating finite automata.
• Alternating Büchi automata for an LTL formula and models.
• Automata-based model checking algorithm for LTL.

Bibliography

Logic in Computer Science: Modelling and Reasoning about Systems, Michael Huth and Mark Ryan. Cambridge University Press, 2004. Chapters 3.2-3.3.

An Automata-Theoretic Approach to Linear Temporal Logic, Moshe Y. Vardi. Logics for Concurrency, Lecture Notes in Computer Science, Volume 1043, 1996, pp. 238-266.

Some exercises on this subject.

10-11-2013

• The notion of cellular automata.
• Conway's "Game of Life" and its universality.
• Linear cellular automata and graphs.
• One-dimensional cellular automata: Wolfram coding.
• Two-dimensional cellular automata. Totalistic and outer-totalistic rules and their numeric coding.
• Reversible automata. Classification of all linear reversible finite cellular automata with support on the path graph $P_n$ over $\mathbb{Z}_2$ and $\mathbb{Z}_3$.
• The Wolfram cipher and the PKC system proposed by P. Guan.

Bibliography

Andrew Ilachinski. Cellular Automata: a Discrete Universe. World Scientific, 2001.

Stephen Wolfram, Cryptography with Cellular Automata, in "Advances in Cryptology: CRYPTO '85 Proceedings" [Williams, H. C. (Ed.)]. Lecture Notes in Computer Science 218. Springer-Verlag, 429-432, 1986.

Puhua Guan, Cellular Automaton Public-Key Cryptosystem, Complex Systems 1 (1987) 51-57.
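As an illustration of the Wolfram coding mentioned in the syllabus (not course material): for a one-dimensional binary cellular automaton, bit k of the rule number gives the next state of a cell whose 3-cell neighbourhood encodes the value k. A minimal Python sketch using rule 30:

```python
def step(cells, rule):
    """One synchronous update of a 1-D binary CA with cyclic boundary.

    Bit k of `rule` is the next state for the neighbourhood whose
    left/centre/right cells encode k = 4*l + 2*c + r (Wolfram coding)."""
    n = len(cells)
    return [(rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)]

# Rule 30: binary 00011110, a classic chaotic rule, started from one live cell.
cells = [0, 0, 0, 1, 0, 0, 0]
for _ in range(3):
    cells = step(cells, 30)
    print(cells)
```

The same `step` function covers all 256 elementary rules; only the rule number changes.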
http://scitation.aip.org/content/aip/journal/pof2/20/7/10.1063/1.2958319
Model of a truncated fast rotating flow at infinite Reynolds number

DOI: 10.1063/1.2958319

By L. Bourouiba
Affiliation: McGill University, 805 Sherbrooke W., Room 945, Montréal, Québec H3A 2K6, Canada. Electronic mail: [email protected].

Phys. Fluids 20, 075112 (2008)

## Figures

FIG. 1. Initial horizontal spectra of ICs, IC: I and IC: II.

FIG. 2. Time series of the energy contributions and for , 0.2, and 0.01, initiated with IC: I (a) and IC: II (b). The time axis is the dimensional time.

FIG. 3. Time series of the 2D enstrophy (a) and the energy of the vertical component of the 2D field (b) for , 0.2, and 0.01 for both IC: I and IC: II. The time axis is the dimensional time.

FIG. 4. Time series of the and for the simulation initialized with IC: I (a) and IC: II (b). The time axis is the nondimensional time .

FIG. 5. Horizontal wavenumber spectra of (upper panel) and (lower panel) for and for IC: I (left column) and IC: II (right column). The theoretical spectra have been offset for clarity. The initial numerical spectra are denoted and multiple lines are for different times.

FIG. 6. Time series of [centroid of 2D energy spectra defined by Eq. (39)], with nondimensional time and for . Both simulations started with IC: I and IC: II are shown.

FIG. 7. Horizontal wavenumber spectra for and for IC: I (a) and IC: II (b). The theoretical spectra have been offset for clarity. The initial numerical spectra are denoted and the multiple lines are for different times.

FIG. 8. Vertical spectra of 3D energy, , for simulations initiated with IC: I (a) and IC: II (b). The initial numerical spectra are denoted and the multiple lines are for different times. The theoretical spectra have been offset for clarity only for the simulation IC: II (a).

FIG. 9. Time averaged 3D energy spectrum in log-log scale at an initial time [(a) and (b)], an intermediate time range below such that [(c) and (d)], an intermediate time [(e) and (f)], and the end of the simulations with [(g) and (h)]. Both the and time ranges correspond to times larger than , i.e., beyond the decoupled phase. The ICs are IC: I (left column) and IC: II (right column). The colors are normalized for each graph such that the maximum (minimum) value of the modal spectrum is represented by the brightest (darkest) color.

## Tables

Table I. The timestep , the rotation rate , the final output time , the 2D Rossby number , and the Robert filter parameter for each of the selected simulations.
http://mathhelpforum.com/discrete-math/106135-solved-one-one-function-not-print.html
# [SOLVED] one-to-one function or not?

• Oct 4th 2009, 06:26 PM
smith.5954

[SOLVED] one-to-one function or not?

thanks

• Oct 4th 2009, 11:02 PM
Gamma

I think the biggest problem is that the function is not well defined: $f(1/2)=2\cdot 1 + 2=4$ but $f(2/4)=2 \cdot 2 + 4= 8$.

Even if it were well defined, it's not injective: $f(3/4)=f(1/8)$ but clearly $3/4 \not = 1/8$.
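The original problem statement was an attachment and is not reproduced here, but the values in the reply are consistent with a rule of the form f(p/q) = 2p + q on the rationals (an inference, not stated in the thread). A quick Python check of both failures:

```python
def f(p, q):
    # the assumed rule applied to a fraction written as p/q
    return 2 * p + q

# 1/2 and 2/4 are the same rational number, yet get different images,
# so f is not well defined on the rationals:
print(f(1, 2), f(2, 4))  # 4 8

# and even on fixed representations the map is not injective:
print(f(3, 4), f(1, 8))  # 10 10
```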
https://www.physicsforums.com/threads/boundary-value-problem.152879/
# Boundary value problem

1. Jan 24, 2007

### John O' Meara

Schrödinger's equation for one-dimensional motion of a particle whose potential energy is zero is $$\frac{d^2\psi}{dx^2} + \frac{2mE}{h^2}\psi = 0$$ where $$\psi$$ is the wave function, m the mass of the particle, E its kinetic energy and h is Planck's constant. Show that $$\psi = A\sin(kx) + B\cos(kx)$$ (where A and B are constants) and $$k =(2mE/h^2)^\frac{1}{2}$$ is a solution of the equation. Using the boundary conditions $$\psi=0$$ when x=0 and when x=a, show that (i) the kinetic energy $$E=h^2n^2/8ma^2$$ and (ii) the wave function $$\psi = A \sin(n\pi x/a)$$ where n is any integer. (Note: if $$\sin(\theta) = 0$$ then $$\theta=n\pi$$.)

My attempt: A*sin(0) + B*cos(0) = 0 => 0 + B = 0 => B = 0. Therefore A*sin(k*a) = 0, therefore $$(2mE/h^2)^\frac{1}{2}a = n\pi \Rightarrow E=n^2\pi^2h^2/2ma^2$$

1. The problem statement, all variables and given/known data
2. Relevant equations
3. The attempt at a solution

Last edited: Jan 24, 2007

2. Jan 24, 2007

### John O' Meara

(ii) should read A*sin(n*pi*x/a); I'm just not good at boundary value problems.

3. Jan 24, 2007

### HallsofIvy

Well, first, you haven't shown that that $\psi$ does, in fact, satisfy the differential equation. After that, yes, B = 0. Now, assuming A is not 0, that is, that $\psi$ is not itself identically 0, then yes, we must have sin(ka) = 0, so that $ka = (2mE/h^2)^{\frac{1}{2}}a = n\pi$. E follows exactly as you say.

4. Jan 25, 2007

### John O' Meara

I was able to show that $$\psi$$ is a solution of the equation; it is (i) and (ii) that I had trouble doing, especially (ii), i.e., $$\psi = A \sin(n\pi x/a)$$. Where did he get the argument "$$n\pi x/a$$", or more importantly, how does he expect me to get that argument of the sine? Also remember that in (i) the answer he has for E is $$h^2n^2/8ma^2$$, not what I got for E. Thanks for the help.

5. Jan 26, 2007

### John O' Meara

I hope someone can tell me how $$\psi$$ can go from $$A\sin((2mE/h^2)^\frac{1}{2}x)$$ to $$A\sin(n\pi x/a)$$. Thanks for the help.

6. Jan 26, 2007

### HallsofIvy

Well, when x = a, the argument is $n\pi$; what is $\sin(n\pi)$? Remember that you were told that $\psi(0)= 0$ and $\psi(a)= 0$. Knowing that cos(0) = 1 tells us that the second constant, B, must be 0. That leaves Asin(kx). We must have Asin(ka) = 0 and we don't want A = 0 (that would mean our function is always 0), so we must have sin(ka) = 0. For what x is sin(x) = 0? Multiples of $\pi$, of course: $ka = n\pi$. For that to be true, k must be equal to $n\pi/a$. You were also told that $k = \sqrt{2mE/h^2}$ and you now know $k = n\pi/a$, so $n\pi/a = \sqrt{2mE/h^2}$. Solve that for E.
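The two answers for E are reconciled if the "h" in the differential equation is read as the reduced Planck constant ħ = h/2π: then n²π²ħ²/(2ma²) = n²h²/(8ma²), which is the book's answer. A quick numerical check in Python (an illustration, not part of the thread; the electron mass and 1 nm box width are made-up example inputs):

```python
import math

h = 6.62607015e-34        # Planck constant, J*s
hbar = h / (2 * math.pi)  # reduced Planck constant

def energy_h(n, m, a):
    """Particle-in-a-box level written with h (the book's form)."""
    return n**2 * h**2 / (8 * m * a**2)

def energy_hbar(n, m, a):
    """The same level written with hbar (the form derived in the thread)."""
    return n**2 * math.pi**2 * hbar**2 / (2 * m * a**2)

m = 9.1093837015e-31      # electron mass, kg (example)
a = 1e-9                  # box width: 1 nm (example)

for n in (1, 2, 3):
    assert math.isclose(energy_h(n, m, a), energy_hbar(n, m, a))
    # boundary condition: sin(n*pi*x/a) vanishes at x = 0 and x = a
    assert math.sin(n * math.pi * 0 / a) == 0.0
    assert abs(math.sin(n * math.pi * a / a)) < 1e-9

print(energy_h(1, m, a))  # ground-state energy in joules
```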
https://de.maplesoft.com/support/help/errors/view.aspx?path=updates/Maple18/TimeSeriesAnalysis&L=G
Time Series Analysis - Maple Help

Time Series Analysis

The TimeSeriesAnalysis package is new to Maple 18 and deals with data that varies with time: in particular, data where the time intervals between data points are regular, as with macroeconomic data and in many other fields such as statistics, signal processing, econometrics, and mathematical finance. The package has many tools for working with such data, including tools for analyzing and modeling, finding patterns and forecasting, and visualizing time series data.

> with(TimeSeriesAnalysis);

    [AIC, AICc, Apply, BIC, BoxCoxTransform, Decomposition, Difference,
     ExponentialSmoothingModel, Forecast, GetData, GetDateFormat, GetDates,
     GetHeaders, GetParameter, GetParameters, GetPeriod, Initialize, Join,
     LogLikelihood, LogTransform, LongestDefinedSubsequence, NumberOfParameters,
     OneStepForecasts, Optimize, SeasonalSubseriesPlot, SetParameter,
     Specialize, TimeSeries, TimeSeriesPlot, Unapply]                           (1)

Forecasting Seasonal Analysis Using the Time Series Analysis Package

The following example uses a data set containing the number of monthly air passengers (in thousands of passengers) from 1949 until 1960. The data is from Box, Jenkins, and Reinsel, noted in the references below.
> path := FileTools:-JoinPath([kernelopts('datadir'), "datasets", "air_passengers.csv"]):
> data := ImportMatrix(path);

    data := [145 x 2 Matrix, Data Type: anything, Storage: rectangular, Order: Fortran_order]    (2)

• The data has one column of dates (year) and one column of data (monthly passengers). The first row is a header.

> data[1..5];

    ["Date"     "Monthly Passengers"]
    ["1949-01"  112]
    ["1949-02"  118]
    ["1949-03"  132]
    ["1949-04"  129]                                                                             (3)

• To work with this data, construct a TimeSeries object. Such an object can contain one or more data sets, measured at a common set of time points, as well as data headers and other metadata. You can construct this time series object as follows:

> ts := TimeSeries(data, 'header' = 1, 'dates' = 1, 'period' = 12);

    ts := [Time series: Monthly Passengers, 144 rows of data: 1949-01-01 - 1960-12-01]           (4)

• The options above indicate that the first row and column contain a header and the dates, respectively, and that you expect any seasonal characteristics to occur with period 12.

• To inspect the data, you can use the GetData, GetDates, and GetHeaders commands.

> GetData(ts);

    [144 x 1 Matrix, Data Type: float[8], Storage: rectangular, Order: Fortran_order]            (5)

> GetDates(ts);

    [1 .. 144 Vector[column], Data Type: anything, Storage: rectangular, Order: Fortran_order]   (6)

> GetHeaders(ts);

    ["Monthly Passengers"]                                                                       (7)

• Alternatively, you can plot the data using the TimeSeriesPlot command.

> TimeSeriesPlot(ts);

• We can also look for seasonal trends in our data using the SeasonalSubseriesPlot command. The following plot shows the number of passengers on a monthly basis.

> SeasonalSubseriesPlot(ts, seasonnames = ["January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December"], size = [800, 300]);

• You can now have Maple select a suitable model from a family of 30 related models, and adjust it to this time series. In fact, you will adjust it to the first 10 years of data; this can be done by specifying the time range as an index.

> tstrain := ts[.. "1958-12"];

    tstrain := [Time series: Monthly Passengers, 120 rows of data: 1949-01-01 - 1958-12-01]      (8)

    tsverify := [Time series: Monthly Passengers, 24 rows of data: 1959-01-01 - 1960-12-01]      (9)

• The family of models used is an exponential smoothing model.

> model := ExponentialSmoothingModel(tstrain);

    model := < an ETS(M,M,M) model >                                                             (10)

• You can observe that the best model in this case has multiplicative errors, a multiplicative trend, and multiplicative seasonal information. You can get the details on this exponential smoothing model using the GetParameters command.
> GetParameters(model);

    [errors = {"M"}, trend = {"M"}, seasonal = {"M"}, α = 0.667352215015678,
     β = 0.00136509636089604, γ = 0.000347993880774949, φ = 1., period = 12,
     l0 = 123.060900436068, b0 = 1.01002269643555,
     s = [1 .. 12 Vector[column], Data Type: float[8], Storage: rectangular, Order: Fortran_order],
     σ = 0.0346510099657471, constraints = "both"]                                               (11)

• Use the model to predict two years of future data using the Forecast command; the actual data for this time range is in tsverify.

> forecast := Forecast(model, tstrain, 24);

    forecast := [Time series: Monthly Passengers (forecast),
                 24 rows of data: 1958-12-31 - 1960-11-30]                                       (12)

• The forecast is itself a time series object that can be inspected using GetData, GetHeaders, and other commands.

> GetHeaders(forecast);

    ["Monthly Passengers (forecast)"]                                                            (13)

> TimeSeriesPlot(tstrain, tsverify, forecast);

• There is reasonable agreement between the forecast and the verification data. You get a better handle on this if you include confidence intervals for all data points. You can then see how often the true data falls within the boundaries of the confidence intervals.
> confidence := Forecast(model, tstrain, 24, output = confidenceintervals(80, 95));

    confidence := [Time series: Monthly Passengers (forecast - 2 percentile), ...,
                   Monthly Passengers (forecast - 98 percentile),
                   24 rows of data: 1958-12-31 - 1960-11-30]                                     (14)

> TimeSeriesPlot(tstrain, forecast, confidence, [tsverify, color = "Red", thickness = 3]);

• Use the model to decompose the data set into several components; the number of components depends on the model. This is done using the Decomposition command. For an exponential smoothing model, there are always level and residual components. There may also be trend and seasonal components, depending on whether or not the model has those properties. This can be used to, for example, correct for seasonal influences or smooth data.

> decomposition := Decomposition(model, tstrain);

    decomposition := [Time series: Monthly Passengers (residuals), ...,
                      Monthly Passengers (seasonal),
                      120 rows of data: 1949-01-01 - 1958-12-01]                                 (15)

> TimeSeriesPlot(tstrain, decomposition, split = pertimeseries);

• Because the seasonal component is multiplicative, you can compensate for it by dividing the original data by it.
> trainingdata := GetData(tstrain);

    trainingdata := [120 x 1 Matrix, Data Type: float[8], Storage: rectangular, Order: Fortran_order]   (16)

    seasonaldata := [120 x 1 Matrix, Data Type: float[8], Storage: rectangular, Order: Fortran_order]   (17)

    nonseasonaldata := [Time series: Monthly Passengers (deseasonalized),
                        120 rows of data: 1949-01-01 - 1958-12-01]                               (18)

> TimeSeriesPlot(nonseasonaldata);

Working with Time Series Data

There are several commands that can be used to modify existing time series objects. Using the data from above, you can use the Join command to merge the forecast with the training data set.

> merged := Join(tstrain, forecast);

    merged := [Time series: Monthly Passengers, Monthly Passengers (forecast),
               144 rows of data: 1949-01-01 - 1960-11-30]                                        (19)

> TimeSeriesPlot(merged);

• Another command, Difference, applies a differencing transformation in a flexible way. The LogTransform command applies a logarithm transformation, and BoxCoxTransform generalizes that to a general Box-Cox transformation.
> differenced := Apply(Difference, ts);

    differenced := [Time series: Monthly Passengers (differenced),
                    143 rows of data: 1949-02-01 - 1960-12-01]                                   (20)

> logs := Apply(LogTransform, ts);

    logs := [Time series: Logarithm of Monthly Passengers,
             144 rows of data: 1949-01-01 - 1960-12-01]                                          (21)

> boxcox := Apply(BoxCoxTransform(λ = 1/3), ts);

    boxcox := [Time series: Box-Cox transform of Monthly Passengers,
               144 rows of data: 1949-01-01 - 1960-12-01]                                        (22)

> TimeSeriesPlot(ts, differenced, logs, boxcox, split = pertimeseries);

More Details on Choosing Exponential Smoothing Models

You can manually step through the process of finding a suitable model for the data set using the Specialize, Initialize, and Optimize commands. The ExponentialSmoothingModel command generates an exponential smoothing model object; it represents a wide range of models. In this case, you can see that the data has a strong seasonal component, so you might be able to guess that you can discard the models that do not take the seasonal component into account.

> general_model := ExponentialSmoothingModel('seasonal' = {A, M});

    general_model := < an ETS(*,*,*) model >                                                     (23)

• You can now specialize this to all individual model formulations represented by the general model, that is, where the seasonal component is additive or multiplicative.
Some models are excluded by default because they are subject to numerical difficulties: the forecasts have infinite variance. (They can be included by overriding an option to Specialize.)

> individual_models := Specialize(general_model, ts)

    individual_models := [ < an ETS(A,A,A) model >, < an ETS(A,Ad,A) model >, < an ETS(A,N,A) model >, < an ETS(M,A,A) model >, < an ETS(M,A,M) model >, < an ETS(M,Ad,A) model >, < an ETS(M,Ad,M) model >, < an ETS(M,M,M) model >, < an ETS(M,Md,M) model >, < an ETS(M,N,A) model >, < an ETS(M,N,M) model > ]    (24)

• Each individual model now has a slightly different set of model equations. They all still have a number of parameters and initial values for the model's state variables, and need to be optimized for the best fit. This optimization process consists of a number of iterations. In each iteration, Maple picks a new set of values for the parameters and initial state values, then runs the simulation using the model, and finally computes the deviations from the actual observed data. The first set of values (which initializes the optimization process) is computed directly from the actual data, by the Initialize command.

> initialization_tables := map(Initialize, individual_models, ts):

• Here is an example of these initialization values:

> individual_models[1], initialization_tables[1]

    < an ETS(A,A,A) model >, table([ b0 = 0.646614725853865, α = 1/2, s = [ 1 .. 12 Vector[column], Data Type: float[8], Storage: rectangular, Order: Fortran_order ], l0 = 122.838670948617, γ = 1/100, β = 1/10 ])    (25)

• You can now perform the optimization.

> map(Optimize, individual_models, ts)

    [ -579.1189017, -587.8357482, -587.5947092, -577.5718572, -527.4987042, -559.4653237, -528.6819747, -527.8440927, -527.3996887, -565.0019522, -535.5928547 ]    (26)

• The optimization process sets all parameters and initial state values to the optimal values found. The Optimize command returns a measure of how close the fit between the simulation run and the actual data is. However, these models all have different numbers of parameters. If a similar fit is achieved with a model that has fewer parameters, then the principle of parsimony says you should prefer the latter model. This is quantified in a so-called information criterion: a function that takes both the closeness of the fit and the number of parameters into account. The TimeSeriesAnalysis package has three information criteria built in: two versions of Akaike's Information Criterion (AICc, which includes a correction for small data sizes, and AIC, which does not), and the Bayesian Information Criterion (BIC). Let us compare the BIC for all these models.
    < an ETS(A,A,A) model >,   1412.79583526645
    < an ETS(A,Ad,A) model >,  1445.84094061076
    < an ETS(A,N,A) model >,   1422.76337107009
    < an ETS(M,A,A) model >,   1722.03500867297
    < an ETS(M,A,M) model >,   1133.89411113851
    < an ETS(M,Ad,A) model >,  1945.38209291183
    < an ETS(M,Ad,M) model >,  1141.84855312639
    < an ETS(M,M,M) model >,   1139.65948604423
    < an ETS(M,Md,M) model >,  1141.03566969852
    < an ETS(M,N,A) model >,   2087.66988840404
    < an ETS(M,N,M) model >,   1150.95261586989    (27)

• From the results, the (M,A,M), (M,M,M), (M,Md,M), and (M,Ad,M) models give the best results in terms of the Bayesian information criterion.

References

Box, G.E.P., Jenkins, G.M., and Reinsel, G.C. (1976) Time Series Analysis, Forecasting and Control. Third Edition. Holden-Day. Series G.

Hyndman, R.J. and Athanasopoulos, G. (2013) Forecasting: Principles and Practice. http://otexts.org/fpp/. Accessed on 2013-10-09.

Hyndman, R.J., Koehler, A.B., Ord, J.K., and Snyder, R.D. (2008) Forecasting with Exponential Smoothing: The State Space Approach. Springer Series in Statistics. Springer-Verlag Berlin Heidelberg.
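The information criteria used above trade goodness of fit against parameter count. A rough Python sketch of the standard textbook formulas, in terms of the maximized log-likelihood logL, the number of parameters k, and the sample size n (these are the generic forms, not necessarily Maple's exact conventions, and the model values below are hypothetical):

```python
import math

def aic(logL, k):
    # Akaike's Information Criterion: penalty of 2 per parameter.
    return 2 * k - 2 * logL

def aicc(logL, k, n):
    # AIC with a small-sample correction; requires n > k + 1.
    return aic(logL, k) + 2 * k * (k + 1) / (n - k - 1)

def bic(logL, k, n):
    # Bayesian Information Criterion: penalty grows with log of the sample size.
    return k * math.log(n) - 2 * logL

# Two hypothetical models on n = 144 observations: nearly equal fit,
# but different complexity. Lower criterion value is better.
n = 144
rich = bic(-527.5, 17, n)    # richer model, slightly better likelihood
lean = bic(-528.0, 15, n)    # simpler model
print(lean < rich)           # True: parsimony favours the simpler model
```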
http://www.findstat.org/Graphs
Graphs (undirected, unlabelled, and simple)

# 1. Definition

A graph $G = (V,E)$ consists of

• a set $V$ of vertices (which we assume for simplicity to be $\{0,\ldots,n-1\}$), and
• a set $E \subseteq \binom{V}{2}$ of edges.

We consider graphs to be

• undirected: edges are unordered pairs of vertices,
• unlabelled: graphs $G = (V,E)$ and $G' = (V',E')$ are considered equal if there is a permutation $\pi : V \rightarrow V'$ such that $\{u,v\} \in E \Leftrightarrow \{\pi(u),\pi(v)\} \in E'$,
• simple: multiple edges and loops are disallowed.

# 2. Examples

The four (undirected, unlabelled, simple) graphs on three vertices $\{0,1,2\}$ are

• $E_1 = \{\}$,
• $E_2 = \{\{0,1\}\}$,
• $E_3 = \{\{0,1\},\{1,2\}\}$,
• $E_4 = \{\{0,1\},\{1,2\},\{0,2\}\}$.

# 3. Further Definitions

• If $\{u,v\}$ is an edge in a graph $G$, then $u$ and $v$ are adjacent vertices, also known as neighbors. The set of neighbors of $v$, denoted $N(v)$, is called the neighborhood of $v$. The closed neighborhood of $v$ is $N[v] = N(v) \cup \{v\}$.
• If two edges share a vertex in common (e.g. $\{u,v\}$ and $\{v,w\}$), then they are adjacent edges.
• The degree of a vertex $v$, denoted $\deg(v)$, is the number of vertices adjacent to $v$.
• We call $|V(G)|$, the cardinality of the vertices of a graph $G$, the order of the graph. We also say $|E(G)|$, the cardinality of the edges of a graph $G$, is the size of the graph.
• A graph of size 0 is called an empty graph. Any graph with at least one edge is called nonempty.
• A graph is complete when any two distinct vertices are adjacent. The complete graph on $n$ vertices is notated $K_{n}$.
• A planar graph is a graph that can be embedded in the plane, i.e. it can be drawn in the plane in such a way that its edges intersect only at their endpoints.
• A walk $W$ in a graph $G$ is a sequence of vertices in $G$, beginning at a vertex $u$ and ending at a vertex $v$, such that consecutive vertices in $W$ are adjacent in $G$.
• A walk whose initial and terminal vertices are distinct is called an open walk; otherwise it is a closed walk.
• A walk in which no edge repeats is called a trail.
• A path $P$ in a graph $G$ is a sequence of edges which connect a sequence of vertices that are all distinct from one another. A path can also be thought of as a walk with no repeated vertex.
• A simple path is one which contains no repeated vertices (in other words, it does not cross over itself).
• If there is a path from a vertex $u$ to a vertex $v$, then these two vertices are said to be connected. If every two vertices in a graph $G$ are connected, then $G$ is itself a connected graph.
• A nontrivial closed walk in a graph $G$ in which no edge is repeated is a circuit in $G$.
• A circuit with vertices $v_1, v_2, \ldots, v_k, v_1$ where $v_2, \ldots, v_k$ are all distinct is called a cycle.
• Let $G$ be a nontrivial connected graph. A circuit $C$ of $G$ that contains every edge of $G$ (necessarily exactly once) is called an Eulerian circuit. Any graph which contains an Eulerian circuit is called Eulerian.

# 4. Properties

## 4.1. Subgraphs

Given graphs $G$ and $H$, $H$ is a subgraph of $G$, notated $H \subseteq G$, if

• $V(H) \subseteq V(G)$
• $E(H) \subseteq E(G)$

$H$ is a proper subgraph if either

• $V(H) \subsetneq V(G)$
• $E(H) \subsetneq E(G)$

If $V(H) = V(G)$ and $E(H) \subseteq E(G)$, then $H$ is a spanning subgraph of $G$.

## 4.2. Edge Counting Theorem

If $G$ is a graph of size $m$, then $\sum_{v\in V(G)} \deg(v) = 2m$.

## 4.3. Eulerian Graph Criterion

A nontrivial connected graph $G$ is Eulerian if and only if every vertex of $G$ has even degree.

## 4.4. Four Color Theorem

Every planar graph is four-colorable. That is, the chromatic number of a planar graph is at most four.

# 5. Remarks

A labeled graph is a graph which has all of its vertices labeled. A multigraph is a graph in which multiple edges between vertices are allowed, as well as loops that connect a vertex to itself.
A directed graph or digraph is a graph in which each edge has a specific orientation or direction. We have the following 103 statistics in the database: The number of edges of a graph. The number of subgraphs. The number of induced subgraphs. The length of the maximal independent set of vertices of a graph. The number of triangles of a graph. The number of spanning trees of a graph. The order of the largest clique of the graph. The chromatic number of a graph. The degree of the graph. The Grundy number of a graph. The cardinality of the automorphism group of a graph. The burning number of a graph. The diameter of a connected graph. The radius of a connected graph. The edge connectivity of a graph. The vertex connectivity of a graph. The Szeged index of a graph. The girth of a graph, which is not a tree. The Wiener index of a graph. The number of spanning subgraphs of a graph with the same connected components. The number of maximal spanning forests contained in a graph. The number of strongly connected orientations of a graph. The number of acyclic orientations of a graph. The number of forests contained in a graph. The chromatic index of a connected graph. The treewidth of a graph. The domination number of a graph. The number of perfect matchings of a graph. The size of the preimage of the map 'to graph' from Ordered trees to Graphs. The size of the preimage of the map 'to graph' from Binary trees to Graphs. The number of connected components of the complement of a graph. The number of connected components of a graph. The number of nonisomorphic vertex-induced subtrees. The number of independent sets of vertices of a graph. The number of facets of the stable set polytope of a graph. The determinant of the distance matrix of a connected graph. The determinant of the product of the incidence matrix and its transpose of a gra.... The number of vertices with even degree. The minimal degree of a vertex of a graph. The number of vertices of odd degree in a graph. 
The number of leaves in a graph. The number of degree 2 vertices of a graph. The number of isolated vertices of a graph. The skewness of a graph. The minimal crossing number of a graph. The number of spanning subgraphs of a graph. The number of strongly connected outdegree sequences of a graph. The number of different adjacency matrices of a graph. The sum of the vertex degrees of a graph. The determinant of the adjacency matrix of a graph. The second Zagreb index of a graph. The size of a minimal vertex cover of a graph. The number of minimal vertex covers of a graph. The exponent of the automorphism group of a graph. The Altshuler-Steinberg determinant of a graph. The genus of a graph. The number of Hamiltonian cycles in a graph. The matching number of a graph. The number of orbits of vertices of a graph under automorphisms. The Szeged index minus the Wiener index of a graph. The energy of a graph, if it is integral. The number of pairs of vertices of a graph with distance 3. The number of pairs of vertices of a graph with distance 2. The number of pairs of vertices of a graph with distance 4. The number of edges minus the number of vertices plus 2 of a graph. The number of distinct eigenvalues of a graph. The number of distinct Laplacian eigenvalues of a graph. The largest eigenvalue of a graph if it is integral. The second largest eigenvalue of a graph if it is integral. The monochromatic index of a connected graph. The Schultz index of a connected graph. The first Zagreb index of a graph. The Gutman (or modified Schultz) index of a connected graph. The hyper-Wiener index of a connected graph. The Hosoya index of a graph. The distinguishing number of a graph. The Ramsey number of a graph. The (zero)-forcing number of a graph. The rank-width of a graph. The pathwidth of a graph. The cutwidth of a graph. The cop number of a graph. The number of cut vertices of a graph. The number of blocks of a connected graph. 
The F-index (or forgotten topological index) of a graph. The hull number of a graph. The length of the longest cycle in a graph. The maximin edge-connectivity for choosing a subgraph. The toughness times the least common multiple of 1,. The largest Laplacian eigenvalue of a graph if it is integral. The number of different neighbourhoods in a graph. The maximal cardinality of a set of vertices with the same neighbourhood in a gra.... The Colin de Verdière graph invariant. The largest multiplicity of a distance Laplacian eigenvalue in a connected graph..... The multiplicity of the largest distance Laplacian eigenvalue in a connected grap.... The multiplicity of the largest Laplacian eigenvalue in a graph. The maximal multiplicity of a Laplacian eigenvalue in a graph. The multiplicity of the largest eigenvalue in a graph. The maximal multiplicity of an eigenvalue in a graph. The number of distinct eigenvalues of the distance Laplacian of a connected graph.... The metric dimension of a graph. The number of distinct colouring schemes of a graph. The maximal number of occurrences of a colour in a proper colouring of a graph.

# 6. Maps

We have the following 3 maps in the database: to partition of connected components complement Ore closure

# 7. References

• G. Chartrand, L. Lesniak, and P. Zhang. Graphs and Digraphs. CRC Press, Oct. 2010.

# 8. Sage examples
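Several of the notions defined above — the edge-counting theorem, the Eulerian criterion, and statistics such as the number of triangles — are easy to verify computationally for small graphs. A self-contained sketch in plain Python (rather than Sage):

```python
from collections import defaultdict
from itertools import combinations

def degrees(n, edges):
    # Vertex degrees of a simple graph on vertices 0..n-1.
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

def is_connected(n, edges):
    # Depth-first search from vertex 0.
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()] - seen:
            seen.add(w)
            stack.append(w)
    return len(seen) == n

def is_eulerian(n, edges):
    # Criterion 4.3: nontrivial, connected, and every degree even.
    return bool(edges) and is_connected(n, edges) and \
        all(d % 2 == 0 for d in degrees(n, edges))

def num_triangles(n, edges):
    # The statistic "number of triangles": 3-subsets whose pairs are all edges.
    eset = {frozenset(e) for e in edges}
    return sum(1 for t in combinations(range(n), 3)
               if all(frozenset(p) in eset for p in combinations(t, 2)))

K4 = [(u, v) for u, v in combinations(range(4), 2)]  # complete graph on 4 vertices
C4 = [(0, 1), (1, 2), (2, 3), (3, 0)]                # 4-cycle

assert sum(degrees(4, K4)) == 2 * len(K4)  # edge-counting theorem: sum of degrees = 2m
print(num_triangles(4, K4), is_eulerian(4, K4), is_eulerian(4, C4))  # 4 False True
```

$K_4$ has all degrees equal to 3 (odd), so it is not Eulerian, while the 4-cycle has all degrees 2 and is.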
https://en.m.wikipedia.org/wiki/Vector_(mathematics)
# Vector (mathematics and physics)

In mathematics and physics, a vector is an element of a vector space. For many specific vector spaces, the vectors have received specific names, which are listed below. Historically, vectors were introduced in geometry and physics (typically in mechanics) before the formalization of the concept of vector space. Therefore, one often talks about vectors without specifying the vector space to which they belong. Specifically, in a Euclidean space, one considers spatial vectors, also called Euclidean vectors, which are used to represent quantities that have both magnitude and direction, and may be added, subtracted and scaled (i.e. multiplied by a real number) to form a vector space.[1]

## Vectors in Euclidean geometry

In classical Euclidean geometry (i.e., synthetic geometry), vectors were introduced (during the 19th century) as equivalence classes under equipollence of ordered pairs of points; two pairs (A, B) and (C, D) are equipollent if the points A, B, D, C, in this order, form a parallelogram. Such an equivalence class is called a vector, more precisely, a Euclidean vector.[2] The equivalence class of (A, B) is often denoted $\overrightarrow{AB}$. A Euclidean vector is thus an equivalence class of directed segments with the same magnitude (e.g., the length of the line segment (A, B)) and same direction (e.g., the direction from A to B).[3]

In physics, Euclidean vectors are used to represent physical quantities that have both magnitude and direction, but are not located at a specific place, in contrast to scalars, which have no direction.[4] For example, velocity, forces and acceleration are represented by vectors. In modern geometry, Euclidean spaces are often defined from linear algebra.
More precisely, a Euclidean space E is defined as a set to which is associated an inner product space of finite dimension over the reals $\overrightarrow{E}$, and a group action of the additive group of $\overrightarrow{E}$ which is free and transitive (see Affine space for details of this construction). The elements of $\overrightarrow{E}$ are called translations. It has been proven that the two definitions of Euclidean spaces are equivalent, and that the equivalence classes under equipollence may be identified with translations.

Sometimes, Euclidean vectors are considered without reference to a Euclidean space. In this case, a Euclidean vector is an element of a normed vector space of finite dimension over the reals, or, typically, an element of $\mathbb{R}^n$ equipped with the dot product. This makes sense, as the addition in such a vector space acts freely and transitively on the vector space itself. That is, $\mathbb{R}^n$ is a Euclidean space, with itself as an associated vector space, and the dot product as an inner product.

The Euclidean space $\mathbb{R}^n$ is often presented as the Euclidean space of dimension n. This is motivated by the fact that every Euclidean space of dimension n is isomorphic to the Euclidean space $\mathbb{R}^n$. More precisely, given such a Euclidean space, one may choose any point O as an origin. By the Gram–Schmidt process, one may also find an orthonormal basis of the associated vector space (a basis such that the inner product of two basis vectors is 0 if they are different and 1 if they are equal).
This defines Cartesian coordinates of any point P of the space, as the coordinates on this basis of the vector $\overrightarrow{OP}$. These choices define an isomorphism of the given Euclidean space onto $\mathbb{R}^n$, by mapping any point to the n-tuple of its Cartesian coordinates, and every vector to its coordinate vector.

## Vectors in specific vector spaces

• Column vector, a matrix with only one column. The column vectors with a fixed number of rows form a vector space.
• Row vector, a matrix with only one row. The row vectors with a fixed number of columns form a vector space.
• Coordinate vector, the n-tuple of the coordinates of a vector on a basis of n elements. For a vector space over a field F, these n-tuples form the vector space $F^n$ (where the operations are pointwise addition and scalar multiplication).
• Displacement vector, a vector that specifies the change in position of a point relative to a previous position. Displacement vectors belong to the vector space of translations.
• Position vector of a point, the displacement vector from a reference point (called the origin) to the point. A position vector represents the position of a point in a Euclidean space or an affine space.
• Velocity vector, the derivative, with respect to time, of the position vector. It does not depend on the choice of the origin, and thus belongs to the vector space of translations.
• Pseudovector, also called axial vector, an element of the dual of a vector space. In an inner product space, the inner product defines an isomorphism between the space and its dual, which may make it difficult to distinguish a pseudovector from a vector. The distinction becomes apparent when one changes coordinates: the matrix used for a change of coordinates of pseudovectors is the transpose of that of vectors.
• Tangent vector, an element of the tangent space of a curve, a surface or, more generally, a differential manifold at a given point (these tangent spaces are naturally endowed with a structure of vector space).
• Normal vector or simply normal, in a Euclidean space or, more generally, in an inner product space, a vector that is perpendicular to a tangent space at a point. Normals are pseudovectors that belong to the dual of the tangent space.
• Gradient, the coordinate vector of the partial derivatives of a function of several real variables. In a Euclidean space the gradient gives the magnitude and direction of maximum increase of a scalar field. The gradient is a pseudovector that is normal to a level curve.
• Four-vector, in the theory of relativity, a vector in a four-dimensional real vector space called Minkowski space.

## Tuples that are not really vectors

The set $\mathbb{R}^n$ of tuples of n real numbers has a natural structure of vector space defined by component-wise addition and scalar multiplication. When such tuples are used for representing some data, it is common to call them vectors, even if the vector addition does not mean anything for these data, which may make the terminology confusing. Similarly, some physical phenomena involve a direction and a magnitude. They are often represented by vectors, even if operations of vector spaces do not apply to them.

## Vectors in algebras

Every algebra over a field is a vector space, but elements of an algebra are generally not called vectors. However, in some cases, they are called vectors, mainly due to historical reasons.
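The component-wise operations described above, which give the set of n-tuples its vector-space structure, are concrete enough to spell out in a few lines. A minimal Python sketch on plain tuples (an illustration of the definitions only):

```python
def vadd(u, v):
    # Component-wise addition of two n-tuples.
    return tuple(a + b for a, b in zip(u, v))

def smul(c, u):
    # Scalar multiplication: scale every component by c.
    return tuple(c * a for a in u)

u, v = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0)
print(vadd(u, v))    # (5.0, 7.0, 9.0)
print(smul(2.0, u))  # (2.0, 4.0, 6.0)

# Vector-space axioms such as distributivity hold component-wise:
assert smul(2.0, vadd(u, v)) == vadd(smul(2.0, u), smul(2.0, v))
```

Whether such tuples are "really" vectors, in the sense of the section above, depends on whether these operations are meaningful for the data they represent.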
### Vector fields

A vector field is a vector-valued function that, generally, has a domain of the same dimension (as a manifold) as its codomain.

### Miscellaneous

• Ricci calculus
• Vector Analysis, a textbook on vector calculus by Wilson, first published in 1901, which did much to standardize the notation and vocabulary of three-dimensional linear algebra and vector calculus
• Vector bundle, a topological construction that makes precise the idea of a family of vector spaces parameterized by another space
• Vector calculus, a branch of mathematics concerned with differentiation and integration of vector fields
• Vector differential, or del, a vector differential operator represented by the nabla symbol $\nabla$
• Vector Laplacian, the vector Laplace operator, denoted by $\nabla^2$, is a differential operator defined over a vector field
• Vector notation, common notation used when working with vectors
• Vector operator, a type of differential operator used in vector calculus
• Vector product, or cross product, an operation on two vectors in a three-dimensional Euclidean space, producing a third three-dimensional Euclidean vector
• Vector projection, also known as vector resolute or vector component, a linear mapping producing a vector parallel to a second vector
• Vector-valued function, a function that has a vector space as a codomain
• Vectorization (mathematics), a linear transformation that converts a matrix into a column vector
• Vector autoregression, an econometric model used to capture the evolution and the interdependencies between multiple time series
• Vector boson, a boson with the spin quantum number equal to 1
• Vector measure, a function defined on a family of sets and taking vector values satisfying certain properties
• Vector meson, a meson with total spin 1 and odd parity
• Vector quantization, a quantization technique used in signal processing
• Vector soliton, a solitary wave with multiple components coupled
together that maintains its shape during propagation
• Vector synthesis, a type of audio synthesis

## Notes

1. ^ "vector | Definition & Facts". Encyclopedia Britannica. Retrieved 2020-08-19.
2. ^ In some old texts, the pair (A, B) is called a bound vector, and its equivalence class is called a free vector.
3. ^ "1.1: Vectors". Mathematics LibreTexts. 2013-11-07. Retrieved 2020-08-19.
4. ^ "Vectors". www.mathsisfun.com. Retrieved 2020-08-19.
5. ^ "Compendium of Mathematical Symbols". Math Vault. 2020-03-01. Retrieved 2020-08-19.
6. ^ a b Weisstein, Eric W. "Vector". mathworld.wolfram.com. Retrieved 2020-08-19.
https://igraph.org/r/html/1.3.2/eigen_centrality.html
# R igraph manual pages

Use this if you are using igraph from R

eigen_centrality {igraph}	R Documentation

## Find Eigenvector Centrality Scores of Network Positions

### Description

eigen_centrality takes a graph (graph) and returns the eigenvector centralities of positions v within it.

### Usage

eigen_centrality(
  graph,
  directed = FALSE,
  scale = TRUE,
  weights = NULL,
  options = arpack_defaults
)

### Arguments

graph: Graph to be analyzed.

directed: Logical scalar, whether to consider direction of the edges in directed graphs. It is ignored for undirected graphs.

scale: Logical scalar, whether to scale the result to have a maximum score of one. If no scaling is used then the result vector has unit length in the Euclidean norm.

weights: A numerical vector or NULL. This argument can be used to give edge weights for calculating the weighted eigenvector centrality of vertices. If this is NULL and the graph has a weight edge attribute then that is used. If weights is a numerical vector then it is used, even if the graph has a weight edge attribute. If this is NA, then no edge weights are used (even if the graph has a weight edge attribute). Note that if there are negative edge weights and the direction of the edges is considered, then the eigenvector might be complex. In this case only the real part is reported. This function interprets weights as connection strength. Higher weights spread the centrality better.

options: A named list, to override some ARPACK options. See arpack for details.

### Details

Eigenvector centrality scores correspond to the values of the first eigenvector of the graph adjacency matrix; these scores may, in turn, be interpreted as arising from a reciprocal process in which the centrality of each actor is proportional to the sum of the centralities of those actors to whom he or she is connected. In general, vertices with high eigenvector centralities are those which are connected to many other vertices which are, in turn, connected to many others (and so on).
(The perceptive may realize that this implies that the largest values will be obtained by individuals in large cliques (or high-density substructures). This is also intelligible from an algebraic point of view, with the first eigenvector being closely related to the best rank-1 approximation of the adjacency matrix (a relationship which is easy to see in the special case of a diagonalizable symmetric real matrix via the SLS^-1 decomposition).) The adjacency matrix used in the eigenvector centrality calculation assumes that loop edges are counted twice; this is because each loop edge has two endpoints that are both connected to the same vertex, and you could traverse the loop edge via either endpoint. From igraph version 0.5 this function uses ARPACK for the underlying computation, see arpack for more about ARPACK in igraph. ### Value A named list with components: vector A vector containing the centrality scores. value The eigenvalue corresponding to the calculated eigenvector, i.e. the centrality scores. options A named list, information about the underlying ARPACK computation. See arpack for the details. ### WARNING eigen_centrality will not symmetrize your data before extracting eigenvectors; don't send this routine asymmetric matrices unless you really mean to do so. ### Author(s) Gabor Csardi [email protected] and Carter T. Butts (http://www.faculty.uci.edu/profile.cfm?faculty_id=5057) for the manual page. ### References Bonacich, P. (1987). Power and Centrality: A Family of Measures. American Journal of Sociology, 92, 1170-1182. ### Examples #Generate some test data g <- make_ring(10, directed=FALSE) #Compute eigenvector centrality scores eigen_centrality(g) [Package igraph version 1.3.2 Index]
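The reciprocal characterization in the Details section — each vertex's score proportional to the sum of its neighbours' scores — can be illustrated with plain power iteration in Python. This is a sketch only, not igraph's implementation (igraph delegates to ARPACK); the shift by the identity matrix is one simple way to avoid oscillation on bipartite graphs, and it does not change the leading eigenvector:

```python
def eigen_centrality(n, edges, iters=200):
    # Power iteration on A + I for an undirected graph on vertices 0..n-1.
    # A and A + I share eigenvectors, but the shift makes the leading
    # eigenvalue strictly dominant even on bipartite graphs.
    adj = [[0.0] * n for _ in range(n)]
    for u, v in edges:
        adj[u][v] = adj[v][u] = 1.0
    for i in range(n):
        adj[i][i] += 1.0
    x = [1.0] * n
    for _ in range(iters):
        x = [sum(adj[i][j] * x[j] for j in range(n)) for i in range(n)]
        m = max(x)
        x = [t / m for t in x]
    return x  # scaled so the maximum score is 1, as with scale=TRUE

# Star graph: centre 0 joined to leaves 1..4; the centre gets the top score.
star = [(0, i) for i in range(1, 5)]
print([round(s, 3) for s in eigen_centrality(5, star)])  # [1.0, 0.5, 0.5, 0.5, 0.5]
```

Each leaf's score is half the centre's, matching the fixed-point equation: the centre sums four leaf scores, each leaf sums only the centre's.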
https://fr.maplesoft.com/support/help/maple/view.aspx?path=Student%2FStatistics%2FChiSquareSuitableModelTest
ChiSquareSuitableModelTest - Maple Help

Student[Statistics][ChiSquareSuitableModelTest] - apply the chi-square suitable model test

Calling Sequence

ChiSquareSuitableModelTest(X, F, options)

Parameters

X - observed data sample
F - function, algebraic; probability distribution or random variable to match the data against
options - (optional) equation(s) of the form option=value, where option is one of bins, level, output, or range; specify options for the ChiSquareSuitableModelTest function

Description

• The ChiSquareSuitableModelTest function performs the chi-square suitable model test upon an observed data sample against a known random variable or probability distribution. It works by determining bins for a histogram from the probability distribution, then classifying the entries of X into these bins, and finally testing whether the resulting histogram matches the histogram for the probability distribution.
• The first parameter X is a data sample of observed data to use in the analysis.
• The second parameter F is a random variable or probability distribution that is compared to the observed data sample.
• This test is only appropriate if there is prior knowledge of any parameters in the distribution. If any of the parameters in the distribution have been fitted to the data sample in question, then an adjustment of the degrees-of-freedom parameter is necessary. This adjustment is not available in the current implementation.

Options

The options argument can contain one or more of the options shown below.

• bins='deduce' or posint
Indicates the number of bins to use when categorizing data from X and probabilities from F. If set to 'deduce' (default), the function attempts to determine a reasonable value for this option. This option is ignored if the distribution is discrete.
• range='deduce' or range
Indicates the range to use when considering data values; data outside of the range is discarded during processing. If set to 'deduce' (default), the function attempts to determine a suitable range.
• level=float
Specifies the level of the analysis (the minimum criterion for the observed data to be considered well fit to the expected data). By default, this value is 0.05.
• output=report or plot or both
If output is not included or is specified as output=report, the function returns a report. If output=plot is specified, the function returns a plot of the sample test. If output=both is specified, both the report and the plot are returned.

Examples

> with(Student[Statistics]):

Initialize an array of data.

> X := Sample(NormalRandomVariable(0, 1), 100):

Perform the suitable model test upon this sample.

> ChiSquareSuitableModelTest(X, UniformRandomVariable(0, 1), bins = 10)

Chi-Square Test for Suitable Probability Model
----------------------------------------------
Null Hypothesis: Sample was drawn from specified probability distribution
Alt. Hypothesis: Sample was not drawn from specified probability distribution

Bins:                    10
Degrees of Freedom:      9
Distribution:            ChiSquare(9)
Computed Statistic:      301.6000000
Computed p-value:        0.
Critical Values:         16.9189774487099

Result: [Rejected]
This statistical test provides evidence that the null hypothesis is false.

[hypothesis = false, criticalvalue = 16.9189774487099, distribution = ChiSquare(9), pvalue = 0., statistic = 301.6000000]   (1)

> ChiSquareSuitableModelTest(X, NormalRandomVariable(0, 1), bins = 10)

Chi-Square Test for Suitable Probability Model
----------------------------------------------
Null Hypothesis: Sample was drawn from specified probability distribution
Alt. Hypothesis: Sample was not drawn from specified probability distribution

Bins:                    10
Degrees of Freedom:      9
Distribution:            ChiSquare(9)
Computed Statistic:      14.80000000
Computed p-value:        0.0965781731648307
Critical Values:         16.9189774487099

Result: [Accepted]
This statistical test does not provide enough evidence to conclude that the null hypothesis is false.

[hypothesis = true, criticalvalue = 16.9189774487099, distribution = ChiSquare(9), pvalue = 0.0965781731648307, statistic = 14.80000000]   (2)

If the output=plot option is included, then a plot will be returned.

> ChiSquareSuitableModelTest(X, NormalRandomVariable(0, 1), bins = 10, output = plot)

If the output=both option is included, then both a report and a plot will be returned.

> report, graph := ChiSquareSuitableModelTest(X, NormalRandomVariable(0, 1), bins = 10, output = both):

Chi-Square Test for Suitable Probability Model
----------------------------------------------
Null Hypothesis: Sample was drawn from specified probability distribution
Alt. Hypothesis: Sample was not drawn from specified probability distribution

Bins:                    10
Degrees of Freedom:      9
Distribution:            ChiSquare(9)
Computed Statistic:      14.80000000
Computed p-value:        0.0965781731648307
Critical Values:         16.9189774487099

Result: [Accepted]
This statistical test does not provide enough evidence to conclude that the null hypothesis is false.

Histogram Type:  default
Data Range:      -1.6348567543439 .. 2.21337958939036
Bin Width:       0.128274544791142
Number of Bins:  30
Frequency Scale: relative

> report

[hypothesis = true, criticalvalue = 16.9189774487099, distribution = ChiSquare(9), pvalue = 0.0965781731648307, statistic = 14.80000000]   (3)

> graph

References

Kanji, Gopal K. 100 Statistical Tests. London: SAGE Publications Ltd., 1994.

Sheskin, David J. Handbook of Parametric and Nonparametric Statistical Procedures. London: CRC Press, 1997.

Compatibility

• The Student[Statistics][ChiSquareSuitableModelTest] command was introduced in Maple 18.
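Outside Maple, the mechanics of the test are easy to reproduce. A hedged pure-Python sketch (variable names are mine, not Maple's): bin the data under the candidate model, form the Pearson statistic, and compare against the ChiSquare(9) critical value quoted in the report above. Data outside the model's range is discarded, mirroring the `range` option.

```python
import random

random.seed(1)
x = [random.gauss(0.0, 1.0) for _ in range(100)]  # sample actually from N(0, 1)

# Model under test: Uniform(0, 1), with 10 equal-width bins on [0, 1].
# As with the range option above, data outside the range is discarded.
kept = [v for v in x if 0.0 <= v < 1.0]
observed = [0] * 10
for v in kept:
    observed[int(v * 10)] += 1
expected = len(kept) / 10.0  # uniform model: equal expected count per bin

# Pearson chi-square statistic with 10 - 1 = 9 degrees of freedom.
stat = sum((o - expected) ** 2 / expected for o in observed)

# 5% critical value for ChiSquare(9), as printed in the Maple report.
reject = stat > 16.9189774487099
```

Note the sketch omits what the Description section warns about: if any model parameter had been fitted from the same data, the degrees of freedom would need adjusting.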
https://rdrr.io/cran/GHap/man/blup.html
blup: Convert breeding values into BLUP solutions of HapAllele... In GHap: Genome-Wide Haplotyping

Description

Given genomic estimated breeding values (GEBVs), compute Best Linear Unbiased Predictor (BLUP) solutions for HapAllele effects.

Usage

ghap.blup(gebvs, haplo, invcov, gebvsweights = NULL, haploweights = NULL,
          nperm = 1, only.active.alleles = TRUE, ncores = 1)

Arguments

gebvs: A vector of GEBVs. The vector must be named and all names must be present in the GHap.haplo object.
haplo: A GHap.haplo object.
invcov: The inverse covariance (i.e., genomic kinship) matrix for GEBVs.
gebvsweights: A numeric vector providing individual-specific weights.
haploweights: A numeric vector providing HapAllele-specific weights.
nperm: Number of permutations to be performed for significance assessment (default = 1).
only.active.alleles: A logical value specifying whether only active haplotype alleles should be included in the calculations (default = TRUE).
ncores: A numeric value specifying the number of cores to be used in parallel computing (default = 1).

Details

The function uses the equation:

\mathbf{\hat{a}} = q\mathbf{DM}'\mathbf{K}^{-1}\mathbf{\hat{u}}

where \mathbf{M} is the N x H centered matrix of HapGenotypes observed for N individuals and H HapAlleles, \mathbf{D} = diag(d_i), d_i is the weight of HapAllele i (default d_i = 1), q is the inverse weighted sum of variances in the columns of \mathbf{M}, \mathbf{K} is the haplotype-based kinship matrix, and \mathbf{\hat{u}} is the vector of GEBVs.

The permutation procedure consists in randomizing the vector \mathbf{\hat{u}} and computing the null statistic max(\mathbf{\hat{a}}). The permutation p-value is computed as the number of times the HapAllele effect was smaller than the null statistic.

Value

The function returns a data frame with columns:

BLOCK: Block alias.
CHR: Chromosome name.
BP1: Block start position.
BP2: Block end position.
ALLELE: Haplotype allele identity.
SCORE: BLUP for the random effect of the haplotype allele.
FREQ: Frequency of the haplotype allele.
VAR: Variance in allele-specific breeding values.
pVAR: Proportion of variance explained by the haplotype allele.
CENTER: Average genotype (meaningful only for predictions with ghap.profile).
SCALE: A constant set to 1 (meaningful only for predictions with ghap.profile).
P: P-value for the permutation test. This column is suppressed if nperm < 1.

Author(s)

Yuri Tani Utsunomiya <[email protected]>

References

I. Stranden and D.J. Garrick. Technical note: derivation of equivalent computing algorithms for genomic predictions and reliabilities of animal merit. J Dairy Sci. 2009. 92:2971-2975.

Examples

# #### DO NOT RUN IF NOT NECESSARY ###
#
# # Copy the example data in the current working directory
# ghap.makefile()
#
# # Load data
# phase <- ghap.loadphase("human.samples", "human.markers", "human.phase")
#
# # Subset data - markers with maf > 0.05
# maf <- ghap.maf(phase, ncores = 2)
# markers <- phase$marker[maf > 0.05]
# phase <- ghap.subsetphase(phase, unique(phase$id), markers)
#
# # Generate blocks of 5 markers sliding 5 markers at a time
# blocks.mkr <- ghap.blockgen(phase, windowsize = 5, slide = 5, unit = "marker")
#
# # Generate matrix of haplotype genotypes
# ghap.haplotyping(phase, blocks.mkr, batchsize = 100, ncores = 2, outfile = "human")
#
# # Load haplotype genotypes
# haplo <- ghap.loadhaplo("human.hapsamples", "human.hapalleles", "human.hapgenotypes")
#
# ### RUN ###
#
# # Subset common haplotypes in Europeans
# EUR.ids <- haplo$id[haplo$pop %in% c("TSI","CEU")]
# haplo <- ghap.subsethaplo(haplo, EUR.ids, rep(TRUE, times = haplo$nalleles))
# hapstats <- ghap.hapstats(haplo, ncores = 2)
# common <- hapstats$TYPE %in% c("REGULAR","MAJOR") &
#   hapstats$FREQ > 0.05 &
#   hapstats$FREQ < 0.95
# haplo <- ghap.subsethaplo(haplo, EUR.ids, common)
#
# # Compute relationship matrix
# K <- ghap.kinship(haplo, batchsize = 100)
#
# # Quantitative trait with 50% heritability
# # Unbalanced repeated measurements (0 to 30)
# # Two major haplotypes accounting for 50% of the genetic variance
# myseed <- 123456789
# set.seed(myseed)
# major <- sample(which(haplo$allele.in == TRUE), size = 2)
# g2 <- runif(n = 2, min = 0, max = 1)
# g2 <- (g2/sum(g2))*0.5
# sim <- ghap.simpheno(haplo, kinship = K, h2 = 0.5, g2 = g2, nrep = 30,
#   balanced = FALSE, major = major, seed = myseed)
#
# # Fit model using REML
# model <- ghap.lmm(fixed = phenotype ~ 1, random = ~ individual,
#   covmat = list(individual = K), data = sim$data)
#
# ### RUN ###
#
# # BLUP GWAS
# gebvs <- model$random$individual
# gebvsw <- table(sim$data$individual)
# gebvsw <- gebvsw + mean(gebvsw)
# gebvsw <- gebvsw[names(gebvs)]
# Kinv <- ghap.kinv(K)
# gwas.blup <- ghap.blup(gebvs = gebvs, haplo = haplo, gebvsweights = gebvsw,
#   ncores = 4, invcov = Kinv)
# plot(gwas.blup$BP1/1e+6, gwas.blup$pVAR*100, pch = 20,
#   xlab = "Position (in Mb)", ylab = "Variance explained (%)")
# abline(v = haplo$bp1[major]/1e+6)
#
# # BLUP with one update
# w <- gwas.blup$VAR*nrow(gwas.blup)
# K2 <- ghap.kinship(haplo = haplo, weights = w)
# Kinv2 <- ghap.kinv(K2)
# gwas.blup2 <- ghap.blup(gebvs = gebvs, haplo = haplo, invcov = Kinv2, ncores = 2,
#   gebvsweights = gebvsw, haploweights = w)
# plot(gwas.blup2$BP1/1e+6, gwas.blup2$pVAR*100, pch = 20,
#   xlab = "Position (in Mb)", ylab = "Variance explained (%)")
# abline(v = haplo$bp1[major]/1e+6)

GHap documentation built on May 29, 2017, 9:56 p.m.
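Stripped of the GHap bookkeeping, the estimator in the Details section is a single matrix product. A toy NumPy sketch (the dimensions and data below are made up for illustration; only the algebra follows the formula, with a small ridge added so the toy kinship matrix is invertible):

```python
import numpy as np

# Toy HapGenotype matrix: N = 6 individuals, H = 4 HapAlleles (counts 0..2).
M = np.array([[0., 1., 2., 1.],
              [2., 0., 1., 1.],
              [1., 2., 0., 2.],
              [0., 1., 1., 0.],
              [2., 2., 0., 1.],
              [1., 0., 2., 0.]])
M -= M.mean(axis=0)                      # center columns, as in the Details

d = np.ones(4)                           # HapAllele weights (default d_i = 1)
q = 1.0 / np.sum(d * M.var(axis=0))      # inverse weighted sum of column variances

# Haplotype-based kinship K = q * M D M'; rank <= H, so add a tiny ridge
# purely to make this toy K invertible (GHap supplies invcov directly).
K = q * (M * d) @ M.T + 1e-3 * np.eye(6)

u_hat = np.array([1.2, -0.3, 0.5, 0.0, -1.1, 0.7])   # pretend GEBVs

# a_hat = q * D M' K^{-1} u_hat
a_hat = q * (d[:, None] * M.T) @ np.linalg.solve(K, u_hat)
```

The result `a_hat` plays the role of the SCORE column: one BLUP solution per HapAllele, back-solved from the individual-level breeding values.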
http://math.stackexchange.com/questions/29844/is-this-a-kind-of-permutation/29848
# Is this a kind of Permutation?

I'm trying to design an algorithm to generate something that I don't know exactly what to call! Ok, I'm not a mathematician; I'm studying computer science and thought this would be a great moment to use some recursive algorithms. But I'm not asking how to do it; what I really want to know is what this is called, so I can read more about it. Basically, it would be a kind of permutation, but I'm not sure: Given the set $\{a, b, c\}$, generate conjuncts of 3 elements, each drawn from the set. Example: aaa | bbb | ccc | aab | aba | baa | bba | bab | abb | ... | aca | ... | acc | ... | abc | acb | cbb | ... These are 27 in total, as I have three positions, each filled from a set of 3: _ _ _ -> $3 \cdot 3 \cdot 3$ -> $27$ Similarly, given the set $\{0, 1\}$, generating 8-element conjuncts would yield the list from zero to 255, in binary. So, what is this called formally? - In technical mathematical terms, these are called tuples (over sets of size $n$). They are ordered collections of length $k$ with replacement (the latter meaning you can repeat an element that has already appeared in the list). The name comes from things like 'triple', which is an ordered collection of length 3. The set $\{a,b,c\}$ is of size $n=3$, and you are showing all ways of listing a sequence of length $k=3$. So one object is $aab$ and another is $aba$ (order matters, and you can repeat elements). For your second example, your list is of length 8, and each item is either 0 or 1. And you've found how to count the total ways of assembling such a compound object: $n^k$, because each item has $n$ ways of choosing it, and there are $k$ positions, for $n\cdot n\cdot ...\cdot n$, $k$ times. A permutation technically refers to an ordered collection without replacement.
Or even more technically, a $k$-permutation is one of length $k$ from a set of size $n$ (how many of these are there?), and a plain old permutation is really an $n$-permutation, a permutation of length $n$ from a set of size $n$ (how many of *these* are there?), and you should be able to convince yourself why there are no $(n+1)$-permutations. Another possibility is an unordered collection (of size $k$) without replacement from a set of size $n$. This is called a combination (or subset). You can figure out how many of these there are from the number of $k$-permutations. The missing corner of the square is the multiset (unordered with replacement). The terminology of 'replacement' is in reference to a balls-and-bins model; I find it easier to think instead of 'repetition' allowed or not. - Thanks a lot for your comprehensive answer. Math is such a beautiful thing! Tuples is a new term for me. Most of the others I started to remember from school, but it was really useful to read this here, as my primary education was not in English and I'm now having to relearn many terms. If I may answer the questions: the first, for the $k$-permutation, would be $\frac{n!}{(n-k)!}$, or something like that, right? The $n$-permutation is easy: $n!$. Now, I'm not quite sure I can convince myself why there are no $(n+1)$-permutations. Does $n+1$ relate to the set not having sufficient elements? – sidyll Mar 30 '11 at 12:17 @sidyll: The idea 'without sufficient elements' is right. Suppose you have an $n$-permutation and you want to put one more element at the end (for length $n+1$), but you can't repeat any that you've used already? You've used all $n$ already, so there are no more, so it is impossible to do. – Mitch Mar 30 '11 at 13:52 @sidyll: Yes, $\frac{n!}{(n-k)!}$ is right. For combinations, divide this by $k!$ (because you're removing all the ways of ordering those $k$ elements).
– Mitch Mar 30 '11 at 13:53 @sidyll: mathematics is its own foreign language, so no matter how evocative or metaphorical a technical term may sound, one has to learn what it really technically means. – Mitch Mar 30 '11 at 13:55 These are called "tuples", "3-tuples", or "triples". They are called arrays or vectors in programming, and "strings of length 3 from the alphabet {a,b,c}" in computer science (as in a formal language). They are often denoted as: $$\{ a,b,c \}^3 = \{ aaa, aab, aac, aba, abb, abc, aca, acb, acc, \dots, ccc \}$$ Wikipedia has an article on them. - +1, thanks for your answer sir, and for providing the useful links. – sidyll Mar 30 '11 at 12:20 The above can rightly be called permutations: the number of ways of arranging the elements of a set in a specified manner. The specification refers to: 1) the number of positions to fill (as in an $n$-tuple, there are $n$ positions to fill); 2) the total number of elements we can fill the positions with (i.e., the cardinality of the set of elements); 3) whether we're allowed to repeat an element once we have already used it to fill a position. Yours is the case where we're allowed to repeat the elements, so that, for your first case: 1) you can fill the first position in 3 ways (i.e., with a, b, or c); 2) then the 2nd position in 3 ways (repetition is allowed); 3) the 3rd, too, in 3 ways, hence making the total number of ways 3 * 3 * 3 = 27 ($3^3$). - Your conjuncts are called tuples or sequences. (See other names in @Jack Schmidt's answer.) The set of tuples, where each element of a tuple is taken from some set $B$ and the length of the tuple is always $n$, is written via the $n$-fold Cartesian product $B\times\dots\times B$. It is essentially the same as the set of functions $N\to B$, where $N$ is some set such that $card(N)=n$. Some authors work with functions instead of tuples. The number of functions $N\to B$ is $card(B)^{card(N)}$: $3^3=27$ and $2^8=256$ in your examples.
In your first example, any conjunct is just a function $\{0,1,2\} \to \{a,b,c\}$. ($card(A)$ is the number of elements in the set $A$.)
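The four kinds of collection discussed above map directly onto Python's itertools, which makes the counts easy to check (an illustration, not part of the original thread):

```python
from itertools import product, permutations, combinations

# Ordered, with repetition ("tuples"): n**k of them.
tuples = list(product("abc", repeat=3))          # the asker's first example
assert len(tuples) == 3 ** 3                     # 27

# Ordered, without repetition (k-permutations): n!/(n-k)! of them.
perms = list(permutations("abc", 2))
assert len(perms) == 6

# Unordered, without repetition (combinations): divide further by k!.
combs = list(combinations("abc", 2))
assert len(combs) == 3

# The asker's second example: 8-tuples over {0, 1}.
assert len(list(product("01", repeat=8))) == 2 ** 8   # 256
```

(Multisets, the remaining corner of the square, are `itertools.combinations_with_replacement`.)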
https://codegolf.stackexchange.com/questions/51096/hook-length-product/51097
# Hook length product A Young diagram is an arrangement of boxes in left-justified rows and top-justified columns. For each box, all the spaces above it and to its left are occupied. XXXXX XXX XXX X The hook length of a box is the number of boxes to its right in its row, and below it in its column, also counting itself once. For example, the second box has a hook length of 6: X**** X*X X*X X Here are all the hook lengths: 86521 532 421 1 Your goal is compute the product of the hook lengths, here 8*6*5*2*1*5*3*2*4*2*1*1 = 115200. (Read about the hook length formula if you're interested in why this expression matters.) Input: A collection of row-sizes as numbers like [5,3,3,1] or as a repeated unary symbol like [[1,1,1,1,1], [1,1,1], [1,1,1], [1]] or "XXXXX XXX XXX X". You can expect the list to be sorted ascending or descending, as you wish. The list will be non-empty and only contain positive integers. Output: The product of hook lengths, which is a positive integer. Don't worry about integer overflows or runtime. Built-ins dealing specifically with Young diagrams or integer partitions are not allowed. Test cases: [1] 1 [2] 2 [1, 1] 2 [5] 120 [2, 1] 3 [5, 4, 3, 2, 1] 4465125 [5, 3, 3, 1] 115200 [10, 5] 798336000 # CJam, 20 19 bytes {ee::+W%}_q~%z%:+:* This takes in CJam style unary list in an ascending order. For example: [[1] [1 1 1] [1 1 1] [1 1 1 1 1]] gives 115200 How it works This version is provided by Dennis and it uses the fact that a Block ArrayList % still works in CJam :D { }_ e# Put this block on stack and make a copy q~ e# Read the input and evaluate it to put the array of arrays on stack % e# Use the copy of the block and map the array using that block ee e# Here we are mapping over each unary array in the input. ee converts e# the array to [index value] pair. ::+ e# Add up each index value pair. 
Now we have the horizontal half of e# hook length for each row W% e# Reverse the array to make sure the count is for blocks to the right z% e# Transpose and do the same mapping for columns :+ e# Now we have all the hook lengths. Flatten the array :* e# Get the product of all hook lengths. This is the original 20 bytes version 1q~:,Wf%z:ee{:+)*}f/ This takes in a CJam style list of row-sizes in an ascending order. For example: [1 3 3 5] gives 115200 How it works If we look at it, hook length of each block in a Young block diagram is the sum of the index of that block in its row and column, counting backwards. i.e. Start the index in each row from right side and start the index in each column from bottom. We take the input in ascending order of row-size in order to easily start the index from bottom in each column. First, we get the index per row and reverse it. Then we transpose. Since the original row order was reversed, taking index in this transposed diagram will directly give the bottom to top index. Code expansion 1 e# This serves as the initial term for product of hook lengths q~ e# Read the input and eval it to put an array on stack :, e# For each row-size (N), get an array of [0..N-1] Wf% e# Reverse each row so that each row becomes [N-1..0] z e# Transpose for the calculation of blocks below each block :ee e# Enumerate each row. Convert it into array of [index value] pairs { }f/ e# Apply this mapping block to each cell of each row :+ e# Add the index value pair. 
Here, index is the blocks below the e# block and value is the blocks to the right of it in the Young diag ) e# Increment the sum by 1 to account for the block itself * e# Multiply it with the current holding product, starting with 1 Try it online here • {ee::+W%}_q~%z%:+:* (19 bytes) Input format: [[1][1 1 1][1 1 1][1 1 1 1 1]] – Dennis Jun 2 '15 at 4:32 • @Dennis Nice (ab)use of arity order for % :P – Optimizer Jun 2 '15 at 6:42 # J, 24 bytes */@,@(1|@-+/\."1++/\)@:> 25 bytes (with explanation): */@,@(+/\."1|@<:@++/\)@:> Takes input as list of ascending lists of unary digits similar to the example [[1], [1,1,1], [1,1,1], [1,1,1,1,1]]. Usage: f=.*/@,@(+/\."1|@<:@++/\)@:> f 1;1 1 1;1 1 1;1 1 1 1 1 115200 Method • Create a binary matrix from the input • Compute the running differences in both dimensions. • For each cell add the two results, subtract 1, take the absolute value (to map the originally zero cells to 1) • Ravel the matrix and take the product of the numbers. Intermediate results shown on the input 1 1 1 1 1;1 1 1;1 1 1;1 (5,3,3,1 in unary) (this is for a previous version with descending lengths but using the same method): ]c=.1 1 1 1 1;1 1 1;1 1 1;1 ┌─────────┬─────┬─────┬─┐ │1 1 1 1 1│1 1 1│1 1 1│1│ └─────────┴─────┴─────┴─┘ (>) c 1 1 1 1 1 1 1 1 0 0 1 1 1 0 0 1 0 0 0 0 (+/\.@:>) c 4 3 3 1 1 3 2 2 0 0 2 1 1 0 0 1 0 0 0 0 (+/\."1@:>) c 5 4 3 2 1 3 2 1 0 0 3 2 1 0 0 1 0 0 0 0 ((+/\."1++/\.)@:>) c 9 7 6 3 2 6 4 3 0 0 5 3 2 0 0 2 0 0 0 0 ((+/\."1<:@++/\.)@:>) c 8 6 5 2 1 5 3 2 _1 _1 4 2 1 _1 _1 1 _1 _1 _1 _1 ((+/\."1|@<:@++/\.)@:>) c 8 6 5 2 1 5 3 2 1 1 4 2 1 1 1 1 1 1 1 1 (,@(+/\."1|@<:@++/\.)@:>) c 8 6 5 2 1 5 3 2 1 1 4 2 1 1 1 1 1 1 1 1 (*/@,@(+/\."1|@<:@++/\.)@:>) c 115200 Same length explicit version: 3 :'*/,|<:(+/\."1++/\)>y' Try it online here. # Pyth - 21 bytes I'm losing a lot of bytes in the vertical calculation. Gonna focus on golfing that. *Fs.em+lf>Td>Qkt-bdbQ Takes input like [5, 3, 3, 1]. 
# Pyth, 18 bytes *Fsm.e+k-bdf>TdQeQ Takes input in ascending order, like [1, 3, 3, 5]. Demonstration. Alternate solution, 19 bytes *Fs.em+s>Rd<Qk-bdbQ # Python 2, 89 88 bytes p=j=-1;d={} for n in input():j+=1;i=0;exec"a=d[i]=d.get(i,j);p*=n-i+j-a;i+=1;"*n print-p (Thanks to @xnor for one insane byte save by combining p and j) The d.get looks a little suspicious to me, but otherwise I'm relatively happy with this. I tried some other approaches, like recursion and zipping, but this is the only one I managed to get under 100. Takes input from STDIN as a list in ascending order, e.g. [1, 3, 3, 5]. f[]=1 f g@(h:t)=(h+length t)*f[x-1|x<-g,x>1] p[]=1 p g@(_:t)=f g*p t Usage example: p [5,4,3,2,1] -> 4465125 f scans from left to right by multiplying the length of outmost hook with a recursive call to itself where each element of the input list is reduced by 1 (dropping it when reaching 0). p scans from top to bottom by multiplying f of the whole list with p of the tail. # R, 174 bytes So... This solution is quite long and could probably be more golfed. I'll think about it ! v=c();d=length;m=matrix(-1,l<-d(a<-scan()),M<-max(a));for(i in 1:l)m[i,(1:a[i])]=c(a[i]:1);for(j in 1:M)m[,j]=m[,j]+c((((p=d(which(m[,j]>0)))-1)):0,rep(0,(l-p)));abs(prod(m)) Ungolfed : v=c() #Empty vector d=length #Alias m=matrix(-1,l<-d(a<-scan()),M<-max(a)) #Builds a matrix full of -1 for(i in 1:l) m[i,(1:a[i])]=c(a[i]:1) #Replaces each row of the matrix by n to 1, n being the #corresponding input : each number is the number of non-empty #cells at its left + itself for(j in 1:M) m[,j]=m[,j]+c((((p=d(which(m[,j]>0)))-1)):0,rep(0,(l-p))) #This part calculates the number of "non-empty" (i.e. without -1 in a column), -1, #because the count for the cell itself is already done. 
# Then, it creates a vector of those count, appending 0's at the end if necessary #(this avoids recycling) abs(prod(m)) #Outputs the absolute value of the product (because of the -1's) # Python 2, 135 128 bytes This takes a Python type list from stdin: r=input() c=[-1]*r[0] for a in r: for b in range(a):c[b]+=1 s=1 y=0 for a in r: for x in range(a):s*=a-x+c[x]-y y+=1 print s This is a very canonical implementation, but I haven't come up with anything much smarter so far. I have a feeling that there will be much shorter solutions even with "real" programming languages. We get the number of boxes in each row as input. This solution first counts the number of boxes in each column, which is stored in c (it's actually the count minus 1 to simplify its usage in the later calculation). Then it iterates over all boxes, and multiplies the hook lengths. The hook length itself is trivial to calculate once you have the count of boxes in each row and column. • Looks like you're not using m? – xnor Jun 1 '15 at 4:11 • Could have sworn that I deleted it! I remember noticing that I was only using it once, and substituting the only usage. But then I must have missed to actually delete the variable. :( – Reto Koradi Jun 1 '15 at 4:12 # JavaScript (ES6) 69 A function taking an array of integers in ascending order. Run the snippet to test (Firefox only) F=x=>x.map(r=>{for(i=-1;++i<r;p[i]=-~p[i])t*=r-i+~~p[i]},p=[],t=1)&&t // TEST out=x=>O.innerHTML += x + '\n'; test=[ {y:[1], h: 1} ,{y:[2], h: 2} ,{y:[1, 1], h: 2} ,{y:[5], h: 120} ,{y:[2, 1], h: 3} ,{y:[5, 4, 3, 2, 1], h: 4465125} ,{y:[5, 3, 3, 1], h: 115200} ,{y:[10, 5], h: 798336000} ] test.forEach(t=>{ t.y.reverse(); // put in ascending order r=F(t.y); out((r==t.h? 'Ok':'Fail')+' Y: ['+t.y+'] Result:'+r+' Check:'+t.h) }) <pre id=O></pre> # Python, 95 91 bytes This is a Python implementation of nimi's Haskell answer. Golfing suggestions welcome. 
f=lambda z:z==[]or(z[0]+len(z)-1)*f([i-1for i in z if~-i]) p=lambda z:z==[]or f(z)*p(z[1:]) • Welcome to Python golfing! You can do z and _ or 1 as z==[]or _ when z is a list, using the fact that True==1. Python's function declarations are wordier than Haskell, so it often gives a good payoff to define a single recursive function that does both the inner and outer recursive loops, though I don't know how feasible that is here. – xnor Aug 26 '16 at 19:21 • @xnor "Welcome to Python golfing"? – Sherlock9 Aug 26 '16 at 19:23 • Oh, sorry, you do golf in Python. I associate you with Actually. – xnor Aug 26 '16 at 19:28 • @xnor Long, long before I started in Actually, I was golfing in Python. I'm a little miffed that you don't remember :P – Sherlock9 Aug 26 '16 at 19:30 • I can't speak for xnor, but I recognize users mainly by their avatar. – Dennis Aug 27 '16 at 2:26
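Not a competing entry: for readers who want a plain reference implementation of the specification (row sizes in descending order assumed), an ungolfed Python version:

```python
def hook_length_product(rows):
    """Product of hook lengths of the Young diagram with the given
    row sizes, e.g. rows = [5, 3, 3, 1] (descending order)."""
    # Column j contains one box for every row longer than j.
    cols = [sum(1 for r in rows if r > j) for j in range(rows[0])]
    prod = 1
    for i, r in enumerate(rows):
        for j in range(r):
            arm = r - j - 1        # boxes to the right in row i
            leg = cols[j] - i - 1  # boxes below in column j
            prod *= arm + leg + 1  # +1 counts the box itself
    return prod
```

On the test cases above, `hook_length_product([5, 3, 3, 1])` gives 115200 and `hook_length_product([5, 4, 3, 2, 1])` gives 4465125.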
https://hal-cea.archives-ouvertes.fr/cea-00828207
# Convection and differential rotation properties of G and K stars computed with the ASH code

* Corresponding author

Abstract : The stellar luminosity and depth of the convective envelope vary rapidly with mass for G- and K-type main sequence stars. In order to understand how these properties influence the convective turbulence, differential rotation, and meridional circulation, we have carried out 3D dynamical simulations of the interiors of rotating main sequence stars, using the anelastic spherical harmonic (ASH) code. The stars in our simulations have masses of 0.5, 0.7, 0.9, and 1.1 M☉, corresponding to spectral types K7 through G0, and rotate at the same angular speed as the sun. We identify several trends of convection zone properties with stellar mass, exhibited by the simulations. The convective velocities, temperature contrast between up- and downflows, and meridional circulation velocities all increase with stellar luminosity. As a consequence of the trend in convective velocity, the Rossby number (at a fixed rotation rate) increases and the convective turnover timescales decrease significantly with increasing stellar mass. The three lowest-mass cases exhibit solar-like differential rotation, in the sense that they show a maximum rotation rate at the equator and a minimum at higher latitudes, but the 1.1 M☉ case exhibits anti-solar rotation. At low mass, the meridional circulation is multi-cellular and aligned with the rotation axis; as the mass increases, the circulation pattern tends toward a unicellular structure covering each hemisphere in the convection zone.
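The Rossby-number trend quoted in the abstract follows directly from the definitions: a stellar Rossby number can be taken as the rotation period over the convective turnover time, and the turnover time as convection-zone depth over convective speed. A back-of-envelope sketch (all numbers below are invented for illustration and are not from the paper):

```python
DAY = 86400.0        # seconds
P_ROT = 28 * DAY     # roughly the solar rotation period, used for all cases

def turnover_time(depth_m, v_conv_m_per_s):
    """Convective turnover timescale: convection-zone depth / typical speed."""
    return depth_m / v_conv_m_per_s

def rossby(p_rot_s, tau_c_s):
    """Stellar Rossby number: rotation period over turnover time."""
    return p_rot_s / tau_c_s

# Hypothetical low- and high-mass cases at the same (solar) rotation rate:
# the more luminous star convects faster, so tau_c drops and Ro rises.
ro_low  = rossby(P_ROT, turnover_time(1.5e8, 30.0))   # slow convection
ro_high = rossby(P_ROT, turnover_time(2.0e8, 200.0))  # fast convection
```

At fixed rotation, the faster-convecting (more massive, more luminous) case ends up with the shorter turnover time and hence the larger Rossby number, which is the trend the simulations exhibit.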
Keywords :
Document type : Journal articles
Complete list of metadata
Cited literature [32 references]

https://hal-cea.archives-ouvertes.fr/cea-00828207
Contributor : Marianne Leriche
Submitted on : Wednesday, September 30, 2020 - 5:15:42 PM
Last modification on : Wednesday, September 30, 2020 - 5:17:51 PM
Long-term archiving on : Monday, January 4, 2021 - 8:43:21 AM

### File

Brow1.pdf
Files produced by the author(s)

### Citation

S.P. Matt, O. Do Cao, B.P. Brown, A.S. Brun. Convection and differential rotation properties of G and K stars computed with the ASH code. Astronomical Notes / Astronomische Nachrichten, Wiley-VCH Verlag, 2011, 332 (9-10), pp.897-906. ⟨10.1002/asna.201111624⟩. ⟨cea-00828207⟩
http://pballew.blogspot.com/2015/06/on-this-day-in-math-june-5.html
## Friday, 5 June 2015

### On This Day in Math - June 5

If your ideas are any good, you'll have to ram them down people's throats.
~Howard Aiken

The 156th day of the year; 156 is the number of graphs with six vertices. *What's So Special About This Number.

$( \pi(1)+\pi(5)+\pi(6)) * (p_1 + p_5 + p_6) = 156$. 156 is the smallest number for which this is true, and the only even number for which it is true. (The symbols $\pi(n)$ and $p_n$ represent the number of primes less than or equal to n, and the nth prime, respectively.)

156 is evenly divisible by 12, the sum of its digits. Numbers which are divisible by the sum of their digits are usually called Niven numbers. According to an article in the Journal of Recreational Mathematics, the origin of the name is as follows. In 1977, Ivan Niven, a famous number theorist, presented a talk at a conference in which he mentioned integers which are twice the sum of their digits. Then, in a 1982 article written in honor of Niven, Kennedy christened numbers which are divisible by their digital sum "Niven numbers." One might try to find the smallest strings of consecutive Niven numbers with more than a single digit. *http://trottermath.net/niven-numbers/ I wonder about the relative order of the classes of numbers which are n times their digit sum for various n.

EVENTS

1661 Newton admitted to Trinity College. He was admitted as a "sizar", which meant he earned part of the cost of his education by doing menial chores. His mother was quite wealthy enough to pay his tuition, but was unsure about his prospects at college since he seemed to be such a poor farmer. Mama and Junior seemed to have an unsteady relationship. He once admitted to his diary, in a list of sins, "Threatening my father and mother Smith to burn them and the house over them."

1828 The final meeting of the Board of Longitude in Greenwich. This was the 243rd meeting of the Board since its creation in 1714.
John Barrow, Second Secretary of the Admiralty, chaired the meeting. On July 15th the Board was dissolved by Parliament.

1833 Ada Lovelace first meets Charles Babbage at the home of Mary Somerville. She is known to have assisted Charles Babbage in the design of an "analytical engine", an early mechanical computing device. She is often credited with writing the first computer program. Ada's mother, Lady Byron, had intentionally schooled Ada in the sciences and mathematics to counteract the "poetic tendencies" she might have inherited from her father. Ada knew Mary Somerville and Augustus de Morgan socially and received some math instruction from both. She died of uterine cancer in November of 1852, only 36 years of age, and was buried beside Lord Byron, the father she never knew, in the parish church of St. Mary Magdalene, Hucknall, in the UK. In 1980, 165 years after Ada's birth, the US Defense Department announced a powerful new computer language. They named it Ada in honour of the Countess of Lovelace's important role in the history of computing. It may be of interest to students of mathematics and computer science that Ada Lovelace's husband, also named William, was the 19th-century Baron of Ockham, the same Ockham associated with the 14th-century William of Occam, for whom Occam's Razor is named.

1873 The term "radian" first appeared in print. Some suggest it may have been intended as an abbreviation for "RADIus ANgle". Here is a quote from Cajori's History of Mathematical Notations, vol. 2 (1929), as provided by Julio Cabellion to the Historia-Matematica newsgroup: "An isolated matter of interest is the origin of the term 'radian', used with trigonometric functions. It first appeared in print on June 5, 1873, in examination questions set by James Thomson at Queen's College, Belfast. James Thomson was a brother of Lord Kelvin. He used the term as early as 1871, while in 1869 Thomas Muir, then of St. Andrew's University, hesitated between 'rad', 'radial' and 'radian'.
In 1874, T. Muir adopted 'radian' after a consultation with James Thomson. (+)" (+) _Nature_, Vol. 83, pp. 156, 217, 459, 460.

The concept of radian measure, as opposed to the degree of an angle, should probably be credited to Roger Cotes. According to a recent post to a math history newsgroup by Bob Stein: "He then calculated this as approximately 57.295 degrees. He had the radian in everything but name, and he recognized its naturalness as a unit of angular measure."

1929 The US Post Office issued a 2 cent stamp commemorating the Golden Jubilee of Edison's electric lamp. On Dec 31, 1879, Edison gave the first public demonstration of his new incandescent lamp when he lit up a street in Menlo Park, New Jersey. The Pennsylvania Railroad Company ran special trains to Menlo Park on the day of the demonstration in response to public enthusiasm over the event. Although the first incandescent lamp had been produced 40 years earlier, no inventor had been able to come up with a practical design until Edison embraced the challenge in the late 1870s. His patent would be approved on January 27, 1880. *history.com

1943 Contract signed to develop ENIAC with the Moore School at the University of Pennsylvania.

1977 The first personal computer, the Apple II, went on sale. It was the invention of Steve Wozniak and Steve Jobs. It had the 6502 microprocessor, the ability to do hi-res and lo-res color graphics, sound, joystick input, and cassette tape I/O. It had a total of eight expansion slots for adding peripherals. Clock speed was 1 MHz and, with Apple's Language Card installed, standard memory size was 64 KB. (The Apple I designation referred to an earlier computer that was not much more than a board. You had to supply your own keyboard, monitor and case.) The Apple II was one of three prominent personal computers that came out in 1977. Despite its higher price, it quickly pulled ahead of the TRS-80 and the Commodore PET.
*TIS The model pictured must be from after 1979, when the floppy disk drive (1978) and the spreadsheet program VisiCalc (1979) made it a blockbuster.

1995 The first gaseous condensate was produced by Eric Cornell and Carl Wieman at the University of Colorado at Boulder NIST–JILA lab, using a gas of rubidium atoms cooled to 170 nanokelvin (nK) (1.7×10−7 K). For their achievements Cornell, Wieman, and Wolfgang Ketterle at MIT received the 2001 Nobel Prize in Physics. This Bose–Einstein condensate was first predicted by Satyendra Nath Bose and Albert Einstein in 1924–25. Interestingly, Bose's first letter to Einstein was written on June 4, 1924, so the discovery came one day more than exactly 71 years later. *Wik

BIRTHS

1819 John Couch Adams (5 June 1819 – 21 January 1892); In 1878 he published his calculation of Euler's constant (the Euler–Mascheroni constant) to 263 decimal places. (He also calculated the Bernoulli numbers up to the 62nd.) *VFR The Euler–Mascheroni constant is the limiting value of the difference between the sum of the first n terms of the harmonic series and the natural log of n. (Not 263 places, but the approximate value is 0.5772156649015328606065...) He also predicted the location of the then unknown planet Neptune, but it seems he failed to convince Airy to search for the planet. Independently, Urbain Le Verrier predicted its location, and then assisted Galle at the Berlin Observatory in locating the planet on 23 September 1846. As a side note, when he was appointed to a Regius position at St. Andrews in Scotland, he was the last professor ever to have to swear an oath of "abjuration and allegiance", swearing fealty to Queen Victoria and abjuring the Jacobite succession. The need for the oath was removed by the 1858 Universities Scotland Act. Adams made many other contributions to astronomy, notably his studies of the Leonid meteor shower (1866), where he showed that the orbit of the meteor shower was very similar to that of a comet.
He was able to correctly conclude that the meteor shower was associated with the comet.

1883 John Maynard Keynes born (5 June 1883 – 21 April 1946), a British economist whose ideas have profoundly affected the theory and practice of modern macroeconomics, as well as the economic policies of governments. He greatly refined earlier work on the causes of business cycles, and advocated the use of fiscal and monetary measures to mitigate the adverse effects of economic recessions and depressions. His ideas are the basis for the school of thought known as Keynesian economics, as well as its various offshoots. *Wik In one of Whitehead's logic classes he was the only student. Keynes worked on the foundations of probability.

1888 Gregor Michailowitch Fichtenholz (5 June 1888, Odessa – 25 June 1959, Leningrad), the founder of the Leningrad school of function theory. *VFR

1900 Dennis Gabor (5 Jun 1900 – 8 Feb 1979, at age 78) Hungarian-born British electrical engineer who won the Nobel Prize for Physics in 1971 for his invention of holography, a system of lensless, three-dimensional photography that has many applications. He first conceived the idea of holography in 1947 using conventional filtered-light sources. Because such sources gave either too little light or light that was too diffuse, holography was not commercially feasible until the invention of the laser (1960), which amplifies the intensity of light waves. He also did research on high-speed oscilloscopes, communication theory, physical optics, and television. Gabor held more than 100 patents. *TIS

1904 George McVittie studied at Edinburgh and Cambridge. He then held posts at Leeds, Edinburgh and London and became Professor of Astronomy at the University of Illinois. His main work was in relativity and cosmology. *SAU More detail of his life can be found in this obituary.

DEATHS

1716 Roger Cotes (10 July 1682 – 5 June 1716) died at age 33 of a violent fever. Sir Isaac Newton, speaking of Mr.
Cotes, said, "If he had lived we might have known something." See Ronald Gowing's Roger Cotes, Natural Philosopher, pp. 136 and 142. *VFR A really nice bio of Cotes is at the Renaissance Mathematicus blog by Thony Christie.

1940 Augustus Edward Hough Love (17 April 1863, Weston-super-Mare – 5 June 1940, Oxford), British geophysicist and mathematician who discovered a major type of earthquake wave that was subsequently named for him. Love assumed that the Earth consists of concentric layers that differ in density and postulated the occurrence of a seismic wave confined to the surface layer (crust) of the Earth which propagated between the crust and underlying mantle. His prediction was confirmed by recordings of the behaviour of waves in the surface layer of the Earth. He proposed a method, based on measurements of Love waves, to measure the thickness of the Earth's crust. In addition to his work on geophysical theory, Love studied elasticity and wrote A Treatise on the Mathematical Theory of Elasticity, 2 vol. (1892-93). *TIS

1964 Tadashi Nakayama or Tadasi Nakayama (July 26, 1912 – June 5, 1964) was a mathematician who made important contributions to representation theory. He received his degrees from Tokyo University and Osaka University and held permanent positions at Osaka University and Nagoya University. He had visiting positions at Princeton University, Illinois University, and Hamburg University. Nakayama's lemma, Nakayama algebras, and Nakayama's conjecture are named after him.

Credits : *CHM=Computer History Museum *FFF=Kane, Famous First Facts *NSEC= NASA Solar Eclipse Calendar *RMAT= The Renaissance Mathematicus, Thony Christie *SAU=St Andrews Univ. Math History *TIA = Today in Astronomy *TIS= Today in Science History *VFR = V Frederick Rickey, USMA *Wik = Wikipedia *WM = Women of Mathematics, Grinstein & Campbell
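A quick coda to the Niven-number note at the top of the post: the suggested search for consecutive multi-digit Niven numbers takes only a few lines of Python (my sketch, not from the post; the function names are mine):

```python
def is_niven(n):
    """A Niven (harshad) number is divisible by the sum of its digits."""
    return n % sum(int(d) for d in str(n)) == 0

def first_niven_run(length, limit=10**6):
    """Smallest multi-digit start of `length` consecutive Niven numbers,
    or None if no such run starts below `limit`."""
    for n in range(10, limit):
        if all(is_niven(n + k) for k in range(length)):
            return n
    return None
```

For example, 156 passes (156 / 12 = 13, as the post notes), and the first multi-digit pair of consecutive Niven numbers is 20, 21.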
https://bugs.php.net/bug.php?id=44859
Bug #44859 is_readable() returns incorrect result with NTFS ACL permissions

Submitted: 2008-04-29 00:27 UTC
Modified: 2009-05-17 19:54 UTC
Votes: 2
From: phpbugs at steve dot ipapp dot com
Assigned: pajoye (profile)
Status: Closed
Package: Filesystem function related
PHP Version: 5.2.6
OS: win32 only
Private report: No
CVE-ID: None

[2008-04-29 00:27 UTC] phpbugs at steve dot ipapp dot com

Description:
------------
NTFS ACL permissions can be modified in Windows by right clicking on the file, going to Properties, then Security. Clicking the Everyone user and hitting Deny Read will prevent ANYTHING from reading, even if they have READ permissions granted elsewhere. is_readable() doesn't seem to care and thinks that all these files are readable, when in fact they aren't. is_writeable() probably has the same problem. Previous bugs identified with this have been closed: 41519.

Reproduce code:
---------------
$some_file = 'C:\\path\to\file.txt';
if(is_readable($some_file)) {
    echo file_get_contents($some_file);
} else {
}

Expected result:
----------------
With NTFS ACL permissions set to allow reading: *Contents of File*
With NTFS ACL permissions set to disallow reading:

Actual result:
--------------
With NTFS ACL permissions set to allow reading: *Contents of File*
With NTFS ACL permissions set to disallow reading:
Warning: file_get_contents(C:\\path\to\file.txt) [function.file-get-contents]: failed to open stream: Permission denied in C:\\path\to\script.php on line 4

## History

[2008-07-08 17:11 UTC] carsten_sttgt at gmx dot de

| NT ACL Permissions ...
| is_readable() doesn't seem to care and thinks that all
| these files are readable, when in fact they aren't.
I just ran into the same problem and can verify/reproduce this :-/

| is_writeable() probably has the same problem.

Correct. is_writeable() only looks for the "read-only" file system attribute (this one works). But like is_readable(), it does not make a real UID/GID check on Windows. And there is no similar attribute for is_readable(), so is_readable() is useless on Windows at the moment.

Regards,
Carsten

[2009-05-17 19:54 UTC] [email protected]

This bug has been fixed in CVS. Snapshots of the sources are packaged every three hours; this change will be in the next snapshot. You can grab the snapshot at http://snaps.php.net/.

Thank you for the report, and for helping us make PHP better.

fixed in 5.3 and HEAD (6)
https://standards.globalspec.com/std/798178/api-mpms-8-4
# API MPMS 8.4

## Manual of Petroleum Measurement Standards Chapter 8 - Sampling Section 4 - Standard Practice for Sampling and Handling of Fuels for Volatility Measurement

inactive
Organization: API
Publication Date: 1 December 2004
Status: inactive
Page Count: 18

##### scope:

This practice covers procedures and equipment for obtaining, mixing, and handling representative samples of volatile fuels for the purpose of testing for compliance with the standards set forth for volatility related measurements applicable to light fuels. The applicable dry vapor pressure equivalent range of this practice is 13 to 105 kPa (2 to 16 psia). This practice is applicable to the sampling, mixing, and handling of reformulated fuels including those containing oxygenates. The values stated in SI units are to be regarded as the standard except in some cases where drawings may show inch-pound measurements which are customary for that equipment. This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use. *A Summary of Changes section appears at the end of this standard.

### Document History

May 1, 2020
Manual of Petroleum Measurement Standards Chapter 8.4 Standard Practice for Sampling and Handling of Fuels for Volatility Measurement
This practice covers procedures and equipment for obtaining, mixing, and handling representative samples of volatile fuels for the purpose of testing for compliance with the standards set forth for...
December 1, 2017
Manual of Petroleum Measurement Standards Chapter 8.4 Standard Practice for Sampling and Handling of Fuels for Volatility Measurement
This practice covers procedures and equipment for obtaining, mixing, and handling representative samples of volatile fuels for the purpose of testing for compliance with the standards set forth for...

March 1, 2014
Manual of Petroleum Measurement Standards Chapter 8.4 Standard Practice for Sampling and Handling of Fuels for Volatility Measurement
This practice covers procedures and equipment for obtaining, mixing, and handling representative samples of volatile fuels for the purpose of testing for compliance with the standards set forth for...

December 1, 2004
Manual of Petroleum Measurement Standards Chapter 8 - Sampling Section 4 - Standard Practice for Sampling and Handling of Fuels for Volatility Measurement
This practice covers procedures and equipment for obtaining, mixing, and handling representative samples of volatile fuels for the purpose of testing for compliance with the standards set forth for...

January 1, 1995
Manual of Petroleum Measurement Standards Chapter 8 - Sampling Section 4 - Standard Practice for Manual Sampling and Handling of Fuels for Volatility Measurement
1 Scope 1.1 The applicable dry vapor pressure equivalent range of this standard is 13-105 kilopascals (2-16 pounds per square inch absolute). 1.2 This standard is applicable to the sampling, mixing,...
https://coderunner.org.nz/mod/forum/discuss.php?d=37
Question Authors' Forum

Programming statistics in R

If not then how much work would this be to add?
Thanks, Chris

Re: Programming statistics in R

I asked about this and we did a little investigation about a year ago, but then I ran out of free time and got distracted ... We started by looking at a basic R install to see what configuration options would be useful. I do remember noting down the --silent option. There did seem to be potential for creating some resources that our stats students, who need to use R but are not programmers, could use to at least start finding their feet with it.

Jenny

Re: Programming statistics in R

Just to give you a bit of background to Jenny's answer ... When Jenny raised the question of R a year or two ago, my first response was that it would be trivial to ask R questions of the simple "write-an-R-function" or "write-an-R-program" variety. I installed R-base on our Jobe server (5 mins work) and we used a Python3 question (with R as the Ace language) to run R. That was easy enough. Here's a possible template:

    import subprocess

    r_prog = """{{ STUDENT_ANSWER | e('py') }}"""
    r_prog += "\n" + """{{TEST.testcode | e('py')}}"""
    with open('prog.r', 'w') as fout:
        fout.write(r_prog)
    cmd = "R --slave --vanilla"
    subprocess.call(cmd.split(), stdin=open('prog.r'), universal_newlines=True)

With that question type you can ask questions like the following:

However, as I recall that wasn't the sort of question that Jenny thought the stats lecturers would want to ask their students, who aren't really programmers. So then you get into the much harder issue of "What sort of question do you really want to ask?".

Richard

Re: Programming statistics in R

Thank you both for such swift and helpful replies. I'm very reassured that CodeRunner can, at least at a technical level, accept R code. I'm not sure exactly what my colleagues have in mind.
We will be running on a server which also has my own STACK question type installed (https://github.com/maths/moodle-qtype_stack) and I think a combination of the normal Moodle questions, STACK and CodeRunner will be an interesting mix of tasks for students, combining mathematical and programming elements. The $10^6$ question is always "What sort of question do you really want to ask?". I'll talk with colleagues about that one...

Chris

Re: Programming statistics in R

I'm in the process of setting up CodeRunner to call "R" so that we can develop some introduction-to-R/stats courses here in Edinburgh. I'm getting a strange setup problem, which I suspect is nothing to do with CodeRunner. I've set up the latest version of Jobe, and CodeRunner ($plugin->version = 2017082200;) etc. CodeRunner works just fine with Python, Java, C, and also with Octave. I've created the attached very simple python code which calls R, and this executes with the following result.

    python3 r.py
    [1] 3
    [1] 1.581139

So, I think I have R on my Jobe server, and python can call R in the way Jobe would expect to (permissions notwithstanding....). I'm using the sample R question referred to above, but I get the error shown in the screen shot.

    ***Error***
    Fatal error: couldn't allocate node stack

Having done a little bit of digging, I think this error is related to node.js. I'm not sure. Does anyone know what is causing this error and how I can fix it please?

Thanks, Chris

Re: Programming statistics in R

This is probably just an out-of-memory error. I'd suggest using Customise > Advanced Customisation and setting MemLimit (MB) for this question to something like 500. The default is 200 MB, which is probably not sufficient for Python + R together. Or you could try setting it to 0 to turn off memory limit checking altogether.

Richard

Re: Programming statistics in R

Thanks Richard, I've tried this. Even with 0, to remove limit checking, this error persists.
Chris

Re: Programming statistics in R

I've downloaded R and found the actual error you're getting. It's in r-source/src/main/memory.c:

    R_BCNodeStackBase = (R_bcstack_t *) malloc(R_BCNODESTACKSIZE * sizeof(R_bcstack_t));
    if (R_BCNodeStackBase == NULL)
        R_Suicide("couldn't allocate node stack");

So certainly it's a memory error - a failed call to malloc. I tried making a simple R test question using exactly your prototype and of course it worked fine for me :) Here's the proof:

I attach the exported Moodle XML question; please try importing and running that first off. If that works on your system too, then the problem is in the setting of the memory limit. But I suspect it will fail on your system too. In which case: tell me a bit about your Jobe server. What version of Linux is it running on? Were any non-standard actions taken during the install?

Richard

Re: Programming statistics in R

Thank you Richard,

This is now working. I think the explicit memory limit 0 has fixed this. Your help is much appreciated. We can take this from here.

Chris

Re: Programming statistics in R

Good to know the problem is fixed. However, I'd advise against using 0 for the memory limit in a production question because a memory-gobbling submission might then be able to cripple Jobe. It's safer to find a value at which a typical submission will run OK and then, say, double it. I'm also curious as to why your Jobe server seems more prone to memory limit problems than ours. Are you perhaps running it on a 32-bit OS?

Richard
Please realise I'm not an R programmer and cannot offer any R-specific support. Richard
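For anyone wanting to reproduce the out-of-memory behaviour discussed above without a Jobe server: here is a hedged Python sketch of mine (Unix only; this is an illustration, not how Jobe actually enforces MemLimit) showing a malloc-style failure under an address-space cap, analogous to R's "couldn't allocate node stack":

```python
import resource

def call_with_as_cap(cap_bytes, fn):
    """Call fn() under a temporary RLIMIT_AS (address-space) soft cap.
    A refused allocation then surfaces as MemoryError instead of
    crashing the host, much as R's node-stack malloc returns NULL
    when the sandbox's memory limit is too low."""
    soft, hard = resource.getrlimit(resource.RLIMIT_AS)
    resource.setrlimit(resource.RLIMIT_AS, (cap_bytes, hard))
    try:
        return fn()
    except MemoryError:
        return None          # allocation refused; process survives
    finally:
        # restore the original soft limit
        resource.setrlimit(resource.RLIMIT_AS, (soft, hard))

# Under a ~256 MB cap, a 512 MB buffer cannot be allocated:
big = call_with_as_cap(256 * 2**20, lambda: bytearray(512 * 2**20))
```

This is also why a too-generous limit of 0 (no checking) is risky on a shared sandbox, as noted above: the cap is what converts a runaway allocation into a clean failure.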
https://phys.libretexts.org/Courses/University_of_California_Davis/UCD%3A_Physics_7C/10%3A_Electromagnetism/10.1%3A_Fields/10.1.3%3A_Fields_in_Physics
# 10.1.3: Fields in Physics

There are three fields in which we will be interested in Physics 7C:

1. the Gravitational Field
2. the Electric Field
3. the Magnetic Field

Currently, the most familiar of these is the gravitational field, so the motivation for using fields will start here.

## The Gravitational Field of Earth

Let us start by making a simple statement, which is very imprecise: the Earth’s gravity is stronger than the Moon’s gravity. Justification for this statement comes from watching videos of the astronauts on the Moon, who fall more slowly and leap higher than they would on Earth. In science we must be more precise. If we calculate the force of the Moon on the Apollo lander, we find it is much greater than the force of the Earth on an apple; we see that we cannot make a blanket statement that the force of gravity on Earth is always greater than the force of gravity on the Moon.

The solution to comparing the Earth's and Moon's gravities is rather simple: we need to compare apples to apples. If we ask what $$\mathbf{F}_{\text{Earth on apple}}$$ is and what $$\mathbf{F}_{\text{Moon on apple}}$$ is, then we find $$\mathbf{F}_{\text{Earth on apple}} > \mathbf{F}_{\text{Moon on apple}}$$. More generally, it is true for any object $$X$$:

$| \mathbf{F}_{\text{Earth on X}} \text{ (at surface of Earth)}| > | \mathbf{F}_{\text{Moon on X}} \text{ (at surface of Moon)}|$

This is a more precise version of what we mean when we say that the Earth’s gravity is stronger than the Moon’s gravity. We can actually do a little better than this: the force of gravity does not distinguish between apples, oranges, or skyscrapers. If we could build a skyscraper with the mass of an apple, $$\mathbf{F}_{\text{Earth on skyscraper}}$$ would be the same as $$\mathbf{F}_{\text{Earth on apple}}$$; to compare the strength of the gravitational field we don’t need to use exactly the same object, just two objects with the same mass. Now let's come back to the gravitational field.
Recall from Physics 7B that the force of gravity between two masses is

$| \mathbf{F}_{\text{mass 1 on mass 2}}| = \dfrac{G M_1 M_2}{r^2} = M_2 \left( \dfrac{G M_1}{r^2} \right)$

where $$r$$ is the distance between the centers of mass, and the direction of the force pulls the masses together. $$G$$ is known as the universal gravitational constant, and is equal to $$6.67 \times 10^{-11} \text{ N m}^2 \text{ kg}^{-2}$$. $$G$$ is a universal constant, meaning that it takes the same value regardless of the problem we are doing. Because $$G$$ is so small, we do not notice the gravitational attraction of objects around us unless the object has an enormous mass.

Now let's ask the question: “what would the force of the Earth be on an object of mass 1 kg located a distance $$r$$ away?" We can calculate this:

$| \mathbf{F}_{\text{Earth on 1 kg object}} | = (1 \text{ kg}) \times \left( \dfrac{G M_{Earth}}{r^2} \right)$

If we agree to always compare the gravity of an object by referring to the force it would exert on a second 1 kg mass, then we can do this for any second mass. We find that for any object:

$| \mathbf{F}_{\text{Earth on object}}| = M_{object} \left( \dfrac{G M_{Earth}}{r^2} \right) = M_{object} |\mathbf{g}_{Earth} |$

The quantity in parentheses, which refers only to the Earth and the distance, is the gravitational field of the Earth. We denote it $$\mathbf{g}_{Earth}$$:

$| \mathbf{g}_{Earth} | \equiv \dfrac{G M_{Earth}}{r^2}$

As this definition might suggest, $$\mathbf{g}_{Earth}$$ is a vector field with units of acceleration that points towards the Earth. Once we know $$\mathbf{g}_{Earth}$$ we can easily calculate the force on any other mass:

$|\mathbf{F}_{\text{Earth on object}} | = M_{object} | \mathbf{g}_{Earth} |$

We have seen this relationship many times before (this is why we chose to call the gravitational field $$\mathbf{g}_{Earth}$$ rather than some other letter).
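As a quick numerical sanity check of the definition above (an illustrative sketch, not part of the original text; the constants are the ones quoted in this section), we can evaluate $$|\mathbf{g}_{Earth}|$$ at the Earth's surface:

```python
# Evaluate |g_Earth| = G * M_Earth / r^2 at the Earth's surface,
# using the constants quoted in this section.
G = 6.67e-11        # universal gravitational constant, N m^2 kg^-2
M_earth = 5.98e24   # mass of the Earth, kg
r_earth = 6.38e6    # radius of the Earth, m

g_surface = G * M_earth / r_earth**2
print(g_surface)    # about 9.8 N/kg
```

The result is the familiar 9.8 N/kg used in the worked example below.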
Like all fields, the value of $$\mathbf{g}_{Earth}$$ depends on position; it decreases with increasing $$r$$, the distance from the center of the Earth to the position of interest.

## The Direct and Field Model of Forces

In the way that we have introduced it, the gravitational field is simply a shortcut. Instead of saying “the force a 1 kg object would feel, if placed here, is 5 N” we can simply say “the gravitational field here is 5 N/kg”. The field is not necessary to determine the gravitational force between two objects; it is simply convenient. We will see later that we actually need to talk about fields if we want energy and momentum to be conserved, but for now we will simply treat them as a shortcut.

With this in mind, we have two separate ways of discussing how a gravitational force acts between two objects. The first is where we calculate the force by putting numbers into Newton’s gravitational law without considering the field:

Direct Model: $\text{Object #1} \xrightarrow{\text{creates force on}} \text{Object #2}$

The other way we could think about this is in our new language of fields; one mass creates the field and the other feels its effects:

Field Model: $\text{Object #1} \xrightarrow{\text{creates field}} \mathbf{g}_{\text{obj #1}} \xrightarrow{\text{exerts force on}} \text{Object #2}$

Instead of thinking of one object directly exerting a force on another, we think of one object (referred to as the “source”) creating a field and then that field exerting a force on the second object (sometimes referred to as the “test object”). Of course, both methods are calculations of the same thing and yield the same answer, as the next example will show:

Example #1

A. What gravitational force does the Earth exert on a 2 kg book on its surface?

B. What gravitational force does the Earth exert on the same book 10,000 km above its surface?

Use both the direct and field methods. (The mass and radius of the Earth can be found online.)

Solution

A.
Direct Method

After looking up the mass and radius of the Earth we can use Newton's law:

$\mathbf{F}_{\text{Earth on book}} = \dfrac{GM_{Earth}M_{book}}{r^2_{Earth}} = \dfrac{(6.67 \times 10^{-11} \text{ N m}^2 \text{ kg}^{-2})(5.98 \times 10^{24} \text{ kg})(2 \text{ kg})}{(6{,}380{,}000 \text{ m})^2} = 19.6 \text{ N}$

A. Field Method

We know that $$|\mathbf{g}_{Earth}| = 9.8 \text{ N/kg}$$ at the surface of the Earth. Normally we approximate this as 10 N/kg, but let us be more precise for this example.

$\mathbf{F}_{\text{Earth on book}} = M_{book} \mathbf{g}_{Earth} = (2 \text{ kg})(9.8 \text{ N/kg}) = 19.6 \text{ N}$

Notice how this calculation was much easier, since we already knew $$\mathbf{g}_{Earth}$$.

B. Direct Method

This proceeds almost exactly as before. The two tricky points here are that we have to recall that $$r$$ is the distance from the center of the Earth, and that we must change 10,000 km into meters.

\begin{align} \mathbf{F}_{\text{Earth on book}} &= \dfrac{GM_{Earth}M_{book}}{(r_{Earth} + 10{,}000 \text{ km})^2} \\[5pt] &=\dfrac{(6.67 \times 10^{-11} \text{ N m}^2 \text{ kg}^{-2})(5.98 \times 10^{24} \text{ kg})(2 \text{ kg})}{(6{,}380{,}000\text{ m} + 10{,}000{,}000 \text{ m})^2} \\[5pt] &= 3.0 \text{ N} \end{align}

B. Field Method

We don't know $$\mathbf{g}_{Earth}$$ at a distance of 10,000 km from the surface of the Earth off the top of our heads, so we calculate it first:

$\mathbf{g}_{Earth} \text{ (at 10,000 km from the surface)} = \dfrac{GM_{Earth}}{(r_{Earth} + 10{,}000 \text{ km})^2} = \dfrac{(6.67 \times 10^{-11} \text{ N m}^2 \text{ kg}^{-2})(5.98 \times 10^{24} \text{ kg})}{(6{,}380{,}000\text{ m} + 10{,}000{,}000 \text{ m})^2} = 1.5 \text{ N/kg}$

We can then calculate the force the Earth exerts on the book:

$\mathbf{F}_{\text{Earth on book}} = M_{book} \mathbf{g}_{Earth} = (2 \text{ kg})(1.5 \text{ N/kg}) = 3.0 \text{ N}$

Because we did not know $$\mathbf{g}_{Earth}$$ before starting the problem, the field method was longer.
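As a cross-check (an illustrative sketch, not part of the original solution), both methods of part B can be run numerically with the constants quoted above; they agree to rounding:

```python
# Cross-check of part B: the direct method and the field method must agree.
G = 6.67e-11        # N m^2 kg^-2
M_earth = 5.98e24   # kg
m_book = 2.0        # kg
r = 6.38e6 + 1.0e7  # distance from the center of the Earth, m

# Direct method: Newton's law in one step.
F_direct = G * M_earth * m_book / r**2

# Field method: compute the field first, then the force.
g_here = G * M_earth / r**2
F_field = m_book * g_here

print(F_direct, F_field)   # both about 3.0 N; g_here is about 1.5 N/kg
```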
But if we were asked to do the same calculation for a different mass 10,000 km above the Earth’s surface, we would now have $$\mathbf{g}_{Earth}$$ and could do it much more quickly.

## Which Mass Creates the Field? (Newton's Third Law)

In the example above, when using the field method, we decided that the Earth would create the field and the book would respond to it. This seems quite acceptable, as we are used to the Earth exerting a gravitational force. But what if we used a book and a chair in our example? Which would be the “source” of the gravitational field, and which would be pulled by the field?

To work out the answer to this question, let us think about the same problem using the direct model of forces. Calculating the magnitude of the force of the book on the chair gives

$|\mathbf{F}_{\text{Book on chair}}| = \dfrac{GM_{book}M_{chair}}{r^2}$

where $$r$$ is the distance between them. Now let us calculate the force of the chair on the book:

$|\mathbf{F}_{\text{Chair on book}}| = \dfrac{GM_{book}M_{chair}}{r^2} = |\mathbf{F}_{\text{Book on chair}}|$

These forces have the same magnitude, but pull in opposite directions. This is not a coincidence, but a consequence of Newton’s third law that we learned in 7B:

$\mathbf{F}_{\text{A on B}} = - \mathbf{F}_{\text{B on A}}$

In the language of the field model, the answer is that both the chair and the book create a field. To do the complete problem in the field model we would have to look at both the book's field acting on the chair and the chair's field acting on the book:

$\text{Book} \xrightarrow{\text{creates field}} \mathbf{g}_{Book} \xrightarrow{\text{exerts force on}} \text{Chair}$

$\text{Chair} \xrightarrow{\text{creates field}} \mathbf{g}_{Chair} \xrightarrow{\text{exerts force on}} \text{Book}$

An important consequence of this is that to be affected by a field, an object must also create a field of the same type.
Note that an object does not feel its own field, only the field of all external objects. In other words, the object does not self-interact; but to feel an external gravitational field, for example, it must also create its own gravitational field. While an object that feels a field must also create the same kind of field, when we are emphasizing an object's ability to create a field we refer to it as the "source" of the field. When we emphasize an object's response to a field we call it the "test" object. We can adopt this convention whenever we can ignore the response of the source object to the field of the test object. However, as we just learned, both objects create a field and are affected by the other's field.

For gravity, any object with mass creates and feels a gravitational field. For the electric field, any object with (electric) charge is a source and feels the field. For the magnetic field, as discussed later, the source is moving electric charges. Likewise, charges feel magnetic fields only when they are in motion.

Exercise

In Example #1 above, would we have to worry about the force of the book on the Earth? If not, why not?

## Field Lines

We learned above that the gravitational field of the Earth is given by $$|\mathbf{g}_{Earth}| = \dfrac{GM_{Earth}}{r^2}$$, where $$r$$ is the distance of the test object from our source, the Earth. A similar argument can be made for any spherical object with mass. We can see using this method that the gravitational field created by a spherical mass at a distance $$r$$ away is

$|\mathbf{g}_{spherical}| = \dfrac{GM_{spherical}}{r^2}$

This equation also applies to point masses. The direction of the gravitational field always pulls inward, toward the mass. To represent the field, let's create a field map by picking a set of points and drawing vectors to indicate the direction and magnitude of the gravitational field. The vectors only refer to the value of the field at the location where they start.
The actual length of the vectors is arbitrary, because we could always define some scale to convert the vectors to the right lengths and units. The ratio of lengths between two different vectors is not arbitrary, however: to accurately represent the vector field, all vectors must be scaled down for the field map in exactly the same way. If we look at the previous field map of the Earth’s gravitational field, we see that far from the Earth it is almost impossible to read the direction of the arrows. We could enlarge them, but then we'd need to scale up every other arrow just as much; some of the arrows near the Earth are already quite big, so this can become very messy. Furthermore, the size of the arrows limits the amount of information we can put on that part of the map.

To address these shortcomings, we introduce a different representation of a vector field: field lines. To construct field lines, we draw continuous lines starting at chosen points, always following the direction of the field. An example for the gravitational field of the Earth is shown below.

Notice that the length of the arrows no longer corresponds to the strength of the field, so the strength cannot be read directly from this picture. However, this picture contains all the same information as the field map. While we cannot look at the length of the arrows to get the strength of the field, we can instead look at how closely packed the field lines are. The closer together the lines (near the Earth, in this example), the stronger the field; the further apart the field lines (far from the Earth), the weaker the field. If we start with the field-line diagram, we can reconstruct the field at any given point in the following way:

• Direction: Take a tangent to the field line at that point. This is the direction of the vector field at that point.
• Magnitude: Given by the density of the surrounding lines (denser lines mean longer vectors).
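The field-map recipe can be sketched numerically. This toy example (my own illustration, with $$GM$$ set to 1 for simplicity) evaluates the field vector of a point mass at a grid of sample points:

```python
import math

def g_vector(x, y, GM=1.0):
    """Field of a point mass at the origin, evaluated at (x, y):
    magnitude GM / r^2, direction pointing back toward the origin."""
    r = math.hypot(x, y)
    mag = GM / r**2
    return (-mag * x / r, -mag * y / r)

# A small field map: one vector per grid point (the origin itself is skipped,
# since the field of a point mass is not defined there).
field_map = {(x, y): g_vector(x, y)
             for x in (-2.0, -1.0, 1.0, 2.0)
             for y in (-2.0, -1.0, 1.0, 2.0)}
```

Doubling the distance from the source quarters the vector's length, which is exactly the thinning-out of arrows far from the Earth that the text describes.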
While the vector map is the most direct representation of the field, we will return to using field lines when it is convenient.

|      | Vector Map | Field Lines |
|------|------------|-------------|
| Pros | Most direct representation of the field | Scales well for vectors that differ greatly in magnitude |
| Cons | Hard scaling and readability issues in areas of small field | Must work to reconstruct the magnitude of the field |

## Gauss's Law

One thing that has not been made explicit in the discussion of field lines so far is that they cannot just start or stop at any location. Where field lines start or stop depends on the field. For the gravitational field, field lines start far away and can only stop when they encounter a mass. If there are no masses in a particular region, then field lines cannot be created or destroyed there. The number of field lines that stop at a mass is proportional to the mass of the object encountered. Thus not all the field lines in our Earth example will “stop” at the Earth (they might hit a satellite, for example). Don’t worry about field lines passing through physical objects; remember that field lines are only a representation of the field.

For the electric field, field lines start on positive charges and end on negative charges. This makes electric examples slightly more subtle, because if the number of field lines entering a region is the same as the number leaving, it could indicate either that the region has no charge in it or that it has an equal amount of positive and negative charge. For electric fields, counting the field lines entering and exiting a region only tells us about the net charge in the region, not how many individual charges are in it.

In either case, in regions with no mass (for gravity) or no net charge (for the electric field), field lines cannot be created or destroyed. Therefore, if the number of field lines entering a region is different from the number leaving, we must have a field source (i.e., a mass or electric charge) in that region.
By knowing the difference, we can figure out exactly how much of that source is in the region. This is the essence of Gauss's Law, which we now make more precise. Think of a region inside an imaginary closed surface; in other words, imagine a shape with an inside that you cannot leave unless you go through the surface. A box with a lid is a closed surface, but a cup is not. We can use any imaginary region we desire for Gauss's Law, provided that it is a closed surface.

Once we pick our region, we count how many field lines enter it and how many leave. The only way a field line can get in (or out) is by going through the surface. If a different number of field lines come in than go out, then field lines are being created or destroyed inside. For gravitational field lines, this would indicate a mass; for electric field lines, a net charge. If there is no difference between the field lines entering and leaving, there is no mass (or no net charge) inside, and overall no field lines are being created or destroyed. The number of field lines going through the surface is referred to as the flux. An increase in flux can be represented either with more field lines or with field lines drawn closer together. Gauss’s law is simply the statement that

$\text{Net flux} \propto \text{(# field lines entering)} - \text{(# field lines leaving)}$

$\text{Net flux} \propto \begin{cases} \text{total mass inside surface} & \quad \text{(gravity)} \\ \text{total charge inside surface} & \quad \text{(electric)} \\ \end{cases}$

A useful analogy to a closed surface and field lines is a leaky bucket filled with water. The bucket is not closed, but you can imagine a closed surface that immediately surrounds it. If water is leaking out of the bucket, it is also leaking out of this imaginary surface, so the water must be passing through the surface.
If we measure the flux of water through our surface, we can measure the amount of water that has leaked out of the bucket. Of course, you did not need to consider imaginary surfaces to realize that the bucket is leaking, but this familiar example displays what we do with fields: if field lines are coming out of a surface, then something inside is creating them; if we are losing field lines in a region, something is destroying them.

###### Gauss's Law and Magnetism?

So far Gauss’s law has been discussed for gravitational fields and electric fields, but no mention of magnetic fields has been made. That is because, to the best of our knowledge, magnetic monopoles (pieces of magnetic "charge") do not exist. Instead, all known magnetic fields are created by moving electric charges. As a consequence, magnetic field lines do not “start” or “end” but instead form complete loops. Because magnetic field lines never start or end, the number of magnetic field lines entering a closed surface is always equal to the number of magnetic field lines leaving that surface.
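For the gravitational case, Gauss's law can be checked with one line of arithmetic: the flux of $$\mathbf{g}$$ through a sphere centered on a mass is the field strength times the sphere's area, and the radius cancels. A small sketch (my own illustration, not from the original text), using the Earth's mass for concreteness:

```python
import math

G = 6.67e-11   # N m^2 kg^-2
M = 5.98e24    # enclosed mass, kg (Earth's mass, for concreteness)

def flux_magnitude(radius):
    """|g| times the area of the sphere: the r^2 factors cancel,
    leaving 4*pi*G*M regardless of the radius chosen."""
    g = G * M / radius**2             # field strength on the sphere
    area = 4 * math.pi * radius**2    # surface area of the sphere
    return g * area

print(flux_magnitude(1.0e6), flux_magnitude(1.0e9))  # equal up to rounding
```

This radius independence is exactly the statement that the net flux depends only on the mass inside the surface.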
http://physics.aps.org/articles/v1/20
# Viewpoint: Light finds a way through the maze

John Pendry, The Blackett Laboratory, Department of Physics, Imperial College London, London SW7 2AZ, UK

Published September 15, 2008  |  Physics 1, 20 (2008)  |  DOI: 10.1103/Physics.1.20

#### Universal Optimal Transmission of Light Through Disordered Materials

I. M. Vellekoop and A. P. Mosk

Published September 15, 2008 | PDF (free)

Consider radiation passing through a slab of material. In the absence of disorder, energy flows in an orderly fashion and is distributed among a set of modes or channels defined by the angle of incidence, which remains constant as the light propagates through the slab. Break the slab into random pieces and the orderly progression of energy is disturbed: the modes are mixed up, lots of energy is scattered back to where it came from, and if the disorder is really bad very little energy makes it to the far side of the slab. Now, Ivo Vellekoop and Allard Mosk of the University of Twente report in Physical Review Letters experimental evidence that this scattering may be overcome to allow a large fraction of the incident light to pass through opaque matter [1]. In their experiment they measured how much light is transmitted through a disordered sample. Sure enough, the modes became mixed up and little light emerged. However, they then tried to sneak the light through by sending it at the sample from different angles and with different phases, which were adjusted to optimize transmission of energy (Fig. 1, top panels). Incredibly, transmission could be increased in this way by almost a factor of $10^3$. In fact, theorists have been predicting this weird effect for some time [2, 3, 4, 5, 6, 7]. When transmission channels mix they form new channels.
Rather than each of these new channels contributing a little bit to the much-reduced transmission, they divide into two classes: channels that are almost completely closed (i.e., blocking light) and channels that are almost completely open (transmitting light). Hence if an experiment could find the open channels and direct all the incident energy into them, very high transmission should always be possible. Light can always find a way through a maze of disorder. The experiment has been many years in the making, since the first theoretical papers, but now these remarkable conclusions have been confirmed.

The experiment was performed on samples of disordered zinc oxide particles with average diameter of 200 nm, which scatter light so strongly that the mean free path is only $0.85\ \mu\text{m}$. Two sets of samples were used: one 5.7 and the other $11.3\ \mu\text{m}$ thick. Light from a laser was first reflected off a liquid-crystal display, which could be contoured to shape the reflected wavefront, and then focussed onto the samples. The liquid-crystal display had 3816 independently programmable segments that were adjusted to optimize transmission. The results of the optimization are displayed in the lower panels of Fig. 1.

Propagation of waves through disordered systems is one of the most challenging topics in theoretical physics. The existence of open and closed channels as discussed above is only one fascinating aspect. Another is the sudden alteration in the nature of the channels as disorder is increased. Weak disorder results in the diffusive behavior with which we are all familiar, for example, light scattered by milk. This regime is relatively well understood because it can be treated by perturbation theory [2, 4]. At high levels of disorder a phenomenon known as localization sets in, and it is much more difficult to find open channels.
What happens is that the scattering caused by disorder results in destructive interference of the propagating waves, so the light is stopped in its tracks. In fact, for strong disorder the number of open channels falls off exponentially with sample thickness, as opposed to weak disorder, where the number of open channels decreases only inversely with the thickness. This transition to localization has been endlessly debated in the literature and is still not understood. The normally powerful tool of perturbation theory seems not to work for strong disorder, and our understanding remains fragmentary. For this reason the new optical experiments are important. Localization was initially studied in the context of spin diffusion [8] and electron localization, but the additional complication of electron-electron interaction makes those experiments hard to interpret. Photons do not normally interact with one another, removing a layer of complexity but still leaving the very challenging and unsolved problem of disorder in its purest form. Some progress has been made with localized systems, particularly in 1D, where a nonperturbative approach gives a complete solution [9]. Diffusive behavior is never seen in 1D, where scattering must always be considered strong. Nevertheless the open-channel/closed-channel result still holds good, and it now has a physical interpretation [10]. In this picture, a 1D optical system might comprise a stack of perfectly flat transparent plates, each of a random thickness. At random locations and at random frequencies, resonant traps occur that light finds difficult to enter and difficult to escape from. These form the closed channels, nearly always rejecting the light and preventing it from passing through the system. Just occasionally, in a few samples, the resonances might happen to be uniformly spaced across the sample and lined up in frequency, so that incident light can use them as a set of stepping stones, hopping from one resonance to another.
Obviously such an occurrence is quite rare, but when it happens transmission can be nearly 100%. These stepping stones, or “necklace states” as they are sometimes called because of their resemblance to a string of pearls, are the dominant contribution to the average transmission through disordered 1D samples. In a box of random 1D samples most will show negligible transmission; occasionally there will be a sample that has the resonances all lined up, and that one will show very high transmission. Hence the expression “maximal fluctuations” [10, 11]. The necklace states remained a theorists’ pipe dream until recently, when two other groups confirmed their existence through experiment. The group of Diederik Wiersma (University of Florence, Italy) constructed optical systems comprising alternate layers of high- and low-refractive-index materials arranged to have random thicknesses [12]. The crucial point in their experiment was that the layers were extremely flat, so the light was either transmitted or reflected but never altered its angle to the normal; in this way the system is, in effect, one dimensional. They were able to find samples that showed anomalously high transmission and identified the associated necklace states. More or less simultaneously, Azi Genack’s team (City University of New York) worked at microwave frequencies, filling a waveguide with random elements [13]. They too were able to confirm the presence of necklace states. In fact, the open-channel/closed-channel result has been shown to be a universal theorem, independent of dimensionality and independent of whether the disorder is strong or weak. For strong disorder, when systems are localized, the 1D picture of transmission through necklaces of resonances strung from one side of the system to the other gives an intuitive understanding. In 2D and 3D systems the most probable necklaces are the shortest ones and tend to link opposite sides of the sample, giving a more or less direct route across.
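The 1D picture of flat transparent plates with random thicknesses can be played with numerically. The sketch below is my own toy model (not the analysis code of any of the cited experiments), using the standard characteristic-matrix method at normal incidence; for a lossless stack the transmitted and reflected powers must sum to one, and a typical random stack transmits only weakly:

```python
import math
import random

def transmission(indices, thicknesses, wavelength=1.0, n_in=1.0, n_out=1.0):
    """Power transmission and reflection for a lossless 1D layer stack at
    normal incidence (characteristic-matrix / transfer-matrix method)."""
    m11, m12, m21, m22 = 1.0 + 0j, 0j, 0j, 1.0 + 0j    # start from identity
    for n, d in zip(indices, thicknesses):
        delta = 2 * math.pi * n * d / wavelength        # phase thickness of layer
        c, s = math.cos(delta), math.sin(delta)
        a11, a12, a21, a22 = c, 1j * s / n, 1j * n * s, c
        m11, m12, m21, m22 = (m11 * a11 + m12 * a21, m11 * a12 + m12 * a22,
                              m21 * a11 + m22 * a21, m21 * a12 + m22 * a22)
    denom = n_in * m11 + n_in * n_out * m12 + m21 + n_out * m22
    t = 2 * n_in / denom
    r = (n_in * m11 + n_in * n_out * m12 - m21 - n_out * m22) / denom
    return (n_out / n_in) * abs(t) ** 2, abs(r) ** 2    # (T, R)

random.seed(1)
layers = 40
indices = [2.3 if k % 2 == 0 else 1.0 for k in range(layers)]    # high/low index
thicknesses = [random.uniform(0.1, 0.4) for _ in range(layers)]  # random, in wavelengths
T, R = transmission(indices, thicknesses)
print(T, R)   # T + R = 1 for a lossless stack
```

Scanning many random seeds at a fixed frequency shows mostly small T, with rare samples transmitting strongly, which is qualitatively the "maximal fluctuations" behavior described above.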
In contrast, for weak disorder, the open channels have a more complex structure. Rather than as chains of localized resonances, they can be thought of as meandering creeks following a random path between the two sides of the sample. As disorder is increased, more and more of the creeks reach a dead end without crossing the sample and the few remaining connections take a more direct route. At the transition to localization the picture remains unclear and a challenge for theorists like myself. The catalyst for resolution of this issue will be further experiments similar to the ones reported by Vellekoop and Mosk but extended to strongly disordered systems near the localization threshold. Thus the recent optical experiments go directly to the heart of this problem and are like the bugle call of cavalry charging to the rescue of hapless theorists. ### References 1. I. M. Vellekoop and A. P. Mosk, Phys. Rev. Lett. 101, 120601 (2008). 2. O. Dorokhov, Sol. St. Comm. 51, 381 (1984). 3. P. D. Kirkman and J. B. Pendry, J. Phys. C 17, 5707 (1984). 4. Y. Imry, Europhys. Lett. 1, 249 (1986). 5. P. A. Mello, P. Pereyra, and N. Kumar, Ann. Phys.-New York 181, 290 (1988). 6. J. B. Pendry, A. Mackinnon, and A. Prêtre, Physica A 168, 400 (1990). 7. J. B. Pendry, A. MacKinnon, and P. J. Roberts, Proc. Roy. Soc. 437, 67 (1992). 8. P. W. Anderson, Phys. Rev. 109, 1492 (1958). 9. J. B. Pendry, Adv. Phys. 43, 461 (1994). 10. J. B. Pendry, J. Phys. C 20, 733 (1987). 11. A. V. Tartakovskii, et al., Sov. Phys. Semicond. 21, 370 (1987). 12. J. Bertolotti, S. Gottardo, D. S. Wiersma, M. Ghulinyan, and L. Pavesi, Phys. Rev. Lett. 94, 113903 (2005). 13. K. Y. Bliokh, Y. P. Bliokh, V. Freilikher, A. Z. Genack, B. Hu, and P. Sebbah, Phys. Rev. Lett. 97, 243904 (2006). ### About the Author: John Pendry John Pendry obtained his Ph.D. in 1969 from Cambridge University, UK, where, apart from a year spent at AT&T Bell Laboratories, he remained until 1975. 
There followed six years at the Daresbury Laboratory as head of the theoretical group. Since 1981 he has worked at the Blackett Laboratory, Imperial College London, where he has served as Dean, Head of the Physics Department, and Principal for Physical Sciences. His research interests are broad, originally centering on condensed matter theory but now extending into optics. He has worked extensively on electronic and structural properties of surfaces, transport in disordered systems, and in the past ten years has developed the theory behind metamaterials, negative refraction, and cloaking.(Author photograph: Imperial College London/Mike Finn-Kelcey)
http://yadda.icm.edu.pl/yadda/element/bwmeta1.element.12a32edb-6aea-3267-9c2b-9a7a8c7f3caf
## Filozofia (Philosophy), 2007, Vol. 62, No. 4, pp. 324-328

### ANAPHORA AND RESTRICTED QUANTIFICATION

Publication language: Slovak

Abstract (EN): The aim of the paper is to analyze the different approaches to anaphora with restricted quantifiers. An important point is distinguishing the anaphoric process, which is in fact structured, from the outcomes of that process. The requirements which we put on anaphora (referential dependence and extensional identity of the semantic values of antecedent and anaphoric expressions, together with preserving the meaning of the analyzed sentences) cannot be met by the classical semantic theories of anaphora (e.g. the analyses of Keenan or Neale). Anaphora is instead explained as an algorithmic process in which the semantic value of the anaphoric expression is a higher-order structured function. This function can also be represented as an algorithm consisting of two main sub-algorithms: a calling procedure that picks up the semantic value of a restricted quantifier (itself interpreted as a special kind of algorithm), and an execution procedure containing the semantic value already selected. The result is that the semantic values/structured functions of the anaphoric expressions depend on the semantic values of the antecedent expressions without violating the principle of preserving the meaning. Given the identity of the extensionally relevant algorithmic parts, the extensions of these functions are also the same.

Publication type: article

Author: J. Podrouzek, Filozoficky ustav SAV, Klemensova 19, 813 64 Bratislava, Slovak Republic

CEJSH db identifier: 07SKAAAA02465117
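The two sub-algorithms described in the abstract can be caricatured with higher-order functions. The sketch below is purely my own illustrative toy (the domain, names, and predicates are invented), not the author's formalism:

```python
def restricted_quantifier(restriction, domain):
    """'Calling procedure': pick up the semantic value of a restricted
    quantifier such as 'every student' (here, universal quantification
    restricted to the students in the domain)."""
    def semantic_value(predicate):
        return all(predicate(x) for x in domain if restriction(x))
    return semantic_value

domain = ["alice", "bob", "carol"]
students = {"alice", "bob"}

# Antecedent expression: 'every student'.
every_student = restricted_quantifier(lambda x: x in students, domain)

# 'Execution procedure': the anaphoric expression ('they') reuses the
# semantic value already selected, so antecedent and anaphor are
# referentially dependent and extensionally identical by construction.
they = every_student

print(every_student(lambda x: len(x) >= 3), they(lambda x: len(x) >= 3))
```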
http://www.ams.org/cgi-bin/bookstore/booksearch?fn=100&pg1=CN&s1=Steinmetz_Zikesch_Wilhelm_Alexander&arg9=Wilhelm_Alexander_Steinmetz_Zikesch
Mémoires de la Société Mathématique de France

2012; 99 pp; softcover
Number: 129
ISBN-13: 978-2-85629-349-2
List Price: US$48
Member Price: US$38.40
Order Code: SMFMEM/129

A note to readers: This book is in French.

Let $$k$$ be an algebraically closed field of characteristic zero and let $$R$$ be the Laurent polynomial ring in two variables over $$k$$. The main motivation behind this work is a class of infinite dimensional Lie algebras over $$k$$, called extended affine Lie algebras (EALAs). These algebras correspond to torsors under algebraic groups over $$R$$. In this work the author classifies $$R$$-torsors under classical groups of large enough rank for outer type $$A$$ and types $$B, C, D$$, as well as for inner type $$A$$ under stronger hypotheses. The author can thus deduce results on EALAs and also obtain a positive answer to a variant of Serre's Conjecture II for the ring $$R$$: every smooth $$R$$-torsor under a semi-simple simply connected $$R$$-group of large enough rank of classical type $$B, C, D$$ is trivial.

A publication of the Société Mathématique de France, Marseilles (SMF), distributed by the AMS in the U.S., Canada, and Mexico. Orders from other countries should be sent to the SMF. Members of the SMF receive a 30% discount from list.

Readership: Graduate students and research mathematicians interested in Lie algebras.

Table of Contents: Introduction; Généralités et préliminaires; Les conjectures; Le cas $${^1}A_{n-1}$$ et les groupes orthogonaux; Le cas $$C_n$$, les autres groupes du type $$D_n$$ et le cas $$^{2}A_n$$; La conjecture B; Bibliographie
https://discuss.codechef.com/questions/44800/digjump-editorial
# DIGJUMP - Editorial

Editorialist: Praveen Dhinwa

Difficulty: Easy. Tags: bfs, dijkstra

# PROBLEM:

Given a string s of length N, you have to go from the start of the string (index 0) to its end (index N - 1). From position i, you can go to the next (i + 1) or previous (i - 1) position. You can also move from the current position to any index where the character is the same as the current character s[i].

# QUICK EXPLANATION

• The minimum number of operations cannot be greater than 19.
• Your moves never need to visit a single digit more than twice.
• You can solve this problem by a modified bfs.
• You can also make use of a simple Dijkstra's algorithm.

# EXPLANATION

A few observations:

• The minimum number of operations cannot be greater than 19. Proof: You can start from the first position and go to the rightmost index that is directly reachable. Then from that position go to the next position, and keep repeating the previous step. Note that you visit each digit at most twice, and the first digit only once, hence you make at most 19 moves. The bound of 19 is reached in cases like 001122334455667788999.

• Your moves never need to visit a single digit more than twice. Proof: If you are using more than 2 moves among positions holding the same digit, you can save a move, because you can go from any of those positions to any other in a single move. So at most 2 moves per digit suffice.

Wrong greedy strategies

Let us first discuss some greedy strategies and figure out why they are wrong.

From the current position, go to the rightmost index having the same digit as the current one. If this digit does not occur again in the right part of the array, then go to the next position (i.e. i + 1). Please see the following recursive implementation of this strategy.

Pseudo code:

    def greedy(cur):          // cur = N denotes the end/target position
        if (cur == N) return 0;
        last = cur + 1;
        for i = cur + 1 to N:
            if (s[i] == s[cur]): last = i;
        return 1 + greedy(last);

The above strategy fails on cases like 010000561. According to the greedy strategy, from 0 you go to the rightmost 0, then from that position to 5, then to 6, and finally to 1: 4 operations in total. But you can do it in just 2 operations: go from 0 to 1 and then to the rightmost 1 (the target position).

Wrong dp algorithm

Some contestants have used a wrong dp algorithm. Let dp[i] denote the minimum number of moves needed to reach position i from position 0. Some were considering only the transitions from position (i - 1) to i, or from some position j < i (such that the digit at j is the same as the digit at i) to i. Such dp solutions are wrong because they do not consider backward moves (from position i to i - 1); they consider only forward moves. A simple test case where they fail: for 02356401237894, the dp program gives the answer 6, but we can go from position 0 to position 6 (the second 0), then to the 4 on the left side of that 0 (a backward move), and then directly to the last 4. So the total number of operations required is 3.

Bfs Solution

Now consider the movement operations from one position to another as the edges of a graph and the indices of the string as its nodes. Finding the minimum number of operations to reach index N - 1 from index 0 is equivalent to finding a shortest path in this graph. As all edges have unit weight, we can use bfs instead of Dijkstra's algorithm. So we can simply do a bfs from our start node (index 0) to the end node (index N - 1). The number of nodes in the graph is N, but the number of edges could potentially go up to N^2 (consider the case of all 0's, where the entire graph is a complete graph).

Optimized bfs Solution

Now we will make use of the 2 observations made at the start and update the bfs solution accordingly.
Whenever you visit a vertex i, you should also visit all the indices j such that s[j] = s[i] (this follows directly from observation 2). After that, make sure not to push any further indices holding the same digit: by observation 2, we never make more than 2 moves between positions with the same digit, so after expanding the current character once, you should never visit another vertex with the same value as s[i]. For a reference implementation, see Vivek's solution.

Another Easy solution

Credit for the solution goes to Sergey Nagin (Sereja). Let dp[i] denote the number of steps required to go from position 1 to position i. From the previous observations, we know that we won't need more than 20 steps, so let us make 20 iterations. Before starting the iterations, we set dp[1] = 0 and dp[i] = infinity for all other i > 1. On each iteration, we calculate Q[k], where Q[k] is the minimum value of dp[i] such that s[i] = k, i.e. Q[k] is the minimum of dp over the positions where the digit equals k. We can update dp by the following rule:

    dp[i] = min(dp[i], dp[i - 1] + 1, dp[i + 1] + 1, Q[s[i]] + 1);

The term dp[i - 1] + 1 says that we came from the previous position (i - 1), the term dp[i + 1] + 1 that we came from the next position (i + 1), and the term Q[s[i]] + 1 is the minimum number of operations needed to come from some position holding the same digit as the current i-th digit.

Pseudo code:

    // initialization phase (positions are 1-indexed; treat dp[0] and dp[n + 1] as inf)
    dp[1] = 0;
    for (int i = 2; i <= n; i++) dp[i] = inf;

    for (int it = 0; it < 20; it++) {
        // compute Q[k]
        for (int k = 0; k < 10; k++) Q[k] = inf;
        for (int i = 1; i <= n; i++) {
            Q[s[i] - '0'] = min(Q[s[i] - '0'], dp[i]);
        }
        // update the current iteration
        for (int i = 1; i <= n; i++) {
            dp[i] = min(dp[i], dp[i - 1] + 1, dp[i + 1] + 1, Q[s[i] - '0'] + 1);
        }
    }
    // dp[n] will be our answer.
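To make the optimized bfs concrete, here is a minimal Python sketch (my own illustration, not Vivek's referenced solution; the function name min_jumps is invented). It uses observation 2 by expanding each digit's whole class of positions exactly once and then discarding it:

```python
from collections import deque

def min_jumps(s: str) -> int:
    """Fewest moves from index 0 to index len(s) - 1, where one move goes
    to i - 1, i + 1, or any index holding the same digit as s[i]."""
    n = len(s)
    positions = {}                      # digit -> list of its indices
    for i, ch in enumerate(s):
        positions.setdefault(ch, []).append(i)

    dist = [-1] * n                     # -1 marks "not visited yet"
    dist[0] = 0
    queue = deque([0])
    while queue:
        i = queue.popleft()
        if i == n - 1:
            break
        neighbours = [i - 1, i + 1]
        # Observation 2: expand a digit's class of positions once,
        # then drop it so it is never expanded again.
        if s[i] in positions:
            neighbours += positions.pop(s[i])
        for j in neighbours:
            if 0 <= j < n and dist[j] == -1:
                dist[j] = dist[i] + 1
                queue.append(j)
    return dist[n - 1]

print(min_jumps("010000561"))        # 2: go 0 -> 1, then jump to the last 1
print(min_jumps("02356401237894"))   # 3: 0 -> second 0 -> 4 (backward) -> last 4
```

Because every same-digit edge set is touched at most once, the total work is O(N) plus the ordinary bfs cost, matching the complexity claimed in the editorial.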
Proof

If you have seen the proof of Dijkstra's algorithm, you can find an equivalence between the two proofs.

Complexity: O(20 * N), where 20 is the maximum number of iterations.

# AUTHOR'S AND TESTER'S SOLUTIONS:

This question is marked "community wiki".

Shouldn't the maximum number of moves be 19? (16 Jun '14, 15:11)

My solution is below; can I know for which test case(s) it went wrong, please: https://ideone.com/iWZLna (16 Jun '14, 15:45)

How could the value of i start from 0 during updating dp[i]? I think the value of i should start from 1 in Sereja's pseudo code. (16 Jun '14, 17:06)

Nice problem and an amazing tutorial. Lately there's been a decline in the quality of editorials, but this one was certainly one of the best. (16 Jun '14, 17:24)

"For a reference implementation, see one of the solutions in the references." Where are the references? Or can anyone point me to the solution using bfs? (16 Jun '14, 18:11)

Beautiful solution, one of the best implementations of bfs. (16 Jun '14, 19:35)

@jony, I have updated the link. @rishavz_sagar, yes, you are right; updated. (17 Jun '14, 18:48)

Nice tutorial by dpraveen followed by a crisp and self-explanatory implementation by vivek; it makes this one of the best editorials. (18 Jun '14, 15:30)

Can someone explain why Sergey Nagin's solution iterates the loop 20 times, for (int it = 0; it < 20; it++)? I understand that in each iteration dp[i] gets updated and after some iterations we get the correct result, but why 20? (20 Jun '14, 22:48)

@dpraveen I am trying to make it using BFS but I am not getting how the adjacency list of the graph will be generated.
(23 Jan '16, 09:17) 2★ showing 5 of 12 show all 12 Who are getting WA can have the following cases : 94563214791026657896112 ans -> 4 12345178905 ans -> 3 112 ans -> 2 1112 ans -> 2 4589562452263697 ans -> 5 14511236478115 ans -> 2 0123456754360123457 ans -> 5 answered 16 Jun '14, 16:34 2.2k●6●17●46 accept rate: 10% 1 Actually I don't understand last test case :( My code output for this one(0123456754360123457) is 7... Can anyone help me to understand this test case? Thanks in advance :) (16 Jun '14, 23:03) can u explain the last one?how can it be 5?? (16 Jun '14, 23:52) 1 for last case follow this path(indexes in bracket):0(0) -> 0(12) -> 6(11) -> 6(6) -> 7(7) -> 7(18). (17 Jun '14, 00:28) one more test case of later updating:- $72266886688661137700886699445511449955$ ans=6 ex 7-7-3-1-1-5-5 (20 Oct '15, 22:01) admin1235★ 11 Well the test cases for this problem are still weak . Here is my accepted solution . My code fails on this 348117304825225015142699765169 . The expected output is 5 whereas my code gives 6 as the answer . I was almost half sure whether my solution would pass or not but luckily it passed . Its really difficult to make tricky test cases which can fail wrong solutions . I would request the setter to update the test cases for this problem in the practice section if possible . answered 17 Jun '14, 02:19 299●3●5●10 accept rate: 0% i think the expected output should also be 6 and not 5. correct me if i m wrong(plzz provide the steps). (25 Jun '14, 09:58) 1 indexes -> 0->6,6->5,5->24,24->23,23->29 i.e 3->3,3->7,7->7,7->9,9->9 These are the five steps. (25 Jun '14, 18:36) 9 We can solve it using a Dijkstra too without relying on the fact that the solution would be bounded by 20. Instead of interconnecting all nodes with the same digit value resulting in potentially ~V^2 edges, we create one super node for each digit from 0 to 9, and connect each digit with its corresponding super node with an edge of weight 0.5. 
We can thus move to and from same digit nodes in distance 1, which was exactly what was required. This results in ~V edges and we can run Dijkstra on the first node in VlogV time. Finally we can double all edge lengths to avoid floating points and divide the required distance by 2 at the end. answered 16 Jun '14, 17:11 226●1●7 accept rate: 0% 5 My solution was accepted and uses dynamic programming. However because the solution is based on my intuition I'm not sure if understand whether or why it's 100% correct. let dp[i] represent the minimum amount of steps in order to reach position i of the input array. We want to find dp[N-1]. let nums be an array of size 10 where nums[i] represents the minimum amount of moves that are needed in order to reach number i. Note that this number can exist anywhere in the array. Now we have to scan the array from left to right and then from right to left as many times as needed in order to calculate the final value for dp[N-1]. We stop scanning the array when the values of dp aren't changed from a single scan (left->right, right->left). Initialization all values of dp and nums to INF dp[0] = 0 nums[number of input[0]] = 0 First we scan it from left to right. dp[i] = min(dp[i], dp[i-1]+1, nums[number of input[i]]+1) then we scan the array from right to left: dp[i] = min(dp[i], dp[i+1]+1, nums[number of input[i]]+1) then again from left to right, right to left etc, until nothing changes in the array dp. Basically I assume that the convergence to a solution is fast, however I have yet to think of a proof to this. answered 16 Jun '14, 16:20 3★kmampent 445●1●7●7 accept rate: 16% 1 See "Another Easy Solution" section in the editorial.Your method is similar to what is described there. (16 Jun '14, 17:36) plcpunk5★ thanks a lot, I missed that part! (16 Jun '14, 17:47) kmampent3★ 2 @picpunk : I used the same algorithm as yours but I scanned it only thrice i.e. first from left to right and then from reverse and again finally from front . 
I got AC with this approach and did not scan multiple times as you said . I would like to get a test case for which my solution will get fail because the test cases are still weak IMHO. Here's the link to my solution : http://www.codechef.com/viewsolution/4100953 (16 Jun '14, 18:08) @aayushagarwal, my solution gets AC if we scan the array 3 times as well. I'm also not sure if this AC is due to weak test cases though. (16 Jun '14, 18:11) kmampent3★ @kmampent : But in the editorial it is mentioned that at most we have to do 20 iterations because the maximum value can be 20 , perhaps there can be a test case which I am not able to deduce now . (16 Jun '14, 18:42) indeed, it would be interesting to see if someone can find a test case where this method doesn't work. (16 Jun '14, 19:11) kmampent3★ @aayushagarwal: your code gives incorrect output for 348117304825225015142699765169, expected output is 5 and your solution gives 6. (17 Jun '14, 00:55) @vaibhavatul47 : Thank you for the test case ! (17 Jun '14, 02:15) showing 5 of 8 show all 4 If you are finding "Wrong Answer" then try following testcase :- 0998887776665554443322223300885577 Correct Ans --> 5 Directions : 0,0(last occurrence),8(next),8(index:5),7(next),7(last) answered 16 Jun '14, 17:17 61●1●2 accept rate: 0% 3 i have checked all the test cases given here for each one my code is giving correct output. can any one tell me for which test case it got [email protected] think for this problem it will be helpful for many of us if test cases used for evaluation be made public. https://ideone.com/iWZLna thanks answered 17 Jun '14, 11:00 2★manmauji 46●2 accept rate: 0% Can you explain what is wrong with my dp solution. It gave right answer for your test case. I have implemented the dp such that it considers going backwards. 
# include <algorithm> using namespace std; int main() { int dp[100005]; string s; cin>>s; int len = s.size(); //cout<<s.size()<<endl; int mini[10]; memset(mini,-1,sizeof(mini)); for(int i=0;i<len;i++) dp[i] = 100005; dp[0] = 0; mini[s[0]-'0']=0; int curMin,j; for(int i=1;i<len;i++) { if(mini[s[i]-'0']==-1) mini[s[i]-'0']=i; curMin = dp[mini[s[i]-'0']]; dp[i] = min(dp[i-1]+1, curMin+1); if(dp[i] < dp[mini[s[i]-'0']]) mini[s[i]-'0'] = i; j=i; while((dp[j-1] > (dp[j]+1))&&(j!=(len-1))) { dp[j-1] = dp[j]+1; if(dp[j-1] < dp[mini[s[j-1]-'0']]) mini[s[j-1]-'0'] = (j-1); j--; } } /* cout<<endl; cout<<mini[3]<<endl;*/ /*for(int i=0;i<len;i++) cout<<dp[i]; cout<<endl;*/ cout<<dp[len-1]; return 0; } Please it will be great if someone can explain to me my mistake here. My approach is similar to Sergey's approach. My mini array does the same thing that the Q array does in his code 513 accept rate: 0% 1 7711965557423006 ur o/p: 6, correct o/p: 5 (16 Jun '14, 17:12) @vaibhavatul47 Why is ans 5..?? 7 to 7(last 7) - 1 7 to 5(left) - 2 5 to 5(left) - 3 5 to 5(left) - 4 5 to 6(left) - 5 6 to 6(last) - 6 It cannot make a jump from '5' at index 8 to '5' at index 6 in 1 step because the problem states only (i-1) backwards. Can you please explain..?? (19 Jun '14, 16:18) ...But we can move to the same digit anywhere in the list. That is the main thing in this problem! (27 Jun '14, 21:37) 5★ 1 Hi All I used the BFS and dijsktra approach to solve the problem . It is working fine for all cases mentined above .Please have a look at it and would be grateful to let me know whch testcases it failed . Thanks :) http://www.codechef.com/viewsolution/4092676 answered 16 Jun '14, 16:24 2★anisdube 16●1 accept rate: 0% hi all my code seems to be working for all the below test cases , 94563214791026657896112 ans -> 4 12345178905 ans -> 3 112 ans -> 2 1112 ans -> 2 4589562452263697 ans -> 5 14511236478115 ans -> 2 0123456754360123457 ans -> 5 still it showed WA . 
any help is appreciated (17 Jun '14, 01:18) anisdube2★ hey for "348117304825225015142699765169 . The expected output is 5" i am getting 5 also for this . totally not sure why it is still saying WA (17 Jun '14, 13:10) anisdube2★ @shiplu can you please check y my code is giving WA . thanks (18 Jun '14, 23:35) anisdube2★ @Praveen Dhinwa , can you please have a look at my code , looks like it is working for eacha nd every tes cases . (30 Jun '14, 14:34) anisdube2★ @anisdube, your method of taking input is problematic. (04 Jul '14, 02:11) shiplu3★ 0 Can someone point out the bug in my Greedy problem as well, it gives me WA. Thanks in advance :) #include #include #include #include using namespace std; struct intint { int number; int start; int end; int jump; }; bool compareJumps(intint a,intint b) { return (a.jump > b.jump); } bool isCompatible(vector baseVector, intint toCheck) { for(int i=0;i baseVector[i].end) && (toCheck.end > baseVector[i].end)) {} else { return false; } } return true; } int main() { string S; cin>>S; vector max; for(int i=0;i<10;i++) { intint temp; temp.number = i; temp.jump = -1; temp.start = -1; temp.end = -1; max.push_back(temp); } for(int i=0;i jumpsTaken; for(int i=0;i 0)&&(isCompatible(jumpsTaken,max[i]))) { jumpsTaken.push_back(max[i]); } } int path = S.length() - 1; for(int i=0;i 0 Great Observations. Used BFS for this question. i wish, if i had thought of this bfs optimisation before. great question answered 16 Jun '14, 15:46 156●1●5●11 accept rate: 0% 0 I used O(2^8*8!), cause was not able to come up with easier approach. omg )) answered 16 Jun '14, 16:07 76●1●1 accept rate: 0% 0 Can anyone tell me what is wring with this code, it gives wrong answer on submission, but is working fine on my system for every possible input i can think of. I am not able to find the type of input for which it can give a wrong answer. 
http://www.codechef.com/viewsolution/4105903 answered 16 Jun '14, 16:33 1 accept rate: 0% I tried the DP thing, iterating over it and making it better, made a lot of submissions in the process. Here is my last one I tried: from __future__ import division return map(func, raw_input().strip().split(" ")) return int(raw_input().strip()) return raw_input().strip() return float(raw_input().strip()) lim = len(s)-1 dp = [-1 for _ in xrange(lim+1)] dp[0] = 0 from collections import defaultdict dpa = [10**10 for i in xrange(11)] dpa[s[0]] = 0 for i in xrange(1, lim+1): minn = dpa[s[i]] dp[i] = min(minn, dp[i-1])+1 dpa[s[i]] = min(dpa[s[i]]+1, dp[i]) for _ in xrange(19): dpa = [10**10 for i in xrange(11)] dpa[s[0]] = 0 for i in xrange(1, lim+1): minn = dpa[s[i]] dp[i] = min(minn, dp[i-1], dp[i+1] if i!=lim else 10**10)+1 dpa[s[i]] = min(dpa[s[i]]+1, dp[i]) print dp[-1] But it gives me a WA. Why is it so? # Edit I tried chanakya's input and it gives me 8 as answer instead of 3. I can't figure out why. :/ Input: 0998887776665554443322223300885577 Output: 8 3★svineet 26249 accept rate: 0% Can anyone tell me what i am commiting mistake in my code...... 
# include<bits stdc++.h=""> using namespace std; string str; int a[10][10],flag[10]; void distance_matrix() { int i,j; for(i=0;i<str.size();i++) { if(i==0) { a[str[i]-'0'][str[i+1]-'0']=1; a[str[i]-'0'][str[i]-'0']=0; flag[str[i]-'0']=1; } else if(i==str.size()-1) { if(flag[str[i]-'0']==1) { a[str[i]-'0'][str[i]-'0']=1; a[str[i]-'0'][str[i-1]-'0']=min(2,a[str[i]-'0'][str[i-1]-'0']); } else { a[str[i]-'0'][str[i]-'0']=0; a[str[i]-'0'][str[i-1]-'0']=1; flag[str[i]-'0']=1; } } else { if(flag[str[i]-'0']==1) { a[str[i]-'0'][str[i]-'0']=1; a[str[i]-'0'][str[i+1]-'0']=min(2,a[str[i]-'0'][str[i+1]-'0']); a[str[i]-'0'][str[i-1]-'0']=min(2,a[str[i]-'0'][str[i-1]-'0']); } else { a[str[i]-'0'][str[i]-'0']=0; a[str[i]-'0'][str[i+1]-'0']=1; a[str[i]-'0'][str[i-1]-'0']=1; flag[str[i]-'0']=1; } } } /for(i=0;i<10;i++) { for(j=0;j<10;j++) cout<<a[i][j]<<" "; cout<<endl; }/ } int main() { //freopen("in.txt","r",stdin); //freopen("out.txt","w",stdout); int n,i,j,k; for(i=0;i<10;i++) { for(j=0;j<10;j++) a[i][j]=INT_MAX/2; flag[i]=0; } cin>>str; if(str.size()==1) { cout<<"0"<<endl; return 0; } distance_matrix(); int visit[10],distance[10]; for(i=0;i<10;i++) { visit[i]=0; distance[i]=INT_MAX/2; } visit[str[0]-'0']=1; for(i=0;i<10;i++) { distance[i]=a[str[0]-'0'][i]; } distance[str[0]-'0']=0; /for(i=0;i<10;i++) { cout<<visit[i]<<" "; }cout<<endl; for(i=0;i<10;i++) { cout<<distance[i]<<" "; } for(i=0;i<10;i++) { for(j=0;j<10;j++) cout<<a[i][j]<<" "; cout<<endl; }/ for(i=0;i<10;i++) { int min_dis=INT_MAX/2,index; for(j=0;j<10;j++) { if(visit[j]==0&&distance[j]<=min_dis) { min_dis=distance[j]; index=j; } } visit[index]=1; for(j=0;j<10;j++) { if(visit[j]==0) { distance[j]=min(distance[j],min_dis+a[index][j]); } } } /for(i=0;i<10;i++) { cout<<visit[i]<<" "; }cout<<endl; for(i=0;i<10;i++) { cout<<distance[i]+a[i][i]<<" "; }/ cout<<distance[str[str.size()-1]-'0']+a[str[str.size()-1]-'0'][str[str.size()-1]-'0']<<endl; return 0; } 1 accept rate: 0% 0 http://www.codechef.com/viewsolution/4050931 
Can someone check my this code. it is showing run time error. but in my compiler it is providing me with all correct answers. help would be appreciated. answered 17 Jun '14, 14:30 1●1 accept rate: 0% 0 I am not able to understand under the heading "Another Easy Solution" DP. Pseudo Code is not working for me but using the idea I solved the problem. So i change the code for moving from position 0 to n-1 .(rather than from 1 to n). dp[0]=0 ; dp[1 ]=1; Answer is given by dp[n-1] My Solution link is http://www.codechef.com/viewsolution/4110410 answered 17 Jun '14, 22:43 1●1 accept rate: 0% see the updated pseudo code. also see my accepted submission http://www.codechef.com/viewplaintext/4111204 (18 Jun '14, 04:35) I've used a solution similar to the one by Sergey Nagin. I have, however, used 10 iterations instead of 20. I got AC which could be due to weak test cases. Can someone please give a test case that shows the possible mistake in my code? Or is it actually possible to do it in 10 and not 20 iterations? The following arrays in my code have the following meaning as in the pseudo code by Sergin. 
val[i] - dp[i] (stores minimum no of moves to reach this point) minval[i] - Q[i] (stores minimum no of moves to reach the number i) # include<string.h> int main() { int i,n; char str[100000]; int minval[10],val[100001]; scanf("%s",str); n = strlen(str); val[0]=0; val[n]=20000; for(i=0;i<n;i++) str[i]=str[i]-48; for(i=0;i<10;i++) minval[i]=20; minval[str[0]]=0; for(i=0;i<n;i++) { if(i!=0) { if(val[i-1]<=minval[str[i]]) { val[i]=val[i-1]+1; } else { val[i]=minval[str[i]]+1; } if(minval[str[i]]>val[i]) minval[str[i]]=val[i]; } if(minval[str[i]]>val[i]) minval[str[i]]=val[i]; } int j; for(j=0;j<10;j++) { for(i=0;i<n;i++) { if( i!=0) { if(val[i-1]<minval[str[i]]&&val[i-1]<val[i+1]) { val[i]=val[i-1]+1; } else if(val[i+1]<minval[str[i]]&&val[i+1]<val[i-1]) { val[i]=val[i+1]+1; } else { val[i]=minval[str[i]]+1; } if(minval[str[i]]>val[i]) minval[str[i]]=val[i]; } if(minval[str[i]]>val[i]) minval[str[i]]=val[i]; } } printf("%d\n",val[n-1]); } 1 accept rate: 0% 0 i think the expected answer should also be 6 not 5. correct me if i m wrong(plzz write the steps) answered 25 Jun '14, 09:53 1●1 accept rate: 0% 0 For people whose program is passing all the test cases given by the question as well as other users Check whether ur program works for inputs such as 00112233445566778899 - 19 445566 - 5 001122 - 5 22445599 - 7 I had the same issue and my program was accepted once i rectified this. answered 26 Jun '14, 11:55 3★abhi011 0●1●1●3 accept rate: 0% 0 My code passes all the test cases provided above in the comments but I still get WA when submitting :/ Any help ? Here's my code : http://www.codechef.com/viewsolution/4097046 answered 04 Jul '14, 00:03 26●1●1●4 accept rate: 0% I'm getting tle. :( Can plz anyone help me ..? 
# include<queue> using namespace std; # define for(i,n) for( i=0;i<n;i++) vector< int > v[10]; # define NIL -1 queue <int> buff; int main() { char s[MAX]; cin>>s; int n,i,val; n=strlen(s); for(i,n) { val=s[i]-48; v[val].push_back(i); } int state[n]; int pre[n]; for(i,n) { state[i]=initial; pre[i]=-1; } int pt,st; buff.push(0); vector< int > :: iterator p; while(!buff.empty()) { pt=buff.front(); if(pt==n-1) break; st=s[pt]-48; state[buff.front()]=waiting; //cout<<v[pt].front()<<v[pt].size()<<endl; p=v[st].begin(); while(p!=v[st].end()) { if(state[*p]==initial) { state[*p]=waiting; pre[*p]=pt; buff.push(*p);} p++; } if( (pt+1< n)&& (state[pt+1]==initial)) { state[buff.front()+1]=waiting; pre[buff.front()+1]=pt; buff.push(buff.front()+1);} if((0< pt-1 )&& (state[pt-1]==initial) ) { state[buff.front()-1]=waiting; pre[buff.front()-1]=pt; buff.push(pt-1);} state[pt]=visited; buff.pop(); } int sd=0; int v=n-1; while(v!=0) { v=pre[v]; sd++; } cout<<sd; return 0; } 11 accept rate: 0% 0 I have applied the same logic as given in the Optimized BFS solution and I have tested my program for all the test cases, including the ones given in the comments above. Still it gives wrong answer. here is the link to my solution: http://www.codechef.com/viewsolution/5358955 Plz help. answered 12 Nov '14, 20:50 1●1 accept rate: 0% 0 Can anyone explain me how to solve this problem using BFS? I am not able to understand how to use BFS on this problem. answered 05 Apr '15, 10:25 1 accept rate: 0% 0 can someone explain me the implementation of optimization part of Vivek's solution ??? answered 04 May '15, 11:43 17●3 accept rate: 0% 0 Well, my line of reasoning for "Another easy solution" was different. Think in terms of forward jump and backward jump. 
If there is only forward jumps , then for(i=1;i 0 can anyone please tell me why my answer is a tle https://www.codechef.com/viewsolution/9705925 answered 20 Mar '16, 17:53 1★bhawin91 1 accept rate: 0% 0 I thought of modelling the problem into graph and then doing a simple bfs to get the correct number of jumps. I tried modelling the problem into graph, and since the indices with the same character also share an edge, it takes me O(n^2) to model these edges. This gives me TLE. Help me to tackle this problem. answered 24 May '16, 19:23 93●8 accept rate: 0%
answered 03 Sep, 19:48 1 accept rate: 0% 0 #include #include #include #include #include using namespace std; vector > similar(10, vector(0)); typedef pair ii; int main() { string s,temp; cin >> s; int n = s.length(); bool vist[10] = {0}; for(int i = 0; i < n; ++i){ similar[s[i]-'0'].push_back(i); } int dist[n]; for(int i = 0; i < n;++i) { dist[i] = 100000000; } dist[0] = 0; priority_queue pq; pq.push(ii(0, 0)); while(!pq.empty()) { int node = pq.top().second; int d = -1*pq.top().first; pq.pop(); if(node == n -1) break; if(node < n-1){ if(dist[node+1] > d + 1) { pq.push(ii(-1*(d + 1), (node+1))); dist[node+1] = d + 1; } } if(node > 0) { if(dist[node - 1] > d + 1 ) { pq.push(ii(-1*(d + 1), (node-1)) ); dist[node-1] = d + 1; } } int no = s[node] - '0'; for(int i = 0; i < similar[no].size() and !vist[no]; ++i) { int curnode = similar[no][i]; if(curnode > node) { if(dist[curnode] > d + 1) { dist[curnode] = d + 1; pq.push(ii(-1*(d+1), curnode)); } } } vist[no] = true; } cout << dist[n-1]; } can someone plz tell me where i am wrong, i have implemented this question using the flavor of Dijikstra.....please help answered 01 Oct, 18:45 126●7 accept rate: 8% what is wrong with my this DP solution its giving correct answer for 02356401237894 (3) # define int_max 1000000 int main() { int jump[100005]; char num[100005]; scanf("%s",num); int i; jump[0] = 0; for(i=0;i<100004;i++) jump[i] = 1000000; jump[0] = 0; for(i=1;num[i] != '\0';i++) { int j = 0; // jump[i] = 1000000; while(j<i) { if( ( i == j+1 || num[i] == num[j] )) { if(jump[i] > jump[j] + 1) jump[i] = jump[j] + 1; break; } j = j+1; } if( jump[i-1] > jump[i] + 1) jump[i-1] = jump[i] + 1; if(jump[i+1] != '\0' && jump[i+1] > jump[i] + 1) jump[i+1] = jump[i] + 1; // printf("%d",j); } printf("%d\n",jump[i-1]); return 0; ` } 0 accept rate: 0% can you check your output on this input 023564101237894 ??? Correct answer is 4. (16 Jun '14, 15:43) check your output for: 248612676 Ans should be 3. 
(16 Jun '14, 15:51)

question asked: 16 Jun '14, 15:01 · last updated: 01 Oct, 18:45
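For reference, the O(n) idea discussed in this thread (expand each digit's index list only once, which is what the vist[] flag attempts in the Dijkstra solution above) can be sketched as a plain BFS. This is an illustrative sketch, not any of the linked submissions; the test values 4 and 3 come from the comments above.

```python
from collections import deque, defaultdict

def min_jumps(s):
    """Fewest jumps from the first to the last digit, moving to
    i-1, i+1, or any index holding the same digit."""
    n = len(s)
    if n == 1:
        return 0
    groups = defaultdict(list)       # digit -> all indices with that digit
    for i, c in enumerate(s):
        groups[c].append(i)
    dist = [-1] * n
    dist[0] = 0
    q = deque([0])
    while q:
        i = q.popleft()
        if i == n - 1:
            return dist[i]
        # same-digit teleports: pop so each group is expanded only once
        for j in groups.pop(s[i], []) + [i - 1, i + 1]:
            if 0 <= j < n and dist[j] == -1:
                dist[j] = dist[i] + 1
                q.append(j)
    return dist[n - 1]

print(min_jumps("023564101237894"))   # 4, as stated in the comments above
```

Because each digit group is discarded after its first expansion, every index enters the queue at most once, which avoids the O(n^2) edge modelling that caused the TLEs discussed above.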
http://exciting-code.org/beryllium-general-lattice-optimization
General Lattice Optimization by Rostam Golesorkhtabar for exciting beryllium

Purpose: In this tutorial, you will learn how to optimize a general crystal structure. An explicit example is given for hexagonal structures. Here, you will set up and execute a series of calculations for different volumes (at constant $c/a$ ratio) and for different $c/a$ ratios (at constant volume) for Be in the hexagonal structure. The tools used in this tutorial are applicable to any crystal type.

### 0. Define relevant environment variables

Read the following paragraphs before starting with the rest of this tutorial! Before starting, be sure that the relevant environment variables are already defined as specified in Tutorial scripts and environment variables. Here is a list of the scripts which are relevant for this tutorial, with a short description.

• OPTIMIZE-lattice.py: Python script. A manager program which calls the setup and analyze scripts.
• OPTIMIZE-setup.py: Python script for generating structures at different volumes/strains. This script is used within the lattice script.
• OPTIMIZE-analyze.py: Python script for fitting the energy-vs-volume and energy-vs-strain curves. This script is called by the lattice script.
• OPTIMIZE-submit.sh: (Bash) shell script for running a series of exciting calculations.
• OPTIMIZE-clean.sh: (Bash) shell script for cleaning unnecessary files.
• exciting2sgroup.xsl: xsl script for converting an exciting input file to an input file for the program sgroup.

Requirements: Bash shell. Python with the numpy, lxml, matplotlib.pyplot, and sys libraries.

From now on the symbol $ will indicate the shell prompt.

##### Extra requirement: Tool for space-group determination

The scripts in this tutorial use the sgroup tool. If you have not done so before, this tool should be downloaded and installed. The code sgroup is a utility which allows one to determine the space group and symmetry operations of a crystal structure.
After the download, you will get a tar.gz file. Go to the directory where you saved this file and execute the following commands.

$ tar xfvp DownloadedFile.tar.gz
$ cd SpaceGroups
$ make
$ cp sgroup $EXCITINGSCRIPTS

Now, you have all the requirements to optimize the lattice parameters of any given crystal structure.

### 1. Basics of lattice optimization

The lattice of a general crystal structure is determined by giving six lattice parameters, $a, b, c, \alpha, \beta,$ and $\gamma$. The first 3 parameters are connected with the length of the 3 primitive vectors of the crystal; the last 3 are the angles between the primitive vectors. In order to perform the optimization of the energy of a crystal with respect to all lattice parameters, we can use an iterative cyclic procedure where, in turn, one parameter is varied and the remaining 5 are kept fixed. The procedure can be repeated until the obtained equilibrium parameters do not vary anymore within a desired accuracy. In particular, the minimization procedure as performed by the script OPTIMIZE-lattice.py will contain the following cycles.

1. Minimization with respect to the volume $V$ (by applying an isotropic strain).
2. Minimization with respect to the $b/a$ ratio (all other parameters are fixed).
3. Minimization with respect to the $c/a$ ratio (all other parameters are fixed).
4. Minimization with respect to the angle $\alpha$ between the $b$ and $c$ axes (all other parameters are fixed).
5. Minimization with respect to the angle $\beta$ between the $a$ and $c$ axes (all other parameters are fixed).
6. Minimization with respect to the angle $\gamma$ between the $a$ and $b$ axes (all other parameters are fixed).

In the example reported in this tutorial, we consider a crystal with hexagonal crystal structure. In this case there are only two free parameters, the volume $V$ and the $c/a$ ratio. Next, we show how to perform the lattice optimization with respect to these two parameters.

### 2.
Preparation of the input file

The first step is to create a directory for the system that you want to investigate. In this tutorial, we consider as an example the calculation of energy-vs-volume and energy-vs-strain curves for Be in the hexagonal structure. Therefore, we create a directory Be_OPT and we move inside it.

$ mkdir Be_OPT
$ cd Be_OPT

Inside this directory, we create or copy an exciting input file for hexagonal Be with the name Be_opt.xml. This file could look like the following.

<input>
   <title>Be: Lattice optimization</title>
   <structure speciespath="$EXCITINGROOT/species">
      <crystal scale="4.3100">
         <basevect>  1.00000000  0.00000000  0.00000000 </basevect>
         <basevect> -0.50000000  0.86602540  0.00000000 </basevect>
         <basevect>  0.00000000  0.00000000  1.51000000 </basevect>
      </crystal>
      <species speciesfile="Be.xml" rmt="1.45">
         <atom coord="0.66666667 0.33333333 0.75000000"/>
         <atom coord="0.33333333 0.66666667 0.25000000"/>
      </species>
   </structure>
   <groundstate ngridk="6 6 4" gmaxvr="14">
   </groundstate>
   <relax/>
</input>

This file can be saved with any name. In this tutorial it is not necessary to rename the exciting input file as input.xml, because this file is the input of the script OPTIMIZE-lattice.py and not of exciting itself. Please notice that the input file for a direct exciting calculation must always be called input.xml. Be sure to set the correct path for the exciting root directory (indicated in this example by $EXCITINGROOT) to the one pointing to the place where the exciting directory is placed. In order to do this, use the command

$ SETUP-excitingroot.sh Be_opt.xml

### 3. Lattice optimization of hexagonal crystals

Next, we illustrate the iterative procedure for performing the optimization of the crystal structure of hexagonal beryllium. The only relevant parameters in this case are the volume of the unit cell and the $c/a$ ratio.

##### STEP1: Optimizing the volume

At the first step, we optimize the energy with respect to the volume.
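Before running the script, it may help to see what "five distorted structures at maximum physical strain 0.01" means in practice. The sketch below builds such a grid of scale factors from the scale attribute of Be_opt.xml; the exact strain convention used internally by OPTIMIZE-setup.py may differ, so treat this as illustrative only.

```python
import numpy as np

eps_max, n = 0.01, 5                      # the entries used in this step
strains = np.linspace(-eps_max, eps_max, n)
scale0 = 4.3100                           # 'scale' from Be_opt.xml
scales = scale0 * (1.0 + strains)         # assumed: isotropic scaling of 'scale'
for i, sc in enumerate(scales, 1):
    print("vol_%02d  scale = %.5f" % (i, sc))
```

The middle structure (vol_03) is the undistorted one, and the remaining four are equally spaced between the maximum negative and positive strains, matching the vol_01 ... vol_05 directories generated below.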
In order to generate input files for a series of volumes you have to use the script OPTIMIZE-lattice.py.

$ OPTIMIZE-lattice.py
>>>> Please enter the exciting input file name: Be_opt.xml

Number and name of space group: 194 (P 63/m m c)
Hexagonal I structure in the Laue classification.

Which parameter would you like to optimize?
1 ... volume
2 ... c/a ratio with constant volume
>>>> Please choose '1' or '2': 1
>>>> Please enter the maximum physical strain value
     The suggested value is between 0.001 and 0.050: 0.01
The maximum physical strain is 0.01
>>>> Please enter the number of the distorted structures [odd number > 4]: 5
The number of the distorted structures is 5
$

Entry values must be typed on the screen when requested. In this case, the entries are the following.

1. Be_opt.xml, the name of the input file you have created.
2. 1, the type of optimization you choose.
3. 0.01, the absolute value of the maximum strain for which we want to perform the calculation.
4. 5, the number of deformed structures, equally spaced in strain, which are generated between the maximum negative strain and the maximum positive one.

Now, you must move to the directory VOL and execute the calculation there. If you list the contents of the directory, you should see something like

$ cd VOL
$ ls
INFO_VOL  vol-Parameters  vol-xml  vol_01  vol_02  vol_03  vol_04  vol_05
$

To execute the calculations, you have to run the script OPTIMIZE-submit.sh. If you do so, the screen output will be similar to the following.

$ OPTIMIZE-submit.sh

SCF calculation of "vol_01" starts --------------------------------
Elapsed time = 0m12s

SCF calculation of "vol_02" starts --------------------------------
Elapsed time = 0m12s

...

SCF calculation of "vol_05" starts --------------------------------
Elapsed time = 0m15s
$

When the calculations have finished running, the results can be analyzed. In order to do this, you have to run the OPTIMIZE-lattice.py python script again in the current directory.
$ OPTIMIZE-lattice.py
>>>> Murnaghan or Birch-Murnaghan EOS: [M/B]

At this point, the script is asking whether you want to use a Murnaghan (M) or Birch-Murnaghan (B) equation of state for extracting equilibrium parameters such as the equilibrium volume and bulk modulus. Therefore, if you type B or b, you will get as a result

$ OPTIMIZE-lattice.py
>>>> Murnaghan or Birch-Murnaghan EOS: [M/B] b

=====================================================================
Fit accuracy:
Log(Final residue in [Ha]): -6.49

Final parameters:
E_min = -29.3607724 [Ha]
V_min = 106.6948 [Bohr^3]
B_0 = 120.664 [GPa]
B' = 3.904

Optimized lattice parameters saved into the file: "BM-optimized.xml"
=====================================================================
$

Moreover, the script generates a plot (PostScript file BM_eos.eps) which looks like the following. On this plot, you can also find the optimized values of the parameters appearing in the equation of state (minimum energy, equilibrium volume, bulk modulus, and bulk modulus pressure derivative).

Please note:

1. The bulk modulus and bulk modulus pressure derivative which are derived here have to be interpreted only as fitting parameters. They do not coincide with the "exact" bulk modulus and bulk modulus pressure derivative of the crystal. Indeed, these "exact" values should be obtained by fitting, with an equation of state (Birch or Murnaghan), the function E=E(V), where for each volume V, E(V) is the energy obtained by optimizing, at that given V, all other lattice and internal parameters.
2. The visual analysis of the plots is very important. The user should always check them at each step. In particular, if the minimum lies outside the displayed region, the calculation should be restarted with more appropriate values of the initial parameters (e.g., volume or $c/a$ ratio). The optimal situation is when the minimum of the energy curves is located in the middle of the investigated region.
3.
If the difference in energy between the calculated points and the fit is larger than the final required accuracy in the energy, the calculation should be restarted with more appropriate computational parameters. In particular, one should consider the number of k points (ngridk), the value of rgkmax, and the accuracy in the calculated total energy (epsengy).

A file corresponding to an exciting input file for the optimized geometry is created with the name BM-optimized.xml. If you are interested in checking how accurate the calculated equilibrium parameters are at this step, you can find more information here. Now, you can delete unnecessary files by executing the command

$ OPTIMIZE-clean.sh

At this point, you have performed the first optimization step by varying only the volume. In order to be prepared for the next step, you should now move to the parent directory and rename the VOL directory to 1-VOL (first step, optimizing only the volume). Then, you should copy the BM-optimized.xml file to the current directory with the new name 1-VOL.xml. This file will be used as the input file in the next step.

$ cd ..
$ mv VOL 1-VOL
$ cp 1-VOL/BM-optimized.xml 1-VOL.xml
$ ls
1-VOL  1-VOL.xml  Be_opt.xml  sgroup.out
$

##### STEP2: Optimizing the $c/a$ ratio

In order to perform the next optimization step, you have to run OPTIMIZE-lattice.py again in the Be_OPT directory, typing entries like in the following.

$ OPTIMIZE-lattice.py
>>>> Please enter the exciting input file name: 1-VOL.xml

Number and name of space group: 194 (P 63/m m c)
Hexagonal I structure in the Laue classification.

Which parameter would you like to optimize?
1 ... volume
2 ...
c/a ratio with constant volume
>>>> Please choose '1' or '2': 2
>>>> Please enter the maximum physical strain value
     The suggested value is between 0.001 and 0.050: 0.01
The maximum physical strain is 0.01
>>>> Please enter the number of the distorted structures [odd number > 4]: 5
The number of the distorted structures is 5
$

Now, move to the COA directory and run OPTIMIZE-submit.sh.

$ cd COA/
$ OPTIMIZE-submit.sh

When the calculations are finished, run OPTIMIZE-lattice.py again.

$ OPTIMIZE-lattice.py
=====================================================================
Optimized lattice parameters saved into the file: "coa-optimized.xml"
=====================================================================
$

In this case, the optimization is performed using a fourth-order polynomial fit for calculating the minimum energy and the corresponding strain. The resulting plot (also available as the PostScript file coa.eps) should look like the following.

Delete unnecessary files.

$ OPTIMIZE-clean.sh

At this point, you have completed the second optimization step (by varying only the $c/a$ ratio). The optimized structure is saved in the coa-optimized.xml file. Similar to the previous step, you should move out to the parent directory and rename COA to 2-COA (second optimization step, varying only $c/a$). Then, copy the coa-optimized.xml file to the current directory with the name 2-COA.xml.

$ cd ..
$ mv COA 2-COA
$ cp 2-COA/coa-optimized.xml 2-COA.xml

##### STEP3: Optimizing again the volume

Notice: Because the volume and the $c/a$ ratio have already been optimized once, we can choose a smaller range of distortion for the next steps.

Repeat now the procedure already explained in STEP1, running the script OPTIMIZE-lattice.py and using as entries the values 2-COA.xml, 1, 0.005, and 5 in the given order. After having performed the calculation (running the script OPTIMIZE-submit.sh inside the directory VOL), you run OPTIMIZE-lattice.py and get the following plot.
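The fourth-order polynomial fit used for the $c/a$ step can be illustrated with a few lines of numpy. This is only a sketch of the fit-and-minimize idea with synthetic data, not the actual OPTIMIZE-analyze.py code.

```python
import numpy as np

def quartic_minimum(strains, energies):
    """Fit E(strain) with a 4th-order polynomial and return the
    (strain, energy) at its minimum inside the sampled range."""
    c = np.polyfit(strains, energies, 4)
    grid = np.linspace(min(strains), max(strains), 10001)
    e = np.polyval(c, grid)
    i = np.argmin(e)
    return grid[i], e[i]

# synthetic check: a parabola with its minimum at strain 0.003
s = np.linspace(-0.01, 0.01, 5)
e = 120.0 * (s - 0.003) ** 2 - 29.3608
smin, emin = quartic_minimum(s, e)
print(smin, emin)
```

A dense-grid search on the fitted polynomial is used instead of solving for the roots of its derivative, so the returned minimum is guaranteed to lie inside the sampled strain range (the same reason the tutorial insists the minimum should sit in the middle of the investigated region).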
At this point, you have optimized the volume for the second time. Follow the last part of STEP1 and copy the file BM-optimized.xml to the parent directory under the name 3-VOL.xml.

$ cd ..
$ mv VOL 3-VOL
$ cp 3-VOL/BM-optimized.xml 3-VOL.xml

##### STEP4: Optimizing again the $c/a$ ratio

Proceed in a similar way to STEP2. Run the script OPTIMIZE-lattice.py using as entries the values 3-VOL.xml, 2, 0.005, and 5 in the given order. Using the same procedure as in the previous steps, you will end up with the following plot.

At this point, the second optimization of the $c/a$ ratio has been completed and the new optimized structure has been saved in coa-optimized.xml.

##### Reaching convergence

In the following table you find a summary of the results of the first 4 optimization steps.

Step   V_min [Bohr^3]   (c/a)_min   E_min [Ha]
0      104.6982         1.51000     -29.3606931
1      106.6948         "           -29.3607724
2      "                1.52207     -29.3608015
3      106.5714         "           -29.3608017
4      "                1.52233     -29.3608018

As you can see from the previous table, at the 4th iteration you have reached the following convergence.

• The equilibrium volume is converged within 10^-1 Bohr^3.
• The c/a ratio is converged within 3×10^-4.
• The energy at the minimum seems to be converged within 10^-4 mHa. Indeed, such a small value should be considered an artifact of the optimization procedure, which assumes that the calculated total energies are exact. However, the accuracy in the determination of the minimum energy cannot be smaller than the accuracy of the total energy in a single SCF calculation. For the calculations performed in this tutorial, total energies are calculated with the default value of the accuracy, i.e., 10^-4 Ha.

If these results correspond to the desired accuracy, you can stop the optimization procedure. Otherwise, you proceed with the next step and, using the new results, check again the convergence behavior of the equilibrium parameters.
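The Birch-Murnaghan fit used in the volume steps can likewise be sketched with numpy: the BM energy is a cubic polynomial in $x = V^{-2/3}$, so a polynomial fit plus a grid search recovers $V$min and $E$min. This is illustrative only (synthetic data, not the OPTIMIZE-analyze.py implementation).

```python
import numpy as np

def bm_minimum(volumes, energies):
    """Fit E(V) with the Birch-Murnaghan form (a cubic in x = V**(-2/3))
    and return the (V, E) at the minimum inside the sampled range."""
    v = np.asarray(volumes, dtype=float)
    x = v ** (-2.0 / 3.0)
    coeffs = np.polyfit(x, energies, 3)
    v_grid = np.linspace(v.min(), v.max(), 20001)
    e_grid = np.polyval(coeffs, v_grid ** (-2.0 / 3.0))
    i = np.argmin(e_grid)
    return v_grid[i], e_grid[i]

# synthetic check: a BM-shaped curve with its minimum at V0 = 106.7 Bohr^3;
# the curvature constant is made up, chosen to give a few tens of mHa of
# variation across the sampled volumes
v0 = 106.7
vols = np.linspace(100.0, 114.0, 5)
ens = 2.0e4 * (vols ** (-2.0 / 3.0) - v0 ** (-2.0 / 3.0)) ** 2 - 29.36
vmin, emin = bm_minimum(vols, ens)
print(vmin, emin)
```

Restricting the search to the sampled volume range mirrors the tutorial's advice that the minimum should lie inside the displayed region; if it does not, the fit is extrapolating and the calculation should be restarted around a better starting volume.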
http://www2.geog.ucl.ac.uk/~plewis/geogg122/_build/html/Chapter3_Scientific_Numerical_Python/advanced.html
# A3. Advanced notes: Scientific and Numerical Python¶ ## A3.1 Pulling Compressed netCDF Files¶ Sometimes, such as when we want to pull data from netCDF files from some data site such as http://www.globalbedo.org, we might find that ‘older style’ formats have been used, such as netCDF3 which might not have internal compression. To save storage space, it is common to compress such files extrenally (i.e. to gzip a file). That makes direct reading from a url a bit more tricky, and in such cases, we may as well uncompress the file to a local temporary file. ### Doing this in Python¶ What we are going to do is to write a class to download a gzipped file from a url and return a filename that can be read by other functions. The file is available as gzurl.py, in the directory files/python. To be able to import this, we have to put files/python in the path where Python looks for modules: import sys,os # put local directory into the path sys.path.insert(0,os.path.abspath('files%spython'%os.sep)) # import module from gzurl import gzurl help(gzurl) Help on class gzurl in module gzurl: class gzurl(__builtin__.object) | | Prof. P. 
Lewis, UCL, | Thu 10 Oct 2013 12:01:00 BST | [email protected] | | Methods defined here: | | __del__(self) | Destriuctor | | Tidy up | | __init__(self, url, filename=None, store=False, file=True) | initialise class instance | | Parameters: | | url : url of gzipped file | | Options: | | filename: | specify a filename explicitly, rather than | a temporary file (default None) | store : boolean flag to store the uncompressed | data in self.data (default false) | file : boolean flag to store data to a file | (default True) | | close(self) | Tidy up | | read gzipped data from url | and uncompress | | ---------------------------------------------------------------------- | Data descriptors defined here: | | __dict__ | dictionary for instance variables (if defined) | | __weakref__ | list of weak references to the object (if defined) You can look through the file gzurl.py <files/python/gzurl.py>__ at your leisure, but it is of interest to see how we have done this. We need to load the following modules to do this: urllib2, io, gzip, tempfile: import urllib2, io, gzip, tempfile The first thing we do is attempt to open a file specified from a url: # codes for url specification on globalbedo.org years = range(1998,2012) codes = [95,95,97,97,26,66,54,54,29,25,53,56,56,78] XX = dict(zip(years,codes)) year = 2009 root = 'http://www.globalbedo.org/GlobAlbedo%d/mosaics/%d/0.5/monthly/'%\ (XX[year],year) # filename formatting string: use %02d for month eg 01 for 1 month = 1 url = root + '/GlobAlbedo.%d%02d.mosaic.5.nc.gz'%(year,month) print url # open file from url f = urllib2.urlopen(url) http://www.globalbedo.org/GlobAlbedo56/mosaics/2009/0.5/monthly//GlobAlbedo.200901.mosaic.5.nc.gz We then read data from that with the statement: bdata = f.read() # which looks like this: bdata[:50] 'x1fx8bx08x08xf5xe71Rx00x03GlobAlbedo.200901.mosaic.5.ncx00xecxdbwxTUxfexc7qx10' We then create a buffered I/O stream from this using io.BytesIO, which is the form we want the information in for the 
next part: f = urllib2.urlopen(url) Next, we use the module gzip.GzipFile which simulates the methods of a gzip file: gzip.GzipFile(fileobj=fileobj) <gzip _io.BytesIO object at 0x105aea950 0x105bd2890> And then we read from this: f = urllib2.urlopen(url) This is now binary of netCDF format in this case. Next, we need to write these data to a file. In this case, we don’t want to really save the data anywhere, so we want to use a temporary file. In Python, you can create a temporary file using the module tempfile, which creates a temporary (unique) file on the system. tmp = tempfile.NamedTemporaryFile(delete=False) print tmp.name /var/folders/pt/z0y8dmcd7d77cs_0hnygpwh80000gn/T/tmpN1LPzf So we write the data to this file: tmp.write(data) Then, after we have done something with the data, we will want to tidy up and delete the file: tmp.unlink(tmp.name) To use this module then: import sys,os # put local directory into the path sys.path.insert(0,os.path.abspath('python')) # import local module gzurl from gzurl import gzurl import gdal ''' Method to read a GlobAlbedo file from earlier ''' file_template = 'NETCDF:"%s":%s' # allow filename to be overridden from filename= filename = filename or root + 'GlobAlbedo.%d%02d.mosaic.5.nc'%(year,month) g = gdal.Open ( file_template % ( filename, layer ) ) if g is None: raise IOError # return a numpy array return(np.array(data)) # codes for url specification on globalbedo.org years = range(1998,2012) codes = [95,95,97,97,26,66,54,54,29,25,53,56,56,78] XX = dict(zip(years,codes)) year = 2009 root = 'http://www.globalbedo.org/GlobAlbedo%d/mosaics/%d/0.5/monthly/'%\ (XX[year],year) print root # filename formatting string: use %02d for month eg 01 for 1 month = 1 url = root + '/GlobAlbedo.%d%02d.mosaic.5.nc.gz'%(year,month) f = gzurl(url) # read the netCDF file from f.filename print nc http://www.globalbedo.org/GlobAlbedo56/mosaics/2009/0.5/monthly/ [[ nan nan nan ..., nan nan nan] [ nan nan nan ..., nan nan nan] [ nan nan nan ..., nan 
nan nan] ..., [ 0.67017049 0.67017049 0.67017049 ..., 0.67199522 0.67199522 0.67199522] [ 0.67017049 0.67017049 0.67017049 ..., 0.67199522 0.67199522 0.67199522] [ 0.67017049 0.67017049 0.67017049 ..., 0.67199522 0.67199522 0.67199522]] Alternatively, to read all of the files into the directory files/data for the year 2011 and keep them: import sys,os # put local directory into the path sys.path.insert(0,os.path.abspath('files%spython'%os.sep)) # import local module gzurl from gzurl import gzurl import gdal # codes for url specification on globalbedo.org years = range(1998,2012) codes = [95,95,97,97,26,66,54,54,29,25,53,56,56,78] XX = dict(zip(years,codes)) year = 2009 root = 'http://www.globalbedo.org/GlobAlbedo%d/mosaics/%d/0.5/monthly/'%\ (XX[year],year) for month0 in range(12): # filename formatting string: use %02d for month eg 01 for 1 base = 'GlobAlbedo.%d%02d.mosaic.5.nc'%(year,month0+1) url = root + base + '.gz' # specify a local filename # work out how / why this works ... local = os.path.join('data{0}'.format(os.sep),base) print local f = gzurl(url,filename=local) # read the netCDF file from f.filename data/GlobAlbedo.200901.mosaic.5.nc data/GlobAlbedo.200902.mosaic.5.nc data/GlobAlbedo.200903.mosaic.5.nc data/GlobAlbedo.200904.mosaic.5.nc data/GlobAlbedo.200905.mosaic.5.nc data/GlobAlbedo.200906.mosaic.5.nc data/GlobAlbedo.200907.mosaic.5.nc data/GlobAlbedo.200908.mosaic.5.nc data/GlobAlbedo.200909.mosaic.5.nc data/GlobAlbedo.200910.mosaic.5.nc data/GlobAlbedo.200911.mosaic.5.nc data/GlobAlbedo.200912.mosaic.5.nc ### Doing this in unix¶ That’s not too complicated, but you might often do this sort of thing from unix instead: !rm -f data/GlobAlbedo.200901.mosaic.5.nc.gz !wget -O data/GlobAlbedo.200901.mosaic.5.nc.gz \ http://www.globalbedo.org/GlobAlbedo56/mosaics/2009/0.5/monthly/GlobAlbedo.200901.mosaic.5.nc.gz --2014-10-07 14:03:42-- http://www.globalbedo.org/GlobAlbedo56/mosaics/2009/0.5/monthly/GlobAlbedo.200901.mosaic.5.nc.gz Resolving 
www.globalbedo.org... 128.40.73.100 Connecting to www.globalbedo.org|128.40.73.100|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 4540366 (4.3M) [application/x-gzip] Saving to: data/GlobAlbedo.200901.mosaic.5.nc.gz' 100%[======================================>] 4,540,366 21.6M/s in 0.2s 2014-10-07 14:03:42 (21.6 MB/s) - data/GlobAlbedo.200901.mosaic.5.nc.gz' saved [4540366/4540366] !gunzip -f data/GlobAlbedo.200901.mosaic.5.nc.gz !ls -l data/GlobAlbedo.200901.mosaic.5.nc -rw-rw-r--. 1 plewis plewis 18669672 Sep 12 2013 data/GlobAlbedo.200901.mosaic.5.nc ## A3.2 Logical combinations in numpy¶ Let’s read in a different GlobAlbedo dataset. This time, we will read 8 day tile data (day of year: 001, 009 etc. every 8 days). The tile we will read is h17v03 which covers most of the UK. import sys,os sys.path.insert(0,os.path.abspath('files%spython'%os.sep)) from gzurl import gzurl from netCDF4 import Dataset years = range(1998,2012) codes = [95,95,97,97,26,66,54,54,29,25,53,56,56,78] XX = dict(zip(years,codes)) year = 2009 tile = 'h17v03' root = 'http://www.globalbedo.org/GlobAlbedo%d/tiles/%d/%s/'%\ (XX[year],year,tile) # filename formatting string: use %03d for doy eg 001 for 1 doy = 145 url = root + 'GlobAlbedo.%d%03d.%s.nc.gz'%(year,doy,tile) # see if you can make sense of this complicated formatting filename = url.split('/')[-1].replace('.gz','') local_file = 'files{0}data{0}{1}'.format(os.sep,filename) # try to read local file try: nc = Dataset(local_file,'r') except: f = gzurl(url,filename=local_file) nc = Dataset(f.filename,'r') f.close() # now pull some data vis = np.array(nc.variables['BHR_VIS']) nir = np.array(nc.variables['BHR_NIR']) ndvi = (nir - vis)/(nir + vis) -c:5: RuntimeWarning: invalid value encountered in divide Now plot it: import pylab as plt # figure size plt.figure(figsize=(8,8)) # title plt.title('NDVI: Tile %s %d doy %03d'%(tile,year,doy)) # colour map cmap = plt.get_cmap('Spectral') # plot the figure 
plt.imshow(ndvi,interpolation='none',cmap=cmap,vmin=0.,vmax=1.) # colour bar plt.colorbar() <matplotlib.colorbar.Colorbar instance at 0x10687ce60> We notice in this dataset that there are some ‘funnies’ (unreliable data) around the coastline, which are probably due to negative reflectance values. We could try, for instance to build a mask for these, supposing them to be some other ‘invalid’ number, but in this dataset, we have some other data layers that can help: nc.variables.keys() [u'metadata', u'DHR_VIS', u'DHR_NIR', u'DHR_SW', u'BHR_VIS', u'BHR_NIR', u'BHR_SW', u'DHR_sigmaVIS', u'DHR_sigmaNIR', u'DHR_sigmaSW', u'BHR_sigmaVIS', u'BHR_sigmaNIR', u'BHR_sigmaSW', u'Weighted_Number_of_Samples', u'Relative_Entropy', u'Goodness_of_Fit', u'Snow_Fraction', u'Solar_Zenith_Angle', u'lat', u'lon', u'crs'] # better have a look at the individual bands as well # plot the vis and nir bands plt.figure(figsize=(8,8)) plt.title('VIS: Tile %s %d doy %03d'%(tile,year,doy)) cmap = plt.get_cmap('Spectral') plt.imshow(vis,interpolation='none',cmap=cmap,vmin=0.,vmax=1.) plt.colorbar() plt.figure(figsize=(8,8)) plt.title('NIR: Tile %s %d doy %03d'%(tile,year,doy)) cmap = plt.get_cmap('Spectral') plt.imshow(nir,interpolation='none',cmap=cmap,vmin=0.,vmax=1.) plt.colorbar() <matplotlib.colorbar.Colorbar instance at 0x107236d88> Apart from a few minor outliers, these data look fine. mask = np.array(nc.variables['Data_Mask']).astype(bool) # plot it plt.figure(figsize=(8,8)) plt.title('Data_Mask: Tile %s %d doy %03d'%(tile,year,doy)) cmap = plt.get_cmap('Spectral') plt.colorbar() <matplotlib.colorbar.Colorbar instance at 0x1068ab710> The mask is True where there are valid (land) data. In a masked array, we want the opposite of this. 
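As a side note, the masking pattern used below can be shown with a tiny self-contained numpy sketch. The arrays here are made up for illustration, not the GlobAlbedo ones.

```python
import numpy as np
import numpy.ma as ma

# toy 2x3 'reflectance' bands, each with one negative (invalid) pixel
vis = np.array([[0.1, -0.2, 0.3], [0.2, 0.4, 0.5]])
nir = np.array([[0.4, 0.5, -0.1], [0.6, 0.7, 0.8]])
land = np.array([[True, True, True], [False, True, True]])

# valid where we are on land AND both bands are positive
valid = land & (vis > 0) & (nir > 0)

# masked arrays want True where data are INVALID, hence the bitwise not ~
vis_m = ma.array(vis, mask=~valid)
nir_m = ma.array(nir, mask=~valid)
ndvi = (nir_m - vis_m) / (nir_m + vis_m)
print(valid.sum())   # 3 valid pixels survive the combined mask
```

The same ~, & and | bitwise operators are what combine the real data mask with the reflectance conditions in the code that follows.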
We can’r directly use not, but we can use the bitwise operatoe ~: import numpy.ma as ma ndvi = (nir - vis)/(nir + vis) plt.figure(figsize=(8,8)) plt.title('NDVI: Tile %s %d doy %03d'%(tile,year,doy)) cmap = plt.get_cmap('Spectral') plt.imshow(ndvi,interpolation='none',cmap=cmap,vmin=0.,vmax=1.) plt.colorbar() <matplotlib.colorbar.Colorbar instance at 0x1072aa290> The data mask hasn’t solved the problem for NDVI then. A problem might arise from a small number of negative reflectance values in the dataset. We can create masks for these: mask1 = vis < 0. print 'number of -ve VIS pixels',np.sum(mask1) print 'number of -ve NIR pixels',np.sum(mask2) number of -ve VIS pixels 317 number of -ve NIR pixels 934 and we can combine them with a bitwise operator, | (or) or & (and) in this case (reversing the conditions): mask = np.array(nc.variables['Data_Mask']).astype(bool) & (vis > 0) & (nir > 0) # plot it plt.figure(figsize=(8,8)) plt.title('Data_Mask: Tile %s %d doy %03d'%(tile,year,doy)) cmap = plt.get_cmap('Spectral') plt.colorbar() <matplotlib.colorbar.Colorbar instance at 0x107c865f0> vis = ma.array(vis,mask=~mask) ndvi = (nir - vis)/(nir + vis) plt.figure(figsize=(8,8)) plt.title('NDVI: Tile %s %d doy %03d'%(tile,year,doy)) cmap = plt.get_cmap('Spectral') plt.imshow(ndvi,interpolation='none',cmap=cmap,vmin=0.,vmax=1.) plt.colorbar() <matplotlib.colorbar.Colorbar instance at 0x107320830> This hasn’t entirely sorted it either. Next have a look at a few more fields before going further: # demonstration of multiple subplots datasets = np.array([['DHR_VIS','DHR_NIR'],\ ['DHR_sigmaVIS','DHR_sigmaNIR'],\ # load up all datasets in dict data data = {} dlist = datasets.copy().flatten() for d in dlist: data[d] = np.array(nc.variables[d]) (data['DHR_VIS'] > 0.) | \ (data['DHR_NIR'] > 0.)) s = datasets.shape # how big for each subplot ? 
```python
big = 5
# set the figure size
plt.figure(figsize=(s[1]*big,s[0]*big))

# colorbars for subplots are a bit tricky
# here's one way of sorting this
# using dataset shapes
from matplotlib import gridspec
gs = gridspec.GridSpec(s[0],s[1])

# colour map
cmap = plt.get_cmap('Spectral')

for i,d0 in enumerate(datasets):
    for j,d in enumerate(d0):
        axes = plt.subplot(gs[i,j])
        axes.set_title(d)
        # no axis ticks
        axes.set_xticks([])
        axes.set_yticks([])
        im = axes.imshow(data[d],cmap=cmap,interpolation='none',vmin=0.)
        plt.colorbar(im)
```

```python
ndvi = (data['DHR_NIR'] - data['DHR_VIS'])/(data['DHR_NIR'] + data['DHR_VIS'])

plt.figure(figsize=(8,8))
plt.title('NDVI: Tile %s %d doy %03d'%(tile,year,doy))
cmap = plt.get_cmap('Spectral')
plt.imshow(ndvi,interpolation='none',cmap=cmap,vmin=0.,vmax=1.)
plt.colorbar()
```

    <matplotlib.colorbar.Colorbar instance at 0x116dd8ab8>

So it looks as though we need to filter on 'Weighted_Number_of_Samples' as well, and perhaps on uncertainty:

```python
# demonstration of multiple subplots
datasets = np.array([['DHR_VIS','DHR_NIR'],
                     ['DHR_sigmaVIS','DHR_sigmaNIR']])   # further rows lost in extraction

# load up all datasets in dict data
data = {}
dlist = datasets.copy().flatten()
for d in dlist:
    data[d] = np.array(nc.variables[d])

# a mask assignment was lost in extraction; its surviving conditions are:
#   (data['Weighted_Number_of_Samples'] > 0.5) & \
#   (data['DHR_sigmaVIS'] <= 0.8) & \
#   (data['DHR_sigmaNIR'] <= 0.8) & \
#   (data['DHR_VIS'] >= 0.) & \
#   (data['DHR_NIR'] >= 0.)

s = datasets.shape
# how big for each subplot ?
big = 5
# set the figure size
plt.figure(figsize=(s[1]*big,s[0]*big))

from matplotlib import gridspec
gs = gridspec.GridSpec(s[0],s[1])
cmap = plt.get_cmap('Spectral')

for i,d0 in enumerate(datasets):
    for j,d in enumerate(d0):
        axes = plt.subplot(gs[i,j])
        axes.set_title(d)
        # no axis ticks
        axes.set_xticks([])
        axes.set_yticks([])
        im = axes.imshow(data[d],cmap=cmap,interpolation='none',vmin=0.)
        plt.colorbar(im)
```

```python
ndvi = (data['DHR_NIR'] - data['DHR_VIS'])/(data['DHR_NIR'] + data['DHR_VIS'])

plt.figure(figsize=(13,13))
plt.title('NDVI: Tile %s %d doy %03d'%(tile,year,doy))
cmap = plt.get_cmap('Spectral')
plt.imshow(ndvi,interpolation='none',cmap=cmap,vmin=0.,vmax=1.)
plt.colorbar()
```

    <matplotlib.colorbar.Colorbar instance at 0x115bb1cb0>

That’s quite a bit better, but still not perfect. Experiment with the conditions of the masking to see how you can get rid of the odd pixels (the ‘red’ ones in the above). Do *not* filter on ndvi itself, as we *might* be interested in negative ndvi values in some cases. Once you think you have some useful filtering conditions, try it out on some different dates and tiles.

Some other things to try:

• Write parts of the code as functions.
• Put the code developed into a file and run it from the unix command line.

If you followed the advanced material for the previous chapter, you will have noted the use of pyephem as a module that we can use for calculating the solar zenith angle. There is a similar package pysolar that is a little easier to use for solar radiation calculations. We will install the package pysolar into your user area: at a unix prompt, type:

```
easy_install --user pysolar
```

    Searching for pysolar
    Best match: Pysolar 0.5
    Processing Pysolar-0.5-py2.7.egg
    Pysolar 0.5 is already the active version in easy-install.pth
    Using /Users/plewis/.local/lib/python2.7/site-packages/Pysolar-0.5-py2.7.egg
    Processing dependencies for pysolar
    Finished processing dependencies for pysolar

If all goes well, the text that comes up at the terminal should tell you that this has installed (e.g. in /home/plewis/.local/lib/python2.7/site-packages/Pysolar-0.5-py2.7.egg).
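One way to act on the ‘write parts of the code as functions’ suggestion is to wrap the filtering in small helpers. The sketch below reuses the field names and thresholds tried above, but runs on a tiny invented dictionary rather than the netCDF file, so the numbers are illustrative only:

```python
import numpy as np
import numpy.ma as ma

def build_mask(data, min_samples=0.5, max_sigma=0.8):
    """Combine the filtering conditions tried above into one boolean
    mask of 'good' pixels (True = keep)."""
    return ((data['Weighted_Number_of_Samples'] > min_samples) &
            (data['DHR_sigmaVIS'] <= max_sigma) &
            (data['DHR_sigmaNIR'] <= max_sigma) &
            (data['DHR_VIS'] >= 0.) &
            (data['DHR_NIR'] >= 0.))

def masked_ndvi(data, mask):
    """NDVI as a masked array; numpy.ma masks the *bad* pixels, hence ~mask."""
    vis = ma.array(data['DHR_VIS'], mask=~mask)
    nir = ma.array(data['DHR_NIR'], mask=~mask)
    return (nir - vis)/(nir + vis)

# tiny synthetic stand-in for the netCDF fields (values invented):
# the second pixel fails the sigmaVIS test
data = {
    'Weighted_Number_of_Samples': np.array([2.0, 3.0]),
    'DHR_sigmaVIS':               np.array([0.1, 0.9]),
    'DHR_sigmaNIR':               np.array([0.2, 0.1]),
    'DHR_VIS':                    np.array([0.1, 0.2]),
    'DHR_NIR':                    np.array([0.5, 0.6]),
}
mask = build_mask(data)
print(mask)                      # -> [ True False]
print(masked_ndvi(data, mask))   # first entry ~0.667, second masked (--)
```

Putting the thresholds in as keyword arguments makes it easy to experiment with different filtering conditions across dates and tiles.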
We can test to see if we can load and run this package:

```python
# from https://github.com/pingswept/pysolar/wiki/examples
import Pysolar
from datetime import datetime

# UCL lat/lon
lat = 51.5248
lon = -0.1336

hour = 12
minute = 0
second = 0
month = 10 # ie October
day = 13
year = 2013

d = datetime(year, month, day, hour, minute, second)
altitude_deg = Pysolar.GetAltitude(lat, lon, d)
zenith = 90. - altitude_deg
# W m^-2
# (the line computing `solar` was lost in extraction)
print zenith,solar
```

    59.5052193764 834.866993323

```python
def solar(year, month, day, hour, lat_deg, lon_deg, minute=0, second=0):
    '''Return solar zenith and clear sky radiation
       for given lat, lon and time/date
    '''
    from datetime import datetime
    import Pysolar
    d = datetime(year, month, day, hour, minute, second)
    altitude_deg = Pysolar.GetAltitude(lat_deg, lon_deg, d)
    # W m^-2
    # (the remainder of the function was lost in extraction)
```

```python
# or import from local module
import sys,os
# put local directory into the path
sys.path.insert(0,os.path.abspath('files%spython'%os.sep))
from solar import solar
import numpy as np

# UCL lat/lon
lat = 51.5248
lon = -0.1336

second = 0
month = 10 # ie October
day = 13
year = 2013

for hour in xrange(24):
    for minute in xrange(60):
        thr = hour + minute/60.
        # append data line as tuple
        # (the list-append call is truncated in extraction; it combines)
        #   solar(year, month, day, hour, lat, lon, minute=minute) +\
        #   (month, day, lat, lon)

# convert to numpy array
# transpose so access eg zenith as ...
# so we have radiation as (7, 1440)
```

    2
    [[  0.00000000e+00   1.66666667e-02   3.33333333e-02 ...,   2.39500000e+01
        2.39666667e+01   2.39833333e+01]
     [  1.36158787e+02   1.36145826e+02   1.36131901e+02 ...,   1.36561804e+02
        1.36551449e+02   1.36540121e+02]
     [  0.00000000e+00   0.00000000e+00   0.00000000e+00 ...,   0.00000000e+00
        0.00000000e+00   0.00000000e+00]
     ...,
     [  1.30000000e+01   1.30000000e+01   1.30000000e+01 ...,   1.30000000e+01
        1.30000000e+01   1.30000000e+01]
     [  5.15248000e+01   5.15248000e+01   5.15248000e+01 ...,   5.15248000e+01
        5.15248000e+01   5.15248000e+01]
     [ -1.33600000e-01  -1.33600000e-01  -1.33600000e-01 ...,  -1.33600000e-01
       -1.33600000e-01  -1.33600000e-01]]

```python
import pylab as plt
plt.xlabel('hour')
# (plot call lost in extraction)
```

    [<matplotlib.lines.Line2D at 0x10681fed0>]

```python
import pylab as plt
plt.title('Solar zenith UCL')
plt.xlabel('hour')
plt.ylabel('solar zenith / degrees')
# (plot call lost in extraction)
```

    [<matplotlib.lines.Line2D at 0x1068bde50>]

This is a better modelling of solar radiation than we did in the main part of the class today. There are several things we could do with this. For example, if we know the albedo, we can calculate the absorbed radiation as previously done (if we assume for the moment that albedo is constant with solar zenith angle ... which it isn’t, generally), but we can now extend over the whole day and integrate to get the total energy per metre squared.

From above, we have power per unit area, in Watts per metre squared. This is the same as energy per unit area per second (i.e. the same as J m^-2 s^-1). So if we sum up the solar radiation from above over the day and multiply by the time interval in seconds (the time interval above is 1 minute, so 60 seconds), we get:

```python
power_density = radiation[2].sum() * 60
print 'power per unit area = %.3f MJ / m^2'%(power_density/10**6)
```

    power per unit area = 23.687 MJ / m^2

and we could now look at e.g.
variations in this over the year (NB this will take some time to calculate if you step every day and minute, so we step every 30 minutes here):

```python
import numpy as np

def radiation(year, month, day, lat, lon, minute_step=30):
    for hour in xrange(24):
        for minute in xrange(0,60,minute_step):
            thr = hour + minute/60.
            # append data line as tuple
            # (the list-append call is truncated in extraction; it combines)
            #   solar(year, month, day, hour, lat, lon, minute=minute) +\
            #   (month, day, lat, lon)
    # convert to numpy array
    # transpose so access eg zenith as ...

def days_in_month(month,year=2013):
    ''' number of days in month'''
    import calendar
    return calendar.monthrange(year,month)[1]

# UCL lat/lon
lat = 51.5248
lon = -0.1336
year = 2013
minute_step = 30

pd = []
for month in xrange(12):
    ndays = days_in_month(month+1,year=year)
    print month,ndays
    for day in xrange(ndays):
        # (per-day calculation truncated in extraction)
        pass
pd = np.array(pd).T
```

    0 31
    1 28
    2 31
    3 30
    4 31
    5 30
    6 31
    7 31
    8 30
    9 31
    10 30
    11 31

```python
import pylab as plt
plt.title('Power per unit area, UCL')
plt.xlabel('month')
plt.ylabel('Power per unit area / MJ m^-2')
plt.plot(pd[0],pd[1]/10**6)
```

    [<matplotlib.lines.Line2D at 0x106abdc50>]

or, if we want to sum over a month:

```python
# UCL lat/lon
lat = 51.5248
lon = -0.1336
year = 2013

pd = []
for month in xrange(12):
    pd_month = []
    ndays = days_in_month(month+1,year=year)
    print month,ndays
    for day in xrange(ndays):
        # (per-day calculation truncated in extraction)
        pass
    pd_month = np.array(pd_month).T
    pd.append([month,pd_month.sum()])
pd = np.array(pd).T
```

    0 31
    1 28
    2 31
    3 30
    4 31
    5 30
    6 31
    7 31
    8 30
    9 31
    10 30
    11 31

```python
import pylab as plt
plt.title('Monthly total Power per unit area, UCL')
plt.xlabel('month')
plt.ylabel('Power per unit area / MJ m^-2')
plt.plot(pd[0],pd[1]/10**6)
```

    [<matplotlib.lines.Line2D at 0x107234d10>]

## E3.3 Improved Solar Radiation Modelling

Using the material above and the global albedo datasets from the main class material, calculate an improved estimate of the total absorbed power per unit area per month (MJ per m^2 per month) for the Earth land surface.
You should do this with a function that takes the year as input and returns the monthly total absorbed power density (MJ m^-2 per month) and the monthly total power density (MJ m^-2 per month). You might have an optional argument minute_step to control the resolution of the calculation, as above. You could then use this to derive latitudinal variations in annual and monthly total absorbed power per unit area.
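The unit bookkeeping behind the daily total above (W m^-2 samples at 1-minute spacing, summed and multiplied by 60 s, then divided by 10^6 for MJ m^-2) can be sanity-checked with a synthetic half-sine ‘day’ — all numbers here are invented, not the Pysolar output:

```python
import numpy as np

# synthetic clear-sky power density: zero at night, a half sine over
# 12 daylight hours (minutes 360-1079), peaking at 800 W m^-2
minutes = np.arange(24*60)
power = np.where((minutes >= 360) & (minutes < 1080),
                 800.0*np.sin(np.pi*(minutes - 360)/720.0), 0.0)

# energy per unit area: sum the W m^-2 samples, times the 60 s interval
energy = power.sum() * 60.0
print('energy per unit area = %.2f MJ / m^2' % (energy/10**6))
# close to the analytic value (2/pi) * 800 W m^-2 * 43200 s = 22.0 MJ / m^2
```

The agreement with the closed-form integral confirms that summing the per-minute samples and multiplying by 60 seconds is the right discretisation.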
https://mathoverflow.net/questions/164051/quadratic-ternary-forms
# Quadratic - Ternary Forms [closed]

Hi, I have the following problems concerning quadratic and ternary forms. Any help would be greatly appreciated.

1. $3\displaystyle\sum_{x, y\in\mathbb{Z}}q^{x^2+xy+7y^2}=3\displaystyle\sum_{x, y\in\mathbb{Z}}q^{9(x^2+xy+y^2)}+P_{3,1}\left(\displaystyle\sum_{x,y\in\mathbb{Z}}q^{x^2+xy+y^2}\right)$, where $P_{3,1}$ is the operator which keeps only the terms whose exponents of $q$ are congruent to 1 modulo 3.

2. Which integers are not represented by the ternary form $(5, 8, 11, -4, 1, 2)$, assuming that it is regular (though it is not yet known whether it is regular or not)? Its genus has 2 elements.

3. If $F$ is a positive ternary quadratic form with Hessian $H(F)=5$, then $\min(F)=1$.

Comment: Though the upper bound for such a minimum turns out to be 2 (inclusive), I am not able to eliminate the case where it is 2. Thanks.

## closed as off-topic by Will Jagy, Lucia, Chris Godsil, Yemon Choi, Ryan Budney Apr 24 '14 at 17:46

This question appears to be off-topic. The users who voted to close gave this specific reason:

• "MathOverflow is for mathematicians to ask each other questions about their research. See Math.StackExchange to ask general questions in mathematics." – Will Jagy, Lucia, Chris Godsil

If this question can be reworded to fit the rules in the help center, please edit the question.

• That's three questions that probably require three separate solutions. Better to ask them separately. Also, the notation $(5,8,11,-4,1,2)$ is too compressed for most of us to be sure what you meant: better to display the matrix or the quadratic polynomial you mean. – Noam D. Elkies Apr 23 '14 at 0:19
• @Noam, I imagine this is homework. The form is 1620: $5 x^2 + 8 y^2 + 11 z^2 -4 y z + z x + 2 x y$ from my paper with Kaplansky and Schiemann. I put several lists as ordinary text files at zakuski.utsa.edu/~jagy because at least one of them is too large to email.
– Will Jagy Apr 23 '14 at 1:13 • @NoamD.Elkies, figured out who assigned these, cc'd you in the email – Will Jagy Apr 23 '14 at 2:55 • In fact I have been informed this is from a take-home exam; no hints until tomorrow I am told. – Todd Trimble Apr 23 '14 at 4:00 • This question appears to be off-topic because it is a question from an exam – Yemon Choi Apr 24 '14 at 13:43
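Identity (1) can at least be checked numerically. The brute-force sketch below is my own (function names invented, not from the thread); it compares the $q$-series coefficients of both sides up to $q^{50}$ by enumerating lattice points in a box:

```python
# Numerical check of identity (1): compare theta-series coefficients.
def theta(a, b, c, N, bound=80):
    """Coefficients r(n), 0 <= n <= N, of the theta series of a x^2 + b x y + c y^2."""
    r = [0]*(N + 1)
    for x in range(-bound, bound + 1):
        for y in range(-bound, bound + 1):
            n = a*x*x + b*x*y + c*y*y
            if 0 <= n <= N:
                r[n] += 1
    return r

N = 50
A = theta(1, 1, 7, N)   # r(n) for x^2 + x y + 7 y^2
B = theta(1, 1, 1, N)   # r(n) for x^2 + x y + y^2

lhs = [3*a for a in A]
rhs = [0]*(N + 1)
for n in range(N + 1):
    if n % 9 == 0:           # 3 * theta series of 9(x^2 + x y + y^2)
        rhs[n] += 3*B[n // 9]
    if n % 3 == 1:           # P_{3,1}: keep exponents congruent to 1 mod 3
        rhs[n] += B[n]

print(lhs == rhs)   # -> True: the identity holds at least up to q^50
```

For the 9-divisible part the agreement is even exact, not just numerical: if $3 \mid x^2+xy+7y^2$ then $x \equiv y \pmod 3$, and writing $x = y + 3k$ gives $x^2+xy+7y^2 = 9(k^2+ky+y^2)$.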
https://mathematica.stackexchange.com/questions/43135/how-to-define-an-n-variate-empirical-distribution-function-probability-for-any-n
How to define an n-variate empirical distribution function probability for any n?

I'm using Mathematica 9.0 to calculate the probability according to the empirical distribution function (EDF) of some sample data. Afterwards this is included in a maximization stage, so I define this probability as a function. I have to apply this to groups of data in which each group has a different dimensionality. Therefore I'd like to define a generic version of "probability given by an EDF" that I can apply to any of these groups.

Here is a toy example for some 2-dimensional samples:

```mathematica
data = Transpose[{{1, 3, 4, 9, 8, 7, 8}, {2, 1, 1, 6, 7, 8, 9}}];
MatrixForm[data]
```

Its EDF is simply:

```mathematica
edf := EmpiricalDistribution[data];
```

Then by hand one can easily define its probability function:

```mathematica
edfProbFunction[t1_, t2_] :=
  NProbability[x1 <= t1 \[And] x2 <= t2, {x1, x2} \[Distributed] edf];
```

and compute the probability by just defining:

```mathematica
edfProb[w1_, w2_] := Evaluate[edfProbFunction[w1, w2]];
```

In this way, given a new point (2,4) from this distribution, it has probability 0.142857 given by:

```mathematica
edfProb[2, 4]
```

My question is how to define a function like edfProbFunction for any dimension, not a fixed dimension (2 in the example). I tried to do it in different ways but didn't succeed. I lack background in Mathematica so these attempts may be nonsense. I summarize them anyway in case this can be of any help:

First naive attempt -- use a vector of input variables, straightforward:

```mathematica
edfProbFunction[t__] := NProbability[x <= t, x \[Distributed] edf];
edfProb[w__] := Evaluate[edfProbFunction[w]];
```

Using this definition of edfProb together with the edfProbFunction defined in the toy example works, but not with this edfProbFunction here.
This made me think that I had to somehow make explicit each of the individual predicates (the inequalities) in edfProbFunction.

Second attempt -- use MakeBoxes

But a simple example shows that the expressions produced by this are not seen as variables in edfProbFunction:

```mathematica
xP /: MakeBoxes[xP[x___], form_] :=
  RowBox[Riffle[Map[MakeBoxes[#, form] &, {x}], ","]]
x[1] = "" <> {"x", IntegerString[1]};
x[2] = "" <> {"x", IntegerString[2]};
varx = xP[x[1], x[2]]  (* this produces a list x1,x2 *)
edfProbFunction[t1_, t2_] :=
  NProbability[x1 <= t1 \[And] x2 <= t2, {varx} \[Distributed] edf];
```

Third attempt -- I tried to define a function that recursively creates the predicate, but it doesn't work either:

```mathematica
table = Table[x[i] <= t[i], {i, 2}];
g[n_] := If[Length[n] > 1, n[[1]] \[And] g[Drop[n, 1]],
            If[Length[n] == 1, n[[1]], 0]]
predicate = g[table]
edfProbFunction[t_] := NProbability[predicate, {t[1], t[2]} \[Distributed] edf];
edfProb[w__] := Evaluate[edfProbFunction[w]];
edfProb[{2, 4}]
```

Any suggestions will be welcome, especially complete answers to my question.

The biggest issue in trying to put this together is that NProbability mixed with EmpiricalDistribution doesn't seem to like array-indexed variables. Building up symbols programmatically seems to fix the issue.

```mathematica
edfP[data_?MatrixQ][t__?NumericQ] /; Length[{t}] == Length[data[[1]]] :=
 Block[{vars, x},
  vars = Table[Symbol["x" <> ToString[i]], {i, Length[{t}]}];
  NProbability @@ {And @@ Thread[vars <= {t}],
    vars \[Distributed] EmpiricalDistribution[data]}
  ]
```

First let's verify it works for your example...

```mathematica
edfProb[3, 4]
(* 0.285714 *)
edfP[data][3, 4]
(* 0.285714 *)
```

And now to generalize...

```mathematica
edfP[RandomVariate[NormalDistribution[], {10^4, 4}]][-1, 1, 2, 1]
(* 0.1061 *)
```

Note that if you are going to run this repeatedly for the same data you should evaluate the empirical distribution outside and pass that in rather than the data itself.

EDIT: Now all that said, this seems like overkill to me.
If I understand your question correctly, why can't you just use CDF?

```mathematica
SeedRandom[1];
dat = RandomVariate[NormalDistribution[], {10^4, 4}];

edfP[dat][-1, 1, 2, 1]
(* 0.1143 *)

CDF[EmpiricalDistribution@dat, {-1, 1, 2, 1}]
(* 0.1143 *)
```

• Both your approaches work really fine; they answer my question. So why can't I just use CDF? Well, I couldn't because I didn't even know this simple way, but now I can. Thanks a lot! – p-d Mar 6 '14 at 17:11
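For readers outside Mathematica, the same multivariate empirical CDF is a few lines of numpy (my sketch, not from the thread): the probability is just the fraction of sample rows that are less than or equal to the query point in every component.

```python
# Multivariate empirical CDF: fraction of rows <= t component-wise.
import numpy as np

def ecdf(data):
    data = np.asarray(data, dtype=float)
    def prob(*t):
        return float(np.mean(np.all(data <= np.asarray(t, dtype=float), axis=1)))
    return prob

# the toy 2-D sample from the question
data = np.transpose([[1, 3, 4, 9, 8, 7, 8],
                     [2, 1, 1, 6, 7, 8, 9]])
edf = ecdf(data)
print(edf(2, 4))   # 1/7 = 0.142857..., matching the question
print(edf(3, 4))   # 2/7 = 0.285714..., matching the answer
```

Because `prob` takes `*t`, the same closure works for any dimensionality, mirroring the `edfP` definition above.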
http://cittadellarte.org/section-7-2-b-of-the-sex-discrimination-act-1975.php
# Section 7 2 b of the sex discrimination act 1975 Before "staff members" insert "teaching". Section 4 definitions of "staff member" and "year": Subsection 3 1 definition of "principal member": Insert the following sections: Exercise of jurisdiction by Registrar "8AAB. Before "staff members" insert "teaching". Stock held in official capacity "21A. Exercise of jurisdiction by Registrar "8AAB. Subsections 15 3 , 4 , 5 , 6 and 7: After "a law" insert "of the Commonwealth or". Australian National University Act After section 6: Omit "of the Commonwealth" last occurring. Subsection A 8 paragraph b of the definition of "relevant body": Paragraph 27 1 d: Omit "a staff member", insert "a non-teaching staff member, a teaching staff member". Original As Enacted or Made: Omit "the last preceding subsection", substitute "subsection 1 ". Subsection 5 1 paragraph c of the definition of "authorized celebrant": Insert the following Part: After paragraph 55 1 d: Omit "or that Act as amended" twice occurring. Subsection 5 1 definition of "medical procedure": Omit "the Registrar", substitute "the Master or Registrar, as the case requires,". Omit "Attorney-General" wherever occurring , substitute "Minister". After "jurisdiction in bankruptcy" insert ", by the Supreme Court of the Northern Territory exercising jurisdiction in bankruptcy". After "stock" insert "or a certified copy of such a power". Parliamentary Counsel Act Subsection 2 2: Additional powers of University "6AA. Omit "Governor-General", substitute "Treasurer". Omit "by reason of", substitute "under". After paragraph e insert the following paragraphs: ## Financial Services Industry: Restructuring & Integrating Banks and Deposit Insurance Reform (1987) Omit "afro", substitute "artificial conception". In increased in official capacity "21A. Compel "a service member", insert 19975 non-teaching specialize member, a jiffy staff member". Join Federal Out Act Section File "23", moment "24". 
Does 30A and No of Interracial " Paragraph 35A 2 a: Add at the end "and". The Organization shall be related to be a related authority for the members of the Lookout Act. ## 5 thoughts on “Section 7 2 b of the sex discrimination act 1975” 1. Zulkis says: Subsection 5 1 definition of "medical procedure": Subsections 29A 6 and 7: 2. Faukinos says: Appointment of Master " 3. Kegis says: After "means an agreement" insert " other than an agreement between bodies corporate that are related to each other ". Subsections 56 2 , 3 , 5 and 6: 4. Vitaxe says: Insert the following section: The Treasurer may, by signed instrument, delegate to a person occupying an office in the Department of the Treasury all or any of the Treasurer's powers under sections 35, 36 and 5. Baran says: Omit "18", substitute "20". Australian National University Act After section 6:
https://chemistry.stackexchange.com/questions/63974/finding-out-pka-of-acid-from-molar-conductivity
Finding out pKa of acid from molar conductivity I'm reading about the electrical properties of solution where there is a problem like that: The molar conductivity of $0.0250\ \mathrm M$ $\ce{HCOOH(aq)}$ is $4.61\ \mathrm{mS\ m^2\ mol^{-1}}$. Determine the $\mathrm pK_\mathrm a=-\log K_\mathrm a$ of the acid. (limiting ionic conductivity of $\ce{H+}=34.96\ \mathrm{mS\ m^2\ mol^{-1}}$ and limiting ionic conductivity of $\ce{OH-}=19.91\ \mathrm{mS\ m^2\ mol^{-1}}$) I have to solve the problem using the equation below: $$\frac1{\Lambda_\mathrm m}=\frac1{\Lambda_\mathrm m^0}+\frac{\Lambda_\mathrm mc}{K_\mathrm a\left(\Lambda_\mathrm m^0\right)^2}$$ Here which limiting ionic conductivity should I use to solve the problem? $\ce{H+}$ or $\ce{OH-}$? You need to add the limiting ionic conductivities for $\ce{H+}$ and $\ce{OH-}$ together to get the limiting ionic conductivity for all the ions in solution ($\Lambda_{0}$, which will replace $\Lambda^{0}_{\mathrm m}$ in your equation). This arises from a simplification for calculating $\Lambda_{0}$ in weak electrolyte solutions (such as yours) according to Kohlrausch's Law in which it is stated: Each ionic species makes a contribution to the conductivity of the solution that depends only on the nature of that particular ion, and is independent of the other ions present. from which we can then estimate $\Lambda_{0}$ as: $$\Lambda_{0} = \sum_{i}\lambda_{i,+}^{0} + \sum_{i}\lambda_{i,-}^{0}$$ $$\Lambda_{0} = \underbrace{(34.96 + 19.91)}_{54.87}\ \mathrm{mS\cdot m^{2}\cdot mol^{-1}}$$ $${1\over 4.61} = {1\over 54.87} + {4.61\times 0.025\over K_{\mathrm a}\times (54.87)^{2}}$$ $$K_{\mathrm a} = 1.926\times 10^{-4} \implies \mathrm{p}K_{\mathrm a} = 3.72$$
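The arithmetic in the answer is easy to reproduce (a sketch; the variable names are mine, and the numbers are plugged in exactly as the worked answer does):

```python
# Reproduce the worked answer: solve the Ostwald-type relation for Ka.
import math

Lam  = 4.61            # measured molar conductivity, mS m^2 mol^-1
Lam0 = 34.96 + 19.91   # limiting value via Kohlrausch's law: 54.87
c    = 0.0250          # concentration, as used in the worked answer

# Rearranging  1/Lam = 1/Lam0 + Lam*c/(Ka*Lam0^2)  for Ka:
Ka  = Lam*c / ((1.0/Lam - 1.0/Lam0) * Lam0**2)
pKa = -math.log10(Ka)

print(round(pKa, 2))   # -> 3.72, matching the answer
```

This agrees with the quoted $K_\mathrm a = 1.926\times10^{-4}$ and $\mathrm pK_\mathrm a = 3.72$.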
https://jcastellssala.com/tag/dd/
# Tag Archives: d&d

## The monster comes back

In the previous post, we left a $\LaTeX$ table representing a Vargouille monster like this:

Monster definition basic and attackless

And we want it more threatening. In this post I will add the different attacks, base stats, languages and maybe equipment. All are optional; they are decoration for our dangerous monster.

1 Comment Filed under code, documents

## The monster environment

As I commented, I am working on a Dungeons & Dragons (D&D) $\LaTeX$ class to write small adventures.

### Objective

The monster environment should deal with creating the corresponding table and all that is necessary. A monster has a set of attributes (common to all monsters), and a set of powers different for each monster. In D&D 4e this is typically presented in a table, which I think is the best possible presentation.

Typical d&d 4e monster stats table. Stats generated with the LaTeX class (There is a small error).

Filed under code, documents, tips

## xkeyval and the 9 arguments

Recently I’ve been playing D&D with some friends. Tired of writing my small homebrew adventures in OpenOffice, I turned to my old friend $\LaTeX$. It seemed to me that a D&D adventure was specific enough to have its own document class, like article, book etc., and I could not find one online, so right now I am writing my own. In this article I will explain how to write $\LaTeX$ commands accepting more than the 9 default parameters.
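As a sketch of where the xkeyval approach leads (every key name below is invented for illustration, not taken from the actual class), a single braced key=value argument can carry well over nine settings, sidestepping LaTeX's hard limit of 9 positional parameters per command:

```latex
\documentclass{article}
\usepackage{xkeyval}
\makeatletter

% Each key stores its value in a macro \mon@<key>.
\define@key{mon}{name}{\def\mon@name{#1}}
\define@key{mon}{level}{\def\mon@level{#1}}
\define@key{mon}{hp}{\def\mon@hp{#1}}
\define@key{mon}{ac}{\def\mon@ac{#1}}
\define@key{mon}{speed}{\def\mon@speed{#1}}
\define@key{mon}{str}{\def\mon@str{#1}}
\define@key{mon}{dex}{\def\mon@dex{#1}}
\define@key{mon}{con}{\def\mon@con{#1}}
\define@key{mon}{int}{\def\mon@int{#1}}
\define@key{mon}{wis}{\def\mon@wis{#1}}
\define@key{mon}{cha}{\def\mon@cha{#1}}

% One LaTeX argument, arbitrarily many "parameters" inside it.
\newcommand{\monster}[1]{%
  \setkeys{mon}{#1}%
  \textbf{\mon@name} (level \mon@level): HP \mon@hp, AC \mon@ac,
  speed \mon@speed; Str \mon@str, Dex \mon@dex, Con \mon@con,
  Int \mon@int, Wis \mon@wis, Cha \mon@cha.}

\makeatother
\begin{document}
\monster{name=Vargouille, level=1, hp=24, ac=13, speed=6,
         str=10, dex=14, con=12, int=5, wis=11, cha=8}
\end{document}
```

The key=value interface also reads far better in an adventure source file than eleven positional arguments would.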
https://simple.wikipedia.org/wiki/Solar_mass
# Solar mass Solar mass is a unit of measurement of mass. It is equal to the mass of the Sun, about 332,950 times the mass of the Earth, or 1,048 times the mass of Jupiter. Masses of other stars and groups of stars are listed in terms of solar masses. Its mathematical symbol and value are: ${\displaystyle M_{\odot }=1.98892\times 10^{30}{\hbox{ kg}}}$ (kg being kilograms)
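A quick check of the quoted ratios; the Earth and Jupiter masses below are standard approximate values, not taken from this article:

```python
# Verify the mass ratios quoted for the solar mass unit.
M_sun     = 1.98892e30   # kg, as given above
M_earth   = 5.9722e24    # kg (assumed reference value)
M_jupiter = 1.898e27     # kg (assumed reference value)

print(round(M_sun / M_jupiter))   # -> 1048, as quoted
print(round(M_sun / M_earth))     # ~333,000 (the article quotes 332,950)
```

The small discrepancy in the Earth ratio comes from which Earth-mass value is adopted, not from the solar mass itself.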
https://zbmath.org/authors/liu.qi
× ## Liu, Qi Compute Distance To: Author ID: liu.qi Published as: Liu, Qi Documents Indexed: 129 Publications since 1994 Co-Authors: 196 Co-Authors with 115 Joint Publications 7,165 Co-Co-Authors all top 5 ### Co-Authors 0 single-authored 9 Li, Yongjin 6 Zhan, Jianming 5 Deng, Yong 5 West, Douglas Brent 5 Yang, Peng 4 Feng, Jing 4 Huang, Wenhua 4 Kong, Dexing 4 Zhou, Jinglun 3 Deng, Xinyang 3 Kurths, Jürgen 3 Li, Pengsong 3 Luo, Jun 3 Ru, Guobao 3 Sun, Bo 3 Xu, Yong 2 Baccarella, D. 2 Bai, Zhongming 2 Feng, Wenzhe 2 Fu, Linlin 2 Gan, Liangcai 2 Hasselhuhn, Alexander 2 Jin, Yuanfeng 2 Li, Chun 2 Li, Suoping 2 Li, Yongge 2 Li, Zhe 2 Liu, Lin-Xia 2 Liu, Wenchuan 2 Lu, Gang 2 Ma, Xikui 2 Markus, Mario 2 Mcgann, B. 2 Ouyang, Qi 2 Sarfraz, Muhammad 2 Schmick, Malte 2 Sheng, Guiquan 2 Shepherd, Bryan E. 2 Song, Changming 2 Sun, Wenqiang 2 Tang, Chunming 2 Wan, Pengbo 2 Xie, Jun 2 Xu, Junming 2 Ye, Liu 2 Yu, Gexin 2 Yu, Guidong 2 Zhang, Yongqiang 2 Zhuansun, Xu 1 Ai, Q. S. 1 Alhebaishi, Nawaf 1 Ali, Shahzad 1 Balogh, József 1 Boice, John D. jun. 1 Bu, Qingying 1 Chang, Shuhua 1 Chen, Dayue 1 Chen, Shiyu 1 Chernuka, Michael W. 1 Davvaz, Bijan 1 DeLaVina, Ermelinda 1 Deng, Penghai 1 Din, Anwarud 1 Ding, Cunsheng 1 Don, Wai Sun 1 Edmans, Alex 1 Fan, Xiang-Dong 1 Fang, Zhenlong 1 Feng, Feng 1 Feng, Xiangqian 1 Gao, Wenke 1 Gao, Xinbo 1 Gao, Zhen 1 Geng, Dexu 1 Goggans, Paul M. 1 Gu, Jinhong 1 Han, Ming-Fei 1 Hartke, Stephen G. 1 He, Juan 1 He, Lihuo 1 Hesthaven, Jan S. 1 Hou, Wen 1 Hou, Xiangyan 1 Jiang, Guisheng 1 Kang, Yong 1 Khadidos, Alaa Omar 1 Kim, Hee Sik 1 Kong, Feng 1 Kong, Min 1 Kwok, Peter K. 1 Lan, Xin 1 Lee, Chia-fon F. 1 Li, Chung-I 1 Li, Guanghui 1 Li, Mingzhi 1 Li, Qing 1 Li, Yixue 1 Lie, Seng Tjhen 1 Liu, Guoping 1 Liu, Hongbo ...and 105 more Co-Authors all top 5 ### Serials 3 Applied Mathematics and Computation 3 Journal of Nanjing University. Mathematical Biquarterly 3 Applied Mathematical Modelling 3 Fractals 3 Acta Mathematica Scientia. Series B. 
(English Edition) 3 Journal of Jilin University. Science Edition 3 Systems Engineering and Electronics 3 Fuzzy Systems and Mathematics 2 Journal of Fluid Mechanics 2 Physics Letters. A 2 COMPEL 2 Journal of Geodesy 2 Chaos 2 Wuhan University Journal of Natural Sciences (WUJNS) 2 Journal of Sichuan Normal University. Natural Science 2 Journal of Harbin Engineering University 2 Journal of Systems Engineering 2 Journal of Intelligent and Fuzzy Systems 2 Communications in Theoretical Physics 2 Journal of Nonlinear Science and Applications 1 Biological Cybernetics 1 The Canadian Journal of Statistics 1 Computers and Fluids 1 Computers and Structures 1 IEEE Transactions on Information Theory 1 Information Processing Letters 1 Journal of Mathematical Analysis and Applications 1 Physica A 1 Biometrics 1 Demonstratio Mathematica 1 Journal of Combinatorial Theory. Series B 1 Journal of Graph Theory 1 Numerical Mathematics 1 Journal of Wuhan University. Natural Science Edition 1 Journal of Northeast Normal University. Natural Science Edition 1 International Journal of Production Research 1 Acta Automatica Sinica 1 Order 1 Journal of Systems Science and Mathematical Sciences 1 Asia-Pacific Journal of Operational Research 1 Journal of Economic Dynamics & Control 1 Applied Mathematics Letters 1 SIAM Journal on Discrete Mathematics 1 Mathematica Applicata 1 Journal of Tsinghua University. Science and Technology 1 Science in China. Series A 1 Signal Processing 1 Multidimensional Systems and Signal Processing 1 Economics Letters 1 Designs, Codes and Cryptography 1 Numerical Algorithms 1 Journal of Contemporary Mathematical Analysis. Armenian Academy of Sciences 1 The Australasian Journal of Combinatorics 1 Applied Mathematics. Series A (Chinese Edition) 1 Applied Mathematics. 
Series B (English Edition) 1 Congressus Numerantium 1 Economic Theory 1 The Electronic Journal of Combinatorics 1 Reliable Computing 1 Mathematical Problems in Engineering 1 The Ramanujan Journal 1 Soft Computing 1 Journal of Discrete Mathematical Sciences & Cryptography 1 Italian Journal of Pure and Applied Mathematics 1 East-West Journal of Mathematics 1 Acta Mathematica Sinica. English Series 1 Communications in Nonlinear Science and Numerical Simulation 1 Engineering Computations 1 Physical Review Letters 1 Journal of Modern Optics 1 Journal of Liaoning Normal University. Natural Science Edition 1 Mathematical Theory and Application 1 IEEE Transactions on Image Processing 1 IEEE Transactions on Antennas and Propagation 1 Journal of Henan Normal University. Natural Science 1 Journal of Applied Mathematics 1 Journal of Natural Science of Hunan Normal University 1 Bulletin of the Malaysian Mathematical Sciences Society. Second Series 1 International Journal of Pure and Applied Mathematics 1 Journal of Natural Science. Nanjing Normal University 1 Statistical Applications in Genetics and Molecular Biology 1 Computational Biology and Chemistry 1 International Journal of Computational Methods 1 Journal of Hyperbolic Differential Equations 1 The Australian Journal of Mathematical Analysis and Applications 1 Mathematical Biosciences and Engineering 1 Review of Finance 1 Acta Mathematica Sinica. Chinese Series 1 Communications in Computational Physics 1 Journal of Mathematical Inequalities 1 Science 1 Scientia Sinica. Mathematica 1 Journal of Beihua University. 
Natural Science 1 Journal of Theoretical Biology 1 Operations Research Transactions 1 Journal of Mathematical Research with Applications 1 AIMS Mathematics 1 IEEE Transactions on Circuits and Systems I: Regular Papers all top 5 ### Fields 21 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 15 Statistics (62-XX) 13 Numerical analysis (65-XX) 12 Partial differential equations (35-XX) 11 Combinatorics (05-XX) 10 Systems theory; control (93-XX) 9 Information and communication theory, circuits (94-XX) 7 Functional analysis (46-XX) 7 Probability theory and stochastic processes (60-XX) 7 Computer science (68-XX) 7 Operations research, mathematical programming (90-XX) 6 Associative rings and algebras (16-XX) 6 Ordinary differential equations (34-XX) 6 Mechanics of deformable solids (74-XX) 6 Fluid mechanics (76-XX) 5 Difference and functional equations (39-XX) 5 Mechanics of particles and systems (70-XX) 4 Geophysics (86-XX) 4 Biology and other natural sciences (92-XX) 3 Differential geometry (53-XX) 3 Optics, electromagnetic theory (78-XX) 3 Classical thermodynamics, heat transfer (80-XX) 3 Quantum theory (81-XX) 2 Dynamical systems and ergodic theory (37-XX) 2 Global analysis, analysis on manifolds (58-XX) 2 Statistical mechanics, structure of matter (82-XX) 2 Relativity and gravitational theory (83-XX) 1 Order, lattices, ordered algebraic structures (06-XX) 1 Number theory (11-XX) 1 Commutative algebra (13-XX) 1 Special functions (33-XX) 1 Approximations and expansions (41-XX) 1 Harmonic analysis on Euclidean spaces (42-XX) 1 Integral equations (45-XX) 1 Operator theory (47-XX) 1 Calculus of variations and optimal control; optimization (49-XX) ### Citations contained in zbMATH Open 47 Publications have been cited 166 times in 144 Documents Cited by Year An advanced meshless method for time fractional diffusion equation. Zbl 1245.65133 Gu, Y. T.; Zhuang, P.; Liu, Q. 2011 A new rough set theory: rough soft hemirings. 
Zbl 1353.16050 Zhan, Jianming; Liu, Qi; Davvaz, Bijan 2015 On a characteristic equation of well-poised Bailey chains. Zbl 1172.05009 Liu, Q.; Ma, X. 2009 A belief-based evolutionarily stable strategy. Zbl 1302.91024 Deng, Xinyang; Wang, Zhen; Liu, Qi; Deng, Yong; Mahadevan, Sankaran 2014 Solving Fokker-Planck equation using deep learning. Zbl 1431.35210 Xu, Yong; Zhang, Hao; Li, Yongge; Zhou, Kuang; Liu, Qi; Kurths, Jürgen 2020 Some conjectures of Graffiti.pc on total domination. Zbl 1134.05070 DeLaViña, Ermelinda; Liu, Qi; Pepper, Ryan; Waller, Bill; West, Douglas B. 2007 Oriented diameter of graphs with diameter 3. Zbl 1209.05070 Kwok, Peter K.; Liu, Qi; West, Douglas B. 2010 2-restricted edge connectivity of vertex-transitive graphs. Zbl 1054.05062 Xu, Jun-Ming; Liu, Qi 2004 A split-step-scheme-based precise integration time domain method for solving wave equation. Zbl 1360.65215 Liu, Qi; Ma, Xikui; Bai, Zhongming; Zhuansun, Xu 2014 The sliding mode control for an airfoil system driven by harmonic and colored Gaussian noise excitations. Zbl 1480.93408 Liu, Qi; Xu, Yong; Xu, Chao; Kurths, Jürgen 2018 On the first-fit chromatic number of graphs. Zbl 1180.05042 Balogh, József; Hartke, Stephen G.; Liu, Qi; Yu, Gexin 2008 Self-adaptive win-stay-lose-shift reference selection mechanism promotes cooperation on a square lattice. Zbl 1410.91226 Deng, Xinyang; Zhang, Zhipeng; Deng, Yong; Liu, Qi; Chang, Shuhua 2016 Another approach to rough soft hemirings and corresponding decision making. Zbl 1381.16047 Zhan, Jianming; Liu, Qi; Zhu, William 2017 An efficient application of PML in fourth-order precise integration time domain method for the numerical solution of Maxwell’s equations. Zbl 1358.78079 Bai, Zhongming; Ma, Xikui; Zhuansun, Xu; Liu, Qi 2014 Matrix games with payoffs of belief structures. Zbl 1410.91010 Deng, Xinyang; Liu, Qi; Deng, Yong 2016 An infinite family of 4-tight optimal double loop networks. 
Zbl 1217.90046 Xu, Junming; Liu, Qi 2003 Hyers-Ulam stability of derivations in fuzzy Banach space. Zbl 1379.39020 Lu, Gang; Xie, Jun; Liu, Qi; Jin, Yuanfeng 2016 Bistability and stochastic jumps in an airfoil system with viscoelastic material property and random fluctuations. Zbl 1450.74014 Liu, Qi; Xu, Yong; Kurths, Jürgen 2020 Inside debt. Zbl 1210.91144 Edmans, Alex; Liu, Qi 2011 Unstart phenomena induced by mass addition and heat release in a model scramjet. Zbl 1422.76123 Im, S.; Baccarella, D.; Mcgann, B.; Liu, Q.; Wermer, L.; Do, H. 2016 3-variable Jensen $$\rho$$-functional inequalities and equations. Zbl 1379.39016 Lu, Gang; Liu, Qi; Jin, Yuanfeng; Xie, Jun 2016 Realization of perfect teleportation with W-states in cavity QED. Zbl 1392.81062 He, Juan; Ye, Liu; Ma, Chi; Liu, Qi; Ni, Zhi-Xiang 2008 Determination of the Newtonian gravitational constant $$G$$ with time-of-swing method. Zbl 1232.83002 Luo, Jun; Liu, Qi; Tu, Liang-Cheng; Shao, Cheng-Gang; Liu, Lin-Xia; Yang, Shan-Qing; Li, Qing; Zhang, Ya-Ting 2009 Probability-scale residuals for continuous, discrete, and censored data. Zbl 1357.62180 Shepherd, Bryan E.; Li, Chun; Liu, Qi 2016 Coupled modes of the torsion pendulum. Zbl 1217.70013 Fan, Xiang-Dong; Liu, Qi; Liu, Lin-Xia; Milyukov, Vadim; Luo, Jun 2008 Life-span of classical solutions to hyperbolic geometry flow equation in several space dimensions. Zbl 1399.35008 Kong, Dexing; Liu, Qi; Song, Changming 2017 Implications among linkage properties in graphs. Zbl 1182.05074 Liu, Qi; West, Douglas B.; Yu, Gexin 2009 Incentive contracting under ambiguity aversion. Zbl 1422.91200 Liu, Qi; Lu, Lei; Sun, Bo 2018 Classical solutions to a dissipative hyperbolic geometry flow in two space variables. Zbl 1428.53107 Kong, De-Xing; Liu, Qi; Song, Chang-Ming 2019 Bivariate Poisson models with varying offsets: an application to the paired mitochondrial DNA dataset. 
Zbl 1360.92014 Su, Pei-Fang; Mau, Yu-Lin; Guo, Yan; Li, Chung-I; Liu, Qi; Boice, John D.; Shyr, Yu 2017 Fuzzy parameterized fuzzy soft $$h$$-ideals of hemirings. Zbl 1310.16039 Liu, Qi; Zhan, Jianming 2014 Inexact feasibility pump for mixed integer nonlinear programming. Zbl 1402.90099 Li, M.; Liu, Q. 2017 Integrated CFD-aided theoretical demonstration of cavitation modulation in self-sustained oscillating jets. Zbl 1481.76036 Liu, Wenchuan; Kang, Yong; Wang, Xiaochuan; Liu, Qi; Fang, Zhenlong 2020 Hyperbolic Yamabe problem. Zbl 1389.35229 Kong, Dexing; Liu, Qi 2017 Modeling and analysis of a new locomotion control neural networks. Zbl 1400.92019 Liu, Q.; Wang, J. Z. 2018 Covariate-adjusted Spearman’s rank correlation with probability-scale residuals. Zbl 1414.62456 Liu, Qi; Li, Chun; Wanga, Valentine; Shepherd, Bryan E. 2018 Rough fuzzy (fuzzy rough) strong $$h$$-ideals of hemirings. Zbl 1335.16047 Zhan, Jianming; Liu, Qi; Kim, Hee Sik 2015 Contact processes on scale-free networks. Zbl 1202.60153 Chen, Dayue; Liu, Qi 2010 Pattern recognition of machine tool faults with a fuzzy mathematics algorithm. Zbl 0945.90562 Liu, Q.; Zhang, C.; Lin, A. C. 1998 On the solutions of a variational hyperbolic system. Zbl 1054.35030 Huang, Wenhua; Liu, Qi 2003 Effects of measurement errors on both the amplitude and the phase reconstruction in phase-shifting interferometry: a systematic analysis. Zbl 1060.78516 Cai, L. Z.; Liu, Q.; Yang, X. L. 2005 Tree-thickness and caterpillar-thickness under girth constraints. Zbl 1163.05321 Liu, Qi; West, Douglas B. 2008 Duality for semiantichains and unichain coverings in products of special posets. Zbl 1168.06002 Liu, Qi; West, Douglas B. 2008 An abnormal mode of torsion pendulum and its suppression. Zbl 1123.70329 Tu, Ying; Zhao, Liang; Liu, Qi; Ye, Hong-Ling; Luo, Jun 2004 Consistent finite element discretization of distributed random loads. Zbl 0894.73166 Liu, Q.; Orisamolu, I. R.; Chernuka, M. W. 
1994 Global existence of classical solutions to the hyperbolic geometry flow with time-dependent dissipation. Zbl 1438.53128 Kong, Dexing; Liu, Qi 2018 Radial symmetry and monotonicity of the positive solutions for $$k$$-Hessian equations. Zbl 1498.35127 Zhang, Lihong; Liu, Qi 2023

### Cited by 323 Authors

8 Henning, Michael Anthony 7 Liu, Qi 7 Zhan, Jianming 4 Deng, Yong 4 Xu, Yong 4 Zhou, Jinxin 4 Zhu, Kuanyun 3 Dankelmann, Peter 3 Luo, Jun 3 Shabir, Muhammad 2 Adillon, Romà J. 2 Alcantud, José Carlos Rodríguez 2 Babu, Jasine 2 Bai, Zhongming 2 Benson, Deepu 2 Bian, Tian 2 Chen, Xiebin 2 Czabarka, Éva 2 Deng, Xinyang 2 Fan, Suohai 2 Feng, Yanquan 2 Havet, Frédéric 2 Jin, Yuanfeng 2 Jorba, Lambert 2 Karaaslan, Faruk 2 Kurths, Jürgen 2 Li, Jing 2 Li, Ya 2 Li, Yongge 2 Liang, Jin 2 Linhares-Sales, Claudia 2 Liu, Di 2 Lu, Gang 2 Ma, Xikui 2 Ma, Xueling 2 Meng, Jixiang 2 Park, Choonkil 2 Qurashi, Saqib Mazher 2 Rajendraprasad, Deepak 2 Saadati, Reza 2 Shao, ChengGang 2 Shu, Gang 2 Sun, Bingzhen 2 Székely, László A. 2 Tu, Liang-Cheng 2 Wang, Bing 2 Wang, Zenggui 2 Wang, Zhen 2 Xiao, Ti-Jun 2 Xu, Hedong 2 Yang, Shan-Qing 2 Yue, XiaoLe 2 Zhang, Zhipeng 2 Zhuansun, Xu 1 Abbas, Fatima 1 Aeberhard, William H. 1 Agarwal, Ravi P. 1 Araujo-Pardo, Gabriela 1 Arumugam, Ponmana Selvan 1 Asté, Marie 1 Ayub, Saba 1 Azami, Shahroud 1 Bau, Sheng 1 Bauch, Chris T. 1 Bensoussan, Alain 1 Bhattacharyya, Samit 1 Bosek, Bartłomiej 1 Bossek, Jakob 1 Burkov, Andriy 1 Çağman, Naim 1 Campos, Victor A. 1 Çetkin, Vildan 1 Chaib-draa, Brahim 1 Chan, Felix T. S.
1 Chang, An 1 Chang, Gerard Jennhwa 1 Chang, Shuhua 1 Chen, Baoxing 1 Chen, Bin 1 Chen, Jinjin 1 Chen, LinCong 1 Chen, Meirun 1 Chen, Minghua 1 Chen, Xiaohong 1 Chen, Xiaoli 1 Chen, Yaojun 1 Cheng, Chia-Wen 1 Chevalier-Roignant, Benoît 1 Chu, Chen 1 Clarke, Nancy Ellen 1 Cochran, Garner 1 Contreras-Mendoza, Fernando Esteban 1 Cui, Su-Ping 1 Davila, Randy Ryan 1 Davvaz, Bijan 1 DeLaVina, Ermelinda 1 Desormeaux, Wyatt J. 1 Ding, Chenxi 1 Dong, Yukun 1 d’Onofrio, Alberto ...and 223 more Authors all top 5 ### Cited in 72 Serials 12 Applied Mathematics and Computation 9 Discrete Mathematics 9 Chaos, Solitons and Fractals 9 Soft Computing 7 Discrete Applied Mathematics 6 Applied Mathematical Modelling 4 Journal of Combinatorial Optimization 3 International Journal of Theoretical Physics 3 Computational and Applied Mathematics 3 Communications in Nonlinear Science and Numerical Simulation 3 Gravitation & Cosmology 2 Physica A 2 Journal of Geometry and Physics 2 Journal of Graph Theory 2 European Journal of Combinatorics 2 COMPEL 2 Journal of Economic Dynamics & Control 2 Journal of Scientific Computing 2 Science in China. Series A 2 Economic Theory 2 Journal of Multiple-Valued Logic and Soft Computing 2 Journal of Mathematical Inequalities 2 Symmetry 1 Computer Methods in Applied Mechanics and Engineering 1 Information Processing Letters 1 Journal of Computational Physics 1 Journal of the Franklin Institute 1 Journal of Mathematical Analysis and Applications 1 Journal of Statistical Physics 1 Linear and Multilinear Algebra 1 Moscow University Physics Bulletin 1 Physics Reports 1 Demonstratio Mathematica 1 Journal of Combinatorial Theory. Series B 1 Journal of Computer and System Sciences 1 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 1 Quaestiones Mathematicae 1 Order 1 Acta Mathematicae Applicatae Sinica. 
English Series 1 Statistics 1 Graphs and Combinatorics 1 Algorithmica 1 International Journal of Approximate Reasoning 1 SIAM Journal on Discrete Mathematics 1 Annals of Operations Research 1 Communications in Statistics. Theory and Methods 1 International Journal of Bifurcation and Chaos in Applied Sciences and Engineering 1 SIAM Journal on Scientific Computing 1 Filomat 1 The Electronic Journal of Combinatorics 1 Discussiones Mathematicae. Graph Theory 1 Honam Mathematical Journal 1 Journal of Inequalities and Applications 1 Chaos 1 Discrete Dynamics in Nature and Society 1 Acta Mathematica Sinica. English Series 1 Physical Review Letters 1 Journal of Nonlinear Mathematical Physics 1 Nonlinear Analysis. Modelling and Control 1 Quantitative Finance 1 Bulletin of the Malaysian Mathematical Sciences Society. Second Series 1 Advances in Difference Equations 1 Iranian Journal of Fuzzy Systems 1 Science China. Mathematics 1 Journal of Agricultural, Biological, and Environmental Statistics 1 Afrika Matematika 1 Transactions on Combinatorics 1 Journal of Mathematics 1 Computational Methods for Differential Equations 1 Open Mathematics 1 Korean Journal of Mathematics 1 AIMS Mathematics all top 5 ### Cited in 38 Fields 38 Combinatorics (05-XX) 29 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 22 Computer science (68-XX) 12 Partial differential equations (35-XX) 10 Probability theory and stochastic processes (60-XX) 10 Numerical analysis (65-XX) 8 Associative rings and algebras (16-XX) 8 Biology and other natural sciences (92-XX) 7 Mathematical logic and foundations (03-XX) 6 Order, lattices, ordered algebraic structures (06-XX) 6 Difference and functional equations (39-XX) 6 Statistics (62-XX) 6 Operations research, mathematical programming (90-XX) 5 Ordinary differential equations (34-XX) 5 Statistical mechanics, structure of matter (82-XX) 4 Group theory and generalizations (20-XX) 4 Dynamical systems and ergodic theory (37-XX) 4 
Relativity and gravitational theory (83-XX) 3 Functional analysis (46-XX) 3 Operator theory (47-XX) 3 Differential geometry (53-XX) 3 General topology (54-XX) 3 Mechanics of particles and systems (70-XX) 3 Mechanics of deformable solids (74-XX) 3 Quantum theory (81-XX) 2 Harmonic analysis on Euclidean spaces (42-XX) 2 Global analysis, analysis on manifolds (58-XX) 2 Fluid mechanics (76-XX) 2 Systems theory; control (93-XX) 1 General algebraic systems (08-XX) 1 Commutative algebra (13-XX) 1 Linear and multilinear algebra; matrix theory (15-XX) 1 Topological groups, Lie groups (22-XX) 1 Approximations and expansions (41-XX) 1 Calculus of variations and optimal control; optimization (49-XX) 1 Manifolds and cell complexes (57-XX) 1 Optics, electromagnetic theory (78-XX) 1 Geophysics (86-XX)
http://physics.stackexchange.com/tags/comets/hot
# Tag Info

70 There seems to be a fundamental misunderstanding as to how movement in space works. In space there is no air friction; that is, once you are moving toward your destination, you don't need a continuous source of power to keep going. Landing on a comet doesn't buy you anything, since in order to land you must first match the comet's orbit, at which point the ...

46 The maximum speed of an object that orbits the Sun at a certain distance $r$ is known as the escape velocity: $$v_\text{esc} = \sqrt{\frac{2GM_\odot}{r}},$$ where $M_\odot$ is the mass of the Sun. If the object had a greater speed, it would eventually leave the solar system. So I'd say that the absolute maximum possible speed of any object in the ...

13 There are several points of evidence that the Oort Cloud exists, though it is indeed still a hypothesis and lacks direct observation. The first is indirectly observational, as proposed by Ernst Öpik back in 1932 as the source of long-period comets. This was revised by Jan Oort in 1950. All you need to determine an orbit is three observations of the ...

12 It depends. There are collisions amongst asteroids that have been caught on film that had no effect on us whatsoever. In your scenario, they would have to have a resultant vector towards us in order to cause any problems. Then there is the question of how many resultant particles are big enough to cause any problems (that is, big enough to get through our ...

12 When there aren't comets falling into the sun, Mercury is hard to beat. This NASA fact sheet lists Mercury's orbital velocity around the sun as varying from $38.86$ to $58.98$ km/sec, not so much greater than Earth's (less than a factor of $2$, even at maximum).

12 As @ChrisWhite has already stated, catching up with Halley's comet offers no appreciable benefit since you'd already need to be traveling at the same speed in a frictionless environment.
Where you could get a big boost is if your probe was positioned in the comet's path, either coming across it at an oblique angle or simply stationary, relative to the ...

8 The Wikipedia article refers to a fireball, but as Wikipedia itself explains, the word "fireball" has many meanings and doesn't necessarily literally mean a fire as in the combustion of a material in oxygen. In this case it means a ball of very high-temperature gas. The gas is heated by the impact and gets hot enough to emit light just like the gas heated in a ...

8 A comet doesn't need to impact the sun in order to come very close to solar escape velocity at perihelion. There is a class of comets known as sungrazers that pass very close to the sun. Although small ones evaporate on their first pass near the sun, larger ones can survive several orbits, and be considered periodic comets. There is a class of sungrazing ...

8 It is unlikely that comets are a feature unique to our Solar System. Since comets are simply remnants of star and planetary formation, anywhere stars and planets have formed would be fertile ground to expect comets. Their individual masses are relatively very small compared to discovered planets. For example, Halley's Comet has a mass of roughly ...

8 I'm not a professional, but I'll try to answer anyway. Meteor showers occur when the Earth passes through the orbit of a comet (or, in at least one case, an asteroid). Over time, the debris spreads over the entire orbit of the comet. A shower can last for several days, which is an indication of how wide the debris stream is. Assuming a duration of 1 day, ...

7 The Moon has no atmosphere, so meteors would not heat up and glow as they descended, as they do on Earth. Comets, however, reflect light from the Sun, and thus can be seen from any sufficiently dark location.

6 An estimate of impacts from long-period comets is available at this NASA JPL site. They appear to be randomly distributed.
Given detection at about 5 AU, we'd only have a year of warning for potential impactors. Their mean impact velocity is on the order of 52 km/s, with an impact probability crossing Earth's orbit of 2.2–2.5 × $10^{-9}$ per perihelion ...

4 There are a few phenomena that can cause sound to be heard from a meteorite. Here it says that sonic booms as well as shock waves due to larger fragments breaking up can reach and be detected by the human ear. There is also the so-called electrophonic effect. Given that most meteorites burn up at ~100 km altitude, sonic boom and shock waves would take $t ...

4 The asteroid "1566 Icarus" has a perihelion distance of 0.187 au and a semi-major axis of $a=1.078$ au, an orbital period of 1.119 years and eccentricity $e=0.827$. Using $$v_{\rm peri} = \sqrt{\frac{GM}{a}\frac{(1+e)}{(1-e)}},$$ where $M$ is a solar mass, its fastest speed is 93.5 km/s. So this does not come close to Comet Lovejoy (mentioned in other ...

4 The sense in which a comet could be a "free taxi" is that it is a big source of volatile compounds. A probe that lands on a comet could, in principle, use these to refuel. A comet may be easier to use for this purpose than a large icy moon because of its low gravity.

3 As the comet approaches the Sun, it starts to melt. This means that what was once rock and ice is exposed to very high temperatures and forms a liquid which flows over and behind the comet. The above is a picture of the fluid boundary layer on a sphere. Notice how the fluid motion is visible, and my guess is that the fluid drags along with it some of the ...

3 Conservation of energy means that it would take you just as much energy to catch up with the comet and make a soft landing (in the ideal case where you ignore the extra fuel costs for landing) as you would need to launch the spacecraft into a similar orbit to the comet's. What you could do is let the comet give a bounce to the spacecraft, but I don't think this ...
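Both speed formulas quoted in the answers above (the solar escape speed and the perihelion speed of 1566 Icarus) are easy to check numerically. A minimal sketch; the Sun's gravitational parameter and the au-to-metre conversion are standard constants supplied here, not values taken from the answers:

```python
import math

GM_SUN = 1.32712440018e20  # standard gravitational parameter of the Sun, m^3/s^2
AU = 1.496e11              # astronomical unit in metres

def v_escape(r_au):
    """Solar escape speed in km/s at heliocentric distance r_au (in au)."""
    return math.sqrt(2 * GM_SUN / (r_au * AU)) / 1000.0

def v_perihelion(a_au, e):
    """Perihelion speed in km/s from semi-major axis (au) and eccentricity."""
    return math.sqrt(GM_SUN / (a_au * AU) * (1 + e) / (1 - e)) / 1000.0

print(v_escape(1.0))               # ~42 km/s at Earth's distance from the Sun
print(v_perihelion(1.078, 0.827))  # 1566 Icarus: ~93 km/s, matching the answer
```

Both numbers agree with the figures quoted in the answers: ~42 km/s is the familiar escape speed at 1 au, and Icarus's perihelion speed comes out near the quoted 93.5 km/s.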
3 There's vapour in our atmosphere all the time but it doesn't escape, at least not quickly enough to be of any concern. With the exception of hydrogen and helium, which really are escaping (not heavy enough), our atmosphere is stable in the medium-long term. And that includes water (you may recall that water vapour eventually comes back down as rain). A ...

3 It seems Comet Elenin broke up on this pass through the inner Solar System, and that's why it is not visible. http://www.msnbc.msn.com/id/45050612/ns/technology_and_science-space/t/comet-elenin-dead-along-doomsday-predictions/

2 Wild unsubstantiated guess: could it be flow of the sand down an incline, more like a glacier than sand dunes? Maybe the flow gets a boost from tidal forces flexing the comet whenever it passes near a massive body. Some of the patterns in the sand in the lower left corner look like what happens when sand slips down a critical incline, like on an over ...

2 As you rightly pointed out, the fact that 67P is oddly shaped should alter its gravitational attraction on various parts of the comet. That said, if we were to go by Wikipedia in a rather off-hand manner, we find that the lander is $100$ kg (as @fibonatic rightly pointed out) and 67P has an acceleration due to gravity of $g' = 10^{-3}\ \mathrm{m/s^2}$. Its ...

2 The apparent weight of an object does not only depend on the mass of the celestial body by which it is attracted. If you simplify to spherical symmetry, which is definitely not the case for the comet 67P (and, to a lesser extent, also not for the Earth), you can approximate the ratios of weight by using Newton's law of universal gravitation: ...

2 Kepler's Three Laws of Planetary Motion are particularly helpful when addressing this question. They state that (in informal language): The shape of a planet's orbit is an ellipse, with the Sun at one focus of the ellipse.
As planets move around their elliptical orbits, the imaginary line drawn from the planet to the Sun sweeps out equal areas in equal ...

2 Astronomy, especially exoplanet science, has gotten very good at detecting impossibly faint signals. In this case a very recent Nature article, Two families of exocomets in the β Pictoris system, claims to see thousands of exocomet signatures in the not-too-distant β Pictoris system. This is a very young system with an edge-on debris disk, essentially ...

2 As earlier answers have stated, this method doesn't work to save fuel directly; however, it might still be useful to hitch a ride on a comet, using it for a gravitational assist that the probe would not survive without more shielding than is practical. Placing the body of the comet between the Sun and the probe would presumably shield it from much of the solar ...
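For completeness, Kepler's third law (not visible in the truncated excerpt above) states that the square of the orbital period is proportional to the cube of the semi-major axis; for bodies orbiting the Sun, measuring $T$ in years and $a$ in astronomical units makes the constant of proportionality 1. A quick sketch showing that this reproduces the 1.119-year period quoted earlier for the asteroid 1566 Icarus:

```python
# Kepler's third law for heliocentric orbits: T^2 = a^3,
# with T in years and a in astronomical units.
def period_years(a_au):
    return a_au ** 1.5

# 1566 Icarus has a = 1.078 au, so its period comes out near 1.119 years,
# matching the figure quoted in an earlier answer.
print(period_years(1.078))
```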
https://stats.stackexchange.com/questions/173991/does-x-a-random-variable-usually-refer-to-the-population-or-the-sample?noredirect=1
# Does $X$ (a random variable) usually refer to the population or the sample? [duplicate]

Like the title says, does $X$ (a random variable) usually refer to the population or the sample?

A random variable, by definition, is a measurable function. If you extract a sample from a population, the sample observations follow the same distribution as the population, so the random variable can describe both. When you use the term "random variable" you refer to a function from one set to another, the first being called the "sample space" and the second being $\mathbb{R}^n$.
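A toy illustration of the "measurable function" definition may help; the two-coin sample space below is an invented example, not part of the original question:

```python
from fractions import Fraction

# Sample space Omega for two fair coin flips. A random variable X is a
# function from Omega to the reals -- here, the number of heads.
omega = [("H", "H"), ("H", "T"), ("T", "H"), ("T", "T")]

def X(outcome):
    return sum(1 for flip in outcome if flip == "H")

# The probability measure on Omega (each outcome has probability 1/4)
# induces the distribution of X.
dist = {}
for w in omega:
    dist[X(w)] = dist.get(X(w), Fraction(0)) + Fraction(1, 4)

print(dist)  # X takes the value 1 with probability 1/2, and 0 or 2 with 1/4 each
```

Whether $X$ is read as describing the population or a sample, it is the same function; drawing a sample just means evaluating it on repeatedly drawn outcomes.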
http://www.researchgate.net/publication/221596132_How_to_identify_and_estimate_the_largest_traffic_matrix_elements_in_a_dynamic_environment
Conference Paper

# How to identify and estimate the largest traffic matrix elements in a dynamic environment

DOI: 10.1145/1005686.1005698 Conference: Proceedings of the International Conference on Measurements and Modeling of Computer Systems, SIGMETRICS 2004, June 10-14, 2004, New York, NY, USA Source: DBLP

ABSTRACT In this paper we investigate a new idea for traffic matrix estimation that makes the basic problem less under-constrained by deliberately changing the routing to obtain additional measurements. Because all these measurements are collected over disparate time intervals, we need to establish models for each Origin-Destination (OD) pair to capture the complex behaviours of internet traffic. We model each OD pair with two components: the diurnal pattern and the fluctuation process. We provide models that incorporate the two components above, to estimate both the first and second order moments of traffic matrices. We do this for both stationary and cyclo-stationary traffic scenarios. We formalize the problem of estimating the second order moment in a way that is completely independent from the first order moment. Moreover, we can estimate the second order moment without needing any routing changes (i.e., without explicit changes to IGP link weights). We prove, for the first time, that such a result holds for any realistic topology under the assumption of . We highlight how the second order moment helps the identification of the largest OD flows carrying the most significant fraction of network traffic. We then propose a refined methodology consisting of using our variance estimator (without routing changes) to identify the largest flows, and estimate only these flows. The benefit of this method is that it dramatically reduces the number of routing changes needed. We validate the effectiveness of our methodology and the intuitions behind it by using real aggregated sampled NetFlow data collected from a commercial Tier-1 backbone.
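The refined methodology in the abstract — rank OD flows by an estimated variance, then target only the top-K for direct estimation — can be sketched in a few lines (the flow pairs and variance figures below are illustrative, not from the paper):

```python
# Toy sketch: pick the top-K OD flows by estimated variance, on the
# intuition that high-variance flows tend to be the largest carriers.
estimated_variance = {
    ("A", "B"): 940.0,
    ("A", "C"): 12.5,
    ("B", "C"): 610.0,
    ("C", "A"): 3.2,
    ("B", "A"): 88.0,
}

K = 2
top_k = sorted(estimated_variance, key=estimated_variance.get, reverse=True)[:K]
print(top_k)  # [('A', 'B'), ('B', 'C')]
```

Only these K flows would then be estimated directly, which is what lets the method cut down the number of routing changes.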
##### Conference Paper: A toolchain for simplifying network simulation setup

ABSTRACT: Arguably, one of the most cumbersome tasks required to run a network simulation is the setup of a complete simulation scenario and its implementation in the target simulator. This process includes selecting a topology, provisioning it with all required parameters and, finally, configuring traffic sources or generating traffic matrices. Many tools exist to address some of these tasks. However, most of them do not provide methods for configuring network and traffic parameters, while others only support a specific simulator. As a consequence, a user often needs to implement the desired features personally, which is both time-consuming and error-prone. To address these issues, we present the Fast Network Simulation Setup (FNSS) toolchain. It provides capabilities for parsing topologies from datasets or generating them synthetically, assigning desired configuration parameters and generating traffic matrices or event schedules. It also provides APIs for a number of programming languages and network simulators to easily deploy the simulation scenario in the target simulator. Proceedings of the 6th International ICST Conference on Simulation Tools and Techniques; 03/2013
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8333761692047119, "perplexity": 966.7933522272342}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422122220909.62/warc/CC-MAIN-20150124175700-00107-ip-10-180-212-252.ec2.internal.warc.gz"}
https://rpg.meta.stackexchange.com/questions/7681/can-our-tag-prompt-nudge-toward-including-system/7686
# Can our tag-prompt nudge toward including system?

We get questions every day that need to be put on hold while we wait for a new querent to specify what game/edition they're playing. As of this writing I've seen three so far today--they get quickly closed, get a comment asking about system/edition, and reopened if OP specifies.

Currently when asking a question one finds, below the text of the question, the following field: It's clear from this message that one must provide at least one tag, and the system rejects a submission without any tag. It's also clear that the suggestions have come from our list of tags. (There's a question on meta.se about where the example tags come from; it's not clear to me the answer's very authoritative.)

Can we make one of the suggested (greyed-out) tag suggestions be a system tag? I've got to assume it would help nudge people in the right direction if submitting a question included the subtle hint that "tell me what game you're playing" might be helpful.

Part 2: the ugly truth. It's usually D&D/PF that's the problem. Should the "suggested" tag be one of the D&Ds? Is it 5e that's the worst offender, and should that be one of the provided suggestions? I ask because

• I'm not good enough with SEDE to figure out which system tag--not that "system tag" is actually a thing in our software--tends to generate the most close-comment-edit-reopen cycles, and
• I don't know user psychology enough to know if prompting toward the "worst offender" is most helpful--perhaps a near-neighbor is better?
• Did we ever make any progress on this idea? Is it possible, from a technical standpoint, to customize our ask box to nudge users toward including the system their question is about? Jul 27 '18 at 16:35
• I'm marking this as status-deferred. We'd like to do this but we can't reasonably do this right now.
More information here: rpg.meta.stackexchange.com/a/9591 Nov 13 '19 at 15:22 Something like this tag placeholder might work well: tag which game you're playing, if any (such as: dnd-5e, world-of-darkness), max 5 tags This requests the single most important thing to know: the game they're playing. The format has changed from the original since we're not just suggesting a bundle of random tags, we're suggesting a list from which they might pick just one. In this format, two tags should be picked from two different groups: • the first tag is one of [dnd-5e], [pathfinder], [dnd-3.5e] • the second tag is one of [world-of-darkness], [savage-worlds], [fate] These represent the three most popular game tags inside and outside the D&D family. This setup conveys how we specifically tag D&D games (two thirds of the time, at least) and it conveys that we service a range of RPGs, both within and outside the D&D family of games. • I would replace 'if any' with 'if relevant', I think. Jan 15 '18 at 23:05 • @thedarkwanderer I'd strongly prefer "if any": almost always (like >90% of the time) if a game's being played it's substantially relevant, and an over-correction here is preferable to an under-correction. When a game is specified but not ultimately relevant, it's harmless and fine. When a game isn't specified, it impairs answer quality or is a total show-stopper. In the scope of this question and considering the potential impact, I'd be absolutely fine with the over-correction. Jan 15 '18 at 23:20 • Also, until 2015-ish we had lots of people asking questions & deciding their game wasn't relevant & not mentioning it at all or burying it. (Revision 1 of this Q from 2014 is a great example: spoilers, the game was utterly important.) Nowadays burying that info is widely seen as counterproductive and I'm concerned "if relevant" would prompt that behaviour again. Picture: Q: "Hey how do I calculate damage for my sword? Game's not relevant I guess." // A: "It's on page X of the PHB." 
// Comment: "Oh, which Savage Worlds book is that?" Jan 15 '18 at 23:21
• >.< ok, yeah, let's avoid people doing that. Jan 16 '18 at 6:40
• I'd even drop the "if any," and generically demand a system. I suspect it would be easier to handle the exceptions where it's misleading than the ones where it's omitted. Jan 17 '18 at 1:15
• @fectin that's too far for me. We already have people who think every question needs a system tag (see the downvoted answer on this question). I don't want to encourage that sort of thinking. Jan 17 '18 at 3:50
• @thedarkwanderer I wouldn't necessarily recommend requiring a system mechanically, or enforcing that requirement through moderation (user or diamond). But that is the first question on just about every post which isn't tagged with a system. And as we are collapsing down to a single-sentence instruction on tagging, my best guess is that including "if any" will produce initial tags that need work more often than omitting it. Jan 17 '18 at 4:02
• @fectin It's true that that's among the first comments on any question not tagged with a system. That's a problem, and, among questions that aren't a user's first question on the site, it's not usually a problem with the question. The culture that leads to that question being asked even when it makes absolutely no sense with respect to the question, sometimes as a tongue-in-cheek reprimand for asking something other than how to handle a specific in-play situation (along the lines of 'what problem are you trying to solve?' on a history question), is not something I want to encourage. Jan 17 '18 at 8:06
• Given the existing no-guess policy, might want to use an all caps "REQUIRED" in the placeholder text about the system tag. – GcL Sep 24 '18 at 14:39

Another option is to provide guidance in the sidebar help box. Currently, it looks like this: We prefer questions that can be answered, not just discussed.
visit the help center »

Could a line be added to this box to the effect of

If your question is about a specific system or edition, be sure to tag it as such (dnd-5e, dungeons-and-dragons, pathfinder).

This could even be in the "How to Tag" sidebar box that appears when editing the Tags box. Or in both, with "How to Ask" indicating that you should choose tags - "Be sure to properly tag your question" - and "How to Tag" having the plea for system and edition info.

• If this isn't possible to change, we could make a community ad that's a PSA reminding people to say or tag which game they're asking about. Not everyone will be shown it since they rotate, and some who are shown it won't notice it, but it'll catch a percentage. Jul 13 '18 at 17:59

NB: I no longer think that this is a good idea after more experience on this site.

> I think this has its heart in the right place. But I'm also not in favour of forcing every question to have a system tag of some kind (even if it were technically viable): that just makes it a tag tax, and unnecessarily puts redundant tags on questions that don't need them. The idea is that not every question is about a system, so not every question needs a tag about system. – SevenSidedDie, Jan 10 '18 at 14:44

# Alternate solution: require a system or system agnostic tag

It seems like a decent solution would be to require a system tag or the system-agnostic tag on any new post before allowing it to be submitted. It seems that as long as we were able to have a group of tags, of which at least one was required to be present to make a new post, then it would largely solve the issue (albeit in a very blunt fashion). However, I have no idea if such a thing is technically possible or viable in SE. Now that it is pointed out, however, I do realize that this meta has a system just like I am talking about here.
A downside I just realized would be that, because tags would have to be manually assigned as system (and thus required), first questions about a particular system would be difficult to handle. • We have something resembling this in the system: meta requires one of the four primary meta tags (support, discussion, feature-request, or bug). However a lot of our questions are not about systems, and I don't want to force them to pick a "not-a-system" tag. But then, I am against the necessity of using system-agnostic to begin with. Jan 10 '18 at 14:01 • Your downside's not a huge deal, IMO: it's really questions being asked about dnd5e, dnd3.5e, and PF that are the lion's share of the problem. Jan 10 '18 at 14:21 • I was asked in chat about the issue I had with system-agnostic. My reasoning is here -- it starts out as an explanation of my personal issues with the existence of the system-agnostic tag, and is then followed with my concerns of the problem that comes out of making a system tag or system-agnostic mandatory for every question. Jan 10 '18 at 14:27 • I think this has its heart in the right place. But I'm also not in favour of forcing every question to have a system tag of some kind (even if it were technically viable): that just makes it a tag tax, and unnecessarily puts redundant tags on questions that don't need them. The idea is that not every question is about a system, so not every question needs a tag about system. Jan 10 '18 at 19:44 • @SevenSidedDie your last sentence there is key Jan 11 '18 at 16:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2714439630508423, "perplexity": 1486.9765632437852}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585215.14/warc/CC-MAIN-20211018221501-20211019011501-00553.warc.gz"}
https://www.physicsforums.com/threads/good-bye-and-good-riddance.29000/
# Good bye, and Good Riddance

1. Jun 3, 2004 ### phatmonky
Good bye, and Good Riddance..... http://apnews.myway.com/article/20040603/D82VJOQG0.html If I did my job as badly as Tenet did his, I would have been fired - not given a chance to resign.

2. Jun 3, 2004 ### amp
He probably resigned so he can write a bestseller like O'Neill and Clarke did. And because his conscience was probably starting to bother him due to all the lying to cover for Bush.

3. Jun 3, 2004 ### Loren Booda
Might this resignation have something to do with the recent leak of intelligence to Iran about decoding their communications?

4. Jun 3, 2004 ### amp
Maybe, but after enduring the storm over the 9/11 intelligence failures it seems anticlimactic.

5. Jun 3, 2004 ### kat
Phat- I tend to agree with you. I think Bush should have cleaned house a lot better when he came into office. I think it served Clinton well to clean out the first Bush president's appointees and fill the posts with his own people; I really don't get why Bush didn't follow the same route.

6. Jun 3, 2004 ### kyleb
Tenet was the one guy in higher government who was telling us that going to Iraq was a bad idea; it is a shame to see him go.

7. Jun 3, 2004 ### pelastration
http://www.kesq.com/Global/story.asp?S=1915479
(quote) A man who was once in the same shoes says he thinks Tenet was forced to resign. Former C-I-A Director Stansfield Turner tells C-N-N he doesn't think Tenet would have stepped down during an election year unless he was "told to do that". (end quote)
But there may be several issues that can influence this:
1. There is growing criticism of the CIA about the pre-9/11 period (not being able to prevent it). But remember the CIA gave a report to Bush (who then went on vacation). Before 9/11 Tenet warned against Al-Qaeda and something major to come.
2. WMD: The CIA didn't confirm WMD presence in Iraq, but was forced to. Remember the Feith/Wurmser cell (PNAC) in the Pentagon that fabricated conclusions (about Iraq) which were opposed by CIA field facts.
Also the CIA warned against the Nigeria papers.
3. The CIA opposed Wolfowitz/Rumsfeld friend Chalabi (who also fabricated 'proofs'). Secretary of State Colin Powell recently pressed the CIA to account for the faulty intelligence that led Powell to tell the United Nations last year that Iraq definitely possessed illicit weapons (mobile biological weapons laboratories), but the sources were Iraqi defectors introduced to intelligence agencies by Ahmad Chalabi's Iraqi National Congress.
4. You have the Tenet Plan to ease tensions between Israel and the Palestinians. Bush torpedoed it with his "Thank You, Ari!" policy.
5. Then you have the (coming) information about the CIA/OGA's role in Abu Ghraib. It's unclear what this will bring.
6. What about the reorganization of the intelligence agencies? For sure there is a power game happening, but also, for example, CIA concern that its sources would end up in other hands (the biggest fear of a spy organization). Remember the 'show' of Attorney General John Ashcroft and FBI Director Robert Mueller last week, but who was passed over: Tenet (CIA) and Tom Ridge from Homeland Security. Power games.
7. Speculation: Who knows, now again the CIA gave warnings that Bush and PNAC deny or judge lightly. Maybe Tenet doesn't want that under his name another time.
From his position Tenet could not publicly criticize the President or defend the CIA against unjust allegations. A loyal servant of the United States and its President. Now many - such as Powell - point at the CIA for the failures the PNAC guys made. IMO these points indeed gave Tenet personal problems. It seems "too much is too much".
Last edited: Jun 3, 2004

8. Jun 3, 2004 ### Njorl
Tenet's biggest failure was not cutting off Cheney, Feith and Cambone from raw intelligence data they should not have seen. The Office of Special Plans was the biggest source of fraudulent intelligence concerning the Iraq war. Not just wrong, fraudulent.
Tenet's only connection to it was his allowance of data to be sent to them. The OSP manipulated intelligence to get the war they wanted. The OSP was formed by Cheney and run by Feith. This is the same Douglas Feith that General Tommy Franks called "..the stupidest man in the world..." The White House was poorly served by Tenet's CIA because it demanded that it be poorly served. Tenet's fault was giving in to that demand. Njorl

9. Jun 3, 2004 ### pelastration
But they used a trick to get highly classified material, which Wurmser (closely linked to Israel's Likud) - who did not have that clearance - could 'study' as raw data (including all the junk of double agents, rumors, etc. that the CIA normally filters before internal use). Isn't that illegal? What is it called?
(quote) Despite their access to the Pentagon leadership, Maloof and Wurmser faced resistance from the CIA and Defense Intelligence Agency. They were initially denied access, for example, to the most highly classified documents in the Pentagon computer system. So Maloof returned regularly to his old office in another branch of the Department of Defense, where he still could get the material. (end quote)
http://www.iht.com/articles/517591.html
I don't say Wurmser works for the Shin Bet, but he had highly classified information - illicitly received without clearance - that could interest many people, like his good friend Benjamin Netanyahu.
Shin Bet: http://www.fas.org/irp/world/israel/shin_bet/
On Wurmser - actually Cheney's top advisor on the Middle East - and Netanyahu: http://www.disinfopedia.org/wiki.phtml?title=David_Wurmser
Also: http://www.antiwar.com/justin/?articleid=2727
Wurmser was also a strong supporter of Ahmad Chalabi. And this guy is now designing US policy in the Middle East! :grumpy:

10. Jun 3, 2004 ### Njorl

11. Jun 4, 2004 ### amp
Look who's in charge of Afghanistan, former executives of Unicol, and the pipeline is going to be built.
Which is what was wanted in the first place since millions had already been invested. Source - Michael Moore, Where's My Country?

12. Jun 4, 2004 ### phatmonky
Unocal, not Unicol - Do you suggest that the attack on Afghanistan, supported by the world in a very large majority, was all a ploy to help the USA build a pipeline (which would be built anyway, considering Taliban representatives were at the location I worked at negotiating such a deal a year before 9/11!)? Or did we just manipulate everyone, and then lose that ability with Iraq?

13. Jun 4, 2004 ### amp
YES, I READ THAT Taliban reps were in Texas (were they at the Bush ranch?) before 9/11 to negotiate a deal to get some 16 billion for the pipeline.

14. Jun 4, 2004 ### phatmonky
They were in Sugarland (near Houston) meeting with UNOCAL execs - and all was going well on the deal. The war wasn't needed to secure the deal. The war, from a profiteering point of view, would only be good if security was guaranteed, and instantaneous, as UNOCAL already had other vested interests in Afghanistan that would be/were interrupted by the war.

15. Jun 4, 2004 ### pelastration
http://www.washingtonpost.com/wp-dyn/articles/A14025-2004Jun3.html
(quote) ... White House officials have sought to blame Tenet for leading the president into war based on bad intelligence. But even before the intelligence community had produced its definitive reports on Iraq, Vice President Cheney and other top administration officials were describing the threat from Saddam Hussein in more dramatic and unequivocal terms than the intelligence ever supported. Tenet's relationship with White House staff members grew tense when he refused to take sole blame for an inaccurate statement about Iraq in the president's State of the Union address in 2003. It worsened after a speech by Tenet at Georgetown University in February, in which he pointed out that the agency had never used the word "imminent" to characterize the threat from Hussein's weapons. ...
(end quote) 16. Jun 4, 2004 It's absurd that more Americans aren't absolutely outraged over the OSP. But then, how many know about it?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22658351063728333, "perplexity": 11547.056762361226}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720238.63/warc/CC-MAIN-20161020183840-00394-ip-10-171-6-4.ec2.internal.warc.gz"}
http://www.sawaal.com/aptitude-reasoning/quantitative-aptitude-arithmetic-ability/stocks-and-shares-questions-and-answers.html
# Stocks and Shares Question & Answers

## Stocks and Shares

Quantitative aptitude questions are asked in many competitive exams and placement exams. 'Stocks and Shares' is a category in Quantitative Aptitude. Quantitative aptitude questions given here are extremely useful for all kinds of competitive exams like the Common Aptitude Test (CAT), MAT, GMAT, IBPS Exam, CSAT, CLAT, Bank Competitive Exams, ICET, UPSC Competitive Exams, SSC Competitive Exams, SNAP Test, KPSC, XAT, GRE, Defense Competitive Exams, L.I.C/G.I.C Competitive Exams, Railway Competitive Exams, TNPSC, University Grants Commission (UGC), Career Aptitude Tests (IT Companies), Government Exams, etc. We have a large database of problems on "Stocks and Shares" answered with explanations. These will help students who are preparing for all types of competitive examinations.

The cost price of a Rs. 100 stock at 4 discount, when brokerage is $\frac{1}{4}$%, is:

A) Rs. 95.75 B) Rs. 96 C) Rs. 96.25 D) Rs. 104.25

Explanation: C.P. = Rs. $\left(100 - 4 + \frac{1}{4}\right)$ = Rs. 96.25

Subject: Stocks and Shares - Quantitative Aptitude - Arithmetic Ability

A man invested Rs. 14,400 in Rs. 100 shares of a company at 20% premium. If his company declares a 5% dividend at the end of the year, then how much does he get?

A) Rs. 500 B) Rs. 600 C) Rs. 650 D) Rs. 720

Explanation: Number of shares = $\frac{14400}{120} = 120$. Face value = Rs. (100 × 120) = Rs. 12,000. Annual income = Rs. $\left(\frac{5}{100} \times 12000\right)$ = Rs. 600

Subject: Stocks and Shares - Quantitative Aptitude - Arithmetic Ability

A man buys Rs. 20 shares paying 9% dividend. The man wants to have an interest of 12% on his money. The market value of each share is:

A) 12 B) 15 C) 18 D) 20

Explanation: Dividend on Rs. 20 = Rs. $\left(\frac{9}{100} \times 20\right)$ = Rs. $\frac{9}{5}$. Rs. 12 is an income on Rs. 100. Rs. $\frac{9}{5}$ is an income on Rs. $\left(\frac{100}{12} \times \frac{9}{5}\right)$ = Rs. 15.
Subject: Stocks and Shares - Quantitative Aptitude - Arithmetic Ability

A 6% stock yields 8%. The market value of the stock is:

A) Rs. 48 B) Rs. 75 C) Rs. 96 D) Rs. 133.33

Explanation: For an income of Rs. 8, investment = Rs. 100. For an income of Rs. 6, investment = Rs. $\left(\frac{100}{8} \times 6\right)$ = Rs. 75. Market value of Rs. 100 stock = Rs. 75.

Subject: Stocks and Shares - Quantitative Aptitude - Arithmetic Ability

Find the cash realised by selling Rs. 2400, 9.5% stock at 4 discount (brokerage $\frac{1}{4}$%).

A) 2000 B) 2298 C) 2290 D) 2289

By selling Rs. 100 stock, cash realised = Rs. $\left[(100-4)-\frac{1}{4}\right]$ = Rs. $\frac{383}{4}$. By selling Rs. 2400 stock, cash realised = Rs. $\left(\frac{383}{4} \times \frac{1}{100} \times 2400\right)$ = Rs. 2298.
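The arithmetic in these worked answers is easy to check numerically; here is a quick sketch (the variable names are mine, and the computations simply mirror the explanations above):

```python
# Quick numeric checks of the worked stock problems above.

# Cost price of a Rs. 100 stock at 4 discount, brokerage 1/4%:
cp = 100 - 4 + 0.25
print(cp)  # 96.25

# Rs. 14,400 invested in Rs. 100 shares at 20% premium, 5% dividend:
shares = 14400 / 120        # each share costs Rs. 120
face_value = 100 * shares   # Rs. 12,000
dividend = face_value * 5 / 100
print(dividend)  # 600.0

# A 6% stock yielding 8%: market value of a Rs. 100 stock.
market_value = 100 / 8 * 6
print(market_value)  # 75.0

# Cash realised selling Rs. 2400 of 9.5% stock at 4 discount, brokerage 1/4%:
per_100 = (100 - 4) - 0.25  # Rs. 95.75 per Rs. 100 of stock
cash = per_100 * 2400 / 100
print(cash)  # 2298.0
```

Each printed value matches the corresponding answer choice (C, B, B, B).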
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39728420972824097, "perplexity": 13035.5204325175}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719033.33/warc/CC-MAIN-20161020183839-00181-ip-10-171-6-4.ec2.internal.warc.gz"}
https://coral.ise.lehigh.edu/wiki/doku.php/info:tech_report_example?do=diff
# Lehigh ISE / COR@L Lab Wiki

info:tech_report_example

# Differences

This shows you the differences between two versions of the page.

info:tech_report_example [2017/05/04 09:33] sertalpbilal: We have a tex version of this file — (current)
Previous revision: 2015/03/29 13:13 aykutbulut: created

Line 1:
- \def\coralreport{1}
- %\documentclass{./llncs2e/llncs}
- \documentclass{article}
-
- \usepackage{ifthen}
- %\usepackage{float}
- \usepackage[funcfont=italic,full]{./complexity/complexity}
- \usepackage{algorithm} % for algorithm environment
- \usepackage{algpseudocode} % for algorithmic environment
- %\usepackage[pdftex]{graphicx} for pdflatex
- %\usepackage{graphicx} for latex
- \usepackage{amsmath}
- \usepackage{amssymb}
- \usepackage{graphicx}
- \usepackage[authoryear]{natbib}
-
- % for titles and keywords
- \usepackage{authblk}
- \usepackage{geometry}
- \usepackage{fullpage}
- \newtheorem{theorem}{Theorem}
- \newtheorem{claim}{Claim}
- \newtheorem{definition}{Definition}
-
-
- \renewcommand{\Re}{\mathbb{R}}
- \algdef{SE}[DOWHILE]{Do}{doWhile}{\algorithmicdo}[1]{\algorithmicwhile\ #1}
- % todo(aykut) this will create a problem with \P command of complexity package.
- \renewcommand{\P}{\mathcal{P}}
-
- \setlength{\evensidemargin}{0in}
- \setlength{\oddsidemargin}{0in}
- \setlength{\parindent}{0in}
- \setlength{\parskip}{0.06in}
-
-
- \ifthenelse{\coralreport = 1}{
- \usepackage{./isetechreport/isetechreport}
- \def\reportyear{15T}
- % The report number is the same one used in the ISE tech report series
- \def\reportno{001}
- % This is the revision number (increment for each revision)
- \def\revisionno{0}
- % This is the date of the original report
- \def\originaldate{March 13, 2015}
- % This is the date of the latest revision
- \def\revisiondate{March 13, 2015}
- % Set these variables according to whether this should be a CORAL or CVCR
- % report
- \coralfalse
- \cvcrfalse
- \isetrue
-
- }{}
-
- %TODO(aykut):
- % -> fix path problem of isetechreport.sty
-
-
- \begin{document}
-
- \title{On the Complexity of Inverse MILP}
-
- \ifthenelse{\coralreport = 1}{
- \author{Aykut Bulut\thanks{E-mail: \texttt{[email protected]}}}
- \author{Ted K. Ralphs\thanks{E-mail: \texttt{[email protected]}}}
- \affil{Department of Industrial and Systems Engineering, Lehigh University, USA}
- \titlepage
- }{
- \author{Aykut Bulut}
- \author{Ted K. Ralphs}
- \affil{COR@L Lab, Department of Industrial and Systems Engineering, Lehigh University, USA}
- \date{\today}
- }
-
- \maketitle
-
- \begin{abstract}
- Inverse optimization problems determine problem parameters that are closest to
- the estimates and will make a given solution optimum. In this study we work
- inverse \textbf{m}ixed \textbf{i}nteger \textbf{l}inear \textbf{p}roblems (MILP)
- where we seek the objective function coefficients. This is the inverse problem
- \cite{AhujaSeptember2001} studied for linear programs (LP). They
- show that inverse LP can be solved in polynomial time under mild conditions. We
- extend their result for the MILP case. We prove that the decision version of
- the inverse MILP is $\coNP$--complete. We also propose a cutting plane algorithm for
- solving inverse MILPs for practical purposes.
-
- \ifthenelse{\coralreport = 0}{
- \bigskip\noindent
- {\bf Keywords:} Inverse optimization, mixed integer linear program,
- computational complexity, polynomial hierarchy
- }{}
- \end{abstract}
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9171916842460632, "perplexity": 6423.487404531225}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585518.54/warc/CC-MAIN-20211022181017-20211022211017-00001.warc.gz"}
https://crypto.stackexchange.com/questions/32778/is-a-4des-or-5des-system-possible
# Is a 4DES or 5DES system possible? We know that 3DES is created with $E_{K_3}(D_{K_2}(E_{K_1}(m)))$ to extend DES's key length. Is it possible to extend it further by repeating this pattern? Perhaps using $E_{K_5}(D_{K_4}(E_{K_3}(D_{K_2}(E_{K_1}(m)))))$ for a 280-bit key. • of course, but why would you want to considering how slow DES is, and that its block size is only 64 bits? – Richie Frame Feb 16 '16 at 7:22 • @RichieFrame This is more theoretical than practical. However, I can imagine a scenario where technology has moved on and 128-bit keys are considered too small, but legacy systems still only support DES. – Daffy Feb 16 '16 at 7:24 ## 1 Answer There is a very interesting paper that relates to this exact question (but you wouldn't guess it from the title). The paper is titled Efficient Dissection of Composite Problems, with Applications to Cryptanalysis, Knapsacks, and Combinatorial Search Problems. In Section 3, the paper considers the multiple encryption problem and gives novel attacks that are better than what you would expect (for general multiple encryption iterated $r$ times). For example, for 2DES there is an attack taking about $2^{56}$ time and $2^{56}$ space. You would therefore expect that 4DES would give you $2^{112}$ time and $2^{112}$ space. However, they show an attack that takes time $2^{112}$ and only space $2^{56}$. In any case, in practice there is no good reason to use this. You are far better off using AES. DES's block size is too small, and it is very slow as it is (repeating 4 or 5 times will completely kill you). • Isn't $2^{112}$ time and $2^{56}$ space what you'd expect to get from extending the 2DES meet-in-the-middle attack to 4DES? – user253751 Feb 16 '16 at 8:30 • extending the 2DES MITM isn't straightforward due to the fact that a collision in the middle of 4DES does not mean all four keys are correct.
Such a property holds with high probability for 2DES, after testing another plaintext/ciphertext pair to make sure it's not a spurious meet-in-the-middle collision. This kind of complication happens in the known MITM attack on 3DES as well. So something more sophisticated is most likely going on in the linked article. – kodlu Feb 16 '16 at 8:59 • Indeed, something MUCH more sophisticated than the regular MITM is happening in the paper. – Yehuda Lindell Feb 16 '16 at 9:13 • Hmm, tried it but I'm still alive. Although it doesn't make much sense generally, I can think of quite a few applications that could benefit from X-DES if DES is all that's available (e.g. in hardware). – Maarten Bodewes Feb 17 '16 at 0:13
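The table-based meet-in-the-middle idea discussed in these comments can be sketched on a toy scale. The snippet below is an illustrative sketch, not DES: `enc`/`dec` form a deliberately weak, invertible one-byte "cipher" invented for the example so that the full per-stage key table (256 entries instead of $2^{56}$) fits in memory. The filtering at the end corresponds to testing extra plaintext/ciphertext pairs to discard spurious middle collisions, as mentioned above.

```python
# Toy invertible "block cipher" on 1-byte blocks with 1-byte keys (NOT DES;
# just enough structure to demonstrate the table-based meet-in-the-middle).
INV5 = pow(5, -1, 256)  # multiplicative inverse of 5 mod 256

def enc(k: int, b: int) -> int:
    return ((b ^ k) * 5 + k) % 256

def dec(k: int, c: int) -> int:
    return (((c - k) * INV5) % 256) ^ k

def double_enc(k1: int, k2: int, b: int) -> int:
    return enc(k2, enc(k1, b))

def mitm(pairs):
    """Recover (k1, k2) candidates from plaintext/ciphertext pairs using
    ~2*256 cipher operations instead of 256*256 brute force."""
    p0, c0 = pairs[0]
    forward = {}
    for k1 in range(256):                 # encrypt forward under every k1
        forward.setdefault(enc(k1, p0), []).append(k1)
    cands = []
    for k2 in range(256):                 # decrypt backward under every k2
        for k1 in forward.get(dec(k2, c0), []):
            cands.append((k1, k2))
    # Filter spurious middle collisions with the remaining pairs.
    return [(k1, k2) for (k1, k2) in cands
            if all(double_enc(k1, k2, p) == c for p, c in pairs[1:])]

k1, k2 = 0x3A, 0xC5
pairs = [(p, double_enc(k1, k2, p)) for p in (0x00, 0x55, 0xAB)]
print(mitm(pairs))
```

Brute force over both keys would cost $256^2$ trial encryptions; the table trades memory for time in exactly the way the 2DES attack does, and it is this time/memory trade-off that the dissection paper improves for deeper cascades.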
http://indexsmart.mirasmart.com/ISMRM2018/PDFfiles/0253.html
### 0253 The Dot…wherefore art thou? Search for the isotropic restricted diffusion compartment in the brain with spherical tensor encoding and strong gradients Chantal M.W. Tax1, Filip Szczepankiewicz 2,3, Markus Nilsson2, and Derek K Jones1 1CUBRIC, School of Psychology, Cardiff University, Cardiff, United Kingdom, 2Clinical sciences, Lund, Lund University, Lund, Sweden, 3Random Walk Imaging AB, Lund, Sweden ### Synopsis The accuracy of biophysical models requires that all relevant tissue compartments are modelled. The so-called “dot compartment” is a conjectured compartment that represents small cells with apparent diffusivity approaching zero. We establish an upper limit of the “dot-fraction” across the whole brain in vivo, by using ultra-high gradients and optimized gradient waveforms for spherical tensor encoding. We report a notable signal above the noise floor in the cerebellar gray matter even for an extremely high b-value of 15000 s/mm2. For cerebral tissue, the dot-fraction seems negligible, and we consider how exchange may have affected this result. ### Introduction Biophysical modelling of the diffusion MRI (dMRI) signal can be used for tissue microstructure characterisation by carefully selecting model compartments with a relevant impact on the signal1. The inclusion of a “dot-compartment” is motivated by the ubiquity of small cells, wherein water may be trapped and its diffusion highly restricted2. Previous work investigating the minimum model requirements for brain white matter (WM) was based on “linear tensor encoding” (e.g., Stejskal-Tanner) and showed that including a dot-compartment better explained the dMRI signal3,4. However, its inclusion is not generally adopted in vivo5,6. Probing the dot-compartment in anisotropic tissue is challenging with linear encoding, due to the strong relation between encoding direction and orientation distribution of anisotropic tissue microenvironments. 
Here, we instead use “spherical tensor encoding” (STE) to render signals insensitive to orientation and anisotropy7,8. By so doing, the signal becomes the Laplace transform of the distribution of isotropic diffusivities8,9, where a signal plateau indicates a dot-compartment. Previous STE-based results suggest a negligible dot-fraction in WM10, but the use of gradient amplitudes below 80 mT/m limited the maximal b-value and SNR needed for accurate assessment of small dot fractions. In this work, we: 1) leverage asymmetric STE waveforms and ultra-strong gradients to significantly reduce the TE and increase SNR; 2) study the signal decay for b-values up to 15000 s/mm2; 3) extend the search to both the cerebellum and cerebrum. This facilitates a more accurate estimation of the dot signal fraction across the whole brain, in vivo. ### Theory The signal arising from each compartment represented by diffusion tensor $\mathbf{D}$ probed by b-tensor $\mathbf{B}$ can be described by $f\cdot\exp(-\mathbf{B}:\mathbf{D})$, which in the specific case of STE and two non-exchanging compartments (one being a dot-compartment with $D_{\text{dot}}=0$) simplifies to $$S(b)=f_{\text{dot}}\cdot\exp(-bD_{\text{dot}})+(1-f_{\text{dot}})\cdot\exp(-bD_{\text{iso}})=f_{\text{dot}}+(1-f_{\text{dot}})\cdot\exp(-bD_{\text{iso}}).$$ For infinite SNR, the $S(0)$-normalised signal at the plateau equals the dot signal fraction $(f_{\text{dot}}=S_{\text{plateau}}/S(0))$. Fig.1 shows the simulated signal in the case of two non-exchanging compartments with isotropic diffusivities of 1 and 0 µm2/ms. In the absence of a plateau, the normalised signal at $b_{\text{max}}$ can serve as an upper limit of $f_{\text{dot}}$, because $f_{\text{dot}}\leq S_{b_{\text{max}}}/S(0)$. The accuracy is limited by the presence of the noise floor but still yields an upper limit of $f_{\text{dot}}$, at least in the absence of exchange.
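The upper-limit argument above can be sketched numerically. The following Python snippet is an illustrative sketch, not the analysis code used for the study: the SNR, dot fraction and b-value grid are assumed values. It simulates the two-compartment STE signal with Rician magnitude noise and reads off the normalised signal at $b_{\text{max}}$ as an upper limit on $f_{\text{dot}}$:

```python
import numpy as np

# Two-compartment STE signal from the Theory section, S(0)-normalised,
# no exchange: S(b) = f_dot + (1 - f_dot) * exp(-b * D_iso).
def ste_signal(b, f_dot, d_iso):
    return f_dot + (1.0 - f_dot) * np.exp(-b * d_iso)

rng = np.random.default_rng(0)
b = np.linspace(0.0, 15000.0, 61)    # b-values in s/mm^2
d_iso = 1.0e-3                       # 1 um^2/ms expressed in mm^2/s
f_dot, snr = 0.02, 100.0             # assumed dot fraction and SNR at b = 0

clean = ste_signal(b, f_dot, d_iso)
# Rician magnitude noise: |S + n_re + i*n_im| with Gaussian channel noise.
sigma = 1.0 / snr
noisy = np.abs(clean + rng.normal(0.0, sigma, b.size)
               + 1j * rng.normal(0.0, sigma, b.size))

# Without a visible plateau, the normalised signal at b_max still bounds
# f_dot from above (the rectified noise floor inflates this limit).
f_dot_upper = noisy[-1] / noisy[0]
print(f"true f_dot = {f_dot}, estimated upper limit ~ {f_dot_upper:.3f}")
```

Rerunning this with a lower `snr` reproduces the behaviour described for Fig. 1: the noise floor dominates the tail of the decay and the estimated upper limit grows well beyond the true dot fraction.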
### Methods Data: A healthy volunteer was scanned on a 3T, 300mT/m Siemens Connectom using a prototype spin-echo sequence that enables arbitrary b-tensor encoding. We used b-values=[0,250,1500,3000,...,15000]s/mm2 repeated [1,6,9,12,...,36] times, respectively. No in-plane acceleration was used, voxel size=4.4×4.4×6 mm3, matrix=64×64, 26 slices, TR/TE=5000/90ms, partial-Fourier=6/8, bandwidth=1594Hz/pix. Maxwell-compensated waveforms11 were optimized numerically12, and yielded a diffusion time of approximately 20ms. This approach renders superior encoding efficiency compared to standard 1-scan-trace imaging (which requires TE=270ms for b=15000 s/mm2). An MPRAGE was acquired for brain segmentation. Processing: To investigate whether a potential plateau arising in the signal decay was not solely an effect of the noise floor, the data was corrected for Rician bias13-15. Masks for different tissue types were created in Freesurfer based on the MPRAGE registered to the susceptibility-distortion-corrected16 dMRI data, and used to guide the definition of four regions for further analysis: cerebral white and gray matter (WM and GM), and cerebellar white and gray matter (cWM and cGM). ### Results Fig.2 shows that the signal in most of the cerebral tissue was reduced to the noise floor at $b$>9000 s/mm2. However, cGM was observed to have a remarkably high signal at high b-values, remaining well above the noise floor even at b=15000 s/mm2, unlike any other tissue. Fig.3 shows that the signal-versus-b curve in cGM was markedly non-monoexponential, with signal values above the noise floor for all sampled b-values. However, no plateau is found at b=15000 s/mm2. In the absence of exchange, the estimated upper bound of $f_{\text{dot}}$ in WM, GM, cWM and cGM is approximately 0.5%, 0.5%, 1% and 2%, respectively. ### Discussion & Conclusion In the cerebrum, we did not find evidence of a large dot-fraction.
In the cerebellum, however, the non-negligible signal at very high b-values points to the existence of a dot-like compartment. The absence of a signal plateau could be caused by compartments with low but non-zero isotropic diffusivity, or a true dot-compartment with zero diffusivity but non-negligible exchange. The effect of exchange17,18 is simulated in Fig.4, and can result in an underestimation of $f_{\text{dot}}$. Regardless of model assumptions, the high signal retention at high b-values in the cerebellum demonstrates that its microstructure is remarkably different compared to the cerebrum. We speculate that this may originate from granule and/or Purkinje cells, that have a morphology consistent with this finding (Fig.5). If so, STE at extremely high b-values can become a very specific biomarker for better understanding of diseases such as autism spectrum disorders19, spinocerebellar ataxia20, and Alzheimer disease21,22, where such cells are affected. ### Acknowledgements We thank Siemens Healthcare for access to the pulse sequence programming environment, and Fabrizio Fasano from Siemens Healthcare for support. We thank Umesh Rudrapatna for technical support and feedback and Samuel St-Jean for useful discussions. The work was supported by a Wellcome Trust Investigator Award (096646/Z/11/Z) and a Wellcome Trust Strategic Award (104943/Z/14/Z). The data were acquired at the UK National Facility for In Vivo MR Imaging of Human Tissue Microstructure funded by the EPSRC (grant EP/M029778/1), and The Wolfson Foundation. CMWT is supported by a Rubicon grant (680-50-1527) from the Netherlands Organisation for Scientific Research (NWO) and Wellcome Trust grant (096646/Z/11/Z). ### References [1] Stanisz GJ, Szafer A, Wright GA, Henkelman M. An analytical model of restricted diffusion in bovine optic nerve. Magn Reson Med. 1997;37:103–111 [2] Alexander, D.C., Hubbard, P.L., Hall, M.G., Moore, E.A., Ptito, M., Parker, G.J.M., Dyrby, T.B., 2010. 
Orientationally invariant indices of axon diameter and density from diffusion MRI. Neuroimage 52, 1374–1389. [3] Panagiotaki, E., Schneider, T., Siow, B., Hall, M. G., Lythgoe, M. F., & Alexander, D. C., 2012. Compartment models of the diffusion MR signal in brain white matter: a taxonomy and comparison. Neuroimage, 59(3), 2241-2254. [4] Ferizi U, Schneider T, Panagiotaki E, Nedjati-Gilani G, Zhang H, Wheeler-Kingshott CA, Alexander DC., 2014. A ranking of diffusion MRI compartment models with in vivo human brain data. Magn Reson Med. 72(6):1785-92 [5] Ferizi U, Schneider T, Witzel T, Wald LL, Zhang H, Wheeler-Kingshott CA, Alexander DC., 2015. White matter compartment models for in vivo diffusion MRI at 300mT/m. Neuroimage 118:468-83. [6] Veraart J, Fieremans E, Novikov DS, Universal power-law scaling of water diffusion in human brain defines what we see with MRI. arXiv:1609.09145 [7] Eriksson, S., Lasič, S. & Topgaard, D. 2013. Isotropic diffusion weighting in PGSE NMR by magic-angle spinning of the q-vector. Journal of Magnetic Resonance, 226, 13-8. [8] Lasič, S., Szczepankiewicz, F., Eriksson, S., Nilsson, M. & Topgaard, D. 2014. Microanisotropy imaging: quantification of microscopic diffusion anisotropy and orientational order parameter by diffusion MRI with magic-angle spinning of the q-vector. Frontiers in Physics, 2, 11. [9] Westin, C. F., Knutsson, H., Pasternak, O., Szczepankiewicz, F., Özarslan, E., Van Westen, D., Mattisson, C., Bogren, M., O'donnell, L. J., Kubicki, M., Topgaard, D. & Nilsson, M. 2016. Q-space trajectory imaging for multidimensional diffusion MRI of the human brain. Neuroimage, 135, 345-62. [10] Dhital, B., Kellner, E., Reisert, M., Kiselev, V.G. 2015. Isotropic Diffusion Weighting Provides Insight on Diffusion Compartments in Human Brain White Matter In vivo. ISMRM 2788 [11] Szczepankiewicz, F. & Nilsson, M. 2018, Maxwell-compensated waveform design for asymmetric diffusion encoding. Submitted to Proc. Intl. Soc. Mag. Reson. Med. 
[12] Sjölund, J., Szczepankiewicz, F., Nilsson, M., Topgaard, D., Westin, C. F. & Knutsson, H. 2015. Constrained optimization of gradient waveforms for generalized diffusion encoding. Journal of Magnetic Resonance, 261, 157-168. [13] Veraart, J., Novikov, D. S., Christiaens, D., Ades-Aron, B., Sijbers, J., & Fieremans, E. 2016. Denoising of diffusion MRI using random matrix theory. NeuroImage, 142, 394-406. [14] Koay, C. G., Özarslan, E., & Basser, P. J. 2009. A signal transformational framework for breaking the noise floor and its applications in MRI. Journal of magnetic resonance, 197(2), 108-119. [15] St-Jean, S., Coupé, P., & Descoteaux, M. 2016. Non Local Spatial and Angular Matching: Enabling higher spatial resolution diffusion MRI datasets through adaptive denoising. Medical image analysis, 32, 115-130. [16] Andersson, J. L., Skare, S., & Ashburner, J. 2003. How to correct susceptibility distortions in spin-echo echo-planar images: application to diffusion tensor imaging. Neuroimage, 20(2), 870-888. [17] Kärger, J., Der einfluß der zweibereichdiffusion auf die spinechodämpfung unter berücksichtigung der relaxation bei messungen mit der methode der gepulsten feldgradienten, Anna Physik 482 (1) (1971) 107–109. [18] Nilsson, M., Alerstam, E., Wirestam, R., Stahlberg, F., Brockstedt, S. & Latt, J. 2010. Evaluating the accuracy and precision of a two-compartment Karger model using Monte Carlo simulations. Journal of Magnetic Resonance, 206, 59-67. [19] Chrobak, A. A., & Soltys, Z. (2017). Bergmann Glia, Long-Term Depression, and Autism Spectrum Disorder. Molecular Neurobiology, 54(2), 1156–1166. http://doi.org/10.1007/s12035-016-9719-3 [20] Xia, G., McFarland, K. N., Wang, K., Sarkar, P. S., Yachnis, A. T., & Ashizawa, T. (2013). Purkinje Cell Loss is the Major Brain Pathology of Spinocerebellar Ataxia Type 10. Journal of Neurology, Neurosurgery, and Psychiatry, 84(12), 1409–1411. http://doi.org/10.1136/jnnp-2013-305080 ### Figures Fig. 
1: Simulations of two non-exchanging compartments with diffusivities of $D_{\text{iso}}$ = 1 and $D_{\text{dot}}$ = 0 µm2/ms for different SNR, and different signal fractions $f_{\text{dot}}$ (represented by the different colours). At low SNR, the rectified noise floor inflates the estimated upper limit of $f_{\text{dot}}$. As SNR increases, smaller signal fractions can be resolved, where the relative signal approaches $f_{\text{dot}}$. Fig. 2: Magnitude DWIs at central (top row) and inferior (bottom row) slice positions at variable b-values. Coloured outlines represent tissue types (red = white matter, yellow = gray matter, green = cerebellar white matter, blue = cerebellar gray matter), and were generated from the intersection of the mask isosurface of each type with the slice. Fig. 3: Sample median and 1-99th percentile of the S(0)-normalised measured signals (a) and Rician-bias-corrected signals (b) across voxels in each ROI (location visualised in figure a, right). The horizontal line in (a) represents the estimated mean noise floor across voxels. Of the regions investigated, cGM and wGM show the most significant signal above the noise floor, indicating that the tissue is not composed of compartments with a single isotropic diffusivity. Fig. 4: Simulated signal at variable SNR, $f_{\text{dot}}$ and exchange times assuming a two-compartment Kärger model17,18. Here, diffusivities were set to 1 and 0 µm2/ms. At infinite exchange times, the relative signal approaches the true signal fraction of the dot compartment, as in Fig. 1. However, at exchange times as long as 500 ms there is a relevant loss of signal at high b-values caused by exchanging particles. This suggests that the estimated upper limit of $f_{\text{dot}}$ is negatively biased in the presence of exchange. Fig.
5: Coronal DWI covering the cerebrum and cerebellum at b = 15 000 s/mm2 (left), and a Nissl-stained coronal slice (right) showing the presence of cells in a rhesus monkey brain (http://www.brain-map.org/, access date: 2017-11-07). The figure showcases the agreement between the signal retention and the high cell density expected in the cerebellum. Proc. Intl. Soc. Mag. Reson. Med. 26 (2018) 0253
http://mprnotes.wordpress.com/2009/08/14/changing-background-image-of-latex-beamer/
## Changing background image of LaTeX Beamer I've learned a very nice trick to change the background image of your LaTeX Beamer presentations. First of all, I will give an example of how to change the background image for all your frames. All you have to do is put the following code into the preamble of your .tex document: \usebackgroundtemplate{ \includegraphics[width=\paperwidth, height=\paperheight]{my_bg_image} } Now if you want to change the background only for one specific frame, then you have to enclose the frame in a group (a pair of braces), set an image (in this example my_bg_image) as the background template inside this group, and then enter the code of your frame, like in the following example: { \usebackgroundtemplate{\includegraphics[width=\paperwidth]{my_bg_image}} \begin{frame} \frametitle{Frame with nice background} \begin{itemize} \item 1 \item 2 \item 3 \end{itemize} \end{frame} } That's all. Now we are able to create some beautiful slides.
http://hal.in2p3.fr/view_by_stamp.php?label=APC&langue=fr&action_todo=view&id=hal-00713418&version=1
HAL : hal-00713418, version 1 arXiv : 1011.0210 Galactic sources of E>100 GeV gamma-rays seen by Fermi telescope (31/10/2010) We perform a search for sources of gamma-rays with energies E>100 GeV at low Galactic latitudes |b|<10 deg using the data of the Fermi telescope. To separate compact gamma-ray sources from the diffuse emission from the Galaxy, we use the Minimal Spanning Tree method with a threshold of 5 events in the inner Galaxy (Galactic longitude |l|<60 deg) and of 3 events in the outer Galaxy. Using this method, we identify 22 clusters of very-high-energy (VHE) gamma-rays, which we consider as "source candidates". 3 out of 22 event clusters are expected to be produced as a result of random coincidences of arrival directions of diffuse background photons. To distinguish clusters of VHE events produced by real sources from the background, we perform a likelihood analysis on each source candidate. We present a list of 19 higher-significance sources for which the likelihood analysis in the energy band E>100 GeV gives Test Statistic (TS) values above 25. Only 10 out of the 19 high-significance sources can be readily identified with previously known VHE gamma-ray sources. 4 sources could be parts of extended emission from known VHE gamma-ray sources. Five sources are new detections in the VHE band. Among these new detections, we tentatively identify one source as a possible extragalactic source, PMN J1603-4904 (a blazar candidate), and one as a pulsar wind nebula around PSR J1828-1007. A high-significance cluster of VHE events is also found at the position of a source coincident with the Eta Carinae nebula. In the Galactic Center region, a strong VHE gamma-ray signal is detected from the Sgr C molecular cloud, but not from the Galactic Center itself.
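The Minimal Spanning Tree source-finding step described in this abstract can be sketched as follows. This is an illustrative Python toy, not the authors' pipeline: it uses a flat-sky approximation, hand-placed events, and an assumed 0.3-degree edge cut (the paper only specifies the event-count thresholds of 5 in the inner and 3 in the outer Galaxy). The idea is to build the MST of the photon arrival directions, cut edges longer than a separation threshold, and keep the connected components that contain at least the required number of events.

```python
import numpy as np
from scipy.sparse.csgraph import connected_components, minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_clusters(points, cut_deg=0.3, min_events=5):
    """Cluster photon arrival directions with the MST method:
    cut MST edges longer than cut_deg, keep components >= min_events."""
    dist = squareform(pdist(points))              # pairwise separations (deg)
    mst = minimum_spanning_tree(dist).toarray()
    mst[mst > cut_deg] = 0.0                      # remove long edges
    n, labels = connected_components(mst != 0, directed=False)
    comps = [np.flatnonzero(labels == k) for k in range(n)]
    return [c for c in comps if len(c) >= min_events]

# Six photons tightly grouped around a putative source at (l, b) = (10, 1)...
src = np.array([[10.00, 1.00], [10.05, 1.02], [9.98, 0.97],
                [10.10, 1.05], [9.95, 1.01], [10.02, 0.95]])
# ...plus isolated diffuse-background photons spread along the plane.
bkg = np.array([[0.0, 0.0], [5.0, 3.0], [20.0, -4.0], [33.0, 2.0],
                [47.0, -7.0], [55.0, 8.0], [12.0, -9.0], [28.0, 6.0]])
found = mst_clusters(np.vstack([src, bkg]))
print([c.tolist() for c in found])
```

The isolated background photons end up as singleton components and are discarded by the event-count threshold, which is what makes the method effective at separating compact sources from diffuse emission.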
Research team: APC - THEORIE Domain: Physics/Astrophysics/High-energy cosmic phenomena; Planet and Universe/Astrophysics/High-energy cosmic phenomena Link to full text: http://fr.arXiv.org/abs/1011.0210 hal-00713418, version 1 http://hal.archives-ouvertes.fr/hal-00713418 oai:hal.archives-ouvertes.fr:hal-00713418 Contributor: Dmitri Semikoz Submitted on: Sunday, July 1, 2012, 08:38:26 Last modified on: Sunday, July 1, 2012, 23:28:52
https://rank1neet.com/5-1-intermolecular-forces/
# 5.1 Intermolecular Forces Intermolecular forces are the forces of attraction and repulsion between interacting particles (atoms and molecules). This term does not include the electrostatic forces that exist between two oppositely charged ions, nor the forces that hold the atoms of a molecule together, i.e., covalent bonds. Attractive intermolecular forces are known as van der Waals forces, in honour of the Dutch scientist Johannes van der Waals (1837-1923), who explained the deviation of real gases from ideal behaviour through these forces. We will learn about this later in this unit. van der Waals forces vary considerably in magnitude and include dispersion forces or London forces, dipole-dipole forces, and dipole-induced dipole forces. A particularly strong type of dipole-dipole interaction is hydrogen bonding. Only a few elements can participate in hydrogen bond formation; therefore, it is treated as a separate category. We have already learnt about this interaction in Unit 4. At this point, it is important to note that attractive forces between an ion and a dipole are known as ion-dipole forces, and these are not van der Waals forces. We will now learn about the different types of van der Waals forces.
https://infoscience.epfl.ch/record/130911?ln=fr
## Production and properties of substituted LaFeO3-perovskite tubular membranes for partial oxidation of methane to syngas Tubular membranes of La0.6Ca0.4Fe0.75Co0.25O3−δ and La0.5Sr0.5Fe1−yTiyO3−δ (y = 0, 0.2) for the application of partial oxidation of methane to syngas were produced by thermoplastic extrusion and investigated by oxygen permeation measurements. The optimum ceramic content in the feedstock for extrusion was found to be 51 vol% as a result of rheology measurements. Tubes with an outer diameter of 4.8–5.5 mm and thickness of 0.25–0.47 mm were produced with densities higher than 95% of the theoretical density. The oxygen permeation flux of the tubular membranes was measured with air on one side and Ar or an Ar + CH4 mixture on the other side. The oxygen permeation rate decreased with Ti-substitution, while it was considerably increased by the introduction of 5% methane into the system. The normalized oxygen fluxes in an air/Ar gradient at 900 °C were measured to be 0.06, 0.051, and 0.012 mol cm−2 s−1 for LCFC, LSF, and LSFT2, respectively, and 0.18 mol cm−2 s−1 for LSFT2 with 5% methane. Published in: Journal of the European Ceramic Society, 27, 6, 2455-2461 Year: 2007 Note: doi:10.1016/j.jeuceramsoc.2006.10.004 Record created 2009-01-09, last modified 2019-03-16
http://quant.stackexchange.com/questions?page=99&sort=newest
# All Questions 4k views ### What exactly is meant by “microstructure noise”? I see that term tossed around a lot, in articles relating to HFT, and ultra high frequency data. It says at higher frequencies, smaller intervals, microstructure noise is very dominant. What is ... 315 views ### What close price to assume for thinly traded stocks? If a thinly traded stock has not traded for the last few days (volume=0), is it better to use the last known trade price (i.e. roll over last non-missing trade price) or use last known ... 2k views ### How to price a calendar spread option? How do you price calendar spread options, that is, options on the same underlying and the same strike but different times to maturity? Clarification: I'm interested in the pricing of a CSO ... 413 views ### Calculating Theta assuming other variables remain the same Is there any way to calculate theta at X day in future based solely on knowing 1) Total Current Option Price 2) Days Till Expiration How would this be done? Thank you 2k views ### How does Kalman filtering of beta in pairs trading model work in R? Could anyone show how this could be done in R? The dlm package seems to be a good start, but I can't really find any good examples to learn from. Currently I have ... 54 views ### Performance of 1X0/X0 funds vs. traditional benchmarks? Some years ago there was a proliferation of new products touting the ability of active managers to take short bets on securities: 130/30 funds, 150/50 funds, and the like. What is the empirical ... 355 views ### Why is there a price difference between 30 year principal and interest STRIPS? Sorry if this is obvious, I am not a professional. I like to trade 30 year treasury zero's. I have noticed that the price for a 30 year principal payment is never the same as a 30 year interest ... 2k views ### What distribution to assume for interest rates? I am writing a paper with a case study in financial maths.
I need to model an interest rate $(I_n)_{n\geq 0}$ as a sequence of non-negative i.i.d. random variables. Which distribution would you advise ... 155 views ### What are some common models for one-sided returns? One typically models the log returns of a portfolio of equities by some unimodal, symmetric (or nearly symmetric) distribution with parameters like the mean and standard deviation estimated by ... 718 views ### Hasbrouck's information share Given a cointegrated set of price series, I am trying to compute Hasbrouck's information share, as described on pages 12-13 of this article and pages 7-8 of this article. I have the vector error ... 3k views ### How to combine multiple trading algorithms? Is it possible to combine different algorithms so as to improve trading performance? In particular, I have read that social media sentiment tracking, digital signal processing and neural networks all ... 910 views ### time series management system I'm happy with how we store a single time series but we somehow lack a system that glues them all together. I'm talking about a few million time series coming from ~50 data vendors and representing maybe ... 2k views ### Tools in R for estimating time-varying copulas? Are there libraries in R for estimating time-varying joint distributions via copulas? Hedibert Lopes has an excellent paper on the topic here. I know there is an existing package called copula but ... 228 views ### Options: Vertical LEAPS I am developing an algorithm and it needs to know what to do in certain market conditions It takes on a Vertical Bull Call Debit Spread on LEAPS that are 12+ months out in the future. This means that ... 677 views ### Can options volume have an impact on the price of the underlying asset? Can options volume affect the underlying asset price indirectly? I know that options buying/selling does not directly affect the price of the underlying asset (rather, the asset price contributes most ...
### TA/Pattern algorithm analysis (266 views)
I have been building a momentum pattern detection algo (essentially involving fitting curves in overlapping windows at different timeframes) and wanted to see if anyone has done/seen similar work. I ...

### Can binary model lead to non-normal distribution? (222 views)
If we suppose an instrument goes up or down 1 tick per $\Delta t$ (binary model), its long-term distribution will be normal, per the Central Limit Theorem. However, suppose we model as follows: The ...

### What is a good site to download historical stock 'events' such as earnings releases? [duplicate] (624 views)
Possible duplicate: What data sources are available online? Earnings and valuation data sources online. I'd like to backtest some strategies involving earnings release surprises, as well ...

### Standard deviations out of the money where options will respond to underlying asset price changes (495 views)
Is there an understood way of determining how far out of the money an option can be before it starts/stops responding to changes in the underlying asset's price? I usually look at the Greeks: gamma, delta, ...

### Can social media be applied to algorithmic trading? (2k views)
Can social media sites, like Twitter, be used to analyze financial markets for algorithmic trading? How much research has been done on this topic?

### Monte Carlo portfolio risk simulation (666 views)
My objective is to show the distribution of a portfolio's expected utilities via random sampling. The utility function has two random components. The first component is an expected return vector ...

### Correct way to calculate a bond's yield-to-horizon (2k views)
I'm creating some .NET libraries for bond pricing and verifying their correctness against a bond pricing Excel spreadsheet (Bond Pricing and Yield from Chrisholm Roth), but I believe it calculates the yield ...
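The binary-model question above invokes the Central Limit Theorem: a walk that moves up or down one tick per step has an approximately normal terminal distribution, with standard deviation $\sqrt{n}$ after $n$ steps. A minimal pure-Python simulation sketch of that claim (the function name and parameters are illustrative, not from the question):

```python
import random
import statistics

def binary_walk(n_steps, n_paths, seed=0):
    """Terminal values of a +1/-1 tick-per-step walk, one per simulated path."""
    rng = random.Random(seed)
    return [sum(rng.choice((-1, 1)) for _ in range(n_steps))
            for _ in range(n_paths)]

terminal = binary_walk(n_steps=250, n_paths=2000)
# By the CLT the terminal distribution is approximately normal with
# mean 0 and standard deviation sqrt(250) ~ 15.8.
print(statistics.mean(terminal), statistics.stdev(terminal))
```

The question's follow-up (the truncated "However, suppose we model as follows ...") presumably modifies this setup, e.g. with state-dependent tick probabilities, which is exactly where normality can break down.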
### The difference between Close price and Settlement Price for futures contracts (8k views)
What is the difference between the close price and the settlement price for futures contracts? Is there a defined rule for evaluating the settlement price, or do different rules apply per instrument/exchange? ...

### What does the VIX formula measure and how does it work? (1k views)
I have read the CBOE's white paper on the VIX and a lot of other things, but I need to honestly say I don't really get it, or I am missing something important. In semi-layman's terms, is the VIX ...

### Tian third moment-matching tree with smoothing - implementation (208 views)
I was wondering if someone has an implementation of the Tian third moment-matching tree (http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1030143) with smoothing in code (e.g. C++, VBA, C#, etc.)? ...

### What is the difference between STOXX and STOXXE? (2k views)
Could anyone explain the difference between STOXX and STOXXE? Which is the right index for benchmarking European stocks? Thanks.

### Option Portfolio Risk - Volatility/Skew - practical implementation (478 views)
I'm trying to improve my methods for calculating real-time US equity option portfolio risk. My main problem is volatility "stability" across all strikes in an option series. The current ...

### How can I compare distributions using only mean and standard deviation? (3k views)
I only have means and standard deviations of samples of two random variables. What technique can I use to determine how similar the distributions they describe are? Assume that the values are built ...

### What skills and education are required for HFT? [closed] (737 views)
I'm a university student and I'm quite interested in high-frequency trading algorithms. What courses should I take and what skills should I acquire so that I can work in this field? So far, I've been ...

### How to value a floor when a loan is callable? (414 views)
Certain bank loans pay a spread above a floating interest rate (typically LIBOR) subject to a floor. I would like to find the value of this floor to the investor. Assume for this example that ...

### How can I simulate portfolio risk (diversification) with 'Wheel of Fortune'-like investment options/returns? (442 views)
Say I have 6 possible investment options with the following probability of success and the corresponding returns: ...

### How are limit orders selected from the order book? (4k views)
I'm sure there is a simple answer to this but I haven't had any luck with searches. I'm just wondering, when someone places a market order, which order(s) from the limit order book are selected to fill ...

### Probability distributions in quantitative finance [closed] (333 views)
What are the most popular probability distributions in quantitative finance and what are their applications?

### How often do ETF creation unit baskets change? (243 views)
Large institutions can swap baskets of underlying securities for ETF shares that can then be traded on an exchange as part of arbitrage between the price of the basket and the ETF share price. These ...

### Where are creation unit baskets for ETFs published? (408 views)
Where can the specification of a creation unit basket for an ETF be found? This information is needed for calculating the arbitrage possible between the ETF instrument itself and the creation unit ...

### Calculating Portfolio Skewness & Kurtosis (3k views)
I need to calculate the skewness and kurtosis of a 2-asset portfolio. Can someone please help me with the formulas and definitions of terms? Thank you. I have been using the matrices method and I am not ...

### How to normalize futures data (different leverage) for a cointegration test? (856 views)
For example, I want to construct 2 time series, one for ES and the other for NQ, and test for cointegration. One ES point equals $50; one NQ point equals $20. If I have the following data: ...
### How to cluster stocks and construct an affinity matrix? (2k views)
My goal is to find clusters of stocks. The "affinity" matrix will define the "closeness" of points. This article gives a bit more background. The ultimate purpose is to investigate the "cohesion" ...

### How to annualize Sharpe Ratio? (26k views)
I have a basic question about the annualized Sharpe ratio calculation: if I know the daily return of my portfolio, is the thing I need to do to multiply the Sharpe ratio by $\sqrt{252}$ to have it ...

### Use of Local Times in Option Pricing (175 views)
I know two applications of local time in option pricing theory. First, it allows a derivation of Dupire's formula on local volatility in a neat way (i.e. without resorting to differential operator ...

### Evaluating automated trading strategies: accepted practice (3k views)
Both for private projects and for clients, I've been working on code a lot this year to evaluate automated trading strategies. This often ends up turning into the task of how to fairly compare apples ...

### When does an ETF take out expenses? (212 views)
When does an exchange-traded fund (ETF) take out expenses (for example, 0.3% on a yearly basis)? Does it happen daily, once yearly, or according to some other scheme? Where does it take them from?

### What weights should be used when adjusting a correlation matrix to be positive definite? (470 views)
I have a correlation matrix $A$ for an equity market that is not positive definite. Higham (2002) proposes the Alternating Projections Method, minimising the weighted Frobenius norm $||A-X||_W$, where ...

### What strategy would benefit most from having the fastest connection to the exchange? (726 views)
Imagine that you have the fastest connection to the exchange (receive quotes 1 ms earlier than everyone else) for both stocks and derivatives. How would you benefit from this? Of course almost any ...

### Which is a more appropriate choice of risk measurement in a utility function, CVaR or VaR? (309 views)
What is the consensus on which risk measure to use in measuring portfolio risk? I am researching what is the best risk measure to use in a portfolio construction process for a long/short option-free ...

### What are the most common/popular exotics in the interest rate markets these days? (1k views)
By "exotic" I mean anything that is not a plain vanilla swap, swaption, cap or floor. Also any IR hybrids if appropriate. Possible examples would be: CMS and CMS spread options, multi-callable swaps ...

### Should cointegration be tested using close or adjusted close prices? (474 views)
When doing cointegration tests, should I use the adjusted close price or just the close price for the time series? The dividend of each stock is on different dates and can cause jumps in the data.

### How to extrapolate implied volatility for out of the money options? (2k views)
Estimation of model-free implied volatility is highly dependent upon the extrapolation procedure for non-traded options at extreme out-of-the-money points. Jiang and Tian (2007) propose that the ...
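One of the questions above states the standard convention for annualizing a Sharpe ratio: a ratio computed from daily returns is multiplied by $\sqrt{252}$ (the number of trading days in a year). A minimal sketch of that calculation; the function name and the sample returns are illustrative, not taken from the question:

```python
import statistics

def annualized_sharpe(daily_returns, rf_daily=0.0, periods=252):
    """Annualize a daily Sharpe ratio by scaling with sqrt(periods)."""
    excess = [r - rf_daily for r in daily_returns]
    daily_sharpe = statistics.mean(excess) / statistics.stdev(excess)
    return daily_sharpe * periods ** 0.5

# Hypothetical daily returns, for illustration only.
rets = [0.001, -0.002, 0.003, 0.0005, -0.001, 0.002]
print(annualized_sharpe(rets))
```

The $\sqrt{252}$ scaling assumes i.i.d. daily returns; under serial correlation the simple square-root rule over- or understates the true annual figure.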
https://socratic.org/questions/550d7a1a581e2a2728e11d78
# Question 11d78

Mar 21, 2015

The partial pressure of ethane will be $\text{0.516 atm}$.

There are two ways of solving for the partial pressure of ethane.

The first one is to simply apply the same method used to determine the partial pressure of argon. To put this into practice, you need to determine the number of moles of ethane present in the mixture. Since you've calculated that the total number of moles is 0.0511, and that you have 0.0300 moles of argon, the difference between these numbers will be the number of moles of ethane:

$$n_{\text{ethane}} = n_{\text{total}} - n_{\text{argon}} = 0.0511 - 0.0300 = \text{0.0211 moles}$$

This means that the partial pressure of ethane will be

$$P_{\text{ethane}} = \chi_{\text{ethane}} \cdot P_{\text{total}}$$

$$P_{\text{ethane}} = \frac{\text{0.0211 moles}}{\text{0.0511 moles}} \cdot \text{1.25 atm} = \text{0.516 atm}$$

The second and quicker way of finding the partial pressure of ethane is to use Dalton's law of partial pressures. According to this law, the total pressure exerted by a gas mixture occupying a certain volume is equal to the sum of the partial pressures each gas would exert if it alone occupied that same volume.

$$P_{\text{total}} = P_{\text{ethane}} + P_{\text{argon}}$$

$$P_{\text{ethane}} = P_{\text{total}} - P_{\text{argon}} = (1.25 - 0.734)\ \text{atm} = \text{0.516 atm}$$
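Both routes in the answer above reduce to a few lines of arithmetic. A minimal Python sketch (variable names are illustrative) confirms they agree:

```python
# Given quantities from the worked answer above.
n_total = 0.0511   # total moles in the mixture
n_argon = 0.0300   # moles of argon
p_total = 1.25     # total pressure, atm
p_argon = 0.734    # partial pressure of argon, atm

# Route 1: mole fraction of ethane times total pressure.
n_ethane = n_total - n_argon
p_ethane_1 = (n_ethane / n_total) * p_total

# Route 2: Dalton's law, subtracting argon's partial pressure.
p_ethane_2 = p_total - p_argon

print(round(p_ethane_1, 3), round(p_ethane_2, 3))  # both ~0.516 atm
```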