https://www.gradesaver.com/textbooks/science/physics/fundamentals-of-physics-extended-10th-edition/chapter-7-kinetic-energy-and-work-problems-page-172/22a | ## Fundamentals of Physics Extended (10th Edition)
We know that: $W_1-mgd=\Delta K_1=\frac{1}{2}mv_1^2$. This can be rearranged as: $W_1=mgd+\frac{1}{2}mv_1^2$. We plug in the known values to obtain: $W_1=80.0(9.8)(10.0)+\frac{1}{2}(80.0)(5.00)^2=8.84\times 10^3\ \mathrm{J}$.
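A quick numerical check of this arithmetic (a minimal Python sketch; the mass, distance, and speed are the values used in the solution above, in SI units):

```python
# Work-energy check: W1 = m*g*d + (1/2)*m*v1^2
m = 80.0    # kg, mass being lifted
g = 9.8     # m/s^2, gravitational acceleration
d = 10.0    # m, vertical distance
v1 = 5.00   # m/s, final speed

W1 = m * g * d + 0.5 * m * v1**2
print(f"W1 = {W1:.3e} J")   # -> 8.840e+03 J, matching 8.84 x 10^3 J
```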
https://www.maa.org/programs/faculty-and-departments/classroom-capsules-and-notes/browse?page=14 | # Browse Classroom Capsules and Notes
You can filter the list below by selecting a subject category from the drop-down list below, for example by selecting 'One-Variable Calculus'. Then click the 'APPLY' button directly, or select a subcategory to further refine your results.
Displaying 141 - 150 of 1211
A visual proof of the ordering of the means in the title is presented.
For what positive number $x$ is the $x$-th root of $x$ the greatest?
Bernoulli's inequality is presented visually.
An arctangent identity is presented visually.
The author presents a simple proof of a special case of Dirichlet's celebrated theorem on primes in arithmetic progressions.
Descriptions of the behavior of a family of functions
Solving an expected value problem without using geometric series
In the game of tennis, if the probability that player $A$ wins a point against player $B$ is a constant value $p$, then the probability that $A$ will win a game from deuce is \(p^2/(1 - 2p...
Solving Pell Equations using Fibonacci-like sequences
The author uses elementary geometric methods to calculate the fraction of the area of a soccer ball covered by pentagons.
http://tex.stackexchange.com/questions/553/what-packages-do-people-load-by-default-in-latex?answertab=active | # What packages do people load by default in LaTeX?
I'm getting the impression from reading the answers written by some of the real experts here that there are quite a few little packages that just tweak LaTeX2e's default behaviour a little to make it more sensible here and there.
Rather than try to pick these up one by one as I read answers to questions (and thus risk missing them), I thought I'd ask up front what LaTeX2e packages people load by default in (almost) every document.
As this is a "big list" question, I'm making it CW. I don't know if there are standard rules across all SE/SO sites for such questions, but on MathOverflow the rule is generally: one thing (in this case, package) per answer. I guess that if a couple of packages really do go together then it would be fine to group them.
This is perhaps a little subjective and a little close to the line, so I'll not be offended if it gets closed or voted down! (But please explain why in the comments.)
Also see our community poll question: “I have used the following packages / classes”
-
Personally, I'd find a single list, separated by headings (Ex. Format, Math, Bib,Images, Other for this question), with a list of everyone's packages and how they're different from other packages in the section much more readable and useful. That amsmath is the highest voted just says that the MO community is here in full force. The less-known, but equally relevant formatting packages linked by Vivi, Joseph, and András are invisible without a lot of scrolling and reading. – Kevin Vermeer Jul 29 '10 at 22:37
I think the list of one package per answer is a good idea, as we can vote on individual packages... – Amir Rachum Jul 30 '10 at 11:30
I just discovered the xparse package. It lets you define more flexible macros with more than one optional argument. I used it to make a very general partial derivative function.
\usepackage{xparse}
\DeclareDocumentCommand{\pder}{ O{} O{} m }{\frac{\partial^{#2}#1}{\partial#3^{#2}}}
Example:
\pder{x} will give you $\frac{\partial}{\partial x}$
\pder[f]{x} will give you $\frac{\partial f}{\partial x}$
\pder[f][3]{x} will give you $\frac{\partial^{3} f}{\partial x^{3}}$
-
When using class book, I always load package emptypage.
It needs no particular skill since it doesn't introduce any new command to use, it removes headers and footers from empty pages at the end of chapters just by adding \usepackage{emptypage} in your preamble.
The default option is odd.
-
Nothing surprising here: I use natbib, hyperref and hypernat together.
Natbib for referencing.
Hyperref adds bookmarks for sections and lists and turns references and urls into links.
Hypernat allows natbib and hyperref to work together. -- Note (added 2015/02/11): natbib and hyperref have been working together just fine for at least ten years. hypernat is no longer needed for any TeX distribution with a vintage more recent than ca 2002.
-
I'm pretty sure that hypernat is superfluous these days. With only loading natbib and hyperref I get references as [1-5] with both 1 and 5 being hyperlinks. – Lev Bishop Aug 8 '10 at 14:51
And? Was it superfluous in 2010? Is it now? ;) – K.-Michael Aye Nov 23 '12 at 5:18
@K.-MichaelAye - hypernat was superfluous (and potentially troublesome) back in 2010 and in 2012, and it continues to be superfluous as of 2015. – Mico Feb 11 at 21:13
First line of the document should be
\RequirePackage{fixltx2e}
\documentclass{...}
, which fixes a few things in the LaTeX2e kernel.
Due to LaTeX's stability policy, these corrections have not been incorporated into the LaTeX2e kernel, but this package does things most people would agree are bugfixes. So loading this package is always recommended for newly created documents. The corrections have no commonalities, but the package's description has a nice summary:
• ensure one-column floats don't get ahead of two-column floats;
• correct page headers in twocolumn documents;
• stop spaces disappearing in moving arguments;
• allowing \fnsymbol to use text symbols;
• allow the first word after a float to hyphenate;
• \emph can produce caps/small caps text;
• fix bugs in \setlength and flushbottom.
-
No one has mentioned tabulary.
Sometimes I make tables with multiline cells in several columns, where the total width must be exactly \textwidth. Using tabular with p{} columns here is a pain, since one must take \tabcolsep into account.
For this, the sibling package tabularx (cited in another answer) can do a good job (X columns take all the available space), but I often need columns weighted according to the amount of text and with different alignments, whereas the X columns of tabularx share that space equally.
Instead, tabulary allows the use of L, C, R and J columns of automatic variable width. A column layout such as LLCRL does not always produce the desired result, but since it is possible to mix L, C, R columns with the basic types (l, r, c, p{}, m{}, ...), finding the best fit (e.g., something like Lcp{5em}RL) is child's play.
-
I usually use the relsize package. It's easy to use: it changes the font size of part of your text. Just type \relsize{x}, where x is the number of steps you want to move through the hierarchy of font sizes.
-
\usepackage{mciteplus}
Allows you to combine multiple references: \cite{refa, *refb, *refc} will produce one reference with refa, refb, and refc combined (if they are not used independently elsewhere).
-
BTW: natbib supports this feature too. See p.19 in the documentation mirrors.ctan.org/macros/latex/contrib/natbib/natbib.pdf – amorua May 2 '12 at 1:24
One package that’s really general purpose is nag: It doesn’t do anything, per se, it just warns when you accidentally use deprecated LaTeX constructs from l2tabu (English / French / German / Italian / Spanish documentation).
From the documentation:
Old habits die hard. All the same, there are commands, classes and packages which are outdated and superseded. nag provides routines to warn the user about the use of those. As an example, we provide an extension that detects many of the “sins” described in l2tabu.
Therefore, I now always have the following in my header (before the \documentclass, thanks qbi):
\RequirePackage[l2tabu, orthodox]{nag}
It’s a bit like having use strict; in Perl: a useful best practice.
-
Somewhat better is \RequirePackage[l2tabu,orthodox]{nag} before \documentclass. The package docu also recommends this. – qbi Jul 29 '10 at 18:40
This package sounds useful. However, when I tested it with a large project, I started to get the message "Label(s) may have changed. Rerun to get cross-references right." no matter how many times I re-run Latex. – Jukka Suomela Jul 31 '10 at 9:36
Congrats on getting yet one more "great" answer! – Mico Jan 18 at 19:12
For citations and bibliographies, biblatex is the package of my choice. Key points:
• biblatex includes a wide variety of built-in citation/bibliography styles (numeric, alphabetic, author-year, author-title, verbose [full in-text-citations], with numerous variants for each one). A number of custom styles have been published.
• Modifications of the built-in or custom styles can be accomplished using LaTeX macros instead of having to resort to the BibTeX programming language.
• biblatex offers well-nigh every feature of other bibliography-related LaTeX packages (e.g. multiple/subdivided bibliographies, sorted/compressed citations, entry sets, ibidem functionality, back references). If a feature is not included, chances are high it is on the package authors' to-do list.
• The babel package is supported, and biblatex comes with localization files for about a dozen languages (with the list still growing).
• Although the current version of biblatex (2.8a) still allows to use BibTeX as a database backend, by default it cooperates with Biber which supports bibliographies using Unicode. Biber (currently at version 1.8) is included in TeX Live and MiKTeX. Many features introduced since biblatex 1.1 (e.g., advanced name disambiguation, smart crossref data inheritance, configurable sorting schemes, dynamic datasource modification) are "Biber only".
-
Nevertheless one should append about the usage of biblatex that some papers do not accept its usage. See: Biblatex: submitting to a journal – strpeter Jan 16 '14 at 9:25
I almost always load microtype. It plays with ever-so-slightly shrinking and stretching of the fonts and with the extent to which text protrudes into the margins in a way that yields results that look better, that have fewer instances of hyphenation, and fewer overfull hboxes. It doesn't work with latex, you have to use pdflatex instead. It also works with lualatex and (protrusion only) with xelatex.
-
You may want to use \usepackage[stretch=10]{microtype}, which allows font expansion up to 1% (default is 2%). – lockstep Aug 6 '10 at 12:03
Can we have an example of with versus without? – levesque Nov 15 '10 at 18:28
there's a nice example in the documentation for microtype mirror.ctan.org/macros/latex/contrib/microtype/microtype.pdf, though it requires adobe acrobat for the inline examples – Noah Aug 12 '11 at 22:37
Here is another example. – Juri Robl Oct 11 '12 at 11:13
The only texts for which I don't use microtype are those set raggedright. It seems to maximally stretch practically all lines. In any case, ragged2e then becomes the must include package. – Christian Jun 27 '13 at 16:41
I always end up loading the same packages, some of which were suggested by some answers to this question, such as hyperref, amsmath, nag, etoolbox, xparse, and others.
I created a style file latexdev.sty that I use in almost all my notes and publications, which loads all these standard packages:
https://github.com/olivierverdier/latexdev
-
I include: \usepackage{outlines} in my preamble. outlines is a quick and easy way to generate hierarchically embedded lists. Especially useful when I'm drafting up a paper (I like to outline it) or if I'm quickly typing up notes, e.g., at a conference.
-
\usepackage[parfill]{parskip}
I much prefer no indentation and space between paragraphs, so the parskip package is a must for me!
-
Have a look at the KOMA-Script-classes - they include a parskip option that is more powerful than the package of the same name. – lockstep Aug 8 '10 at 17:39
Since my files nowadays have UTF-8 character encoding, I use this
\usepackage[utf8]{inputenc}
-
XeLaTeX or LuaLaTeX would be my choice for this – Joseph Wright Aug 15 '10 at 13:05
Isn't it \usepackage[utf8x]{inputenc}? – Olivier Jul 19 '11 at 8:17
I've experienced several cases where utf8x had a symbol that utf8 hadn't – Mog Nov 24 '12 at 11:47
@Olivier: utf8 is LaTeX base, while utf8x comes from the ucs package. So utf8 is portable. – Martin Schröder Jun 27 '13 at 14:39
I always use \usepackage[utf8]{inputenx} instead. – Sveinung Jan 13 '14 at 16:03
I'm not just feigning surprise when I say I'm shocked that such an incredibly useful package set as xparse/expl3 (the latter is loaded by the former) hasn't been mentioned yet. I invariably find myself typing:
\documentclass{article}
\usepackage{xparse}
to begin a document.
-
So, what does it do? – fifaltra Dec 24 '13 at 0:32
with xparse, one can define commands and environments with multiple optional arguments before, between, and after mandatory arguments. Several new type of arguments can be defined, starred commands, and much more. – Michael P May 7 '14 at 10:17
As long as this list is, minted is missing. For code syntax highlighting it works really well and includes the long list of languages of pygments. The pieces of code end up looking like this:
\begin{minted}{language}
code
\end{minted}
In Beamer it requires frames to be marked as [fragile], and it takes some skill to set it up on Windows. But the results are well worth the effort.
-
@Christian: the main difference is that you can tap directly into pygments, which is a (very) well maintained source for syntax colouring for many languages and is used in many places other than LaTeX. There is a full discussion on the differences between lstlisting and minted here: tex.stackexchange.com/questions/102596/…, – FvD Jun 28 '13 at 13:17
For papers on the arXiv (maths, physics and computer science mostly) there's a list of packages sorted by frequency of use.
The top twenty packages are:
1. article
2. graphicx
3. amssymb
4. amsmath
5. revtex
6. revtex4
7. epsfig
8. amsfonts
9. bm
10. latexsym
11. amsart
12. dcolumn
13. amsthm
14. graphics
15. aastex
16. amscd
17. epsf
18. color
19. aa
20. times
-
That list is literally pain to my eyes. Loading bm?! Use proper bold math characters instead, please, and not poorman's bold. times? Outdated since ages, use mathptmx or XITS Math instead. I'll stop here... – Ingo Jan 30 '14 at 11:46
This has been mentioned in some of the “big answers”, but thought it deserved special attention. Probably most documents should include:
\usepackage[T1]{fontenc}
This is to resolve some deficiencies and inconsistencies of the default OT1 font encoding; while improving the support of special characters (e.g. the ability to copy&paste from the generated pdf document).
-
Another package I use is float. It allows for the placement H for floats, which is somewhat equivalent to h!, but a bit stronger, making sure the figure or table goes exactly where I want it to be.
-
Actually not equivalent to h! at all. h! floats still "float"- they can be moved around by LaTeX in an attempt to optimize the document layout. Figures using the H specifier are not floats at all, they are treated like one big character and are put exactly where they appear in the text. – Sharpie Aug 1 '10 at 3:59
pageslts: for being able to refer to the last page of a document
-
\usepackage{fancyvrb}
I use it for highly customisable verbatim. The abstract of the package documentation reads:
This package provides very sophisticated facilities for reading and writing verbatim TeX code. Users can perform common tasks like changing font family and size, numbering lines, framing code examples, colouring text and conditionally processing text.
Here's an example using the SaveVerbatim environment in combination with the \fcolorbox command:
-
I almost always use the enumitem package, which makes it much easier to make modifications to lists (especially enumerate lists). Most notably, changing the labels to something like (i), (ii), (iii) [no period] with this package is as easy as
\begin{enumerate}[label=(\roman*)]
\item The first item
\item The second item
\end{enumerate}
Furthermore, the code above will automatically get nesting right. Before I started using this package, my preamble always included the awkward macro (necessary to change the references and eliminate the extra period in the list itself)
\newcommand{\setenumroman}{%
\renewcommand{\theenumi}{(\roman{enumi})}%
\renewcommand{\labelenumi}{\theenumi}%
}
which would break if I ever used it for a nested list (all the enumis would have to be changed to enumiis, if I understand correctly).
The enumitem package is quite flexible; another option I sometimes use is [wide], which makes a list look like part of the body of the text (with numbers/labels at the beginning of relevant paragraphs).
-
I also find package lipsum fun to use. It lets you generate several versions of lorem ipsum placeholder text to see what your document would look like.
-
For the natural scientists among us, the package mhchem makes it very easy to typeset chemical symbols and chemical equations.
-
I always load the package xy to produce diagrams.
Also tikz to draw figures.
-
I use tikz-cd to get commutative diagrams drawn with tikz with a syntax highly reminiscent of the xy syntax. – Charles Staats Dec 6 '12 at 3:22
Very often a requirement for the documents I write is that the font should be Times (or Times New Roman), so the package I use to set the main roman font to Times and acceptable math is mathptmx.
Recently, I have experimented with newtxtext and newtxmath but, personally, I do not like the design of some symbols and there are a few cases where the spacing between characters is too tight.
For personal use I set the font to New Century Schoolbook and Fourier (for math) with the fouriernc package.
-
I always use
\usepackage[retainorgcmds]{IEEEtrantools} % sophisticated equation arrays
It offers a sophisticated environment for formatting equation arrays, IEEEeqnarray, and also offers a few other constructions. I don't use the traditional eqnarrays any more. I usually set the option [retainorgcmds] because it prevents the package from overwriting the itemize, enumerate and description definitions.
Check out How to Typeset Equations in LaTeX. The author gives some good examples of how and why to use this package instead of the traditional ones. The Not So Short Introduction to LaTeX 2ε also mentions the package in section 3.5.2. This section actually seems to be a copy of the first link ;)
-
The following command before the \documentclass command permits Computer Modern fonts at arbitrary sizes: \RequirePackage{fix-cm}.
-
Edited by doncherry: Removed packages mentioned in separate answers.
Part of my header for most of my documents looks as follows:
\documentclass[ngerman,draft,parskip=half*,twoside]{scrreprt}
\usepackage{ifthen}
For some things I need if-then-constructs. This package provides an easy way to realise it.
\usepackage{index}
For generating an index.
\usepackage{xcolor}
xcolor is needed by several packages. For some historical reason I load it manually.
\usepackage{babel}
\usepackage{nicefrac}
nicefrac allows typesetting fractions like 1/2. It is sometimes more readable than \frac.
\usepackage[T1]{fontenc}
\usepackage[intlimits,leqno]{amsmath}
\usepackage[all,warning]{onlyamsmath}
This package warns if non-amsmath-environments are used.
\usepackage{amssymb}
\usepackage{fixmath}
Provides ISO-conformant Greek letters.
\usepackage[euro]{isonums}
Defines comma as decimal delimiter.
\usepackage[amsmath,thmmarks,hyperref]{ntheorem}
for Theorems, definitions and stuff.
\usepackage{paralist}
Improves enumerate and itemize. Also provides some compact environments.
\usepackage{svn}
I work with version control, and svn displays some information (keywords) from SVN.
\usepackage{ellipsis}
corrects \dots
\DeclarePairedDelimiter{\abs}{\lvert}{\rvert}
\DeclarePairedDelimiter{\norm}{\lVert}{\rVert}
These are the definitions for absolute value and norm.
\SVN $LastChangedRevision$
\SVN $LastChangedDate$
-
"one thing (in this case, package) per answer" – Jukka Suomela Jul 29 '10 at 19:02
Could you break this up into multiple answers please, so they can be voted on? Having a dozen answers is ok! – ShreevatsaR Jul 30 '10 at 14:41
It is usually recommended to load hyperref last. – Alex Hirzel May 1 '12 at 20:20
Edited by doncherry: Removed packages mentioned in separate answers.
I use TeX for a variety of documents: research papers, lectures/tutorials, presentations, miscellaneous documents (some in Japanese). Each of these different uses requires different packages.
Depending on my mood, I like to use different fonts. A particular nice combination for mathematics papers is
\usepackage[T1]{fontenc} % better treatment of accented words
\usepackage{eulervm} % Zapf's Euler fonts
\usepackage{tgpagella} % TeXGyre Pagella fonts
For references,...
\usepackage[notref,notcite]{showkeys} % useful when writing the paper
\usepackage[noadjust]{cite} % [1,2,3,4,5] --> [1-5] useful in hep-th!
For lecture notes (again mathematical) I often like to section the document into "lectures" instead of sections and to add some colours to the titles,.... To do this it's useful to use
\usepackage{fancyhdr} % fancy headers
\usepackage{titlesec} % to change how sections are displayed
\usepackage{color} % to be able to do this in colour
and I also like to decorate using some silly glyphs, for which these fonts are useful:
\usepackage{wasysym,marvosym,pifont}
and also box equations and other things
\usepackage{fancybox,shadow}
\usepackage[rflt]{floatflt}
\usepackage{graphicx,subfigure,epic,eepic}
You may want to hide the answers to tutorial exercises, problems,... and this can be achieved with
\usepackage{version,ifthen} % ifthen allows controlling exclusions
I use XeLaTeX for documents containing Japanese, which works better with
\usepackage{fontspec} % makes it very easy to select fonts in XeLaTeX
\usepackage{xunicode} % accents
-
As the question suggested, could you write an answer per package/topic and explain what these packages do or why do you need them? – Juan A. Navarro Jul 29 '10 at 10:51
can you please add comments like \ usepackage{foo} % to get following features within your code? – Dima Jul 29 '10 at 11:06
To avoid breaking them up all the way, you could try grouping them a little (say, if there's one package that you wouldn't consider using without another one then put them together). – Loop Space Jul 29 '10 at 13:04
https://math.stackexchange.com/questions/1078450/maps-of-primitive-vectors-and-conways-river-has-anyone-built-this-in-sage | # Maps of primitive vectors and Conway's river, has anyone built this in SAGE?
I am attempting to teach number theory from John Stillwell's Elements of Number Theory in the upcoming semester. There are two sections (5.7 and 5.8) which describe the diagrammatic method for the derivation of primitive vectors, which ultimately leads to a healthy understanding of the values which the quadratic form $x^2-ny^2$ may attain for fixed $n$ and integers $x,y$. The "river" is a particular path in this "tree of integral bases" which separates positive and negative values for the quadratic form. Here is an example: an example from David Vogan of MIT. To be fair, there is a good discussion in Stillwell; my question is simply this:
Has anyone implemented a routine, command, etc. which produces some part of the integral tree of bases or the more interesting diagrams as shown in section 5.8 of Stillwell?
I'm more inclined to cover it if I can create examples without falling prey to the inevitable arithmetic mistakes I will make in the creation of such a diagram. Also, for the homework, it would really be nice for them to be able to play around with it without investing too much time.
• I'm not entirely convinced the "trees" tag belongs, so, if someone from the forest feels otherwise, feel free to chop it down. – James S. Cook Dec 23 '14 at 5:33
• This is not in Sage. But I am familiar with the text you mention, and it would be a neat addition. – kcrisman Dec 24 '14 at 2:35
• did you ever find program to draw the diagrams? – Bob Woodley Apr 19 '18 at 0:17
• @BobWoodley I did not find such a program, although, a student of mine made one... then left... I'm not sure if he finished debugging it. I think it's an open problem to do nicely. – James S. Cook Apr 19 '18 at 19:41
EDIT: I think I should emphasize that I have no graphics program for this and am not competent to make one. The diagrams below were done by hand, then scanned on my one-page home scanner as jpegs; those seem to work better on MSE than pdf's. My programs give a good idea how the diagram ought to look, also eliminate simple arithmetic errors; however, a user needs to read some rather cryptic output and then draw the diagram.
ORIGINAL: Not Sage, but I have written several programs either using or helping to draw the river for a Pell form. First, I put four related excerpts at http://zakuski.utsa.edu/~jagy/other.html with prefix indefinite_binary. Second, the book by Conway that introduced this diagram is available at http://www.maths.ed.ac.uk/~aar/papers/conwaysens.pdf and for sale as a real book.
Especially for Pell forms, I have come to prefer a hybrid diagram, one that emphasizes the automorphism group of the form $x^2 - n y^2.$ See recent answer at Proving a solution to a double recurrence is exhaustive and, in fact, many earlier answers.
I can tell you that actually drawing these things is what explains them... Conway deliberately leaves out the automorphisms; he wanted a brief presentation, I guess. I really wanted to include that and show how the diagram displays the generator of that group. Also discussed in many number theory books, including my favorite, Buell.
You are welcome to email me, gmail is better (click on my profile and go to the AMS Combined Membership Listings link). I have many diagrams, programs in C++, what have you.
Here is the simpler of two diagrams I did for $x^2 - 8 y^2.$ All I mean by the automorphism group is the single formula $$(3x+8y)^2 - 8 (x+3y)^2 = x^2 - 8 y^2,$$ with the evident visual column vector $(3,1)^T$ giving a form value of $1$ and the column vector $(8,3)^T$ directly below it giving a form value of $-8,$ thus replicating the original form.
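For readers who want to check the displayed identity symbolically, here is a minimal sketch (assuming the sympy library; the map $(x,y)\mapsto(3x+8y,\,x+3y)$ is read off from the column vectors mentioned above):

```python
import sympy as sp

x, y = sp.symbols('x y')
# Verify that the map (x, y) -> (3x + 8y, x + 3y) preserves the form x^2 - 8y^2
lhs = (3*x + 8*y)**2 - 8*(x + 3*y)**2
print(sp.expand(lhs))                            # x**2 - 8*y**2
print(sp.simplify(lhs - (x**2 - 8*y**2)) == 0)   # True
```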
This is another pretty recent one, for the very similar $x^2 - 2 y^2,$ where I was emphasizing finding all solutions to $x^2 - 2 y^2 = 7,$ and how there is more than one "orbit" of the automorphism group involved, i.e. every other pair...
Well, why not. One should be aware that the Gauss-Lagrange method of cycles of "reduced" forms is part of the topograph; in fact one such cycle is the exact periodicity of Conway's river. Reduced forms, that is $a x^2 + b xy + c y^2$ with $ac < 0$ and $b > |a+c|,$ occur at what Weissman calls "riverbends," where the action switches sides of the river. Anyway, all the following information is automatically part of the diagram for $x^2 - 13 y^2.$ As a result, the diagram is quite large; it took me two pages. Generate solutions of Quadratic Diophantine Equation
jagy@phobeusjunior:~/old drive/home/jagy/Cplusplus$ ./Pell 13

 0  form    1  6  -4   delta  -1
 1  form   -4  2   3   delta   1
 2  form    3  4  -3   delta  -1
 3  form   -3  2   4   delta   1
 4  form    4  6  -1   delta  -6
 5  form   -1  6   4   delta   1
 6  form    4  2  -3   delta  -1
 7  form   -3  4   3   delta   1
 8  form    3  2  -4   delta  -1
 9  form   -4  6   1   delta   6
10  form    1  6  -4

 disc 52

Automorph, written on right of Gram matrix:
109   720
180  1189

Pell automorph
649  2340
180   649

Pell unit
649^2 - 13 * 180^2 = 1

=========================================
Pell NEGATIVE
18^2 - 13 * 5^2 = -1
=========================================
4 PRIMITIVE
11^2 - 13 * 3^2 = 4
=========================================
-4 PRIMITIVE
3^2 - 13 * 1^2 = -4
=========================================

• Thanks Will. This answer will be very useful to those students who wish to dig further into this. We may have to wait a few years before we can just tell them the words they wish to hear "here's the ap for that". Well, maybe the students we seek never will say that, I will keep the email in mind. Thanks! – James S. Cook Dec 24 '14 at 4:21

I was surprised to learn, recently, that a simple idea (made up, it appears, by Tito Piezas, without him knowing it was a neologism) allowed me to get the information in Conway's topograph with a fairly simple computer program, as long as what I wanted was to guarantee finding all solutions $(x,y)$ to $ax^2 + bxy+ cy^2 = n$ with integers $x,y > 0$ and $a,b,c,n$ fixed, $b^2 - 4ac>0$ but not a square.

It seems the first answer where I displayed this material was If $d>1$ is a squarefree integer, show that $x^2 - dy^2 = c$ gives some bounds in terms of a fundamental solution. and has good explanations, while Tito's comments began with Does the Pell-like equation $X^2-dY^2=k$ have a simple recursion like $X^2-dY^2=1$? in the thread Does the Pell-like equation $X^2-dY^2=k$ have a simple recursion like $X^2-dY^2=1$?

There are a few programs involved for me, as one must tell the final program the fundamental solution to a relevant Pell equation. Anyway, you are welcome to the programs. For me, it means I can give all solutions, then draw the topograph when time permits.

My first program on this is restricted to $x^2 - d y^2 = t,$ and it must be told the "fundamental" solution to $x^2 - d y^2 = 1.$ The output below are the two problems for which I posted topograph diagrams in my earlier answer:

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

jagy@phobeusjunior:~/old drive/home/jagy/Cplusplus$ ./Pell 8
Pell unit
3^2 - 8 * 1^2 = 1
=========================================
jagy@phobeusjunior:~$ ./Pell_Target_Fundamental
3^2 - 8 1^2 = 1

x^2 - 8 y^2 = -7

Fri Apr 29 11:15:41 PDT 2016

x: 1 y: 1 ratio: 1 SEED
x: 5 y: 2 ratio: 2.5 SEED
x: 11 y: 4 ratio: 2.75
x: 31 y: 11 ratio: 2.818181818181818
x: 65 y: 23 ratio: 2.826086956521739
x: 181 y: 64 ratio: 2.828125
x: 379 y: 134 ratio: 2.828358208955223
x: 1055 y: 373 ratio: 2.828418230563003
x: 2209 y: 781 ratio: 2.82842509603073
x: 6149 y: 2174 ratio: 2.828426862925483
x: 12875 y: 4552 ratio: 2.828427065026362
x: 35839 y: 12671 ratio: 2.828427117038907
x: 75041 y: 26531 ratio: 2.828427122988202
x: 208885 y: 73852 ratio: 2.828427124519309
x: 437371 y: 154634 ratio: 2.82842712469444
x: 1217471 y: 430441 ratio: 2.828427124739511
x: 2549185 y: 901273 ratio: 2.828427124744667
x: 7095941 y: 2508794 ratio: 2.828427124745993
x: 14857739 y: 5253004 ratio: 2.828427124746145

Fri Apr 29 11:16:01 PDT 2016

x^2 - 8 y^2 = -7
jagy@phobeusjunior:~$
jagy@phobeusjunior:~/old drive/home/jagy/Cplusplus$ ./Pell 2
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Pell unit
3^2 - 2 * 2^2 = 1
=========================================
jagy@phobeusjunior:~$ ./Pell_Target_Fundamental
3^2 - 2 2^2 = 1
x^2 - 2 y^2 = 7
Fri Apr 29 11:20:19 PDT 2016
x: 3 y: 1 ratio: 3 SEED
x: 5 y: 3 ratio: 1.666666666666667 SEED
x: 13 y: 9 ratio: 1.444444444444444
x: 27 y: 19 ratio: 1.421052631578947
x: 75 y: 53 ratio: 1.415094339622641
x: 157 y: 111 ratio: 1.414414414414414
x: 437 y: 309 ratio: 1.414239482200647
x: 915 y: 647 ratio: 1.414219474497681
x: 2547 y: 1801 ratio: 1.414214325374792
x: 5333 y: 3771 ratio: 1.41421373640944
x: 14845 y: 10497 ratio: 1.414213584833762
x: 31083 y: 21979 ratio: 1.414213567496246
x: 86523 y: 61181 ratio: 1.414213563034275
x: 181165 y: 128103 ratio: 1.414213562523907
x: 504293 y: 356589 ratio: 1.414213562392558
x: 1055907 y: 746639 ratio: 1.414213562377534
x: 2939235 y: 2078353 ratio: 1.414213562373668
x: 6154277 y: 4351731 ratio: 1.414213562373226
Fri Apr 29 11:20:39 PDT 2016
x^2 - 2 y^2 = 7
jagy@phobeusjunior:~\$
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
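The solution list for $x^2 - 2y^2 = 7$ above can be reproduced with a few lines of code (a minimal sketch, not the C++ program referred to earlier; it simply applies the automorphism $(x,y)\mapsto(3x+4y,\,2x+3y)$ of $x^2-2y^2$, which comes from the Pell unit $3^2-2\cdot 2^2=1$, to the two SEED solutions):

```python
# Generate solutions of x^2 - 2 y^2 = 7 by repeatedly applying the
# automorphism (x, y) -> (3x + 4y, 2x + 3y), which preserves x^2 - 2y^2.
seeds = [(3, 1), (5, 3)]          # the two SEED solutions from the output above

solutions = list(seeds)
while len(solutions) < 12:
    x, y = solutions[-2]          # advance the older of the two orbits, so they interleave
    solutions.append((3*x + 4*y, 2*x + 3*y))

for x, y in solutions:
    assert x*x - 2*y*y == 7       # sanity check
    print(f"x: {x:6d}  y: {y:6d}  ratio: {x/y:.15f}")
# Output matches the listing above: (3,1), (5,3), (13,9), (27,19), (75,53), ...
```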
http://teledynelecroy.com/doc/power-real-and-apparent | ### Basic Power Line Measurements
Oscilloscopes measure current and voltage and, through the magic of mathematics, calculate power. Unfortunately, power comes in a large number of guises: instantaneous, real, apparent, and reactive. This plethora of power terms often leads to confusion. The Power Analysis software package simplifies these measurements and eliminates the necessity of setting up the proper math operations.
Oscilloscopes, whether analog or digital, are voltage responding instruments. Current is measured using a suitable transducer, usually a current probe or resistive shunt. The oscilloscope display is the instantaneous function of voltage or current vs. time. The product of these quantities is instantaneous power.
A basic line power measurement is shown in Figure 1.
#### Figure 1:
The elements of a power measurement (instantaneous voltage, current, and power) show on an HDO 6000 oscilloscope equipped with the Power Analysis option. Real and apparent power are automatically computed and displayed
The product of the instantaneous voltage (channel 1) and current (channel 2) is the instantaneous power shown in the lower, line power trace. Note that the power waveform consists of a waveform at twice the frequency of the current or voltage, with a DC offset. This DC offset represents the average power being delivered to the load. The average or real power, represented by the symbol P, is measured in units of Watts. In Figure 1 the real power is determined automatically by determining the mean or average value of the instantaneous power waveform. Real power is displayed as the parameter rpwr and has the value 25.11 W in this example.
The product of the effective (rms) current and effective (rms) voltage is called the apparent power. Apparent power is represented by the symbol S and is measured in units of Volt-Amps (VA). In our example above the apparent power is:
$$S = 120.59 \times 0.328 = 39.6\ \mathrm{VA}$$
Apparent power is automatically computed and displayed as the parameter apwr. For resistive loads, the apparent and average powers are equal.
The ratio of average to apparent power is the power factor. In the sinusoidal case, the power factor is equal to the cosine of the phase angle between the current and voltage waveforms. It is more generally computed as the ratio of real to apparent power. In our example the power factor is also computed automatically and displayed using the parameter pf. The value of the power factor is 0.633.
Icrest is the crest factor of the current waveform. Crest factor is the ratio of the peak-to-peak value of the current to the rms value.
The reactive power, N, can be derived from the real and apparent power, using the following equation:
$$N = (S^2 - P^2)^{1/2}$$
The units of reactive power are Volt Amperes Reactive or VAR. Most users have an interest in real power and power factor, so reactive power is not calculated automatically.
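The quantities discussed above can be tied together in a few lines (a minimal Python sketch using the example's rms voltage, rms current, and real power; small rounding differences from the displayed values are expected):

```python
import math

V_rms = 120.59   # V, rms voltage from the example
I_rms = 0.328    # A, rms current from the example
P     = 25.11    # W, real power (mean of the instantaneous power waveform)

S  = V_rms * I_rms            # apparent power, VA  -> about 39.6 VA
pf = P / S                    # power factor        -> about 0.63
N  = math.sqrt(S**2 - P**2)   # reactive power, VAR -> about 30.6 VAR

print(f"S = {S:.1f} VA, pf = {pf:.3f}, N = {N:.1f} VAR")
```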
The Power Analysis software is useful in analyzing line power. It simplifies the determination of real power, apparent power, and power factor by eliminating the need to set up math traces and parameter math. It is even more convenient to use than dedicated line power analyzers. The scope is already on your bench and the answers are only a button push away.
http://math.stackexchange.com/questions/402042/properties-of-quotient-categories | Properties of quotient categories.
Let $\mathcal{A}$ be an abelian category and $\mathcal{C}$ a localizing subcategory in the sense of Gabriel. (A Serre subcategory or "thick" subcategory, such that the quotient functor $T\colon \mathcal{A}\rightarrow\mathcal{A}/\mathcal{C}$ admits a right adjoint, the "section functor".) Then we can form the quotient category $\mathcal{A}/\mathcal{C}$.
Which properties inherits $\mathcal{A}/\mathcal{C}$ from $\mathcal{A}$? To be more precise:
1. If $\mathcal{A}$ has enough injectives (resp. projectives), does $\mathcal{A}/\mathcal{C}$ too? If not, under which conditions?
2. If $A\in \mathcal{A}$ is injective (resp. projective), is it $T(A)$, too? If not, under which conditions?
3. If $A\in \mathcal{A}$ is a cogenerator, is it $T(A)$ too? If not, under which conditions?
4. If $\mathcal{A}$ is complete, is it $\mathcal{A}/\mathcal{C}$ too? If not, under which conditions?
I know that:
1. If $\mathcal{A}$ is cocomplete then so is $\mathcal{A}/\mathcal{C}$. ($T$ is a left adjoint)
2. If $\{U_i\}$ is a set of generators then so is $\{T(U_i)\}$.
3. If $\mathcal{A}$ is AB5 then so is $\mathcal{A}/\mathcal{C}$. ($T$ commutes with limits and one can prove that taking directed limits is exact.)
-
This question continues on mathoverflow.net/questions/132334/… . – archipelago Jun 2 '13 at 11:31
https://openstax.org/books/introductory-business-statistics/pages/12-1-test-of-two-variances | # 12.1Test of Two Variances
Introductory Business Statistics12.1 Test of Two Variances
This chapter introduces a new probability density function, the F distribution. This distribution is used for many applications including ANOVA and for testing equality across multiple means. We begin with the F distribution and the test of hypothesis of differences in variances. It is often desirable to compare two variances rather than two averages. For instance, college administrators would like two college professors grading exams to have the same variation in their grading. In order for a lid to fit a container, the variation in the lid and the container should be approximately the same. A supermarket might be interested in the variability of check-out times for two checkers. In finance, the variance is a measure of risk and thus an interesting question would be to test the hypothesis that two different investment portfolios have the same variance, the volatility.
In order to perform a F test of two variances, it is important that the following are true:
1. The populations from which the two samples are drawn are approximately normally distributed.
2. The two populations are independent of each other.
Unlike most other hypothesis tests in this book, the F test for equality of two variances is very sensitive to deviations from normality. If the two distributions are not normal, or close, the test can give a biased result for the test statistic.
Suppose we sample randomly from two independent normal populations. Let $\sigma_1^2$ and $\sigma_2^2$ be the unknown population variances and $s_1^2$ and $s_2^2$ be the sample variances. Let the sample sizes be $n_1$ and $n_2$. Since we are interested in comparing the two sample variances, we use the F ratio:

$$F = \frac{\left[\dfrac{s_1^2}{\sigma_1^2}\right]}{\left[\dfrac{s_2^2}{\sigma_2^2}\right]}$$

F has the distribution $F \sim F(n_1 - 1,\, n_2 - 1)$

where $n_1 - 1$ are the degrees of freedom for the numerator and $n_2 - 1$ are the degrees of freedom for the denominator.

If the null hypothesis is $\sigma_1^2 = \sigma_2^2$, then the F ratio, test statistic, becomes

$$F_c = \frac{\left[\dfrac{s_1^2}{\sigma_1^2}\right]}{\left[\dfrac{s_2^2}{\sigma_2^2}\right]} = \frac{s_1^2}{s_2^2}$$
The various forms of the hypotheses tested are:
| Two-Tailed Test | One-Tailed Test | One-Tailed Test |
|---|---|---|
| $H_0$: $\sigma_1^2 = \sigma_2^2$ | $H_0$: $\sigma_1^2 \le \sigma_2^2$ | $H_0$: $\sigma_1^2 \ge \sigma_2^2$ |
| $H_1$: $\sigma_1^2 \ne \sigma_2^2$ | $H_1$: $\sigma_1^2 > \sigma_2^2$ | $H_1$: $\sigma_1^2 < \sigma_2^2$ |
Table 12.1
A more general form of the null and alternative hypothesis for a two-tailed test would be:

$$H_0: \frac{\sigma_1^2}{\sigma_2^2} = \delta_0$$

$$H_a: \frac{\sigma_1^2}{\sigma_2^2} \neq \delta_0$$
Where if $\delta_0 = 1$ it is a simple test of the hypothesis that the two variances are equal. This form of the hypothesis does have the benefit of allowing for tests that are more than simple differences and can accommodate tests for specific differences, as we did for differences in means and differences in proportions. This form of the hypothesis also shows the relationship between the F distribution and the $\chi^2$: the F is a ratio of two chi-squared distributions, a distribution we saw in the last chapter. This is helpful in determining the degrees of freedom of the resultant F distribution.
If the two populations have equal variances, then $s_1^2$ and $s_2^2$ are close in value and the test statistic, $F_c = \frac{s_1^2}{s_2^2}$, is close to one. But if the two population variances are very different, $s_1^2$ and $s_2^2$ tend to be very different, too. Choosing $s_1^2$ as the larger sample variance causes the ratio $\frac{s_1^2}{s_2^2}$ to be greater than one. If $s_1^2$ and $s_2^2$ are far apart, then $F_c = \frac{s_1^2}{s_2^2}$ is a large number.
Therefore, if F is close to one, the evidence favors the null hypothesis (the two population variances are equal). But if F is much larger than one, then the evidence is against the null hypothesis. In essence, we are asking if the calculated F statistic, test statistic, is significantly different from one.
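A short simulation sketch of this idea (assuming Python with numpy is available): drawing repeated samples from two normal populations that have the same variance gives variance ratios that scatter around one and follow the F distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2 = 10, 10

# Draw many pairs of samples from populations with the SAME variance
ratios = []
for _ in range(10_000):
    s1 = rng.normal(0, 5, n1).var(ddof=1)   # sample variance of group 1
    s2 = rng.normal(0, 5, n2).var(ddof=1)   # sample variance of group 2
    ratios.append(s1 / s2)

ratios = np.array(ratios)
print(np.median(ratios))        # close to 1, as expected when H0 is true
print(np.mean(ratios > 3.18))   # about 0.05; 3.18 is near the F(9,9) upper 5% point
```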
To determine the critical points we have to find Fα,df1,df2. See Appendix A for the F table. This F table has values for various levels of significance from 0.1 to 0.001 designated as "p" in the first column. To find the critical value choose the desired significance level and follow down and across to find the critical value at the intersection of the two different degrees of freedom. The F distribution has two different degrees of freedom, one associated with the numerator, df1, and one associated with the denominator, df2 and to complicate matters the F distribution is not symmetrical and changes the degree of skewness as the degrees of freedom change. The degrees of freedom in the numerator is n1-1, where n1 is the sample size for group 1, and the degrees of freedom in the denominator is n2-1, where n2 is the sample size for group 2. Fα,df1,df2 will give the critical value on the upper end of the F distribution.
To find the critical value for the lower end of the distribution, reverse the degrees of freedom and divide the F-value from the table into one.
• Upper tail critical value : Fα,df1,df2
• Lower tail critical value : 1/Fα,df2,df1
When the calculated value of F is between the critical values, not in the tail, we cannot reject the null hypothesis that the two variances came from a population with the same variance. If the calculated F-value is in either tail we cannot accept the null hypothesis just as we have been doing for all of the previous tests of hypothesis.
An alternative way of finding the critical values of the F distribution makes the use of the F-table easier. We note in the F-table that all the values of F are greater than one therefore the critical F value for the left hand tail will always be less than one because to find the critical value on the left tail we divide an F value into the number one as shown above. We also note that if the sample variance in the numerator of the test statistic is larger than the sample variance in the denominator, the resulting F value will be greater than one. The shorthand method for this test is thus to be sure that the larger of the two sample variances is placed in the numerator to calculate the test statistic. This will mean that only the right hand tail critical value will have to be found in the F-table.
### Example 12.1
Two college instructors are interested in whether or not there is any variation in the way they grade math exams. They each grade the same set of 10 exams. The first instructor's grades have a variance of 52.3. The second instructor's grades have a variance of 89.9. Test the claim that the first instructor's variance is smaller. (In most colleges, it is desirable for the variances of exam grades to be nearly the same among instructors.) The level of significance is 10%.
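One way to check this example numerically (a sketch, assuming Python with scipy is available; the variances, sample sizes, and significance level are those stated above):

```python
from scipy import stats

s1_sq, s2_sq = 52.3, 89.9   # sample variances of the two instructors' grades
n1, n2 = 10, 10             # each instructor graded the same set of 10 exams
alpha = 0.10

F_c = s1_sq / s2_sq                              # about 0.58
df1, df2 = n1 - 1, n2 - 1

# One-tailed test of H0: sigma1^2 >= sigma2^2 vs H1: sigma1^2 < sigma2^2,
# so the rejection region is the lower tail of F(9, 9).
p_value = stats.f.cdf(F_c, df1, df2)             # roughly 0.22
crit_lo = 1 / stats.f.ppf(1 - alpha, df2, df1)   # lower-tail critical value

print(F_c, p_value, crit_lo)
# p_value > alpha (and F_c is above the lower critical value), so we cannot
# reject the null hypothesis: the data do not support the claim that the
# first instructor's variance is smaller.
```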
Try It 12.1
The New York Choral Society divides male singers up into four categories from highest voices to lowest: Tenor1, Tenor2, Bass1, Bass2. In the table are heights of the men in the Tenor1 and Bass2 groups. One suspects that taller men will have lower voices, and that the variance of height may go up with the lower voices as well. Do we have good evidence that the variance of the heights of singers in each of these two groups (Tenor1 and Bass2) are different?
Tenor1 Bass 2 Tenor 1 Bass 2 Tenor 1 Bass 2
69 72 67 72 68 67
72 75 70 74 67 70
71 67 65 70 64 70
66 75 72 66 69
76 74 70 68 72
74 72 68 75 71
71 72 64 68 74
66 74 73 70 75
68 72 66 72
Table 12.2
https://gust.dev/r/power-ttests | ## Overview
This analysis will perform both independent two-sample t-tests and a paired-sample t-test on R’s sleep dataset. Each test’s confidence interval, p-value, and statistical power will be used to assess the test’s quality and whether or not we can safely reject the null hypothesis.
### Imports
require(pwr)
## Data
The data used in this notebook is from R’s built-in sleep dataset. The data shows the effect of two soporific drugs (increase in hours of sleep compared to control) on 10 patients. There are 3 variable fields:
• extra the amount of extra sleep a patient got
• group which drug they were given
• ID the patient ID
https://stat.ethz.ch/R-manual/R-devel/library/datasets/html/sleep.html
Relevant files:
Gust_INET4061Lab3_R.Rmd and Gust_INET4061Lab3_R.html
sleep
(sleep_wide <- data.frame(
ID=1:10,
group1=sleep$extra[1:10], group2=sleep$extra[11:20]
))
# Pooled Standard Deviation
SDpooled = sqrt((sd(sleep_wide$group1)**2 + sd(sleep_wide$group2)**2)/2)
# Effect Size (Cohen's d)
d = (mean(sleep_wide$group1) - mean(sleep_wide$group2))/SDpooled
## Exploratory Data Analysis
The dataset used is one built-in with R. As provided, it shows two groups concatenated together, thus creating duplicate ID fields. Traditionally, an ID field does not contain repeated values if it can be avoided.
In this instance, the IDs represent the same individual, so sleep_wide was created by splitting by group and reissuing the ID field. Depending on the function call, one variant may be less verbose than the other, and as such, both are used.
### Two Sample t-tests
# Welch t-test
t.test(extra ~ group, sleep) # implicitly assumed: alternative="two.sided", var.equal=FALSE
##
## Welch Two Sample t-test
##
## data: extra by group
## t = -1.8608, df = 17.776, p-value = 0.07939
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -3.3654832 0.2054832
## sample estimates:
## mean in group 1 mean in group 2
## 0.75 2.33
# Using the widened version produces the same result
# t.test(sleep_wide$group1, sleep_wide$group2)
t.test(extra ~ group, sleep, var.equal=TRUE)
##
## Two Sample t-test
##
## data: extra by group
## t = -1.8608, df = 18, p-value = 0.07919
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -3.363874 0.203874
## sample estimates:
## mean in group 1 mean in group 2
## 0.75 2.33
pwr.t.test(n=10, d=d, type="two.sample")
##
## Two-sample t test power calculation
##
## n = 10
## d = 0.8321811
## sig.level = 0.05
## power = 0.4214399
## alternative = two.sided
##
## NOTE: n is number in *each* group
In the case of both Welch’s unequal variance t-test and Student’s t-test, we fail to reject the null hypothesis.
For both tests, the p-value was larger than our designated significance level of p=0.05.
Welch’s: 0.07939, Student’s: 0.07919
Welch’s confidence interval: -3.3654832 0.2054832
Student’s confidence interval: -3.363874 0.203874
From the 95th confidence interval of these tests, we can only say we are 95% confident that the difference between means is between approximately -3.36 and 0.20. Since the interval contains 0, there is not sufficient evidence to claim a difference.
The statistical power is ~0.421. With a power this low, the main conclusion to be drawn is either that our sample of n=10 is too small or that our testing method is flawed. Since we are making a direct comparison between two outcomes of the same individual but not using a paired t-test, the latter reasoning makes sense.
### Paired t-tests
# Sort by group then ID
sleep <- sleep[order(sleep$group, sleep$ID), ]
# Paired t-test
t.test(extra ~ group, sleep, paired=TRUE)
##
## Paired t-test
##
## data: extra by group
## t = -4.0621, df = 9, p-value = 0.002833
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -2.4598858 -0.7001142
## sample estimates:
## mean of the differences
## -1.58
# Resulting values are equivalent to a paired t-test
t.test(sleep_wide$group1 - sleep_wide$group2, mu=0, var.equal = TRUE)
##
## One Sample t-test
##
## data: sleep_wide$group1 - sleep_wide$group2
## t = -4.0621, df = 9, p-value = 0.002833
## alternative hypothesis: true mean is not equal to 0
## 95 percent confidence interval:
## -2.4598858 -0.7001142
## sample estimates:
## mean of x
## -1.58
pwr.t.test(n=10, d=d, type="paired", alternative="two.sided")
##
## Paired t test power calculation
##
## n = 10
## d = 0.8321811
## sig.level = 0.05
## power = 0.6500366
## alternative = two.sided
##
## NOTE: n is number of *pairs*
pwr.t.test(n=10, d=d, type="paired", alternative="less")
##
## Paired t test power calculation
##
## n = 10
## d = -0.8321811
## sig.level = 0.05
## power = 0.7828239
## alternative = less
##
## NOTE: n is number of *pairs*
With a paired t-test, we are now able to reject the null hypothesis:
the p-value is now 2.833e-3, well below the designated significance level of 0.05.
Confidence interval: -2.4598858 -0.7001142
From the 95% confidence interval, we can say we are 95% confident that the true difference between the means lies approximately between -2.46 and -0.70. The interval does not contain 0, so rejecting the null hypothesis is justified.
The statistical power is ~0.650 with a two-sided alternative, an increase over the independent tests, but still lower than the desired 0.80. However, when the alternative is "less", the statistical power increases to ~0.78, meaning that if our null hypothesis were that the mean of group one is not less than that of group two, we would have much greater statistical power. To reliably increase power further, a larger sample size could be used.
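As a rough check (not part of the original output), pwr.t.test can also be asked to solve for the number of pairs needed to reach the conventional 0.80 power level at this effect size:
pwr.t.test(d=d, power=0.80, sig.level=0.05, type="paired", alternative="two.sided")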
### One Sample t-test
t.test(sleep$extra, mu=0)
##
## One Sample t-test
##
## data: sleep$extra
## t = 3.413, df = 19, p-value = 0.002918
## alternative hypothesis: true mean is not equal to 0
## 95 percent confidence interval:
## 0.5955845 2.4844155
## sample estimates:
## mean of x
## 1.54
pwr.t.test(n=10, d=d, type="one.sample")
##
## One-sample t test power calculation
##
## n = 10
## d = 0.8321811
## sig.level = 0.05
## power = 0.6500366
## alternative = two.sided
## Conclusions
This document conducted both independent two-sample t-tests and a paired t-test on R’s sleep dataset. From the analysis of p-values, confidence intervals, and power levels we were able to demonstrate how using independent tests on dependent data leads to flawed results.
Furthermore, we were able to conclude with a relatively high level of confidence that there is a statistically significant difference between group1 and group2, but to be more certain we would need a larger sample size.
Future works may involve the use of a dataset with a larger sample size or expanding the analysis with ANOVA tests and their respective power tests. | 2020-08-03 18:05:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46770381927490234, "perplexity": 2969.3220353538804}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735823.29/warc/CC-MAIN-20200803170210-20200803200210-00459.warc.gz"} |
https://quantumcomputing.stackexchange.com/tags/quantum-turing-machine/hot?filter=year | # Tag Info
## Hot answers tagged quantum-turing-machine
3
As far as we know, yes. This is essentially the Church-Turing thesis. Note that this is not a mathematical result, but more of a definition of what it means to be computable. You can find plenty of discussions about this around. A few notable examples are: What would it mean to disprove Church-Turing thesis? (on cstheory) Extended Church-Turing Thesis [and ...
2
I will address the first two parts based on what I understood so far. The extended Church–Turing thesis or (classical) complexity-theoretic Church–Turing thesis states that "A probabilistic Turing machine can efficiently simulate any realistic model of computation.", whereas the quantum extended Church–Turing thesis or quantum complexity-theoretic ...
2
Taking the questions head on. I'm not sure that original references are very much the point, although there are some. It's not a hard question. The statement is that realistic polynomial time equals what a quantum computer (if you want to be rigorous, say a QTM) can do in polynomial time. The question has been answered many times in QCSE that a quantum ...
2
do we need to come up with completely different quantum-based solutions for such problems, or is there a way to 'interpret' existing algorithms to the quantum domain and still expect some speedup? Generally speaking yes, you need to come up with different algorithms. You cannot simply take a classical algorithm and "quantize it" in a straightforward way. ...
2
There is evidence that quantum coherence and it's role in chemical reactivity is responsible for the magnetic field sensing in migratory birds, the so called avian compass, https://arxiv.org/abs/1206.5946v1. Similar quantum effects and chemistry could very well be occuring and playing a role in the brain, though as far as I know there isn't anything ...
1
As far as we know - and I know, so correct me anyone if there's research to the contrary - the neuron interactions in the brain are well within the classical regime. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5681944/
1
The reason that a quantum computer is faster in same tasks is given by different computational paradigm based on quantum mechanics laws. They mainly exploit superposition (i.e. state of qubit is linear combination of zero state and one state) and quantum entanglement (i.e. two or more qubits are connected and they behave as one system, or in other words ...
1
Regarding the "quantum (non-extended) Church-Turing Thesis," I think this asserts that there is no physical process, like a quasar or some other astronomical woo, that we know could produce a steady supply of qubits all in the same state $\alpha|0\rangle+\beta|1\rangle$, with the property that $\beta^2=\Omega_C$, that is, Chaitin's halting probability. We ...
1
Suppose we are given the $n\times n$ adjacency matrix $M_0$ of graph $G_0$ and $M_1$ of graph $G_1$, and we wish to know whether $G_0\simeq G_1$. It is a folklore result that if we can prepare states: $$\vert\alpha_G\rangle=\sum\limits_{\sigma\in S_n}\vert \sigma (G)\rangle,$$ with $S_n$ being the symmetric group on $n$ elements, we can prepare such a ...
Only top voted, non community-wiki answers of a minimum length are eligible | 2020-08-15 20:27:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6687768697738647, "perplexity": 458.20841213930663}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439741154.98/warc/CC-MAIN-20200815184756-20200815214756-00198.warc.gz"} |
https://www.insideaiml.com/blog/SoftMax%0AActivation-Function-1034 | #### Machine Learning with Python & Statistics
# SoftMax Activation Function
Kajal Pawar
a year ago
In mathematics, the softmax function is also known as softargmax or the normalized exponential function. The SoftMax function can be viewed as a generalization of multiple sigmoid functions. Since the sigmoid function returns values in the range of 0 to 1, which can be treated as probabilities of a data point belonging to a particular class, sigmoid functions are mainly used for binary classification problems.
The SoftMax function, on the other hand, can be used for multiclass classification problems. The SoftMax activation function gives the probability of a data point belonging to each individual class.
In deep learning, the term logits is popularly used for the last neuron layer of the neural network for the classification task which produces raw prediction values as real numbers ranging from [-infinity, +infinity]. — Wikipedia
### What are logits?
Logits are the raw score values produced by the last layer of the neural network, before any activation function is applied to them.
### Why SoftMax function?
The SoftMax function turns logits into probabilities by taking the exponential of each output and then normalizing each number by the sum of those exponentials, so that the entire output vector adds up to one.
The equation of the SoftMax function can be given as:
$$\sigma(\mathbf{z})_i = \frac{e^{z_i}}{\sum_{j=1}^{K} e^{z_j}}, \qquad i = 1, \dots, K$$
The softmax function is similar to the sigmoid function, except that in the denominator we sum together the exponentials of all of the raw outputs. In simple words, when we calculate the value of softmax on a single raw output (e.g. z1) we cannot use the value of z1 alone; we have to include z1, z2, z3, and z4 in the denominator, as shown below:
$$\text{softmax}(z_1) = \frac{e^{z_1}}{e^{z_1} + e^{z_2} + e^{z_3} + e^{z_4}}$$
The softmax function ensures that the sum of all our output probability values will always be equal to one.
That means that if we are classifying dog, cat, boat and airplane and apply a softmax function to our outputs, then in order for the network to increase the probability that a particular example is classified as "airplane", it needs to decrease the probabilities that the example is classified as one of the other classes such as dog, cat or boat. We will see an example of this later.
### Comparison between sigmoid and softmax outputs:
The graph can be represented as:
[Figure: graphs of the sigmoid and softmax activation functions]
From the above graph, we can see there is not much difference between the sigmoid function graph and softmax function graph.
Softmax function has many applications in Multiclass Classification and neural networks. SoftMax is different from the normal max function: the max function only outputs the largest value and SoftMax ensures that smaller values have a smaller probability and will not be discarded directly. The denominator of the SoftMax function combines all factors of the original output value, which means that the different probabilities obtained by the SoftMax function are related to each other.
In the case of binary classification, the Sigmoid equation is:
$$\sigma(z) = \frac{1}{1 + e^{-z}}$$
For Softmax when K = 2, the probability of the first class is:
$$\frac{e^{z_1}}{e^{z_1} + e^{z_2}}$$
Dividing the numerator and denominator by $e^{z_1}$, this can be rewritten as:
$$\frac{1}{1 + e^{-(z_1 - z_2)}}$$
So it can be seen from the equation that, in the case of binary classification, Softmax reduces to the Sigmoid function (applied to the difference of the two logits).
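A quick numerical check of this reduction (a sketch, not from the original article), using NumPy:
``````import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z)
    return e / e.sum()

z1, z2 = 1.5, 0.3                        # arbitrary example logits
print(softmax(np.array([z1, z2]))[0])    # probability of class 1 under softmax
print(sigmoid(z1 - z2))                  # identical value from the sigmoid
``````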
When we build a network for a multiclass problem, the output layer has as many neurons as the number of classes in the target, as shown below:
[Diagram: a neural network whose output layer has one neuron per class]
For example, if we have three different classes, there will be three neurons in the output layer.
Now let suppose you received the output from the neurons as [0.7, 1.5, 4.8].
If we apply the softmax function over the outputs of neurons, then we will get the output as: - [0.01573172, 0.03501159, 0.94925668].
These outputs represent the probability for the data belonging to different classes.
Note that the sum of all the output values will always be 1.
Now let’s take an example and understand softmax function in a better way.
## A Real Softmax example.
### To understand how softmax actually works, let us consider the example below.
[Figure: an input image of an airplane together with the scoring-function outputs for the classes dog, cat, boat and airplane, used to compute the cross-entropy loss]
In the above example, our aim is to classify the image whether the image is an image of a dog, cat, boat or airplane.
From the image we can clearly see that it is an "airplane" image. However, let's see whether our softmax function classifies it correctly.
As we see from the above figure, I have taken the output of our scoring function f for each of the four classes. These scoring values are our unnormalized log probabilities for the four different classes.
Note: Here, I have taken the scoring values randomly for this particular example. But in reality, these values would not be taken randomly instead these values would be the output of your scoring function f.
Now, when we exponentiate the output of the scoring function, this will result in unnormalized probabilities as shown in the figure below:
Exponentiating the output values from the scoring function gives us our unnormalized probabilities
Now, our next step is to build the denominator by summing the exponentiated values, and then divide each exponentiated value by that sum, which gives us the actual probabilities associated with each of the class labels, as shown below:
To obtain the actual probabilities, we divide each individual unnormalized probability by the sum of unnormalized probabilities.
Finally, we can take the negative log, which will give us our final loss:
Taking the negative log of the probability for the correct ground-truth class yields the final loss for the data point
So, from the above example we can see that our Softmax classifier correctly classifies the image as "airplane", with a confidence value of 93.15%.
This is how the Softmax function works behind the scene.
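As a small numeric sketch (not from the original article), the final cross-entropy loss for this data point is just the negative log of the probability assigned to the true class:
``````import numpy as np

# The article reports a softmax probability of about 0.9315 for the true class "airplane".
p_airplane = 0.9315
loss = -np.log(p_airplane)
print(round(loss, 4))   # approximately 0.071
``````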
Let's see how we can implement the softmax function in Python with a simple example:
``````#define the softmax function
import numpy as np

def softmax_function(x):
    a = np.exp(x)          # exponentiate each raw score
    z = a / a.sum()        # normalize by the sum of the exponentials
    return z

#call the function
softmax_function([0.7, 1.5, 4.8])
``````
Output:
``array([0.01573172, 0.03501159, 0.94925668])``
## Why Softmax is used in neural networks?
There are a few points that are very important when thinking about why SoftMax is used so widely in neural networks (see the short sketch after this list):
• We cannot use argmax directly; instead, we have to approximate its outcome with SoftMax, because argmax is neither differentiable nor continuous. Therefore, argmax cannot be used while training neural networks with gradient-descent-based optimization techniques.
• SoftMax has nice properties with regard to normalization and it can be differentiated. Hence, it is very useful for optimizing the neural network.
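A minimal sketch (not from the original article) of the derivative that gradient descent relies on: the Jacobian of softmax is $\partial s_i / \partial z_j = s_i(\delta_{ij} - s_j)$, which in NumPy is:
``````import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))            # shift for numerical stability
    return e / e.sum()

def softmax_jacobian(z):
    s = softmax(z)
    return np.diag(s) - np.outer(s, s)   # J[i, j] = s_i * (delta_ij - s_j)

J = softmax_jacobian(np.array([0.7, 1.5, 4.8]))
print(J.sum(axis=0))                     # columns sum to ~0: outputs always sum to 1
``````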
## Implementation of SoftMax with Keras
In this example, we will build a deep neural network with Keras that classifies data into one of four classes, using the Softmax activation function in its output layer.
Before jumping into coding, make sure that you have the following dependencies installed before you run this model. You can install them using pip as shown below:
pip install keras tensorflow matplotlib numpy scikit-learn
Now that the required libraries and dependencies are installed, we can start coding and build the neural network.
Let's start.
### Keras model for the example, using the Softmax activation function
``````#Import libraries
import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import make_blobs
# Set Configuration options
total_num_samples = 1000
training_split = 250
cluster_centers = [(15,0), (15,15), (0,15), (30,15)]
num_classes = len(cluster_centers)
loss_function = 'categorical_crossentropy'
# Generate data for experiment
x, targets = make_blobs(n_samples = total_num_samples, centers = cluster_centers, n_features = num_classes, center_box=(0, 1), cluster_std = 1.50)
categorical_targets = to_categorical(targets)
X_training = x[training_split:, :]
X_testing = x[:training_split, :]
Targets_training = categorical_targets[training_split:]
Targets_testing = categorical_targets[:training_split].astype(np.integer)
# Set shape based on data
feature_vector_length = len(X_training[0])
input_shape = (feature_vector_length,)
print(f'Feature shape: {input_shape}')
# Generate scatter plot for training data
plt.scatter(X_training[:,0], X_training[:,1])
plt.title('Nonlinear data')
plt.xlabel('X1')
plt.ylabel('X2')
plt.show()
# Create the model
model = Sequential()
# (The exact architecture was lost in extraction; the layer sizes below are illustrative.)
model.add(Dense(12, input_shape=input_shape, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
# Configure the model and start training
model.compile(loss=loss_function, optimizer='adam', metrics=['accuracy'])
history = model.fit(X_training, Targets_training, epochs=30, batch_size=5, verbose=1, validation_split=0.2)
# Test the model after training
results = model.evaluate(X_testing, Targets_testing, verbose=1)
print(f'Results - Loss: {results[0]} - Accuracy: {results[1]*100}%')
``````
### Results
When we run the above code, you should find an extremely well-performing model which produces the result as shown below:
``Results - Loss: 0.002027431168 - Accuracy: 100.0%``
But in real-world problems, the result will not always be as good as you desire. For the best result, you have to perform many different experiments, with some trial and error, and then pick the best one.
Generally, we use softmax activation instead of sigmoid with the cross-entropy loss because softmax distributes the probability over all of the output nodes. For binary classification, using sigmoid is equivalent to using softmax with two outputs. For multi-class classification, use softmax with cross-entropy.
### What’s the difference between sigmoid and softmax function?
[Table summarizing the comparison: sigmoid is used for binary (or multi-label) problems and treats each output independently, while softmax is used for multiclass problems and produces outputs that form a probability distribution summing to 1.]
Conclusion:
In this article, we saw that Softmax is an activation function which converts the outputs of the last layer of your neural network into a discrete probability distribution over the target classes. Softmax ensures that the criteria of a probability distribution are met: the probabilities are nonnegative and they sum to 1.
After reading this article finally you came to know the importance of softmax activation functions. For more blogs/courses in data science, machine learning, artificial intelligence and new technologies do visit us at InsideAIML. | 2021-10-21 11:37:15 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.840729832649231, "perplexity": 1052.5566284527904}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585405.74/warc/CC-MAIN-20211021102435-20211021132435-00517.warc.gz"} |
http://openstudy.com/updates/516c1651e4b02ec89c5ad86e | ## twilaswift Group Title Isadora bought 4 trees to plant in her backyard and spent no less than eighty-four dollars altogether. How many dollars could each tree have cost? Let t represent how many dollars each tree cost. t is greater than eighty t is greater than or equal to eighty t is greater than twenty-one t is greater than or equal to twenty-one one year ago one year ago
1. twilaswift
4 divide 4 is 1
2. twilaswift
8 divided by 4?
3. nincompoop
$4t \ge 84$
4. twilaswift
t is greater than or equal to eighty
5. nincompoop
the question is: how much could EACH tree cost? meaning you will have to divide 4t by 4 and also 84 by 4 this gives you: $\frac{ 4t }{ 4 } \ge \frac{ 84 }{ 4 }$
6. twilaswift
84 divided by 4 = 21
7. nincompoop
8. twilaswift
1
9. nincompoop
no...
10. nincompoop
4/4 is 1 but you have t
11. twilaswift
t = 21
12. twilaswift
t > 21
13. nincompoop
NOT EQUAL. we are using inequality greater than or equal to, because in the original problem it says: spent no less than 84 dollars...
14. twilaswift
$t \ge 21$
15. nincompoop
awesome
16. Oriel
[$Tge21$ | 2014-10-25 14:51:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6184209585189819, "perplexity": 2948.9751290225868}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119648297.22/warc/CC-MAIN-20141024030048-00163-ip-10-16-133-185.ec2.internal.warc.gz"} |
https://www.hepdata.net/record/ins1215085 | • Browse all
Centrality determination of Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV with ALICE
The collaboration
Phys.Rev.C 88 (2013) 044909, 2013.
Abstract (data abstract)
CERN-LHC. This publication describes the methods used to measure the centrality of inelastic Pb-Pb collisions at a center-of-mass energy of 2.76 TeV per colliding nucleon pair with ALICE. The centrality is a key parameter in the study of the properties of QCD matter at extreme temperature and energy density, because it is directly related to the initial overlap region of the colliding nuclei. Geometrical properties of the collision, such as the number of participating nucleons and number of binary nucleon-nucleon collisions, are deduced from a Glauber model with a sharp impact parameter selection, and shown to be consistent with those extracted from the data. The centrality determination provides a tool to compare ALICE measurements with those of other experiments and with theoretical calculations.
• Table 1
Data from T A1
10.17182/hepdata.66916.v1/t1
• Table 4
Data from T A4
10.17182/hepdata.66916.v1/t4
Same as above with bigger centrality classes.
• Table 5
Data from T A4
10.17182/hepdata.66916.v1/t5
Same as above with bigger centrality classes.
• Table 6
Data from T A4
10.17182/hepdata.66916.v1/t6
Same as above with bigger centrality classes. | 2022-01-28 06:09:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8703765869140625, "perplexity": 1993.5926292816066}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305420.54/warc/CC-MAIN-20220128043801-20220128073801-00606.warc.gz"} |
https://markrucker.net/musings/whats-100-between-friends-v1.html | # What's $100 Between Friends (v1)
## posted on Nov 27th, 2015
For the last two years I've worked in the industry of recovery auditing. For me, that has entailed writing software to discover errors in historical financial transactions. After finding these errors we seek to correct them by asking the offending party to reimburse our clients. When I'm feeling philosophical, I build binary abstractions on social contracts respecting history. The work has expanded my existential limits.
To explore this further let's look at a concrete example with two imaginary companies: Acme and Omega. Ten years ago Omega installed windows at an Acme store for $100. After the installation Omega accidentally sent two invoices for $100 to Acme. Acme, which was opening many new stores simultaneously, didn't notice and paid both invoices. Now, ten years later, my software discovers the mistake and we have requested that Omega reimburse Acme the whole $100.
Reflecting, what value has the reimbursement request created? A simple answer might be that $100 has been freed that can now be spent on other productive services. Upon further examination, however, I believe the answer isn't quite so trivial. Two things actually happen as a result of the discovery. Acme recovers $100 while Omega loses $100. The net financial change is $0. No money was freed up. So what happened? A better explanation might be that the power to decide how to spend that $100 shifted from Omega to Acme. Focusing on power, I believe, starts to pull out the true value of recovery auditing. More on that later; for now let's do one more thought experiment. Imagine for a second that an old roommate from ten years ago shows up. This roommate claims that you owe her $100 from a month when she covered your share of the rent. Would you pay her? Of course the answer would depend on the situation. Maybe this roommate forgot to feed your pet goldfish when you went on vacation. Or maybe you also remember that month and feel guilty. Or maybe you don't remember it and ask for more proof. The power of her claim is dependent on external factors, with no amount of external factors ever being sufficient to require your consent (excluding force). As long as you refuse her claim no "value" has been created for your old roommate because nothing has changed.
Coming back to the idea of power then. The value created in recovery auditing is an exchange of social power. When Omega reimburses Acme for $100 they are acknowledging that, in this situation, Acme has the right to request the money. The interesting thing is this says nothing about Acme's relationship to Omega. Perhaps Omega is twice the size of Acme. All that matters is that there was a mistake and both companies believe they have a fiduciary responsibility to correct it, regardless of why. It is the shared social belief that gives value to my work. Without it recovery auditing would be valueless.
This line of thinking brings us to the soft under-belly of capitalism. It has value because, when it involves money, we are all, by and large, obedient. Whether that is installing windows or correcting a 10 year old mistake. This obedience and balance is in no way required by natural laws as transactions in nature are. This obedience simply comes from generations of imparted cultural heritage. The obedience also becomes both savior and master. Savior, because by it we can influence our world without needing to resort to force. Master, because by it we can replace our deeper motivations with a more brittle veneer of greed. And so each day I encourage obedience from all, big and small. | 2022-10-05 06:45:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2501147389411926, "perplexity": 2129.1337519744698}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00265.warc.gz"} |
http://lrose.net/howtorun_HawkEye.html | # HawkEye¶
HawkEye is a next generation lidar and radar display tool. It can display real-time and archived CfRadial data either in BSCAN, PPI, or RHI geometry. Editing capability will be added in future releases.
## Running HawkEye¶
While relatively new to the research community, HawkEye is a mature viewing tool that has been powering real-time displays for NCAR S-Pol, HIAPER Cloud Radar, and the CSWR Doppler on Wheels for several years. As part of the LROSE project the tool has been updated to work in a research mode with archived CfRadial files. Like the Radx tools, HawkEye can be run in the Virtual Toolbox or compiled as a native application. The look and feel of HawkEye will be slightly different depending on the window environment, but the functionality remains the same across platforms.
To run HawkEye as a basic viewer it is not necessary to create a parameter file, but like the LROSE tools there is more functionality provided via the parameter options. In its simplest invocation:
lrose -- HawkEye -f </path/to/CfRadial_files>
If the data is organized under a YYYYMMDD date directory in the path, the user can specify a specific time span to display:
lrose -- HawkEye -archive_url /scr/rain1/rsfdata/projects/pecan/CfRadial/kddc/moments -start_time "2015 06 26 00 00 00" -time_span 7200
This command would start it up to look at the specified data location, from the specified start time, for a time span of 2 hours. In this case, since we are searching by time, you are required to have a day directory in the path, so that actual data would be in the 20150626 subdirectory underneath the ‘moments’ directory.
To check all command line options for HawkEye, type the following command into a bash.
lrose -- HawkEye -h
To obtain the default parameter file for more options, use the following command:
lrose -- HawkEye -print_params > $PWD/HawkEye.params
While HawkEye will do its best to display data properly, the parameter file can force the viewer to use POLAR_DISPLAY for PPI and RHI display and BSCAN_DISPLAY for BSCAN mode.
### Colors¶
HawkEye supports many different color scales for different fields that can be specified via external files. Default color palettes are applied to variable names with commonly used meanings (ie. ZDR), but can be overridden by the user.
While most of the GUI viewing options are intuitive, additional documentation on all the details of the HawkEye GUI is under development. | 2020-10-22 05:50:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3609393537044525, "perplexity": 4437.793970117263}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107878921.41/warc/CC-MAIN-20201022053410-20201022083410-00542.warc.gz"} |
http://openstudy.com/updates/55e49c4fe4b0819646d7e691 | ## anonymous one year ago (Question screenshot in comments) I found out the percent but didn't see that it was a negative fraction, and I can't find anything that explain how to convert negative fractions to decimal/percents. Someone help ??
1. anonymous
2. freckles
just bring over the negative
3. freckles
convert 9/20 to a a decimal then bring over the negative sign
4. freckles
covert 9/20 to a percent then bring over the negative sign
5. anonymous
What do you mean by "bring over" ? as in just add a negative sign to the percent ??
6. freckles
yes the number is negative the decimal is therefore negative the percent is therefore negative
7. freckles
example: -a/100=-a%
8. anonymous
sorry, thanks! it's been a while considering summer so I kind of have to refresh my math lingo, heh.
9. freckles
that is just 3 different ways to represent the same number if you don't put the negative sign on the other two values you are changing the value
10. freckles
another example: $-\frac{1}{4}=-.25=-25 \%$
11. nonopro
0.45 | 2016-10-22 03:52:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5550982356071472, "perplexity": 1717.4790623799706}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718423.65/warc/CC-MAIN-20161020183838-00106-ip-10-171-6-4.ec2.internal.warc.gz"} |
http://cogsci.stackexchange.com/tags/methodology/hot?filter=year | # Tag Info
6
The study design sounds pretty good. Some of the good things you are proposing: Using a repeated measures design will give you more statistical power than a between subjects design, which is particularly useful when your sample size is small. Randomising or counterbalancing for order should mostly control for order effects. Double blind will focus the ...
6
Does the locking refer to the initiation of the measurement with starting cue being being the presentation of stimulus or the response of the subject? More or less, yes. When measuring brain activity, you usually make a long, continuous recording during which you expose your study participants to a task over and over again. There's a lot of noise ...
6
Antoine Tremblay has just released an advanced analysis toolbox: http://onlinelibrary.wiley.com/doi/10.1111/psyp.12299/abstract It's missing about half the features on your list, although fundamentally, spectral density is a simple task and LORETA is a stand-alone package anyways (although similar approaches, e.g. general CSD estimation, are implemented in ...
5
First off, what button-box you use is going to be influenced by what software you're using to run the experiment, so ideally you should specify that. The PST serial response box is probably the industry standard, and is what we have in my lab, although a lot of that is probably down to it coming from the makers of EPrime. EPrime doesn't work on OSX ...
5
Your question was a loooooong time ago, but I just ran across a couple of good references explaining what backward masking does and how to choose one. This(1) is a great paper examining the neural mechanisms and timing of visual backward masking; according to this (2) 2000 review of masking theory, there are four subtypes of backward masking. Backward ...
5
If n != m then it will not home in on the 50 % threshold. In these simple N-up/N-down staircases, you can modify either the stepsize (as you proposed) or the number of successes/failures to act as a criterion for upgrade/downgrade. A comprehensive introduction to these staircases and the effect of changing these properties can be found in this paper. The 80 ...
4
The number of samples that are necessary for a good parameter estimation does indeed depend on the estimation method. I am not aware of a simple rule of thumb to determine an optimal sample size, but there has been a lot of literature on this topic. A paper that might be a good starting point for a literature search is Van Zandt T. (2000) How to fit a ...
4
Disclaimer: I'm not generally doing experiments where reaction time is the primary DV. But I thought I'd look at this issue and explored RTs from a neuroimaging dataset, and I think the findings are relevant to the question. I think without further qualification, this question doesn't have an answer. Here I've plotted the estimation of reaction time/RT over ...
4
What you are actually asking about is the debate surrounding the question: Can psychological quantities be measured? Up until about 1800 psychological questions where discussed by philosophers. A separate psychological discipline did not yet exist. Answers to questions relating to perception, emotion and cognition where attempted on the basis of religious ...
3
There are a range of "adjective checklists" that have been developed to assess affective states, personality traits, or characteristics of individuals. Two of the most widely cited measures are the Multiple Affective Adjective Check List (MAACL) and the Multiple Affective Adjective Checklist-Revised (MAACL-R) (Zuckerman & Lubin, 1965; Zuckerman & ...
3
For an open source JavaScript/HTML/CSS solution, check out jsPsych: http://www.jspsych.org. It can be used for reaction time measurement and interactive designs. An article describing the library was recently published in Behavior Research Methods. de Leeuw, J. R. (2014). jsPsych: A JavaScript library for creating behavioral experiments in a Web browser. ...
3
As mentioned in the other comments, ANOVA is problematic when mixing types of predictor variables. (Generalized) mixed effects models are gaining popularity these days and actually provide a very convenient way for modelling such things. A paper demonstrating the efficacy of this approach as well as giving a tutorial-like introduction is: Davidson, D. J. ...
3
I'm still not clear on what is your question. You ask whether psychology and medicine differ in some aspect of their methodological approach. Experiments are typically analysed using statistics to test hypotheses. So those things all go together. Psychology and medicine both perform controlled experiments and observational studies. They both perform ...
3
You may want to have a look at our 40 questions to date that use the statistics tag. These may demonstrate the complexity of our applications in statistics. Wikipedia also has an entire psychological statistics page that seems intended to index other pages on specific applications. Psychological experiments commonly test null hypotheses (e.g., $H_0$: the ...
3
See this question on comparing scales with different response scales. In short, you have to do it with care. A good option for ensuring comparability is to get a sample of participants to provide responses on both response scales in order to see how they relate. This could then be used to create a conversion scale. More generally, there is a large debate ...
3
Both the Amsterdam University (UvA) and Radboud University use a public online system for applying for participation in experiments. I forget which system UvA uses, but Radboud uses the sona system (just google it, you can creat an account). There you can see ongoing studies and apply for experiments. Both these cities are big research hubs for neuroimaging. ...
3
Obviously, this question is highly under-specified. The sample size you need depends a lot on the aims of your analysis. In general, when thinking about sample size requirements, you need to think about power analysis and desired precision of estimation. This in turn requires you to think about your research question and expectations about results (e.g., ...
2
There's a new program called "Paradigm" that has direct support for typed responses. It will measure the input speed, time to first key press and record the typed response. It's very easy to use and has a number of other great features. Check it out: Paradigm http://www.paradigmexperiments.com
2
I just wrote three big comments on @JeromyAnglim's answer, but I'm opting to move them all to this answer instead. This isn't an attempt to answer the OP completely, but hopefully they'll be of some use for part of the question. From a mainly statistical and psychometric standpoint, I've heard (from an expert whose opinion I could mention confidently in a ...
2
It's often referred to as "doll therapy" or "play therapy" and applies to adults as much as it does to children. For example, this new product, the "Inner Critic Doll" enables adults to hold a physical manifestation of their inner critic and start a dialogue with it. It has a zipper mouth which can be zipped shut to physically silence this inner voice. The ...
2
Adding to Jeromy's answer, the answer to your question would depend on what you want to study. If you study normal behavior, common to all people, you assign your participants randomly to the experimental and control group. The experimental group receives the factor you want to study (e.g. watches some advertising), while the control group does not, and ...
2
www.cognitive-innovations.com just released an iPad based cognitive assessment. Looks pretty comprehensive.
2
Without knowing exactly what the factor you are interested in is, it is hard to predict how feasible it would be to manipulate it. For example, is it possible to make two videos of the speaker, one with the factor, and one without, with nothing else changing? My guess is that you probably can't do this, so I'm going to focus on how you might be able to run ...
2
There is a program called Paradigm that allows you to build millisecond-accurate neurocognitive experiments for iOS devices. The experiment builder is like E-Prime but easier to use. The app is available in the app store. You upload your experiments to a Dropbox and then log in to access them through the app. It's pretty flexible. I've used it to build ...
2
Tests for malingering are founded on the following assumption: there are symptomps that even the worst illness doesn't include. These tests aim at assessing these symptomps, that are unrealistic, fake. A famous memory test for malingering is Digit Memory Test (Hiscock et al., 1989) It is a very easy test, even a person with Alzheimer can perform well. ...
2
This article by Bobrov et. al seems to be similar to what you are looking for. They were able to classify (at above chance performance) whether subjects were imagining houses or faces. The training protocol is particularly interesting: they started by showing subjects pictures, but then used a feedback process of showing the subjects the output of the ...
1
In my opinion, Psychforums.com is by far the best place to post online studies, but your school/university needs to establish a partnership with them first. Best is to speak with your professor or someone responsible for your Psychology department in order to have that partnership in place, which is in fact really easy (see the post before). Once done, all ...
1
Psych Forums has a section for posting studies to a section of their forum. However, they do require some form of payment for posting. In fact it's not exact. As explained here : http://www.psychforums.com/surveys-studies/topic44450.html if you are a "professor/researcher/student in a university/college/institute/faculty of psychology" it's entirely ...
1
In simplest terms, they are searching for objective, observable truth. Wikipedia seems to have quite a few references to this, see the quote/link below. In terms of the One-Factor-at-a-time, OFAT method was first recorded to have been used by James Lind. According to Wikipedia: "In 1747, while serving as surgeon on HM Bark Salisbury, James Lind carried out ...
Only top voted, non community-wiki answers of a minimum length are eligible | 2014-12-22 19:55:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3749042749404907, "perplexity": 1260.0206551789881}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802776556.43/warc/CC-MAIN-20141217075256-00062-ip-10-231-17-201.ec2.internal.warc.gz"} |
https://ykureh.wordpress.com/category/language/ | # Pandämonium: Fernweh, Wandertrieb Und Zugunruhe
Our experiences do not define us, yet we are nothing more than our past and our future. Compelled to make new mistakes and relive old ones.
To be content is to be unhappy. To idle is to die young. We desire change. Our instinct is to wander. I grow restless.
Time slips away as moments become shorter. Darkness deafens the fragile senses. The Silence is blinding.
Who knocks? Is it death? …Is it reality?
I amn’t a linguist by training, but every so often I come across a peculiarity of the English language that makes me wish I were. Whether it’s the differences in zero-marking between British and American English or the fact that we can ask “Aren’t I?” but not say “I aren’t,” the language is riddled with curiosities that developed for one reason or another and, when I’m lucky, they carry funny names like the “expletive it.” I want to seize this opportunity to write down some of my thoughts on two phenomena I recently encountered for which there do not seem to be suitable references online. These notes are rough, so take it with a vaguely predicated amount of salt.
Now, I’m certainly not the first to say this, but not all of “the rules” of English make sense, nor do they always seem to help clear up meaning. Irregardless, as a former student of The Greek, The Latin, and The Arabic, I enjoy learning about these international laws governing our languages. And you should too, if for no other reason than to enforce arbitrary syntactic structures upon your peers’ utterances at the most opportune (inopportune for them, of course) times. Perhaps follow it up with a “Speak English, please” to add insult to insult. [Whenever I’m caught in this situation with my proverbial pants down, I plead ignonce.] In all seriousness though, I still find it to be a constant struggle to adhere to MLA, APA, CMS, and other TLAs and my writing is often riddled with confusing punctuation and even more perplexing quasi-verbiage.
Caveat lector: The following has some math(s), but I’ll try to keep it self-contained.
1) Perhaps no distinction annoys the Descriptivists more than the fewer vs. less divide of 1770. Purists treat this as a matter of life and death, as if it were an eleventh commandment decreed by God herself that for all instances in which objects may be counted one must use “fewer” and for all other instances one must use “less.” I very much doubt any person truly follows this to a T, and even the Merriam-Webster Dictionary of English Usage prefers the common usage of less in many instances. Notwithstanding, I believe I have found proof that this rule is not from God, but in fact man-made and impossible to satisfy. Consider the rational numbers and the real numbers, which consists of the rational numbers and the irrational numbers. There are infinitely many rational numbers and thus there are infinitely many real numbers, and certainly more real numbers than rational numbers. However, there are only countably many rational numbers while there are uncountably many real numbers. That is to say, we can count off the rational numbers in a systematic way (1/1, 1/2, 2/1, 3/1, 1/3, 1/4, 2/3, 3/2, …) whereas we cannot do so with the real numbers. Therein lies the problem. Are there fewer rational numbers than real numbers, or are there less rational numbers than real numbers? [A more symmetric phrasing is: which are there fewer/less of: rational numbers or real numbers?] In just one sentence we are talking about a single fundamental type of thing: numbers. Yet, numbers, when gathered in big enough groups, go from being countable to not. Thus it becomes ambiguous whether we ought to employ fewer or less. I do not know of other things that can be both countable and uncountable, but it seems almost ironic that “number” is the very word that leads to a contradiction of the fewer/less rule.
2) This second example has to do with adjectives, or modifiers, in a broad sense. There are lots of adjectives out there. Rumor has it, they’re in the top five most used parts of speech. As a refresher, here are some adjectives: blue, round, tall, fast, fake, honest, upcoming, fuzzy, melted, et cetera. How do adjectives work? You can learn more than you probably ever thought was possible here. There’s a surfeit of neat stuff, but let me break down the relevant bits. Functionally, what does an adjective do? It modifies a noun. How it does that depends on the adjective-noun combination.
Say you start with a noun, which naturally has some definition. The definition defines a set of properties that the noun satisfies. Then you modify the noun with an adjective. This modification usually has the impact of introducing further properties that the noun phrase (adjective + noun) satisfies. For example, suppose you have the noun “paper” which clearly has some definition. We know that papers can come in many colors though, so we modify it with an adjective to “blue paper.” We’ve now added the property of blueness to the set of properties, which causes a restriction. Simply put, the more properties there are that need to be satisfied, the fewer things there are that can satisfy all of them. Notice though that “blue paper” is both “blue” and “paper;” it satisfies two sets of properties, the first being the singleton set of blueness and the other being the set of properties of being paper. Phrased differently, “blue paper” is in the intersection of things that are blue and things that are paper. Linguists call this kind of adjective “intersective.” For the mathematically inclined: $\{\mbox{blue paper}\} = \{\mbox{blue stuff}\} \cap \{\mbox{paper}\}$.
Another kind of adjective is the subsective adjective, and as the name suggests it has to do with subsets. Take the noun “programmer.” Again it has a set of properties that define it. Now suppose we modify it with “clever” to get a “clever programmer.” We certainly still have that a “clever programmer” is a “programmer, i.e. $\{\mbox{clever programmer}\} \subset\{\mbox{programmer}\}$. However, just because someone is a “clever programmer” does not mean they are “clever”. It seems that rather than being another separate property that the “programmer” satisfies, the modification from “clever” affects a property. So instead of the property set expanding to include an additional property, “clever” alters a property within the set of “programmer” properties. The property adaptation in this case roughly is: “knows how to write computer code” becomes “knows how to write efficient computer code.”
Other types of adjectives may or may not exist depending on the school of thought you’re working with. A common example of this is the privative adjective with words like “fake.” A fake gun is not a gun and a gun is not a fake gun. Thus we have that the intersection of the modified and unmodified noun phrases is empty: $\{\mbox{fake gun}\} \cap \{\mbox{gun}\} =\O$. What we see is that property set for “gun” is not expanded by the adjective”fake,” but rather a crucial property (a gun discharges projectiles such as bullets) is negated (a fake gun cannot discharge projectiles). We can see something somewhat similar with temporally shifting modifiers like in the phrase “past president.”
In math(s) though, there seems to be some examples of adjectives which are distinctly different in behavior from the ones above. Rather than adding to, narrowing, or negating the properties, some adjectives widen. A clear example of this is in the noun phrase “general eigenvector.” One definition of an “eigenvector” of a matrix $A$ is a vector $x$ that satisfies the three properties that 0) $x\neq 0$, 1) $\exists \lambda \in \mathbf{C}, m\in\mathbf{N}, (A-\lambda I)^m x= 0$, and 2) $m=1$. A “generalized eigenvector” is only required to satisfy the first two properties, i.e. it is not required that $m=1$. So we have that $\{\mbox{generalized eigenvector}\} \supset\{\mbox{eigenvector}\}$. Other examples include skew fields, gaussian/eulerian/algebraic/etc. integers, and non-associative rings. By symmetry to the term subsective adjectives, I think (and a handful of other people on the internet agree) these adjectives should be called supersective. They have the ability to remove a property, and therefore loosen the noun phrase. Whether or not real examples exists outside of math(s), I am not yet sure. The closest I’ve been able to get to one is “dog food” vs “food” but please be careful. | 2018-03-25 05:18:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 10, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6865879893302917, "perplexity": 1300.6547011206171}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257651820.82/warc/CC-MAIN-20180325044627-20180325064627-00469.warc.gz"} |
http://zbmath.org/?format=complete&q=an:0516.49003 | # zbMATH — the first resource for mathematics
Multiple integrals in the calculus of variations and nonlinear elliptic systems. (English) Zbl 0516.49003
Annals of Mathematics Studies, 105. Princeton, New Jersey: Princeton University Press. VII, 296 p. $45.50; $21.50 (1983).
##### MSC:
49-02 Research monographs (calculus of variations)
35-02 Research monographs (partial differential equations)
49J20 Optimal control problems with PDE (existence)
35J60 Nonlinear elliptic equations
35D10 Regularity of generalized solutions of PDE (MSC2000)
35B45 A priori estimates for solutions of PDE
49J45 Optimal control problems involving semicontinuity and convergence; relaxation
26B05 Continuity and differentiation questions (several real variables)
26B15 Integration: length, area, volume (several real variables)
http://q.hatena.ne.jp/mobile/1179213337 | l͌͂Ă
A free electron travels with constant velocity through a straight wire of negligible width and length $a$. Taking the kinetic energy of this electron to be $p^2/2m$, and treating the free electron in the ground state and in the first excited state as a de Broglie wave, I need to find:
① the probability of finding the free electron between one end of the wire and $a/2$ when it is in the ground state;
② the probability of finding the electron between $a/3$ and $2a/3$ when it is in the ground state;
③ the probability of finding the electron at $a/2$ after it has made a transition to the first excited state.
Could someone explain how to work out ①–③, or point me to a page that explains this kind of problem?
Asked by shinmu. Status: closed; answers: 1/1.
Answer 1: shimarakkyo (60 points)
If you can read English, I know a site where questions like this get answered reliably. It was recommended to me by an acquaintance (it is in English; I have not used it myself): http://www.studentoffortune.com/
By the way, here is my English translation of the problem statement:
"Consider a free electron travelling with constant velocity thro' a straight wire with negligible width and length $a$. Given that the kinetic energy of this electron is $\frac{p^2}{2m}$, demonstrate that the free electron in the ground-state and in the first excited state possess the property of the de Broglie wave"
That only covers the first sentence, though.
Reply from the asker
Sorry for the late reply. Thank you for going to the trouble of translating it into English, but I cannot manage English, so I am still stuck. Thank you very much all the same.
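For reference, the quantities the problem asks for are the standard infinite square well ("particle in a box") ones; the expressions below are the usual textbook formulas, quoted here only as a sketch of the calculation. The normalized states and energies are
$$\psi_n(x)=\sqrt{\frac{2}{a}}\,\sin\!\left(\frac{n\pi x}{a}\right),\qquad E_n=\frac{n^2\pi^2\hbar^2}{2ma^2},\qquad n=1,2,\dots$$
so, for example,
$$\int_0^{a/2}\frac{2}{a}\sin^2\!\left(\frac{\pi x}{a}\right)dx=\frac{1}{2},\qquad \int_{a/3}^{2a/3}\frac{2}{a}\sin^2\!\left(\frac{\pi x}{a}\right)dx=\frac{1}{3}+\frac{\sqrt{3}}{2\pi},$$
and the first excited state $\psi_2$ has a node at $x=a/2$, so the probability density there is zero.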
http://www.ams.org/books/memo/1102/ | #### How to Order
For AMS eBook frontlist subscriptions or backfile collection purchases:
2. Complete and sign the license agreement.
3. Email, fax, or send via postal mail to:
Customer Services
American Mathematical Society
201 Charles Street Providence, RI 02904-2213 USA
Phone: 1-800-321-4AMS (4267)
Fax: 1-401-455-4046
Email: [email protected]
Visit the AMS Bookstore for individual volume purchases.
Browse the current eBook Collections price list
# Julia Sets and Complex Singularities of Free Energies
Jianyong Qiao
Publication: Memoirs of the American Mathematical Society
Publication Year: 2015; Volume 234, Number 1102
ISBNs: 978-1-4704-0982-1 (print); 978-1-4704-2029-1 (online)
DOI: http://dx.doi.org/10.1090/memo/1102
Published electronically: July 28, 2014
Keywords:Julia set, Fatou set, renormalization transformation, iterate
Chapters
• Introduction
• Chapter 1. Complex dynamics and Potts models
• Chapter 2. Dynamical complexity of renormalization transformations
• Chapter 3. Connectivity of Julia sets
• Chapter 4. Jordan domains and Fatou components
• Chapter 5. Critical exponent of free energy
### Abstract
We study a family of renormalization transformations of generalized diamond hierarchical Potts models through complex dynamical systems. We prove that the Julia set (unstable set) of a renormalization transformation, when it is treated as a complex dynamical system, is the set of complex singularities of the free energy in statistical mechanics. We give a sufficient and necessary condition for the Julia sets to be disconnected. Furthermore, we prove that all Fatou components (components of the stable sets) of this family of renormalization transformations are Jordan domains with at most one exception which is completely invariant. In view of the problem in physics about the distribution of these complex singularities, we prove here a new type of distribution: the set of these complex singularities in the real temperature domain could contain an interval. Finally, we study the boundary behavior of the first derivative and second derivative of the free energy on the Fatou component containing the infinity. We also give an explicit value of the second order critical exponent of the free energy for almost every boundary point. | 2018-11-14 00:44:45 | {"extraction_info": {"found_math": true, "script_math_tex": 15, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37232506275177, "perplexity": 2079.499257621826}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741569.29/warc/CC-MAIN-20181114000002-20181114022002-00423.warc.gz"} |
http://math.stackexchange.com/questions/250372/series-help-fourier-series | # Series help, fourier series
How do I know whether a given function can be represented by a Fourier series that converges to the value of the function at its points of continuity? Also, how did Fourier come up with the idea of representing periodic functions in the way he did?
-
This is an eminently natural question! In addition to Peter's accurate remarks:
From the (admittedly mostly secondary) historical sources I've read, Fourier did not initially have the inner-product formula for the coefficients, and had no really mathematical argument in favor of the expressibility of periodic functions (and, in those days, what was a "function"?) except analogies from mechanics and "overtones" in vibrating systems. That heuristic, however, set him on a course that seemed very productive for (apparently) solving a certain incarnation of the heat equation. Thus, the (apparent) utility of the idea gave motivation to subsequent legitimization.
In those days, there was no clear sense of "convergence", except (not uniform!) pointwise. Certainly not any formal "$L^2$" convergence. Indeed, problems about pointwise convergence of Fourier series led Cantor to create set theory.
Even with improved vocabulary of modern times, there is considerable tension between the "natural" pointwise convergence and, for example, $L^2$ convergence, or convergence in Levi-Sobolev spaces, or distributional convergence. Arguably, the $L^2$ theory works most smoothly, and, arguably, the $L^2$ theory of Levi-Sobolev spaces gives a more coherent and robust approach to (uniform!) pointwise convergence, if that is truly needed. E.g., while, perhaps counter-intuitively, the Fourier series of a $C^1$ function provably converges to it (uniformly) pointwise, it does not typically converge to it in the $C^1$ norm. (Also, the Fejer kernel discussion, while proving that finite Fourier series are dense in $C^o$, does not at all promise that it is the finite truncations of the Fourier series that converge to the function.) Meanwhile, functions in the ${1\over 2}+\epsilon$ Levi-Sobolev space have Fourier series that converge to them in that topology, and (Levi-Sobolev imbedding thm) are continuous, and the Fourier series also converges in the $C^o$ topology, and so on.
That is, fixation on pointwise convergence as fundamental may be misguided, although we are "brought up" to think of functions as primarily giving pointwise values. :)
-
There's various sufficient conditions, depending on how technical you want them to be. You obviously want $f$ to be periodic no matter what; I'll assume that from now on. Dirichlet's theorem on convergence of Fourier series states that if both $f$ and its derivative are piecewise continuous on $[0,2\pi]$ (or whatever interval it's periodic on; you can always rescale and translate a periodic function to get one periodic on $[0,2\pi]$), then the Fourier series of $f$ converges to $f(x)$ at every $x$ where $f$ is continuous, and to the average of the left and right limits of $f$ at $x$ where $f$ is discontinuous.
Carleson's Theorem states that if $f$ is square-integrable over $[0,2\pi]$, that is
$$\left(\int_{0}^{2\pi}{|f(x)|^{2}\ dx}\right)^{\frac{1}{2}}<\infty,$$
then the Fourier series of $f$ at $x$ converges to $f(x)$ for almost all $x\in[0,2\pi]$.
-
A classical theorem on pointwise convergence of Fourier series says that if $f(x)$ is piecewise smooth on $(-\ell,\ell)$, then the Fourier series of $f$ converges pointwise on $(-\ell,\ell)$. Moreover, the value to which the Fourier series converges at $x=x_0$ is $${f(x_0^+)+f(x_0^-)\over 2},$$ where the superscripts denote the one-sided limits $$f(x_0^+):=\lim_{x\to x_0^+}f(x)\quad\text{ and }\quad f(x_0^-):=\lim_{x\to x_0^-}f(x).$$
In other words, if $x=x_0$ is a point of continuity of $f$, then its Fourier series converges to $f(x_0)$ there, but if $x=x_0$ is a point of (suitable type of) discontinuity of $f$, then its Fourier series converges to the average of the left- and right-hand limits of $f$ at $x=x_0$.
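A concrete instance of that averaging is the standard square wave $f(x)=-1$ on $(-\pi,0)$ and $f(x)=1$ on $(0,\pi)$, whose Fourier series is
$$\frac{4}{\pi}\sum_{k=0}^{\infty}\frac{\sin\big((2k+1)x\big)}{2k+1}.$$
At the jump $x=0$ every partial sum equals $0$, which is exactly $\tfrac{1}{2}\big(f(0^+)+f(0^-)\big)=\tfrac{1}{2}(1-1)=0$, while at points of continuity the series converges to $f(x)$ as described above.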
- | 2015-05-30 00:38:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9734042286872864, "perplexity": 339.680445340541}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207930866.20/warc/CC-MAIN-20150521113210-00319-ip-10-180-206-219.ec2.internal.warc.gz"} |
https://cs.stackexchange.com/questions/127482/constructing-xor-separable-boolean-upper-bound | # Constructing xor separable boolean upper bound
Problem statement Suppose I have a boolean function $$f: \mathbb{F}_2^n \times \mathbb{F}_2^m \to \mathbb{F}_2$$ where $$\mathbb{F}_2 = \{0,1\}$$.
I define two boolean functions $$h: \mathbb{F}_2^n \to \mathbb{F}_2$$ and $$g: \mathbb{F}_2^m \to \mathbb{F}_2$$ to be xor separable upper bound of $$f$$ if $$f(\vec{x}, \vec{y}) \leq h(\vec{x}) \oplus g(\vec{y})$$ for all $$\vec{x} \in \mathbb{F}_2^n$$ and $$\vec{y} \in \mathbb{F}_2^m$$. (Here $$\oplus$$ is the logical xor operator). Let $$S$$ denote the set of xor separable upper bounds of $$f$$.
I would like to optimize the following: $$\min_{(h,g) \in S} |\# h^{-1}(0) - \frac{1}{2} 2^n|$$
In particular, I wish to find the optimal $$h$$ and $$g$$. However, this seems incredibly difficult.
Question: Is there a heuristic method to find $$h$$ and $$g$$ such that $$|\# h^{-1}(0) - \frac{1}{2} 2^n|$$ is "small" (it doesn't have to be the optimal value)? Feel free to assume any nice boolean expression format for $$f$$ (such as CNF or DNF, etc.). I cannot come up with an algorithm better than brute-force iteration. Here, $$n$$ and $$m$$ can be very big (on the order of hundreds).
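For very small $$n$$ and $$m$$ the brute force mentioned above is easy to write down. The sketch below assumes $$f$$ is supplied as a truth-table dictionary keyed by bit-tuples, which is a representation chosen purely for illustration:

from itertools import product

def brute_force(f, n, m):
    """Exhaustive search over xor-separable upper bounds (h, g) of f.

    f maps (x_bits, y_bits) -> 0/1, with x_bits, y_bits tuples of 0/1 of
    length n and m.  Only feasible for tiny n and m.
    """
    xs = list(product((0, 1), repeat=n))
    ys = list(product((0, 1), repeat=m))
    best = None
    # A Boolean function on 2^n inputs is just a tuple of 2^n output bits.
    for h_bits in product((0, 1), repeat=len(xs)):
        h = dict(zip(xs, h_bits))
        for g_bits in product((0, 1), repeat=len(ys)):
            g = dict(zip(ys, g_bits))
            # feasibility: f(x, y) <= h(x) xor g(y) everywhere
            if all(f[x, y] <= h[x] ^ g[y] for x in xs for y in ys):
                score = abs(h_bits.count(0) - len(xs) / 2)
                if best is None or score < best[0]:
                    best = (score, h, g)
    return best

For $$n, m$$ in the hundreds one would of course have to restrict $$h$$ and $$g$$ to some parametric family and search over that instead, but the feasibility check in the inner line stays the same.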
Comments Note that $$S$$ is not empty since the constant function pair of $$(h = 1, g = 0)$$ is in $$S$$. Thus the minimization problem is well posed. Furthermore, I am not certain if this will help, but does the problem become easier if $$\#f^{-1}(1)$$ very small compared to $$2^{n + m}$$?
Edit Instead of optimizing $$\min_{(h,g) \in S} |\# h^{-1}(0) - \frac{1}{2} 2^n|$$, getting the tightest upper bound would be interesting as well. | 2020-12-02 10:37:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 26, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9603541493415833, "perplexity": 153.69539016720782}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141706569.64/warc/CC-MAIN-20201202083021-20201202113021-00060.warc.gz"} |
http://math.stackexchange.com/questions/158534/lagrange-multipliers-formulation | # Lagrange Multipliers Formulation
Suppose we have the following problem: $$\text{minimize } \ f(x) \\ \text{subject to } \ Ax = b$$
How do we know whether to write the Lagrangian Dual as $$\text{minimize } f(x) + \lambda(Ax-b)$$ versus $$\text{minimize } f(x) + \lambda(b-Ax)?$$
-
We don't, and it does not matter. The $\lambda$ you are going to find will change sign. See also here.
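A tiny example makes the sign flip visible: minimize $x^2$ subject to $x=1$. With $L_1=x^2+\lambda(x-1)$, stationarity gives $2x+\lambda=0$, so at $x_0=1$ we get $\lambda_0=-2$; with $L_2=x^2+\lambda(1-x)$ we get $2x-\lambda=0$ and $\lambda_0=2$. Either way the constrained minimizer is the same $x_0=1$.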
Then why is that in minimization problems I see equality constraints $Ax = b$ being written as $Ax-b$ but in maximization problems it is written as $b-Ax$? In a minimization problem I could write it as $b-Ax$? – robbie Jun 15 '12 at 6:07
When you minimize, $(x, \lambda)$ is variable. In one case you'll get $(x_0,\lambda_0)$ as a solution, in the other case $(x_0,-\lambda_0)$. You are interested in $x_0$. The solution is the same, yes. (And both problems are equally difficult). – user20266 Jun 15 '12 at 6:10 | 2015-08-31 05:38:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9431676268577576, "perplexity": 328.09304470614745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065534.46/warc/CC-MAIN-20150827025425-00011-ip-10-171-96-226.ec2.internal.warc.gz"} |
https://codereview.stackexchange.com/questions/108359/map-all-symbols-in-module-to-dictionary | # Map all symbols in module to dictionary
The div and add functions are located randomly in my package structure. I have a function with this signature:
def calc(x, y, func_name='add')
How do I implement this function?
My idea is to import all the modules, then use the dir function, like so:
def import_submodules(package, recursive=True):
""" Import all submodules of a module, recursively, including subpackages
:param package: package (name or actual module)
:type package: str | module
:rtype: dict[str, types.ModuleType]
"""
if isinstance(package, str):
package = importlib.import_module(package)
results = {}
for loader, name, is_pkg in pkgutil.walk_packages(package.__path__):
full_name = package.__name__ + '.' + name
print 'full_name =', full_name
results[full_name] = importlib.import_module(full_name)
if recursive and is_pkg:
results.update(import_submodules(full_name))
return results
def create_symbol_module_d(module_name_module_d, include=None, exclude=None):
""" Create mapping of symbol to module
:param module_name_module_d
:type module_name_module_d: dict[str, types.ModuleType]
:rtype: dict[str, types.*]
"""
inv_res = {}
for mod in module_name_module_d.itervalues():
for sym in dir(mod):
if include and sym in include:
inv_res[sym] = mod
elif exclude and sym not in exclude:
inv_res[sym] = mod
else:
inv_res[sym] = mod
return inv_res
Then I can just do:
sym2mod = create_symbol_module_d(import_submodules('package_name'))
def calc(x, y, func_name='add'): return sym2mod[func_name](x, y)
• Are you only adding custom computation for your custom type? Then all you might need is to implement some special methods. Oct 22 '15 at 9:52
• Thanks, but this calc is just a simple example. My real use-case is for a pseudo plugin system.
– A T
Oct 22 '15 at 10:02
Well I'm going to start by saying that your end result is confusing. sym2mod and create_symbol_module_d aren't clear names that indicate what's going on. In particular I don't know why this would be necessary, so it wouldn't occur to me that these are custom adding and dividing functions that are placed so erratically in a package that you can't even find them. Also there is already an operator module that contains these functions. If yours are more specialised or complex, then you should use different names to distinguish them and reduce confusion.
Which brings me to another problem, why are they in random places? It sounds like your problem entirely exists just because of the package structure. To me that means you ought to improve the actual structure rather than treating the symptom of the problem.
Anyway, onto your actual functions. The docstring for import_submodules is a little off. It says the import is recursive, but it isn't necessarily recursive, it's optionally so. I don't think you need to mention it since it's easy to see the arguments with introspection, i.e.
>>> help(import_submodules)
Help on function import_submodules in module __main__:
import_submodules(package, recursive=True)
Import all submodules of a module, recursively, including subpackages
The fact that your importer is also printing out messages could be quite a problem for people who might have a large package, and there's no way to turn it off! I'd add an argument like logging and default it to False. Only print if its been explicitly asked for. After all there's a dictionary record of the importing that can be inspected for what was imported.
create_symbol_module_d is quite confusing to me. It's a bad name, so is module_name_module_d. They're both long and communicate little. Then there's include and exclude. Are they meant to be lists of symbols like ['+', '-', '/']? You give no examples or indication. Then when I look at what they do... they do nothing! In every result of your if statements you end up just performing the same test. In the current function there's no need for if statements at all, let alone include and exclude. This is equivalent to your function:
def create_symbol_module_d(module_name_module_d, include=None, exclude=None):
""" Create mapping of symbol to module
:param module_name_module_d
:type module_name_module_d: dict[str, types.ModuleType]
:rtype: dict[str, types.*]
"""
inv_res = {}
for mod in module_name_module_d.itervalues():
for sym in dir(mod):
inv_res[sym] = mod
return inv_res
You need better names and documentation for me to even be able to give feedback on it. I also don't know what inv_res is or means. | 2022-01-23 14:40:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34849584102630615, "perplexity": 3072.842475687372}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304287.0/warc/CC-MAIN-20220123141754-20220123171754-00187.warc.gz"} |
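If the intent of include and exclude was a whitelist and a blacklist of symbol names, one possible shape for the function (only a sketch, keeping the question's Python 2 idioms) would be:

def create_symbol_module_d(module_name_module_d, include=None, exclude=None):
    """Map each symbol name to the module that defines it.

    include: if given, keep only symbols listed here.
    exclude: if given, drop symbols listed here.
    """
    inv_res = {}
    for mod in module_name_module_d.itervalues():
        for sym in dir(mod):
            if include is not None and sym not in include:
                continue
            if exclude is not None and sym in exclude:
                continue
            inv_res[sym] = mod
    return inv_res

That at least makes the two parameters do something; whether a whitelist, a blacklist, or both is actually wanted is something only the original author can answer.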
https://physics.aps.org/synopsis-for/10.1103/PhysRevB.82.180513 | # Synopsis: Outstanding in the field
Iron arsenide superconductors exhibit a surprising capacity to prevent magnetic field penetration.
When a superconductor in a magnetic field is cooled below its transition temperature, it expels the field. The details of how this Meissner effect occurs in real materials depend on the shape and physical properties of the sample, the type of superconductivity it exhibits, and the experimental conditions, but all superconductors are expected to behave as follows: As long as the external magnetic field is below a certain critical value, currents on the surface of the superconductor will form to cancel the field inside, but above this critical field magnetic flux will start to penetrate. The experimental signature of this field dependence is that the magnetization of the superconductor, which opposes the applied field, reaches a maximum at the critical field and then starts to decrease.
However, in a Rapid Communication published in Physical Review B, Ruslan Prozorov and collaborators from Ames Laboratory at Iowa State University, US, and the Institute of Physics of the Chinese Academy of Science in Beijing, show that two iron arsenide (pnictide) superconductors seem to defy this expected behavior. The researchers find that the magnetizations of $\text{Ba}(\text{Fe}_{0.926}\text{Co}_{0.074})_2\text{As}_2$ and $\text{Ba}_{0.6}\text{K}_{0.4}\text{Fe}_2\text{As}_2$ continually increase in an approximately linear fashion, without reaching a maximum, even when the applied field far exceeds the estimated critical field for either material. Based on their results, Prozorov et al. suggest that the magnetic field suppresses magnetic scattering, yielding more resilient Cooper pairing in iron arsenide compounds than in other superconductors, but more experiments are needed to support or refute this proposal. – Matthew Eager
https://plainmath.net/algebra-ii/5692-write-the-final-factorization-for-each-problem-12a-3-plus-20a-2b-9ab-2-15b | allhvasstH
2021-02-25
Write the final factorization for each problem.
$12{a}^{3}+20{a}^{2}b-9a{b}^{2}-15{b}^{3}$
Demi-Leigh Barrera
Step 1
The GCF of $12{a}^{3}$ and $20{a}^{2}b$ is $4{a}^{2}$.
The GCF of $-9a{b}^{2}$ and $-15{b}^{3}$ is $-3{b}^{2}$.
Factor out $4{a}^{2}$ from the first two terms and then factor out $-3{b}^{2}$ from the last two terms.
$12{a}^{3}+20{a}^{2}b-9a{b}^{2}-15{b}^{3}$
$=4{a}^{2}\left(3a+5b\right)-3{b}^{2}\left(3a+5b\right)$
Step 2
Then we can factor out (3a+5b) from both terms.
$4{a}^{2}\left(3a+5b\right)-3{b}^{2}\left(3a+5b\right)$
$=\left(3a+5b\right)\left(4{a}^{2}-3{b}^{2}\right)$
Result: $\left(3a+5b\right)\left(4{a}^{2}-3{b}^{2}\right)$
Jeffrey Jordon | 2023-03-23 19:47:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 10, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7190661430358887, "perplexity": 2205.2207592821724}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945183.40/warc/CC-MAIN-20230323194025-20230323224025-00083.warc.gz"} |
https://math.stackexchange.com/questions/2867312/calculate-wave-speed-and-amplitude-when-solving-pde-numerically | # Calculate wave speed and amplitude when solving PDE numerically
I'm an amateur in math. I have a system of nine PDEs. The system is huge and I solve it numerically by an explicit finite difference scheme. (Figure: the stencil I use.)
One of the PDEs is a reaction-diffusion equation that creates a wave. It has the form:
\begin{align*} \frac{\partial}{\partial t}T(x,y,t)=D\Delta T(x,y,t) + R(T(x,y,t)) - F(T(x,y,t)) \end{align*}
$T(x,y,t)$ is the target function. $x,y$ are the $2D$ space coordinates. $t$ is time. $R, F$ are reaction terms that depend on the other PDEs in the system.
Can I calculate how the wave of $T(x,y,t)$ is spreading? Its velocity and amplitude? I would be very grateful for a link to simple and clear materials about it.
(Animation: an example of the waves spreading.)
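One low-tech way to get numbers out of a simulation like this is to pick a threshold value of $T$, record how far from the source that level has travelled at each saved time step, and fit a line to radius versus time; the slope is the front speed, and the amplitude can be tracked as the maximum of $T$ per snapshot. A sketch follows (it assumes the snapshots are 2D NumPy arrays and that the wave starts at a known grid point; both are assumptions, since the actual data layout isn't specified here):

import numpy as np

def front_speed(snapshots, times, center, dx, threshold):
    """Estimate the radial speed of an outgoing wave front.

    snapshots: list of 2D arrays T(x, y), one per time in `times`
    center:    (row, col) grid index the wave emanates from
    dx:        grid spacing
    threshold: value of T that defines the front
    Returns (speed, radii, amplitudes).
    """
    ny, nx = snapshots[0].shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    dist = np.hypot(yy - center[0], xx - center[1]) * dx
    radii, amplitudes = [], []
    for T in snapshots:
        above = dist[T >= threshold]
        radii.append(above.max() if above.size else 0.0)
        amplitudes.append(T.max())
    speed = np.polyfit(times, radii, 1)[0]  # slope of radius vs. time
    return speed, radii, amplitudes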
• Are you working with plane waves? And what kind of velocity would you like to have? For example, there are the phase velocity: en.wikipedia.org/wiki/Phase_velocity and the group velocity: en.wikipedia.org/wiki/Group_velocity – Botond Jul 30 '18 at 19:02
• @Botond Thank you for good questions. I work with 2d case. The wave is like a circle on the water surface. I've adjusted my post with a gif of it. Either of velocities will do fine for me. The one that is simpler to acquire is better. – vogdb Jul 30 '18 at 19:44
• Is linearization an option? – Botond Jul 30 '18 at 20:01
• I didn't understand about linearization. Linearization of what? Currently I'm solving numerically, so all functions calculations are linearised already. No? – vogdb Jul 31 '18 at 7:11 | 2019-06-26 14:23:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5277409553527832, "perplexity": 889.9342880825945}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000353.82/warc/CC-MAIN-20190626134339-20190626160339-00157.warc.gz"} |
https://answers.opencv.org/questions/12987/revisions/ | # Revision history [back]
### What is the meaning of cv::Point::cross()?
I noticed in the headers that cv::Point has a .cross() method returning a double.
Had it returned a cv::Point3d, it would have made sense as the cross-product of two 2D homogeneous points with the last coefficient implicitly being set to 1 (in this case, if the 2 points were 2D points in the image plane then the results is the 2D line passing through them).
In fact, the result of cv::Point::cross() is actually the 3rd element of the 3D cross-product above.
What is the intent and/or geometric meaning of the existing method?
More specifically, is there a projective interpretation for this value, given that 2D points are represented by 3-element vectors in projective space? | 2021-05-12 05:06:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5511317253112793, "perplexity": 699.675209760046}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991252.15/warc/CC-MAIN-20210512035557-20210512065557-00100.warc.gz"} |
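For a quick numerical feel for that value, here is the 2D formula x1*y2 - y1*x2 spelled out in plain Python rather than through the OpenCV API:

def cross2d(a, b):
    """Scalar 2D cross product: the z-component of (a.x, a.y, 0) x (b.x, b.y, 0)."""
    return a[0] * b[1] - a[1] * b[0]

p, q = (3.0, 1.0), (1.0, 2.0)
print(cross2d(p, q))           # 5.0: twice the signed area of the triangle (0, p, q)
print(cross2d(q, p))           # -5.0: the sign flips with orientation
print(cross2d(p, (6.0, 2.0)))  # 0.0: collinear vectors give zero

So the sign tells you on which side of the first vector the second one lies, and the magnitude is twice the area they span. In projective terms it is the last coordinate of the cross product of the two points lifted to (x, y, 1), i.e. the last coordinate of the line through them, which matches the reading suggested in the question.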
https://sytech.co.zw/html5-canvas-tutorial/creating-canvas-fills-and-gradients/ | ## Creating canvas fills and gradients
Gradients can be created using the following methods:
• createLinearGradient(x1, y1, x2, y2) which creates a linear gradient object, where (x1; y1) is the starting point and (x2; y2) is the ending point of the gradient.
• createRadialGradient(x1, y1, r1, x2, y2, r2) which creates a radial (circular) gradient object, where (x1; y1) and r1 are the center and radius of the starting circle, and (x2, y2) and r2 are the center and radius of the ending circle of the gradient.
• Gradient.addColorStop(offset, color) which adds a color stop to the gradient object, where the offset is a decimal number between 0 and 1 giving the position of the color between the starting point and the ending point of the gradient.
Both the createLinearGradient(x1, y1, x2, y2) and createRadialGradient(x1, y1, r1, x2, y2, r2) return a CanvasGradient object which can be manipulated using the addColorStop(offset, color) method of the CanvasGradient object itself.
Besides accepting plain colors as values, both the fillStyle and strokeStyle attributes also accept gradient colors in the form of the CanvasGradient object.
### createLinearGradient(x1, y1, x2, y2)
The first thing to do when creating a linear gradient is to create a CanvasGradient object using the createLinearGradient() method. In the first example we are going to draw a gradient from the canvas origin in the top left all the way to the bottom left.
The linear gradient produced extends perpendicular to the line from the starting point to the ending point. So to create a linear gradient going from top to bottom we need to use the top-left (0; 0) and the bottom-left (0; canvas.height) coordinates of the canvas as the starting and ending points respectively.
After defining a CanvasGradient object we apply color to it using two calls to the addColorStop() method of the CanvasGradient object.
Now lets create the example code:
var canvas = document.getElementById("canvas1");
var context = canvas.getContext("2d");
//create the linear gradient object
var gradient = context.createLinearGradient(0, 0, 0, canvas.height);
//set the starting gradient color to black
gradient.addColorStop(0, "black");
//set the ending gradient color to blue
gradient.addColorStop(1, "blue");
//assign the gradient to the fillStyle
context.fillStyle = gradient;
//create a rectangle filled with the gradient
context.fillRect(0, 0, canvas.width, canvas.height);
Changing the starting color offset to 0.5 in the example above shifts the starting color of the gradient to 0.5 of the canvas height as shown in the following example:
var canvas = document.getElementById("canvas1");
var context = canvas.getContext("2d");
//create the linear gradient object
var gradient = context.createLinearGradient(0, 0, 0, canvas.height);
//set the starting gradient color to black, starting at an offset of 0.5
gradient.addColorStop(0.5, "black");
//set the ending gradient color to blue
gradient.addColorStop(1, "blue");
//assign the gradient to the fillStyle
context.fillStyle = gradient;
//create a rectangle filled with the gradient
context.fillRect(0, 0, canvas.width, canvas.height);
### createRadialGradient(x1, y1, r1, x2, y2, r2)
var canvas = document.getElementById("canvas1");
var context = canvas.getContext("2d");
//choose the center points of the gradient
var cX = canvas.width/2;
var cY = canvas.height/2;
//create the radial gradient object (radii and colors chosen here for illustration)
var gradient = context.createRadialGradient(cX, cY, 10, cX, cY, canvas.height/2);
gradient.addColorStop(0, "black");
gradient.addColorStop(1, "blue");
//assign the gradient to the fillStyle and fill the canvas
context.fillStyle = gradient;
context.fillRect(0, 0, canvas.width, canvas.height);
### Example using both linear and radial gradients
var canvas = document.getElementById("canvas1");
var context = canvas.getContext("2d");
//set the x and y coordinates of the ball at the center of the canvas
var ball_x = canvas.width/2;
var ball_y = canvas.height/2;
//create linear gradient from the top left to the bottom right
var gradient = context.createLinearGradient(0, 0, canvas.width, canvas.height);
//set gradient color start (color values chosen arbitrarily for the example)
gradient.addColorStop(0, "black");
//set gradient color end
gradient.addColorStop(1, "blue");
//use the gradient as fillstyle
context.fillStyle = gradient;
https://codegolf.stackexchange.com/questions/33137/good-versus-evil/34409 | # Results - July 19, 2014
The current King of the Hill is Mercenary by user Fabigler! Keep submitting entries and knock him off of his throne!
Programs submitted on or before July 19, 2014 were included. All other submissions will be included in future trials. New results should be posted around August 9, so that gives you plenty of time.
Illustrated by Chris Rainbolt, my brother and a fresh graduate from Savannah College of Art and Design
# Introduction
The angels and demons are fighting and, as usual, using earth as their battleground. Humans are stuck in the middle and are being forced to take sides. An unknown neutral force rewards those who consistently fight for the losing side.
# The Game
Each trial, you will be pseudorandomly paired and then shuffled with between 20 and 30 other submissions. Each trial will consist of 1000 rounds. Each round, you will be passed an input and be expected to produce output. Your output will be recorded and scored. This process will be repeated 1000 times.
Input
You will receive a single argument that represents the past votes of each player. Rounds are delimited by comma. A 0 represents a player who sided with Evil that round. A 1 represents a player who sided with Good. Within a trial, the players will always be in the same order. Your own vote will be included, but not explicitly identified. For example:
101,100,100
In this example, three rounds have been completed and three players are competing. Player one always sided with Good. Player two always sided with Evil. Player three swapped from Good in round 1 to Evil in rounds 2 and 3. One of those players was you.
Output
Java Submissions
• Return the string good if you want to side with Good.
• Return the string evil if you want to side with Evil.
Non-Java Submissions
• Output the string good to stdout if you want to side with Good.
• Output the string evil to stdout if you want to side with Evil.
If your program outputs or returns anything else, throws an exception, does not compile, or takes longer than one second to output anything on this exact machine, then it will be disqualified.
Scoring
Scores will be posted in a Google docs spreadsheet for easy viewing as soon as I can compile all the current entries. Don't worry - I will keep running trials for as long as you guys keep submitting programs!
• You receive 3 points for siding with the majority during a round.
• You receive n - 1 points for siding with the minority during a round, where n is the number of consecutive times you have sided with the minority.
Your score will be the median of 5 trials. Each trial consists of 1000 rounds.
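For example, a player who sides with the minority in four consecutive rounds earns 0 + 1 + 2 + 3 = 6 points over those rounds, while siding with the majority in each of those four rounds would earn 3 + 3 + 3 + 3 = 12.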
# Deliverables
Non-Java Submissions
You must submit a unique title, a program, and a Windows command line string that will run your program. Remember that an argument may be appended to that string. For example:
• python Angel.py
• Note that this one has no args. This is round one! Be prepared for this.
• python Angel.py 11011,00101,11101,11111,00001,11001,11001
Java Submissions
You must submit a unique title and a Java class that extends the abstract Human class written below.
public abstract class Human {
public abstract String takeSides(String history) throws Exception;
}
# Testing
If you want to test your own submission, follow the instructions here.
You may submit as many different submissions as you want. Submissions that appear to be colluding will be disqualified. The author of this challenge will be the only judge on that matter.
A new instance of your program or Java class will be created every time it is called upon. You may persist information by writing to a file. You may not modify the structure or behavior of anything except your own class.
Players will be shuffled before the trial starts. Demon and Angel will participate in every trial. If the number of players is even, Petyr Baelish will also join. Demon fights for Evil, Angel for Good, and Petyr Baelish chooses a pseudorandom side.
• Comments purged, as they were obsolete and at request of OP. Please notify me of any comments that need to be undeleted. – Doorknob Jul 16 '14 at 11:28
• Woah, OP changes his username. Ok, so when will the result be displayed? – justhalf Jul 18 '14 at 6:52
• @Rainbolt This must be one freakin' hell of a job, running this challenge! The reason for this amount of attention is the simplicity of the protocol and the rules, making it accessible while also allowing simple, working entries. TL;DR: Your challenge is too good! :D – tomsmeding Jul 19 '14 at 18:54
• @dgel I'll post the raw data, upper, lower, averages, and maybe a line chart so we can see who did better as the competition dragged on. – Rainbolt Jul 21 '14 at 12:03
• One of the pods ended up with 10 entries that voted the same way every single time. Consequently, two users ended up with perfect or "one round short of perfect" scores of around 450,000. The same entries scored around 1900 in other trials. The average score is close to 2000. Because of the extreme imbalance in results, I decided that a more meaningful number would be a median. I edited the challenge so that after 5 trials, the winner will be the submission with the highest median. If anyone thinks that moving from mean to median is unfair or otherwise a poor choice, please comment. – Rainbolt Jul 21 '14 at 18:57
## The Mercenary
Always sides with the one who paid the most money last round.
Taking into account that good people earn statistically more.
package Humans;
public class Mercenary extends Human {
public String takeSides(String history) {
// first round random!
if (history.length() == 0) {
return Math.random() >= 0.5 ? "good" : "evil";
}
String[] rounds = history.split(",");
String lastRound = rounds[rounds.length - 1];
double goodMoneyPaid = 0;
double evilMoneyPaid = 0;
for (char c : lastRound.toCharArray()) {
switch (c) {
case '0':
goodMoneyPaid = goodMoneyPaid + 0.2; //statistically proven: good people have more reliable incomes
break;
case '1':
evilMoneyPaid++;
break;
default:
break;
}
}
if (goodMoneyPaid > evilMoneyPaid)
{
return "good";
} else {
return "evil";
}
}
}
• This is the second post to say something about money. Am I missing a reference or something? – Rainbolt Jul 14 '14 at 21:31
• True, but this guy is an even more evil bastard. Deserting his pals every turn, only for the sake of money. – fabigler Jul 14 '14 at 21:34
• Your switch statement was missing a return statement for the default case, causing it to not compile. I added a random one. – Rainbolt Jul 15 '14 at 0:47
• Congratulations, King of the Hill! I don't understand how this entry wins. Care to add an explanation, now that it has a 300 reputation bounty attached to it? – Rainbolt Jul 22 '14 at 16:10
• Possibly a bug, or I misunderstood the comments and description, but the Mercenary doesn't actually do what it was meant to do. Except for the first random round, he will always side with evil unless less than 1/6 of the people voted for evil on the previous round. – jaybz Jul 23 '14 at 11:17
## Hipster, Ruby
if ARGV.length == 0
puts ["good", "evil"].sample
else
last_round = ARGV[0].split(',').last
n_players = last_round.length
puts last_round.count('1') > n_players/2 ? "evil" : "good"
end
Simply goes with last round's minority, just because everything else is mainstream.
Run like
ruby hipster.rb
# Petyr Baelish
You never know whose side Petyr Baelish is on.
package Humans;
/**
* Always keep your foes confused. If they are never certain who you are or
* what you want, they cannot know what you are likely to do next.
* @author Rusher
*/
public class PetyrBaelish extends Human {
/**
* Randomly take the side of good or evil.
* @param history The past votes of every player
* @return A String "good" or "evil
*/
public String takeSides(String history) {
return Math.random() < 0.5 ? "good" : "evil";
}
}
This entry will only be included if the number of players is even. This ensures that there will always be a majority.
• On Petyr Baelish's side, obviously. – Cthulhu Jul 9 '14 at 7:25
• @Kevin It consistently beats most of the bots. It usually scores 27ish. – cjfaure Jul 9 '14 at 20:17
• @Kevin This entry was submitted by the author of the challenge. It wasn't meant to do well. It exists to make sure that there will always be a majority, because with an even number of players, there could be a tie. – Rainbolt Jul 9 '14 at 20:17
• Why oh God why has this one got the most votes? It's just not fair. – tomsmeding Jul 9 '14 at 20:27
• @tomsmeding No. It's a quote from Game of Thrones lol. – Rainbolt Jul 9 '14 at 20:55
## C++, The Meta Scientist
This one does essentially the same as The Scientist, but doesn't operate on rounds as a whole but on the individual players. It tries to map a wave (or a constant function) to each player separately and predicts their move in the next round. From the resulted round prediction, The Meta Scientist chooses whichever side looks like having a majority.
#include <iostream>
#include <utility>
#include <cstdlib>
#include <cstring>
#if 0
#define DBG(st) {st}
#else
#define DBG(st)
#endif
#define WINDOW (200)
using namespace std;
int main(int argc,char **argv){
if(argc==1){
cout<<(rand()%2?"good":"evil")<<endl;
return 0;
}
DBG(cerr<<"WINDOW="<<WINDOW<<endl;)
int nump,numr;
nump=strchr(argv[1],',')-argv[1];
numr=(strlen(argv[1])+1)/(nump+1);
int period,r,p;
int score,*scores=new int[WINDOW];
int max; //some score will always get above 0, because if some score<0, the inverted wave will be >0.
int phase,phasemax;
int predicted=0; //The predicted number of goods for the next round
int fromround=numr-WINDOW;
if(fromround<0)fromround=0;
pair<int,int> maxat; //period, phase
DBG(cerr<<"Players:"<<endl;)
for(p=0;p<nump;p++){
DBG(cerr<<" p"<<p<<": ";)
for(r=fromround;r<numr;r++)if(argv[1][r*(nump+1)+p]!=argv[1][p])break;
if(r==numr){
DBG(cerr<<"All equal! prediction="<<argv[1][p]<<endl;)
predicted+=argv[1][(numr-1)*(nump+1)+p]-'0';
continue;
}
max=0;
maxat={-1,-1};
for(period=1;period<=WINDOW;period++){
scores[period-1]=0;
phasemax=-1;
for(phase=0;phase<2*period;phase++){
score=0;
for(r=fromround;r<numr;r++){
if(argv[1][r*(nump+1)+p]-'0'==1-(r+phase)%(2*period)/period)score++;
else score--;
}
if(score>scores[period-1]){
scores[period-1]=score;
phasemax=phase;
}
}
if(scores[period-1]>max){
max=scores[period-1];
maxat.first=period;
maxat.second=phasemax;
}
DBG(cerr<<scores[period-1]<<" ";)
}
DBG(cerr<<"(max="<<max<<" at {"<<maxat.first<<","<<maxat.second<<"})"<<endl;)
DBG(cerr<<" prediction: 1-("<<numr<<"+"<<maxat.second<<")%(2*"<<maxat.first<<")/"<<maxat.first<<"="<<(1-(numr+maxat.second)%(2*maxat.first)/maxat.first)<<endl;)
predicted+=(1-(numr+maxat.second)%(2*maxat.first)/maxat.first);
}
DBG(cerr<<"Predicted outcome: "<<predicted<<" good + "<<(nump-predicted)<<" evil"<<endl;)
if(predicted>nump/2)cout<<"evil"<<endl; //pick minority
else cout<<"good"<<endl;
delete[] scores;
return 0;
}
If you want to turn on debug statements, change the line reading #if 0 to #if 1.
Compile with g++ -O3 -std=c++0x -o MetaScientist MetaScientist.cpp (you don't need warnings, so no -Wall) and run with MetaScientist.exe (possibly including the argument of course). If you ask really nicely I can provide you with a Windows executable.
EDIT: Apparently, the previous version ran out of time around 600 rounds into the game. This shouldn't do that. Its time consumption is controlled by the #define WINDOW (...) line, more is slower but looks further back.
• I humbly suggest you try picking the losing side. If you can consistently guess correctly, you'll get more than 3 points per round. – Kevin Jul 9 '14 at 20:18
• @Kevin That's true, but I figured that it might guess the wrong side pretty quickly, and you need to correctly guess the losing side more than seven times in a row to get an improvement over always getting the majority right. I might change it though. – tomsmeding Jul 9 '14 at 20:23
• @Kevin Also, I'd first like to see how these do (Scientist and Meta Scientist) when Rusher gets us a scoreboard this weekend, as he indicated in the comments to the OP. Rusher, sorry, but I'm too lazy to compile all the stuff myself... :) – tomsmeding Jul 9 '14 at 20:28
• No worries! It probably isn't safe to run these anyway. Just let me screw up my machine with code written by 50 strangers on the Internet. – Rainbolt Jul 9 '14 at 20:39
• @Kevin But that's so MANY! I can, indeed, but I don't like it. I'll see how these fare. – tomsmeding Jul 9 '14 at 20:43
# Angel
The purest player of all.
Program
print "good"
Command
python Angel.py
• Python is a good language. It seems only natural that the Angel should use it. – jpmc26 Jul 10 '14 at 1:24
• May I remind people that a Python is a Snake. A Serpent. – Mr Lister Jul 15 '14 at 7:52
• @MrLister May I remind you that Lucifer was a great Angel before God cast him out of heaven? – Zibbobz Jul 18 '14 at 18:36
• @Zibbobz Yeah... shame really, that they fell out. They could have achieved so much together. – Mr Lister Jul 18 '14 at 20:03
# Artemis Fowl
package Humans;
public class ArtemisFowl extends Human {
public final String takeSides(String history) {
int good = 0, evil = 0;
for(int i = 0; i < history.length(); i++) {
switch(history.charAt(i)) {
case '0': evil++; break;
case '1': good++; break;
}
}
if(good % 5 == 0){
return "good";
} else if (evil % 5 == 0){
return "evil";
} else {
if(good > evil){
return "good";
} else if(evil > good){
return "evil";
} else {
return Math.random() >= 0.5 ? "good" : "evil";
}
}
}
}
In Book 7, The Atlantis Complex, Artemis Fowl contracted a psychological disease (called Atlantis complex) that forced him to do everything in multiples of 5 (speaking, actions, etc). When he couldn't do it in some multiple of 5, he panicked. I do basically that: see if good or evil (intentional bias) is divisible by 5, if neither is, then I panic & see which was greater & run with that or panic even further & randomly choose.
• When I read Artemis Fowl in Junior High, only two books existed. It's nice to see that there are now seven, and that Disney is making it into a movie. – Rainbolt Jul 9 '14 at 1:11
• There's actually 8 books. – Kyle Kanos Jul 9 '14 at 1:12
• The more the merrier (unless you are reading The Wheel of Time) – Rainbolt Jul 9 '14 at 1:13
• And you forgot break; in your switch. – johnchen902 Jul 9 '14 at 12:33
• @johnchen902,@Manu: I am not very experienced in java (I use Fortran90+ & only see java here), hence my errors. I'll fix them when I get into the office in an hour. – Kyle Kanos Jul 9 '14 at 12:37
Disparnumerophobic
Odd numbers are terrifying.
package Humans;
public class Disparnumerophobic extends Human {
public final String takeSides(String history) {
int good = 0, evil = 0;
for(int i = 0; i < history.length(); i++) {
switch(history.charAt(i)) {
case '0': evil++; break;
case '1': good++;
}
}
if(good%2 == 1 && evil%2 == 0) return "evil";
if(evil%2 == 1 && good%2 == 0) return "good";
// well shit....
return Math.random() >= 0.5 ? "good" : "evil";
}
}
• Comment made me laugh/snort. – phyrfox Jul 9 '14 at 3:50
# Linus, Ruby
Seeks to confound analysts by always breaking the pattern.
num_rounds = ARGV[0].to_s.count(',')
LINUS_SEQ = 0xcb13b2d3734ecb4dc8cb134b232c4d3b2dcd3b2d3734ec4d2c8cb134b234dcd3b2d3734ec4d2c8cb134b23734ecb4dcd3b2c4d232c4d2c8cb13b2d3734ecb4dcb232c4d2c8cb13b2d3734ecb4dc8cb134b232c4d3b2dcd3b2d3734ec4d2c8cb134b234dcd3b2d3734ec4d2c8cb134b23734ecb4dcd3b2c4d2c8cb134b2
puts %w[good evil][LINUS_SEQ[num_rounds]]
Save as linus.rb and run with ruby linus.rb
# The BackPacker
Determines the player who has sided with the minority the most so far and copies that player's last vote.
package Humans;
public class BackPacker extends Human {
// toggles whether the BackPacker thinks majority is better vs. minority is better
private static final boolean goWithMajority = false;
@Override
public final String takeSides(String history) {
if (history == null || history.equals(""))
return "evil";
String[] roundVotes = history.split(",");
int players = roundVotes[0].length();
int[] winningPlayers = new int[players];
for (String nextRound : roundVotes) {
boolean didGoodWin = didGoodWin(nextRound, players);
for (int player = 0; player < nextRound.length(); player++) {
boolean playerVotedGood = nextRound.charAt(player) == '1';
winningPlayers[player] += didPlayerWin(didGoodWin, playerVotedGood);
}
}
int bestScore = -1;
for (int nextPlayer : winningPlayers)
if (bestScore < nextPlayer)
bestScore = nextPlayer;
int bestPlayer = 0;
for (int ii = 0; ii < players; ii++) {
if (winningPlayers[ii] == bestScore) {
bestPlayer = ii;
break;
}
}
return "good";
return "evil";
}
private int didPlayerWin(boolean didGoodWin, boolean playerVotedGood) {
if(goWithMajority) {
return ((didGoodWin && playerVotedGood) || (!didGoodWin && !playerVotedGood)) ? 1 : 0;
} else {
return ((!didGoodWin && playerVotedGood) || (didGoodWin && !playerVotedGood)) ? 1 : 0;
}
}
private boolean didGoodWin(String round, int players) {
int good = 0;
for (char next : round.toCharArray())
good += next == '1' ? 1 : 0;
return (good * 2) > players;
}
}
# The CrowdFollower
Determines the player who has sided with the majority the most so far and copies that player's last vote.
package Humans;
public class CrowdFollower extends Human {
// toggles whether the CrowdFollower thinks majority is better vs. minority is better
private static final boolean goWithMajority = true;
@Override
public final String takeSides(String history) {
if (history == null || history.equals(""))
return "evil";
String[] roundVotes = history.split(",");
int players = roundVotes[0].length();
int[] winningPlayers = new int[players];
for (String nextRound : roundVotes) {
boolean didGoodWin = didGoodWin(nextRound, players);
for (int player = 0; player < nextRound.length(); player++) {
boolean playerVotedGood = nextRound.charAt(player) == '1';
winningPlayers[player] += didPlayerWin(didGoodWin, playerVotedGood);
}
}
int bestScore = -1;
for (int nextPlayer : winningPlayers)
if (bestScore < nextPlayer)
bestScore = nextPlayer;
int bestPlayer = 0;
for (int ii = 0; ii < players; ii++) {
if (winningPlayers[ii] == bestScore) {
bestPlayer = ii;
break;
}
}
return "good";
return "evil";
}
private int didPlayerWin(boolean didGoodWin, boolean playerVotedGood) {
if(goWithMajority) {
return ((didGoodWin && playerVotedGood) || (!didGoodWin && !playerVotedGood)) ? 1 : 0;
} else {
return ((!didGoodWin && playerVotedGood) || (didGoodWin && !playerVotedGood)) ? 1 : 0;
}
}
private boolean didGoodWin(String round, int players) {
int good = 0;
for (char next : round.toCharArray())
good += next == '1' ? 1 : 0;
return (good * 2) > players;
}
}
• Very clean program! – Rainbolt Jul 10 '14 at 13:21
• Whoops, I think I may have copied your program in a different language. – PyRulez Jul 14 '14 at 0:55
• @Rusher I updated the code and would like to add this as two entries, one with goWithMajority = true and one where its false. Is that okay, or do I need to add a second BackPacker for this? – Angelo Fuchs Jul 17 '14 at 9:30
• @AngeloNeuschitzer I edited this post. This way, I won't forget to add both submissions. I suggest you change the really uncreative name I gave it, and maybe add a description to both if you want. – Rainbolt Jul 17 '14 at 13:03
• @Rainbolt I like your FrontPacker better, actually. Lol'd. – tomsmeding Jul 19 '14 at 18:57
## Fortune Teller
This is still work in progress. I haven't tested it yet. I just wanted to see if the OP thinks it breaks the rules or not.
The idea is to simulate the next round by executing all other participants a few times to get a probability of the outcome and act accordingly.
package Humans;
import java.io.File;
import java.io.IOException;
import java.io.UnsupportedEncodingException;
import java.net.JarURLConnection;
import java.net.URL;
import java.net.URLConnection;
import java.net.URLDecoder;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import sun.net.www.protocol.file.FileURLConnection;
public class FortuneTeller extends Human {
/**
* Code from http://stackoverflow.com/a/22462785 Private helper method
*
* @param pckgname The package name to search for. Will be needed for
* getting the Class object.
* @param classes the current ArrayList of all classes found so far
* @throws ClassNotFoundException if a file isn't loaded but still is in the directory
*/
private static void checkDirectory(File directory, String pckgname,
ArrayList<Class<?>> classes) throws ClassNotFoundException {
File tmpDirectory;
if (directory.exists() && directory.isDirectory()) {
final String[] files = directory.list();
for (final String file : files) {
if (file.endsWith(".class")) {
try {
classes.add(Class.forName(pckgname + '.'
+ file.substring(0, file.length() - 6)));
} catch (final NoClassDefFoundError e) {
// do nothing. this class hasn't been found by the
// loader, and we don't care.
}
} else if ((tmpDirectory = new File(directory, file))
.isDirectory()) {
checkDirectory(tmpDirectory, pckgname + "." + file, classes);
}
}
}
}
/**
* Private helper method.
*
* @param connection the connection to the jar
* @param pckgname the package name to search for
* @param classes the current ArrayList of all classes. This method will add new classes to it.
* @throws ClassNotFoundException if a file isn't loaded but still is in the
* jar file
* @throws IOException if it can't correctly read from the jar file.
*/
private static void checkJarFile(JarURLConnection connection,
String pckgname, ArrayList<Class<?>> classes)
throws ClassNotFoundException, IOException {
final JarFile jarFile = connection.getJarFile();
final Enumeration<JarEntry> entries = jarFile.entries();
String name;
for (JarEntry jarEntry = null; entries.hasMoreElements()
&& ((jarEntry = entries.nextElement()) != null);) {
name = jarEntry.getName();
if (name.contains(".class")) {
name = name.substring(0, name.length() - 6).replace('/', '.');
if (name.contains(pckgname)) {
classes.add(Class.forName(name));
}
}
}
}
/**
* Attempts to list all the classes in the specified package as determined
* by the context class loader
*
* @param pckgname the package name to search
* @return a list of classes that exist within that package
* @throws ClassNotFoundException if something went wrong
*/
private static ArrayList<Class<?>> getClassesForPackage(String pckgname)
throws ClassNotFoundException {
final ArrayList<Class<?>> classes = new ArrayList<Class<?>>();
try {
final ClassLoader cld = Thread.currentThread().getContextClassLoader();
if (cld == null) {
throw new ClassNotFoundException("Can't get class loader.");
}
final Enumeration<URL> resources = cld.getResources(pckgname
.replace('.', '/'));
URLConnection connection;
for (URL url = null; resources.hasMoreElements()
&& ((url = resources.nextElement()) != null);) {
try {
connection = url.openConnection();
if (connection instanceof JarURLConnection) {
checkJarFile((JarURLConnection) connection, pckgname,
classes);
} else if (connection instanceof FileURLConnection) {
try {
checkDirectory(
new File(URLDecoder.decode(url.getPath(),
"UTF-8")), pckgname, classes);
} catch (final UnsupportedEncodingException ex) {
throw new ClassNotFoundException(
pckgname
+ " does not appear to be a valid package (Unsupported encoding)",
ex);
}
} else {
throw new ClassNotFoundException(pckgname + " ("
+ url.getPath()
+ ") does not appear to be a valid package");
}
} catch (final IOException ioex) {
throw new ClassNotFoundException(
"IOException was thrown when trying to get all resources for "
+ pckgname, ioex);
}
}
} catch (final NullPointerException ex) {
throw new ClassNotFoundException(
pckgname
+ " does not appear to be a valid package (Null pointer exception)",
ex);
} catch (final IOException ioex) {
throw new ClassNotFoundException(
"IOException was thrown when trying to get all resources for "
+ pckgname, ioex);
}
return classes;
}
private static boolean isRecursiveCall = false;
private static ArrayList<Class<?>> classes;
static {
if (classes == null) {
try {
classes = getClassesForPackage("Humans");
} catch (ClassNotFoundException ex) {
}
}
}
private String doThePetyrBaelish() {
return Math.random() >= 0.5 ? "good" : "evil";
}
@Override
public String takeSides(String history) {
if (isRecursiveCall) {
return doThePetyrBaelish();
}
isRecursiveCall = true;
int currentRoundGoodCount = 0;
float probabilityOfGood = 0;
int roundCount = 0;
int voteCount = 0;
do {
for (int i = 0; i < classes.size(); i++) {
try {
if (classes.get(i).getName() == "Humans.FortuneTeller") {
continue;
}
Human human = (Human) classes.get(i).newInstance();
String response = human.takeSides(history);
switch (response) {
case "good":
currentRoundGoodCount++;
voteCount++;
break;
case "evil":
voteCount++;
break;
default:
break;
}
} catch (Exception e) {
}
}
probabilityOfGood = (probabilityOfGood * roundCount
+ (float) currentRoundGoodCount / voteCount) / (roundCount + 1);
roundCount++;
currentRoundGoodCount = 0;
voteCount = 0;
} while (roundCount < 11);
isRecursiveCall = false;
if (probabilityOfGood > .7) {
return "evil";
}
if (probabilityOfGood < .3) {
return "good";
}
return doThePetyrBaelish();
}
}
• If your bot runs all the other bots each turns before answering, won't it take more than 1s to answer? – plannapus Jul 9 '14 at 12:41
• @plannapus I'm going to guess the assumption with this bot is that everyone else is going to err on the side of caution and avoid anything close 1 seconds worth of wait. I'm thinking it may be worthwhile submitting and entry that consists of a 0.9 second wait, before returning "good", just to mess with him. Actually, SBoss has beat me to it :D – scragar Jul 9 '14 at 12:44
• Yahhh! Then I would have to blacklist that bot in my code. That would be frustrating... Also with different entries in different environments like Python or Perl the reprated loading of the interpreter might just be enough to bring this code above the time limit. – Andris Jul 9 '14 at 12:55
• If someone else does the same thing as this, you get an infinite loop. – Brilliand Jul 9 '14 at 17:10
• The submission timed out. I attached a profiler, and nearly half a second is spent calling some submissions. It at least works though, so congrats for that. – Rainbolt Jul 14 '14 at 0:57
## C++, The Scientist
This one takes the history of what the majority chose per round as a wave (majority() gives the majority's choice on a round) and fits a square wave of wavelength 2*period and phase phase to that data. Thus, given 0,1,1,1,0,1,0,1,1,1,0,0,0,1,0 it selects period=3, phase=5 (maxat=={3,5}): its per-period scores come out as 9 3 11 5 5 3 5 7 9 7 7 7 7 7 7. It loops over all possible periods and, if the best score for that period is higher than the current maximum, it stores the {period,phase} at which that occurred.
It then extrapolates the fitted wave to the next round and votes for the opposite of the predicted majority, aiming to land in the minority.
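(Not part of the entry: for anyone who wants to poke at the fitting step without a compiler, below is a rough Python sketch of the same scoring loop. It mirrors the square-wave formula used in the C++ code that follows, but ignores the WINDOW/look-back handling; the example sequence is the one quoted above.)

def best_fit(wins):
    # score every (period, phase) pair: predicted(r) = 1 - ((r + phase) % (2*period)) // period,
    # +1 for a round the wave matches, -1 for a miss, keeping the first best hit
    best_score, best_period, best_phase = 0, None, None
    for period in range(1, len(wins) + 1):
        for phase in range(2 * period):
            score = sum(1 if w == 1 - ((r + phase) % (2 * period)) // period else -1
                        for r, w in enumerate(wins))
            if score > best_score:
                best_score, best_period, best_phase = score, period, phase
    return best_score, best_period, best_phase

wins = [0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0]
score, period, phase = best_fit(wins)              # should give 11, 3, 5 as quoted above
majority_next = 1 - ((len(wins) + phase) % (2 * period)) // period
print("evil" if majority_next == 1 else "good")    # vote for the predicted minority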
#include <iostream>
#include <utility>
#include <cstdlib>
#include <cstring>
#if 0
#define DBG(st) {st}
#else
#define DBG(st)
#endif
#define WINDOW (700)
using namespace std;
int majority(const char *r){
int p=0,a=0,b=0;
while(true){
if(r[p]=='1')a++;
else if(r[p]=='0')b++;
else break;
p++;
}
return a>b;
}
int main(int argc,char **argv){
if(argc==1){
cout<<(rand()%2?"good":"evil")<<endl;
return 0;
}
DBG(cerr<<"WINDOW="<<WINDOW<<endl;)
int nump,numr;
nump=strchr(argv[1],',')-argv[1];
numr=(strlen(argv[1])+1)/(nump+1);
int fromround=numr-30;
if(fromround<0)fromround=0;
int period,r;
int *wave=new int[WINDOW];
bool allequal=true;
DBG(cerr<<"wave: ";)
for(r=fromround;r<numr;r++){
wave[r-fromround]=majority(argv[1]+r*(nump+1));
if(wave[r-fromround]!=wave[0])allequal=false;
DBG(cerr<<wave[r]<<" ";)
}
DBG(cerr<<endl;)
if(allequal){
DBG(cerr<<"All equal!"<<endl;)
if(wave[numr-1]==1)cout<<"evil"<<endl; //choose for minority
else cout<<"good"<<endl;
return 0;
}
int score,*scores=new int[WINDOW];
int max=0; //some score will always get above 0, because if some score<0, the inverted wave will be >0.
int phase,phasemax;
pair<int,int> maxat(-1,-1); //period, phase
DBG(cerr<<"scores: ";)
for(period=1;period<=WINDOW;period++){
scores[period-1]=0;
phasemax=-1;
for(phase=0;phase<2*period;phase++){
score=0;
for(r=fromround;r<numr;r++){
if(wave[r]==1-(r+phase)%(2*period)/period)score++;
else score--;
}
if(score>scores[period-1]){
scores[period-1]=score;
phasemax=phase;
}
}
if(scores[period-1]>max){
max=scores[period-1];
maxat.first=period;
maxat.second=phasemax;
}
DBG(cerr<<scores[period-1]<<" ";)
}
DBG(cerr<<"(max="<<max<<" at {"<<maxat.first<<","<<maxat.second<<"})"<<endl;)
DBG(cerr<<" max: ("<<numr<<"+"<<maxat.second<<")%(2*"<<maxat.first<<")/"<<maxat.first<<"=="<<((numr+maxat.second)%(2*maxat.first)/maxat.first)<<endl;)
if(1-(numr+maxat.second)%(2*maxat.first)/maxat.first==1)cout<<"evil"<<endl; //choose for minority
else cout<<"good"<<endl;
delete[] wave;
delete[] scores;
return 0;
}
Compile with g++ -O3 -std=c++0x -o Scientist Scientist.cpp (you don't need warnings, so no -Wall) and run with Scientist.exe (possibly including the argument of course). If you ask really nicely I can provide you with a Windows executable.
Oh, and don't dare messing with the input format. It'll do strange things otherwise.
EDIT: Apparently, the previous version ran out of time around 600 rounds into the game. This shouldn't do that. Its time consumption is controlled by the #define WINDOW (...) line, more is slower but looks further back.
• @Rusher I totally agree. If you do want problems, that's step one in the "for dummies" guide. My offer stands though :) – tomsmeding Jul 15 '14 at 6:14
• Got this one to compile (and compete) fine. – Rainbolt Jul 19 '14 at 7:47
## Code Runner
So, to make things interesting, I created a script to automatically download the code from every posted answer, compile it if necessary, and then run all of the solutions according to the rules. This way, people can check how they are doing. Just save this script to run_all.py (requires BeautifulSoup) and then:
usage:
To get the latest code: 'python run_all.py get'
To run the submissions: 'python run_all.py run <optional num_runs>'
A few things:
1. If you want to add support for more languages, or alternatively remove support for some, see def submission_type(lang).
2. Extending the script should be fairly easy, even for languages that require compilation (see CPPSubmission). The language type is grabbed from the meta code tag < !-- language: lang-java -- >, so make sure to add it if you want your code to be run (Remove the extra spaces before and after the <>). UPDATE: There is now some extremely basic inference to try and detect the language if it is not defined.
3. If your code fails to run at all, or fails to finish within the allotted time, it will be added to blacklist.txt and will be removed from future trials automatically. If you fix your code, just remove your entry from the blacklist and re-run get.
Currently supported languages:
submission_types = {
'lang-ruby': RubySubmission,
'lang-python': PythonSubmission,
'lang-py': PythonSubmission,
'lang-java': JavaSubmission,
'lang-Java': JavaSubmission,
'lang-javascript': NodeSubmission,
'lang-cpp': CPPSubmission,
'lang-c': CSubmission,
'lang-lua': LuaSubmission,
'lang-r': RSubmission,
'lang-fortran': FortranSubmission,
'lang-bash': BashSubmission
}
import urllib2
import hashlib
import os
import re
import subprocess
import shutil
import time
import multiprocessing
import tempfile
import sys
from bs4 import BeautifulSoup
__run_java__ = """
public class Run {
public static void main(String[] args) {
String input = "";
Human h = new __REPLACE_ME__();
if(args.length == 1)
input = args[0];
try {
System.out.println(h.takeSides(input));
}
catch(Exception e) {
}
}
}
"""
__human_java__ = """
public abstract class Human {
public abstract String takeSides(String history) throws Exception;
}
"""
class Submission():
def __init__(self, name, code):
self.name = name
self.code = code
def submissions_dir(self):
return 'submission'
def base_name(self):
return 'run'
def submission_path(self):
return os.path.join(self.submissions_dir(), self.name)
def extension(self):
return ""
def save_submission(self):
self.save_code()
def full_command(self, input):
return []
def full_path(self):
file_name = "%s.%s" % (self.base_name(), self.extension())
full_path = os.path.join(self.submission_path(), file_name)
return full_path
def save_code(self):
if not os.path.exists(self.submission_path()):
os.makedirs(self.submission_path())
with open(self.full_path(), 'w') as f:
f.write(self.code)
def write_err(self, err):
with open(self.error_log(), 'w') as f:
f.write(err)
def error_log(self):
return os.path.join(self.submission_path(), 'error.txt')
def run_submission(self, input):
command = self.full_command()
if input is not None:
command.append(input)
try:
output,err,exit_code = run(command,timeout=1)
if len(err) > 0:
self.write_err(err)
return output
except Exception as e:
self.write_err(str(e))
return ""
class CPPSubmission(Submission):
def bin_path(self):
return os.path.join(self.submission_path(), self.base_name())
def save_submission(self):
self.save_code()
compile_cmd = ['g++', '-O3', '-std=c++0x', '-o', self.bin_path(), self.full_path()]
errout = open(self.error_log(), 'w')
subprocess.call(compile_cmd, stdout=errout, stderr=subprocess.STDOUT)
def extension(self):
return 'cpp'
def full_command(self):
return [self.bin_path()]
class CSubmission(Submission):
def bin_path(self):
return os.path.join(self.submission_path(), self.base_name())
def save_submission(self):
self.save_code()
compile_cmd = ['gcc', '-o', self.bin_path(), self.full_path()]
errout = open(self.error_log(), 'w')
subprocess.call(compile_cmd, stdout=errout, stderr=subprocess.STDOUT)
def extension(self):
return 'c'
def full_command(self):
return [self.bin_path()]
class FortranSubmission(Submission):
def bin_path(self):
return os.path.join(self.submission_path(), self.base_name())
def save_submission(self):
self.save_code()
compile_cmd = ['gfortran', '-fno-range-check', '-o', self.bin_path(), self.full_path()]
errout = open(self.error_log(), 'w')
subprocess.call(compile_cmd, stdout=errout, stderr=subprocess.STDOUT)
def extension(self):
return 'f90'
def full_command(self):
return [self.bin_path()]
class JavaSubmission(Submission):
def base_name(self):
class_name = re.search(r'class (\w+) extends', self.code)
file_name = class_name.group(1)
return file_name
def human_base_name(self):
return 'Human'
def run_base_name(self):
return 'Run'
def full_name(self, base_name):
return '%s.%s' % (base_name, self.extension())
def human_path(self):
return os.path.join(self.submission_path(), self.full_name(self.human_base_name()))
def run_path(self):
return os.path.join(self.submission_path(), self.full_name(self.run_base_name()))
def replace_in_file(self, file_name, str_orig, str_new):
with open(file_name, 'r') as f:
old_data = f.read()
new_data = old_data.replace(str_orig, str_new)
with open(file_name, 'w') as f:
f.write(new_data)
def write_code_to_file(self, code_str, file_name):
with open(file_name, 'w') as f:
f.write(code_str)
def save_submission(self):
self.save_code()
self.write_code_to_file(__human_java__, self.human_path())
self.write_code_to_file(__run_java__, self.run_path())
self.replace_in_file(self.run_path(), '__REPLACE_ME__', self.base_name())
self.replace_in_file(self.full_path(), 'package Humans;', '')
compile_cmd = ['javac', '-cp', self.submission_path(), self.run_path()]
errout = open(self.error_log(), 'w')
subprocess.call(compile_cmd, stdout=errout, stderr=subprocess.STDOUT)
def extension(self):
return 'java'
def full_command(self):
return ['java', '-cp', self.submission_path(), self.run_base_name()]
class PythonSubmission(Submission):
def full_command(self):
return ['python', self.full_path()]
def extension(self):
return 'py'
class RubySubmission(Submission):
def full_command(self):
return ['ruby', self.full_path()]
def extension(self):
return 'rb'
class NodeSubmission(Submission):
def full_command(self):
return ['node', self.full_path()]
def extension(self):
return 'js'
class LuaSubmission(Submission):
def full_command(self):
return ['lua', self.full_path()]
def extension(self):
return 'lua'
class RSubmission(Submission):
def full_command(self):
return ['Rscript', self.full_path()]
def extension(self):
return 'R'
class BashSubmission(Submission):
def full_command(self):
return [self.full_path()]
def extension(self):
return '.sh'
class Scraper():
def get_html(self, url, use_cache=True, force_cache_update=False):
# note: this method signature is reconstructed; the original def line was lost in extraction
file_name = hashlib.sha1(url).hexdigest()
if not os.path.exists('cache'):
os.makedirs('cache')
full_path = os.path.join('cache', file_name)
file_exists = os.path.isfile(full_path)
if use_cache and file_exists and not force_cache_update:
with open(full_path) as f:
return f.read()
opener = urllib2.build_opener()
response = opener.open(url)
html = response.read()
if use_cache:
f = open(full_path, 'w')
f.write(html)
f.close()
return html
def parse_post(self, post):
name = post.find(text=lambda t: len(t.strip()) > 0)
pre = post.find('pre')
lang = pre.attrs['class'][0] if pre.has_attr('class') else None
code = pre.find('code').text
user = post.find(class_='user-details').find(text=True)
return {'name':name,'lang':lang,'code':code,'user':user}
def parse_posts(self, html):
soup = BeautifulSoup(html)
# Skip the first post (the question itself); selector reconstructed, the original line was lost in extraction
posts = soup.select('.question, .answer')[1:]
return [self.parse_post(post) for post in posts]
def get_submissions(self, page = 1, force_cache_update = False):
# reconstructed fetch (assumption): pull the given page of the challenge thread
url = 'http://codegolf.stackexchange.com/questions/33137/good-versus-evil?page=%d&tab=votes' % page
html = self.get_html(url, force_cache_update=force_cache_update)
submissions = self.parse_posts(html)
return submissions
class Timeout(Exception):
pass
def run(command, timeout=10):
proc = subprocess.Popen(command, bufsize=0, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
poll_seconds = .250
deadline = time.time() + timeout
while time.time() < deadline and proc.poll() == None:
time.sleep(poll_seconds)
if proc.poll() == None:
if float(sys.version[:3]) >= 2.6:
proc.terminate()
raise Timeout()
stdout, stderr = proc.communicate()
return stdout, stderr, proc.returncode
def guess_lang(code):
if re.search(r'class .* extends Human', code):
return 'lang-java'
if re.search(r'import sys', code):
return 'lang-python'
if re.search(r'puts', code) and (re.search(r'ARGV', code) or re.search(r'\%w', code)):
return 'lang-ruby'
if re.search(r'console\.log', code):
return 'lang-javascript'
if re.search(r'program', code) and re.search(r'subroutine', code):
return 'lang-fortran'
if re.search(r'@echo off', code):
return 'lang-bash'
return None
def submission_type(lang, code):
submission_types = {
'lang-ruby': RubySubmission,
'lang-python': PythonSubmission,
'lang-py': PythonSubmission,
'lang-java': JavaSubmission,
'lang-Java': JavaSubmission,
'lang-javascript': NodeSubmission,
'lang-cpp': CPPSubmission,
'lang-c': CSubmission,
'lang-lua': LuaSubmission,
'lang-r': RSubmission,
'lang-fortran': FortranSubmission,
'lang-bash': BashSubmission
}
klass = submission_types.get(lang)
if klass is None:
lang = guess_lang(code)
klass = submission_types.get(lang)
return klass
def instantiate(submission):
lang = submission['lang']
code = submission['code']
name = submission['name']
klass = submission_type(lang, code)
if klass is not None:
instance = klass(name, code)
return instance
print "Entry %s invalid - lang not supported: %s" % (name, lang)
return None
def get_all_instances(force_update):
scraper = Scraper()
print 'Scraping Submissions..'
pages = [1,2,3]
submissions_by_page = [scraper.get_submissions(page=i, force_cache_update=force_update) for i in pages]
submissions = [item for sublist in submissions_by_page for item in sublist]
# Get instances
raw_instances = [instantiate(s) for s in submissions]
instances = [i for i in raw_instances if i]
print "Using %i/%i Submissions" % (len(instances), len(submissions))
return instances
def save_submissions(instances):
print 'Saving Submissions..'
for instance in instances:
instance.save_submission()
def init_game(save=True, force_update=False):
instances = get_all_instances(force_update)
if save:
save_submissions(instances)
return instances
def one_run(instances, input):
valid = {
'good': 1,
'evil': 0
}
disqualified = []
results = []
for instance in instances:
out = instance.run_submission(input)
res = out.strip().lower()
if res not in valid:
disqualified.append(instance)
else:
results.append(valid[res])
return (results, disqualified)
def get_winner(scores, instances):
max_value = max(scores)
max_index = scores.index(max_value)
instance = instances[max_index]
return (instance.name, max_value)
def update_scores(results, scores, minority_counts, minority_num):
for i in range(len(results)):
if results[i] == minority_num:
minority_counts[i] += 1
scores[i] += (minority_counts[i] - 1)
else:
minority_counts[i] = 0
scores[i] += 3
def try_run_game(instances, num_runs = 1000, blacklist = None):
current_input = None
minority_str = None
num_instances = len(instances)
scores = [0] * num_instances
minority_counts = [0] * num_instances
print "Running with %i instances..." % num_instances
for i in range(num_runs):
print "Round: %i - Last minority was %s" % (i, minority_str)
results, disqualified = one_run(instances, current_input)
if len(disqualified) > 0:
for instance in disqualified:
print "Removing %s!" % instance.name
instances.remove(instance)
if blacklist is not None:
with open(blacklist, 'a') as f:
f.write("%s\n" % instance.name)
return False
latest_result = "".join(map(str,results))
current_input = "%s,%s" % (current_input, latest_result)
minority_num = 1 if results.count(1) < results.count(0) else 0
minority_str = 'good' if minority_num == 1 else 'evil'
update_scores(results, scores, minority_counts, minority_num)
name, score = get_winner(scores, instances)
print "%s is currently winning with a score of %i" % (name, score)
print "The winner is %s with a score of %i!!!" % (name, score)
return True
def find_instance_by_name(instances, name):
for instance in instances:
if instance.name == name:
return instance
return None
def ensure_odd_instance_count(instances, baelish):
# reconstructed helper (the def line was lost in extraction): keep the number of players odd
num_instances = len(instances)
if num_instances % 2 == 0:
print 'There are %i instances.' % num_instances
try:
instances.remove(baelish)
print 'Baelish Removed!'
except:
instances.append(baelish)
def remove_blacklisted(blacklist, instances):
blacklisted = []
try:
with open(blacklist) as f:
blacklisted = f.readlines()
except:
return
print 'Removing blacklisted entries...'
for name in blacklisted:
name = name.strip()
instance = find_instance_by_name(instances, name)
if instance is not None:
print 'Removing %s' % name
instances.remove(instance)
def run_game(instances, num_runs):
blacklist = 'blacklist.txt'
remove_blacklisted(blacklist, instances)
baelish = find_instance_by_name(instances, 'Petyr Baelish')
ensure_odd_instance_count(instances, baelish)  # reconstructed call; keeps the vote count odd
while not try_run_game(instances, num_runs = num_runs, blacklist = blacklist):
print "Restarting!"
print "Done!"
if __name__ == '__main__':
param = sys.argv[1] if len(sys.argv) >= 2 else None
if param == 'get':
instances = init_game(save=True, force_update=True)
elif param == 'run':
instances = init_game(save=False, force_update=False)
num_runs = 50
if len(sys.argv) == 3:
num_runs = int(sys.argv[2])
run_game(instances, num_runs)
else:
self_name = os.path.basename(__file__)
print "usage:"
print "To get the latest code: 'python %s get'" % self_name
print "To run the submissions: 'python %s run <optional num_runs>'" % self_name
• Why no Fortran language?? – Kyle Kanos Jul 16 '14 at 18:01
• @KyleKanos - I added support for it, will update the code shortly. – WhatAWorld Jul 16 '14 at 18:17
• Yay! I (sorta) worked hard on my Fortran submission & Rusher can't get it to work so I'd like someone to get it :) – Kyle Kanos Jul 16 '14 at 18:19
• @Rusher: I agree with PeterTaylor on this one: syntax highlighting as the only suggested edit should be rejected. Edits should be used for substantial corrections, not minor stuff. – Kyle Kanos Jul 16 '14 at 18:47
• You do deserve the rep for this, but since this isn't exactly an answer to the question (and could probably benefit from the community adding stuff for other languages) I think this should technically be a community wiki. – Martin Ender Jul 16 '14 at 20:26
## The Beautiful Mind, Ruby
Makes its decision based on patterns of questionable significance in the bit representation of the last round
require 'prime'
if ARGV.length == 0
puts ["good", "evil"].sample
else
last_round = ARGV[0].split(',').last
puts Prime.prime?(last_round.to_i(2)) ? "good" : "evil"
end
Run like
ruby beautiful-mind.rb
# Piustitious, Lua
A superstitious program that believes in Signs and Wonders.
history = arg[1]
if history == nil then
print("good")
else
local EvilSigns, GoodSigns = 0,0
local SoulSpace = ""
for i in string.gmatch(history, "%d+") do
SoulSpace = SoulSpace .. i
end
if string.match(SoulSpace, "1010011010") then -- THE NUMBER OF THE BEAST!
local r = math.random(1000)
if r <= 666 then print("evil") else print("good") end
else
for i in string.gmatch(SoulSpace, "10100") do -- "I'M COMING" - DEVIL
EvilSigns = EvilSigns + 1
end
for i in string.gmatch(SoulSpace, "11010") do -- "ALL IS WELL" - GOD
GoodSigns = GoodSigns + 1
end
if EvilSigns > GoodSigns then
print("evil")
elseif GoodSigns > EvilSigns then
print("good")
elseif GoodSigns == EvilSigns then
local r = math.random(1000)
if r <= 666 then print("good") else print("evil") end
end
end
end
run it with:
lua Piustitious.lua
followed by the input.
# The Winchesters
Sam and Dean are good (most of the time).
package Humans;
public class TheWinchesters extends Human {
@Override
public String takeSides(String history) throws Exception {
return Math.random() < 0.1 ? "evil" : "good";
}
}
• Are you sure 9:1 is the right ratio? Maybe we should do some data mining and get a more precise ratio? – recursion.ninja Jul 15 '14 at 19:38
• @awashburn I started watching Supernatural 2 months ago (now stuck in season 9) and 9:1 seems ok to me ;) – CommonGuy Jul 16 '14 at 9:02
## Statistician
public class Statistician extends Human{
public final String takeSides(String history) {
int side = 0;
String[] hist = history.split(",");
for(int i=0;i<hist.length;i++){
for(char c:hist[i].toCharArray()){
side += c == '1' ? (i + 1) : -(i + 1);
}
}
if(side == 0) side += Math.round(Math.random());
return side > 0 ? "good" : "evil";
}
}
• That second last line is so awesome – cjfaure Jul 9 '14 at 7:56
• @Undeserved Instead of Math.ceil(Math.random()-Math.random()) you can also do just Math.round(Math.random()). – tomsmeding Jul 9 '14 at 8:04
## R, a somewhat Bayesian bot
Use the frequency table for each user as the prior probability of other users output.
args <- commandArgs(TRUE)
if(length(args)!=0){
history <- do.call(rbind,strsplit(args,","))
history <- do.call(rbind,strsplit(history,""))
tabulated <- apply(history,2,function(x)table(factor(x,0:1)))
result <- names(which.max(table(apply(tabulated, 2, function(x)sample(0:1,1, prob=x)))))
if(result=="1"){cat("good")}else{cat("evil")}
}else{
cat("good")
}
Invoked using Rscript BayesianBot.R followed by the input.
Edit: Just to clarify what this is doing, here is a step by step with the example input:
> args
[1] "11011,00101,11101,11111,00001,11001,11001"
> history #Each player is a column, each round a row
[,1] [,2] [,3] [,4] [,5]
[1,] 1 1 0 1 1
[2,] 0 0 1 0 1
[3,] 1 1 1 0 1
[4,] 1 1 1 1 1
[5,] 0 0 0 0 1
[6,] 1 1 0 0 1
[7,] 1 1 0 0 1
> tabulated #Tally of each player previous decisions.
[,1] [,2] [,3] [,4] [,5]
0 2 2 4 5 0
1 5 5 3 2 7
Then the line starting with result <-, for each player, picks randomly either 0 or 1 using this last table as weights (i.e. for player 1 the probability of picking 0 is 2/7, of picking 1 is 5/7, etc.). It picks one outcome for each player/column and finally returns the number that ends up being the most common.
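(For readers who don't speak R, a rough Python equivalent of that sampling step could look like the sketch below; the tallies are copied from the table above, and this is only an illustration, not another entry.)

import random
from collections import Counter

# per-player tallies of past votes, taken from the table above: (count of 0s, count of 1s)
tallies = [(2, 5), (2, 5), (4, 3), (5, 2), (0, 7)]

# draw one simulated vote per player, weighted by that player's own history
votes = [random.choices([0, 1], weights=t)[0] for t in tallies]

# report the side that came out most common in the simulated round
prediction = Counter(votes).most_common(1)[0][0]
print("good" if prediction == 1 else "evil")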
## Swiss
Always sustains neutrality. Doomed to never win.
package Humans;
/**
* Never choosing a side, sustaining neutrality
* @author Fabian
*/
public class Swiss extends Human {
public String takeSides(String history) {
return "neutral"; // wtf, how boring is that?
}
}
• I didn't write this! – Rainbolt Jul 9 '14 at 19:22
• That's the irony. Neutrality never wins – fabigler Jul 9 '14 at 19:23
• @Rusher ah I got it now :D – fabigler Jul 9 '14 at 19:26
• It doesn't even compile – there is a missing semicolon. – Paŭlo Ebermann Jul 22 '14 at 19:37
# HAL 9000
#!/usr/bin/env perl
print eval("evil")
Edit: maybe this is more suitable for HAL 9000, but be careful! It is very evil. I recommend cd'ing to an empty directory before running it.
#!/usr/bin/env perl
print eval {
($_) = grep { -f and !/$0$/ } glob('./*'); unlink; evil }
This removes one file from cwd for each invocation! Not so obvious invocation: In M$
D:\>copy con hal_9000.pl
#!/usr/bin/env perl
print eval("evil")
^Z
1 file(s) copied.
D:>hal_9000.pl
evil
In *nix
[core1024@testing_pc ~]$ tee hal_9000.pl
#!/usr/bin/env perl
print eval("evil")
# Press C-D here
[core1024@testing_pc ~]$ chmod +x $_
[core1024@testing_pc ~]$ ./$_
evil[core1024@testing_pc ~]$
• You need to provide a command that can be used to run your program. See the "Deliverables" section of the challenge for more information. – Rainbolt Jul 8 '14 at 20:58
• @Rusher Done ;) – core1024 Jul 8 '14 at 21:12
# Will of the Majority
import sys
import random
if len(sys.argv)==1:
print(random.choice(['good','evil']))
else:
rounds=sys.argv[1].split(',')
last_round=rounds[-1]
zeroes=last_round.count('0')
ones=last_round.count('1')
if ones>zeroes:
print('good')
elif zeroes>ones:
print('evil')
elif ones==zeroes:
print(random.choice(['good','evil']))
Save it as WotM.py, run as python3 WotM.py followed by the input.
A simple program, just to see how it will do. Goes with whatever the majority said last time, or else random.
• You need to provide a command that can be used to run your program. See the "Deliverables" section of the challenge for more information. – Rainbolt Jul 8 '14 at 20:58
• Damn it, that makes mine a duplicate. :D Changed mine to minority. – Martin Ender Jul 8 '14 at 21:01
• @Rusher Added the command. That what you were looking for? – isaacg Jul 8 '14 at 21:05
• @isaacg Perfect! – Rainbolt Jul 8 '14 at 21:12
• I computed the average ranking from the scores in the scoreboard, and this entry wins by that measure. – Brilliand Jul 22 '14 at 19:43
## Alan Shearer
Repeats whatever the person he's sitting next to has just said. If the person turns out to be wrong, he moves on to the next person and repeats what they say instead.
package Humans;
/**
* Alan Shearer copies someone whilst they're right; if they predict
* wrongly then he moves to the next person and copies whatever they say.
*
* @author Algy
* @url http://codegolf.stackexchange.com/questions/33137/good-versus-evil
*/
public class AlanShearer extends Human {
private char calculateWinner(String round) {
int good = 0, evil = 0;
for (int i = 0, L = round.length(); i < L; i++) {
if (round.charAt(i) == '1') {
good++;
} else {
evil++;
}
}
return (good >= evil) ? '1' : '0';
}
/**
* Take the side of good or evil.
* @param history The past votes of every player
* @return A String "good" or "evil"
*/
public String takeSides(String history) {
String[] parts = history.split(",");
String lastRound = parts[parts.length - 1];
if (parts.length == 0 || lastRound.length() == 0) {
return "good";
} else {
if (parts.length == 1) {
return lastRound.charAt(0) == '1' ? "good" : "evil";
} else {
int personToCopy = 0;
for (int i = 0, L = parts.length; i < L; i++) {
if (parts[i].charAt(personToCopy) != calculateWinner(parts[i])) {
personToCopy++;
if (personToCopy >= L) {
personToCopy = 0;
}
}
}
return lastRound.charAt(personToCopy) == '1' ? "good" : "evil";
}
}
}
}
• You reference a variable called lastRound before you even declare it. Also, you added parentheses to all of your String.length but it isn't a function. Can you get your submission to a point where it will compile? – Rainbolt Jul 15 '14 at 2:24
• @Rusher - done :) – Algy Taylor Jul 15 '14 at 14:25
• @Algy: lastRound.length is still accessed (in the first if) before lastRound is declared (in that if's else). Please try to compile (and maybe run) your code before submitting it here. – Paŭlo Ebermann Jul 15 '14 at 18:50
• @PaŭloEbermann - apologies, I'm not in an environment where I can run it - amendment made, though – Algy Taylor Jul 16 '14 at 10:11
• Now you're referencing a variable called "personToCopy" when it's out of scope. I just moved it inside of the else block so it would compile, but I don't know if that's what you wanted. – Rainbolt Jul 19 '14 at 6:56
# Later is Evil, JavaScript (node.js)
Measures the amount of time between executions. If the time difference is greater than last time, it must be evil. Otherwise, good.
var fs = require('fs'),
currentTime = (new Date).getTime();
try {
data = fs.readFileSync('./laterisevil.txt', 'utf8');
} catch (e) { data = '0 0'; } // no file? no problem, let's start out evil at epoch
var parsed = data.match(/(\d+) (\d+)/),
lastTime = +parsed[1],
lastDifference = +parsed[2],
currentDifference = currentTime - lastTime;
fs.writeFileSync('./laterisevil.txt', currentTime + ' ' + currentDifference, 'utf8');
console.log(currentDifference > lastDifference? 'evil' : 'good');
Run with: node laterisevil.js
# Pattern Finder, Python
Looks for a recurring pattern, and if it can't find one, just goes with the majority.
import sys
if len(sys.argv) == 1:
print('good')
quit()
wins = ''.join(
map(lambda s: str(int(s.count('1') > s.count('0'))),
sys.argv[1].split(',')
)
)
# look for a repeating pattern
accuracy = []
for n in range(1, len(wins)//2+1):
predicted = wins[:n]*(len(wins)//n)
actual = wins[:len(predicted)]
n_right = 0
for p, a in zip(predicted, actual):
n_right += (p == a)
accuracy.append(n_right/len(predicted))
# if there's a good repeating pattern, use it
if accuracy:
best = max(accuracy)
if best > 0.8:
n = accuracy.index(best)+1
prediction = wins[:n][(len(wins))%n]
# good chance of success by going with minority
if prediction == '1':
print('evil')
else:
print('good')
quit()
# if there's no good pattern, just go with the majority
if wins.count('1') > wins.count('0'):
print('good')
else:
print('evil')
run with
python3 pattern_finder.py
• I love this code so much, when I run it, it always get 3000 pts, somehow. – Realdeo Jul 10 '14 at 8:08
# The Turncoat
The Turncoat believes that because of the other combatants so far, the majority will alternate after each round between good and evil more often than it stays on the same side. Thus he begins the first round by arbitrarily siding with good, then alternates every round in an attempt to stay on the winning or losing team more often than not.
package Humans;
public class Turncoat extends Human {
public final String takeSides(String history) {
String[] hist = history.split(",");
return (hist.length % 2) == 0 ? "good" : "evil";
}
}
After writing this, I realized that, because of the entries based on statistical analysis, momentum would cause the majority to switch sides less often as more rounds are completed. Hence, the Lazy Turncoat.
# The Lazy Turncoat
The Lazy Turncoat starts off like the Turncoat, but as rounds pass, he gets lazier and lazier to switch to the other side.
package Humans;
public class LazyTurncoat extends Human {
public final String takeSides(String history) {
int round = history.length() == 0 ? 0 : history.split(",").length;
int momentum = 2 + ((round / 100) * 6);
int choice = round % momentum;
int between = momentum / 2;
return choice < between ? "good" : "evil";
}
}
• The Lazy Turncoat is great! – Angelo Fuchs Jul 10 '14 at 11:34
• I'm including both if you don't mind. – Rainbolt Jul 15 '14 at 2:17
• Go ahead. I'm curious to see how both of them will do, particularly vs the ones that compile voting statistics. – jaybz Jul 21 '14 at 6:24
• @Rainbolt I just noticed a stupid bug with the Turncoat. No need to correct it though. It still works, just not entirely as intended, and even if it isn't too late to fix it, fixing it will just make it behave exactly like one of the newer entries anyway. Feel free to include/exclude if you want. – jaybz Jul 22 '14 at 9:03
# Biographer, Ruby
rounds = ARGV[0].split(',') rescue []
if rounds.length < 10
choice = 1
else
outcome_history = ['x',*rounds.map{|r|['0','1'].max_by{|s|r.count s}.tr('01','ab')}]
player_histories = rounds.map{|r|r.chars.to_a}.transpose.map{ |hist| outcome_history.zip(hist).join }
predictions = player_histories.map do |history|
(10).downto(0) do |i|
i*=2
lookbehind = history[-i,i]
@identical_previous_behavior = history.scan(/(?<=#{lookbehind})[10]/)
break if @identical_previous_behavior.any?
end
if @identical_previous_behavior.any?
(@identical_previous_behavior.count('1')+1).fdiv(@identical_previous_behavior.size+2)
else
0.5
end
end
simulations = (1..1000).map do
votes = predictions.map{ |chance| rand < chance ? 1 : 0 }
# count this simulation as 1 if "good" won it (line reconstructed; it appears to have been lost in extraction)
votes.count(1) * 2 > votes.size ? 1 : 0
end
choice = case simulations.count(1)/10
when 0..15
1
when 16..50
0
when 51..84
1
when 85..100
0
end
end
puts %w[evil good][choice]
My attempt at an almost intelligent entry (an actually intelligent one would require testing against the field). Written in Ruby, so there's a chance this'll be too slow, but on my machine anyway this takes .11 seconds to calculate the last round when there are 40 random players, so I hope it'll work well enough.
save as biographer.rb, run as ruby biographer.rb
The idea is that for each player, it estimates their chances of picking "good" by looking at both their own choices for the last ten rounds, and the overall outcomes, and finding instances in the past where the identical circumstances (their votes + overall outcomes) occurred. It picks the longest lookbehind length, up to 10 rounds, such that there's any precedent, and uses that to create a frequency (adjusted according to Laplace's Law of Succession, so that we're never 100% confident about anyone).
It then runs some simulations and sees how often Good wins. If the simulations turned out mostly the same way, then it's probably going to do well predicting in general so it picks the predicted minority. If it's not confident, it picks the predicted majority.
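(The Laplace adjustment mentioned above is just the rule of succession. A tiny Python sketch of that estimate, for illustration only:)

def rule_of_succession(good_votes, observations):
    # (k + 1) / (n + 2): never exactly 0 or 1, so no player is treated as fully predictable
    return (good_votes + 1.0) / (observations + 2)

print(rule_of_succession(3, 4))  # 3 "good" votes in 4 matching situations -> ~0.67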
## Judas
Judas is a really good person. It's a pity he'll betray the good guys for a few pennies.
package Humans;
public class Judas extends Human {
private static final String MONEY = ".*?0100110101101111011011100110010101111001.*?";
public String takeSides(String history) {
return history != null && history.replace(",","").matches(MONEY) ? "evil" : "good";
}
}
• This only ever votes evil if there are enough participants, you may want to remove the , out of history, even more so as Rusher is going to split up the game in groups. – Angelo Fuchs Jul 14 '14 at 7:52
• I didn't know he was going to split up the game in groups. I actually waited for this question to have enough submissions before posting my answer because of the string size. Thanks for letting me know. – William Barbosa Jul 14 '14 at 10:47
• If you know how to pass a 60000 character argument to a process in Windows, let me know. Otherwise, sorry for messing up your entry, and thank you for fixing it! I didn't anticipate receiving so many submissions. – Rainbolt Jul 14 '14 at 21:51
# The Fallacious Gambler (Python)
If one side has won the majority a number of times in a row, the gambler reasons that the other side is more likely to be the majority next round (right?), and this influences his vote. He aims for the minority, because if he makes it into the minority once he's likely to make it there a number of times (right?) and get a lot of points.
import sys
import random
def whoWon(round):
return "good" if round.count("1") > round.count("0") else "evil"
if len(sys.argv) == 1:
print random.choice(["good", "evil"])
else:
history = sys.argv[1]
rounds = history.split(",")
lastWin = whoWon(rounds[-1])
streakLength = 1
while streakLength < len(rounds) and whoWon(rounds[-streakLength]) == lastWin:
streakLength += 1
lastLoss = ["good", "evil"]
lastLoss.remove(lastWin)
lastLoss = lastLoss[0]
print lastWin if random.randint(0, streakLength) > 1 else lastLoss
## Usage
For the first round:
python gambler.py
and afterward:
python gambler.py 101,100,001 etc.
• I like how you seem sure about your code, right? :P – IEatBagels Jul 14 '14 at 23:25
# Cellular Automaton
This uses conventional rules for Conway's Game of Life to pick a side. First, a 2D grid is created from the previous votes. Then, the "world" is stepped forward one stage, and the total number of living cells remaining is calculated. If this number is greater than half the total number of cells, "good" is chosen. Otherwise, "evil" is chosen.
Please forgive any mistakes, this was smashed out during my lunch hour. ;)
package Humans;
public class CellularAutomaton extends Human {
private static final String GOOD_TEXT = "good";
private static final String EVIL_TEXT = "evil";
private int numRows;
private int numColumns;
private int[][] world;
@Override
public String takeSides(String history) {
String side = GOOD_TEXT;
if (history.isEmpty()) {
side = Math.random() <= 0.5 ? GOOD_TEXT : EVIL_TEXT;
}
else {
// reconstructed: derive the grid size from the history (rows = rounds, columns = players)
String[] rounds = history.split(",");
numRows = rounds.length;
numColumns = rounds[0].length();
world = new int[numRows][numColumns];
for (int i = 0; i < numColumns; i++) {
for (int j = 0; j < numRows; j++) {
world[j][i] = rounds[j].charAt(i) == '1' ? 1 : 0;
}
}
int totalAlive = 0;
int total = numRows * numColumns;
for (int i = 0; i < numColumns; i++) {
for (int j = 0; j < numRows; j++) {
totalAlive += getAlive(world, i, j);
}
}
if (totalAlive < total / 2) {
side = EVIL_TEXT;
}
}
return side;
}
private int getAlive(int[][] world, int i, int j) {
int livingNeighbors = 0;
if (i - 1 >= 0) {
if (j - 1 >= 0) {
livingNeighbors += world[j - 1][i - 1];
}
livingNeighbors += world[j][i - 1];
if (j + 1 < numRows) {
livingNeighbors += world[j + 1][i - 1];
}
}
if (j - 1 >= 0) {
livingNeighbors += world[j - 1][i];
}
if (j + 1 < numRows) {
livingNeighbors += world[j + 1][i];
}
if (i + 1 < numColumns) {
if (j - 1 >= 0) {
livingNeighbors += world[j - 1][i + 1];
}
livingNeighbors += world[j][i + 1];
if (j + 1 < numRows) {
livingNeighbors += world[j + 1][i + 1];
}
}
return livingNeighbors > 1 && livingNeighbors < 4 ? 1 : 0;
}
}
• I removed the print line from the code for testing.. Java entries only need to return good or evil, not print it. – Rainbolt Jul 19 '14 at 7:02
# The Ridge Professor
I hope using libraries is allowed, don't feel like doing this without one =)
The basic idea is to train a ridge regression classifier for each participant on the last rounds, using the 30 results before each round as features. Originally included the last round of results for all players to predict the outcome for each player as well, but that was cutting it rather close for time when the number of participants gets larger (say, 50 or so).
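(Side note, not part of the entry: regress() below is just the closed-form ridge solve. A minimal numpy sketch of the same step, assuming the identity penalty scaled by alpha as in the C++ code:)

import numpy as np

def ridge_weights(X, y, alpha=1.0):
    # w = (X^T X + alpha * I)^-1 X^T y, the same closed form used in regress()
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

# toy usage: 6 samples, 3 features
X = np.array([[1., 0., 1.], [0., 1., 1.], [1., 1., 1.],
              [0., 0., 1.], [1., 1., 0.], [0., 1., 0.]])
y = np.array([1., -1., 1., -1., 1., -1.])
print(ridge_weights(X, y))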
#include <iostream>
#include <string>
#include <algorithm>
#include "Eigen/Dense"
using Eigen::MatrixXf;
using Eigen::VectorXf;
using Eigen::IOFormat;
using std::max;
void regress(MatrixXf &feats, VectorXf &classes, VectorXf &out, float alpha = 1) {
MatrixXf featstrans = feats.transpose();
MatrixXf AtA = featstrans * feats;
out = (AtA + (MatrixXf::Identity(feats.cols(), feats.cols()) * alpha)).inverse() * featstrans * classes;
}
float classify(VectorXf &weights, VectorXf &feats) {
return weights.transpose() * feats;
}
size_t predict(MatrixXf &train_data, VectorXf &labels, VectorXf &testitem) {
VectorXf weights;
regress(train_data, labels, weights);
return (classify(weights, testitem) > 0 ? 1 : 0);
}
static const int N = 30;
static const int M = 10;
// use up to N previous rounds worth of data to predict next round
// train on all previous rounds available
size_t predict(MatrixXf &data, size_t prev_iters, size_t n_participants) {
MatrixXf newdata(data.rows(), data.cols() + max(N, M));
newdata << MatrixXf::Zero(data.rows(), max(N, M)), data;
size_t n_samples = std::min(500ul, prev_iters);
if (n_samples > (8 * max(N, M))) {
n_samples -= max(N,M);
}
size_t oldest_sample = prev_iters - n_samples;
MatrixXf train_data(n_samples, N + M + 1);
VectorXf testitem(N + M + 1);
VectorXf labels(n_samples);
VectorXf averages = newdata.colwise().mean();
size_t n_expected_good = 0;
for (size_t i = 0; i < n_participants; ++i) {
for (size_t iter = oldest_sample; iter < prev_iters; ++iter) {
train_data.row(iter - oldest_sample) << newdata.row(i).segment<N>(iter + max(N, M) - N)
, averages.segment<M>(iter + max(N, M) - M).transpose()
, 1;
}
testitem.transpose() << newdata.row(i).segment<N>(prev_iters + max(N, M) - N)
, averages.segment<M>(prev_iters + max(N, M) - M).transpose()
, 1;
labels = data.row(i).segment(oldest_sample, n_samples);
n_expected_good += predict(train_data, labels, testitem);
}
return n_expected_good;
}
void fill(MatrixXf &data, std::string &params) {
size_t pos = 0, end = params.size();
size_t i = 0, j = 0;
while (pos < end) {
switch (params[pos]) {
case ',':
i = 0;
++j;
break;
case '1':
data(i,j) = 1;
++i;
break;
case '0':
data(i,j) = -1;
++i;
break;
default:
std::cerr << "Error in input string, unexpected " << params[pos] << " found." << std::endl;
std::exit(1);
break;
}
++pos;
}
}
int main(int argc, char **argv) {
using namespace std;
if (argc == 1) {
cout << "evil" << endl;
std::exit(0);
}
string params(argv[1]);
size_t n_prev_iters = count(params.begin(), params.end(), ',') + 1;
size_t n_participants = find(params.begin(), params.end(), ',') - params.begin();
MatrixXf data(n_participants, n_prev_iters);
fill(data, params);
size_t n_expected_good = predict(data, n_prev_iters, n_participants);
if (n_expected_good > n_participants/2) {
cout << "evil" << endl;
} else {
cout << "good" << endl;
}
}
## To Compile
Save the source code in a file called ridge_professor.cc, download the Eigen library and unzip the Eigen folder found inside into the same folder as the source file. Compile with g++ -I. -O3 -ffast-math -o ridge_professor ridge_professor.cc.
## To Run
call ridge_professor.exe and supply argument as needed.
# Question
Since I can't comment anywhere yet, I'll ask here: doesn't the argument size limit on windows make it impossible to call the resulting binaries with the entire history at a few hundred turns? I thought you can't have more than ~9000 characters in the argument...
• Thank you for drawing my attention to this. I'll figure out some way to make it work if it doesn't already work fine in Java. If Java can't do it, research tells me that C++ can, and I'll take the opportunity to relearn C++. I'll be back shortly with test results. – Rainbolt Jul 11 '14 at 19:04
• As it turns out, Java is not subject the the limitations of the command prompt. It appears that only commands larger than 32k cause a problem. Here is my proof (I wrote it myself): docs.google.com/document/d/… . Again, I really appreciate you bringing this up before trials start tomorrow. – Rainbolt Jul 11 '14 at 19:47
• @Rusher There are already 57 bots and you plan on each run being composed of 1000 rounds. That would make your string 57k characters (therefore >32k), wouldn't it? – plannapus Jul 12 '14 at 9:03
• @Rusher I think it may be better to extend the timeline by another week and ask participants to change their programs to read stdin instead of using an argument string. Would be trivial for most programs to change – dgel Jul 13 '14 at 9:40
• @dgel The timeline for the challenge is infinitely long, but I don't want to change the rules in a way that everyone has to rewrite their answer. I'm pretty sure that the rule I added last night will only screw over a single submission, and I plan on helping that person if he ever gets his program to a point where it compiles. – Rainbolt Jul 13 '14 at 15:51
## Crowley
Because the Winchesters are much less interesting without this fellow. He obviously sides with evil...unless it is needed to take care of a bigger evil.
package Humans;
public class Crowley extends Human {
public String takeSides(String history) {
int gd = 0, j=history.length(), comma=0, c=0, z=0;
while(comma < 2 && j>0) {
j--;
z++;
if (history.charAt(j) == ',') {
comma++;
if(c> z/2) {gd++;}
z=0;
c=0;
} else if (history.charAt(j)=='1') {
c++;
} else {
}
}
if(gd == 0){
return "good";
} else {
return "evil";
}
}}
I look at the last two turns (0 commas so far and 1 comma so far) and if both of them let evil win, I vote good. Otherwise I vote evil.
• Do I get this right? You look at the last turn and if less than 50% are "good" votes you side with "good" else with evil? (Out of curiosity: Do you prefer cryptic variable names or is it an accident?) – Angelo Fuchs Jul 17 '14 at 8:10
• @AngeloNeuschitzer I look at the last two turns (0 commas so far and 1 comma so far) and if both of them let evil win, I vote good. Otherwise I vote evil. I prefer variable names that are short to type if the code is short enough the purpose of the code will not get confused. I'm not a professional programmer and this was the first time I've programmed in java or something someone else saw the code for in 6.5 years. I wrote this to refresh my memory.(TLDR they aren't cryptic to me and I'm the only one I usually code for.) – kaine Jul 17 '14 at 20:45
• For clarity... Crowley started out as a human so it was intentional he starts good...Did not expect him to stay good for all rounds though... damn – kaine Jul 25 '14 at 19:45 | 2019-06-27 04:44:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21431340277194977, "perplexity": 12342.63507662242}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000613.45/warc/CC-MAIN-20190627035307-20190627061307-00204.warc.gz"} |
https://physexams.com/lesson/ap-physics-vectors-practice-problems-with-answers_58 | # AP Physics 1: Vectors Practice Problems with Answers
All topics about vectors in AP Physics 1 exams are covered using the problem-solving method in this complete guide. It includes AP physics multiple-choice tests on vector addition and subtraction, dot and cross product, resultant vector, and so on.
## Solved Vector Problems :
Problem (1): Which of the following quantities are vectors in physics?
a. electromotive force (emf).
b. electric current.
c. Fluids pressure.
d. gravitational field.
Solution: Quantities that have both a direction and a magnitude are defined as vector quantities in physics, such as displacement, velocity, acceleration, force, and so on. Neither electromotive force (emf) nor electric current is a vector, since a single number is sufficient to describe each of them completely. Pressure at a depth in a fluid is not a vector either. The gravitational field at any point has both a direction and a magnitude.
Thus, the correct answer is d
Problem (2): Which of the following quantities are scalars in physics?
(a) Momentum
(b) Displacement
(c) Area
(d) Average velocity
Solution: Many quantities in physics do not have any direction associated with them. Just a number or a unit fully describes them. Such quantities are called scalar quantities. Examples of scalar quantities are mass, time, area, temperature, emf, electric current, etc.
Thus, the correct answer is c
Be sure to read this article: Definition of a vector in physics. There you will find more problems on vectors.
Problem (3): The components of a vector are given as $A_x=5.3$ and $A_y=2.9$. What is the magnitude of this vector?
(a) 3 (b) 6 (c) 4 (d) 5
Solution: The magnitude of a vector in component form is found using the Pythagorean formula as below $A=\sqrt{A_x^2+A_y^2}$ Substituting the numerical values of the components into the above equation, we have $A=\sqrt{(5.3)^2+(2.9)^2}=\boxed{6}$ The correct answer is b.
Problem (4): We are given the components of a displacement vector as $d_x=23.5$ and $d_y=34.3$. What angle does this vector make with the positive $x$-axis?
a. $34.4^\circ$ b. $67^\circ$
c. $55.6^\circ$ d. $17.3^\circ$
Solution: Once the components of a vector are known, we can find its direction from $x$-axis by the following formula $\alpha=\tan^{-1}\left(\frac{d_y}{d_x}\right)$
Substituting the components gives us $\alpha=\tan^{-1}\left(\frac{34.3}{23.5}\right)=55.6^\circ$ Both components are positive, so the vector lies in the first quadrant and this is the correct angle.
Thus, the correct answer is c.
More Problems about vector components are in the article below
Vector practice problems
Problem (5): The components of a velocity vector at a moment are given as $v_x=-9.8\,{\rm m/s}$ and $v_y=6.4\,{\rm m/s}$. The direction, from the $+x$-axis, and magnitude of this velocity vector (in $\rm m/s$) are closest to
a. $11.7\, , 147^\circ$ b. $12.7\, , 139^\circ$
c. $11.7\, , -33.15^\circ$ d. $11.7\, , 211.15^\circ$
Solution: Its magnitude is simply obtained using the Pythagorean theorem \begin{align*}v&=\sqrt{v_x^2+v_y^2}\\\\&=\sqrt{(9.8)^2+(6.4)^2}\\\\&=\boxed{11.7\,{\rm m/s}}\end{align*} The subtle point is in finding its direction. Note that the components of this vector lie in the second quadrant. Using the equation from the previous problem gives us \begin{align*} \alpha&=\tan^{-1}\left(\frac{v_y}{v_x}\right) \\\\ &=\tan^{-1}\left(\frac{6.4}{-9.8}\right)\\\\&=-33.15^\circ\end{align*}
Pay attention to these notes when using this equation for finding the direction of a vector relative to the $x$-axis through the smallest angle (as shown in the figure by $\alpha$):
Note (1): In the first and fourth quadrants, the formula gives the correct angle.
Note (2): In the second and third quadrants add $180^\circ$ to the angle obtained from the formula.
In this case, therefore, the correct angle is $\beta=180^\circ+(-33.15^\circ)=\boxed{146.85^\circ}$, which can be rounded to $147^\circ$. Hence, the correct answer is a.
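As a quick numerical cross-check (not part of the original solution), the two-argument arctangent used in most programming languages handles the quadrant bookkeeping of Note (2) automatically; a short Python check of Problem (5):

import math

vx, vy = -9.8, 6.4                        # components from Problem (5)
magnitude = math.hypot(vx, vy)            # about 11.7 m/s
angle = math.degrees(math.atan2(vy, vx))  # about 146.9 degrees from the +x axis
print(round(magnitude, 1), round(angle, 1))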
Problem (6): Two equal magnitude forces, forming an angle of $60^\circ$ with each other, act on a body. What is the ratio of their subtraction to their resultant?
a. $\frac{\sqrt{3}}{3}$ b. $\sqrt{3}$ c. $1$ d. $\frac{\sqrt{2}}{2}$
Solution: This question aims to explore the concepts of vector subtraction and addition (the resultant). Assume two vectors (say, forces) $\vec{A}$ and $\vec{B}$. The magnitude of their subtraction, $\vec{C}=\vec{A}-\vec{B}$, is defined as $C=\sqrt{A^2+B^2-2AB\cos\theta}$ where $A$ and $B$ are the magnitudes of each vector separately, and $\theta$ is the angle between them. In this case, we're told that there are two equal magnitude forces, i.e., $F_1=F_2=F$. Substituting the values into the above, we will have \begin{align*} C&=\sqrt{F_1^2+F_2^2-2F_1 F_2 \cos\theta} \\\\ &=\sqrt{2F^2-2F^2 \cos 60^\circ}\\\\&=\boxed{F}\end{align*} where we used $\cos 60^\circ=\frac 12$. On the other side, the resultant vector is another name for the addition of vectors, $\vec{R}=\vec{A}+\vec{B}$. The magnitude of the resultant of two arbitrary vectors making an angle of $\theta$ is given by $R=\sqrt{A^2+B^2+2AB\cos\theta}$ Thus, in this case, we have \begin{align*} R&=\sqrt{F_1^2+F_2^2+2F_1 F_2\cos\theta}\\\\ &=\sqrt{2F^2+2F^2 \cos 60^\circ}\\\\&=F\sqrt{3}\end{align*} Now their ratio is $\frac{C}{R}=\frac{F}{F\sqrt{3}}=\frac{1}{\sqrt{3}}$ By rationalizing the denominator, the ratio is obtained as $\frac{\sqrt{3}}{3}$. Thus, the correct answer is a.
More related problems on forces in the AP Physics 1 Exam: AP Physics 1 forces practice problems with MCQs
Problem (7): The magnitudes of two displacement vectors are $d$ and $2d$, and the magnitude of the total displacement is $d\sqrt{3}$. What is the angle between these two displacement vectors?
a. $\frac{3\pi}{4}$ b. $\frac{\pi}{4}$ c. $\frac{2\pi}{3}$ d. $\frac{\pi}{2}$
Solution: Displacement is a vector quantity in physics. Here, we have two displacement vectors of magnitudes $d$ and $2d$. The total displacement vector is the same as the resultant or net vector that is obtained by adding the vectors. Recall that the magnitude of the resultant of two vectors $\vec{A}$ and $\vec{B}$ forming an angle of $\theta$ is given by $R=\sqrt{A^2+B^2+2AB\cos\theta}$ where $A$ and $B$ are the magnitudes. Substituting the known data into it and solving for the unknown angle $\theta$, we will have \begin{gather*} R=\sqrt{A^2+B^2+2AB\cos\theta}\\\\ d\sqrt{3}=\sqrt{d^2+(2d)^2 +2(d)(2d)\cos\theta}\\\\ 3d^2 =d^2+4d^2+4d^2\cos\theta \\\\ -2d^2 =4d^2 \cos\theta \\\\ \Rightarrow \cos\theta =-\frac 12 \\\\ \Rightarrow \boxed{\theta=120^\circ=\frac{2\pi}{3}}\end{gather*} where in the second equality both sides were squared. Hence, the correct answer is c.
Problem (8): Use all the information provided by the graph below and find the magnitude and direction (with the $+x$ axis) of the vector $\vec{C}=\vec{A}-\vec{B}$.
(a) $6.1,\ 215^\circ$ (b) $5.3,\ 135^\circ$ (c) $4,\ 25^\circ$ (d) $6.1,\ -26.6^\circ$
Solution: To find the subtraction of two vectors, knowing their magnitude and direction, first you must resolve them into their components. Then, algebraically subtract them to find the result.
The components of$\vec{A}$and$\vec{B}are found to be \begin{align*} A_x&=A\sin 45^\circ\\&=2.9\times \frac{\sqrt{2}}2\\&=2 \\\\ A_y&=A\cos 45^\circ\\&=2.9\times \frac{\sqrt{2}}2\\&=2\\\\ B_x &=B\cos 27^\circ\\&=3 \\\\ B_y&=B\sin 27^\circ\\&=1.5\end{align*} Note that the vector\vec{A}$lies in the third quadrant, so all their components are toward the negative$x$and$y$axes. Hence, its correct components is$\vec{A}=(-2,-2)$. Now we must form the subtraction vector of two vectors$\vec{A}$and$\vec{B}=(3,1.5), by subtracting the corresponding components as below \begin{align*} C_x&=A_x-B_x\\&=-2-3\\&=-5 \\\\ C_y&=A_y-B_y\\&=-2-1.5\\&=-3.5 \end{align*} Thus, these are the components of the subtraction vector whose magnitude is also found as $C=\sqrt{C_x^2+C_y^2}=\sqrt{(-5)^2+(-3.5)^2}=\boxed{6.1}$ Given the components of a vector, one can use the following formula and find the angle which the vector makes with the positivexdirection \begin{align*} \alpha&=\tan^{-1}\left(\frac{C_y}{C_x}\right) \\\\&=\tan^{-1}\left(\frac{-3.5}{-5}\right) \\\\&=35^\circ \end{align*} But note that the components of\vec{C}=(-5,-3.5)$are in the third quadrant, so we must add$180^\circ$to this angle to get the correct angle. Hence, $\alpha= 180^\circ+(35^\circ)=\boxed{215^\circ}$ The correct answer is a. Problem (9): Two vectors are given as below \begin{gather*} \vec{A}=-3\,\hat{i}+2\,\hat{j}+3\,\hat{k}\\ \vec{B}=\hat{i}+2\,\hat{k}\end{gather*} The dot product$\vec{A}\cdot \vec{B}$equals a. 2 b. 3 c. 4 d. 1 Solution: Assume the components of two vectors are given as$\vec{A}=(A_x,A_y,A_z)$and$\vec{B}=(B_x,B_y,B_z). Their dot (scalar) product is defined as follows $\vec{A}\cdot\vec{B}=A_xB_x+A_y B_y+A_z B_z$ In this case, we have \begin{align*}\vec{A}\cdot\vec{B}&=(-3)(1)+(2)(0)+(3)(2)\\&=\boxed{3}\end{align*} Note that the dot product of two vectors is just a number not another vector. Thus, the correct answer is b. Problem (10): What is the angle between the two vectors\vec{A}=4\,\hat{i}+4\,\hat{j}$and$\vec{B}=3\,\hat{i}-3\,\hat{j}$in radians? (a)$\pi$(b)$\frac{\pi}{2}$(c)$\frac{3\pi}{3}$(d)$\frac{3\pi}{4}$Solution: The angle between two vector$\vec{A}$and$\vec{B}is found using another definition of scalar (dot) product as below $\cos\theta=\frac{\vec{A}\cdot \vec{B}}{|\vec{A}||\vec{B}|}$ On the other side, recall that $\vec{A}\cdot\vec{B}=A_x B_x +A_y B_y$ Thus, we have \begin{align*} \cos\theta &=\frac{(4)(3)+(4)(-3)}{\sqrt{4^2+4^2}\sqrt{3^2+(-3)^2}}\\\\ &=\frac{12-12}{4\sqrt{2}\times 3\sqrt{2}}\\\\ &=0\end{align*} Take the inverse cosine of both sides above to find the desired angle $\theta=\cos^{-1}(0)=\frac{\pi}{2}$ Hence the correct answer is b. Problem (11): Two vectors\vec{A}=10\,\hat{i}-6\,\hat{j}$and$\vec{B}=-16\,\hat{j}$are given. What angle does the vector$\vec{C}=\vec{A}-\vec{B}$make with the positive$x$axis? a.$30^\circ$b.$60^\circ$c.$90^\circ$d.$45^\circ$Solution: We are given two vectors in components form as$\vec{A}=(10,-6)$and$\vec{B}=(0,-16)$. First, construct the subtraction vector$\vec{C}as below \begin{align*} C_x &=A_x-B_x\\&=10-0\\&=10 \\\\ C_y&=A_y-B_y\\&=-6-(-16)\\&=10\end{align*} Thus,\vec{C}=(10,10)$. The angle of a vector with the positive$xaxis, provided its components are known, is obtained by the following formula \begin{align*}\theta&=\tan^{-1}\left(\frac{C_y}{C_x}\right)\\\\&=\tan^{-1}\left(\frac{10}{10}\right)\\\\ &=\tan^{-1}(1)\end{align*} We know that the angle whose tangent is1$is$\boxed{45^\circ}$. The correct answer is d. 
Problem (12): The vector$\vec{A}=2\sqrt{3}\hat{i}+2\hat{j}$is perpendicular to which of the vectors$\vec{B}=3\sqrt{3}\hat{i}-3\hat{j}$and$\vec{C}=3\hat{i}-3\sqrt{3}\hat{j}$. a. Only$B$b. Only$C$c. Both$B$and$C$d. None of the above. Solution: Two vectors$\vec{A}=A_x \hat{i}+A_y \hat{j}$and$\vec{B}=B_x \hat{i}+B_y \hat{j}$are perpendicular to each other when their scalar product is zero \begin{gather*} \vec{A}\cdot\vec{B}=0\\ A_x B_x +A_y B_y =0 \end{gather*} So, first check the two vectors$\vec{A}$and$\vec{B}\begin{align*} \vec{A}\cdot\vec{B}&=(2\sqrt{3})(3\sqrt{3})+(2)(-3)\\ &\neq 0 \end{align*} Thus, these two vectors are not perpendicular. Now check\vec{A}$and$\vec{C}. \begin{align*} \vec{A}\cdot\vec{C}&=(2\sqrt{3})(3)+(2)(-3\sqrt{3})\\ &=0 \end{align*} So, these two vectors are perpendicular. Hence, the correct answer is b. Problem (13): What angle does vector\hat{i}+\sqrt{3}\,\hat{j}$make with vector$-\sqrt{3}\,\hat{i}$? (a) zero (b)$\frac{\pi}{3}$(c)$\frac{2\pi}{3}$(d)$\frac{5\pi}{6}$Solution: Consider two vectors$\vec{A}=(A_x,A_y)$and$\vec{B}=(B_x,B_y)$. Scalar or dot product definition gives us the angle between these two vectors as $\cos\theta=\frac{A_xB_x+A_y B_y}{AB}$ where$A$and$B$are the magnitudes of the vectors. In this problem, assume$\vec{A}=(1,\sqrt{3})$and$\vec{B}=(-\sqrt{3},0). Their magnitudes are \begin{align*} A&=\sqrt{A_x^2+A_y^2}\\ &=\sqrt{1^2+(\sqrt{3})^2}\\&=2 \\\\ B&=\sqrt{(-\sqrt{3})^2+0^2}\\&=\sqrt{3}\end{align*} Therefore, the angle between them is calculated as \begin{align*}\cos\theta &=\frac{(1)(-\sqrt{3})+(0)(\sqrt{3})}{2\times \sqrt{3}}\\\\&=\frac{-1}{2}\end{align*} Take the inverse cosine of both sides gives us the desired angle $\theta=\cos^{-1}\left(\frac{-1}{2}\right)=\frac{2\pi}{3}$ Thus, the correct angle is (c). Problem (14): Which of the following vectors is perpendicular to the vector\vec{a}=-2\,\hat{i}+3\,\hat{j}$? a.$3\,\hat{i}+3\,\hat{j}$b.$-\hat{i}+5\,\hat{j}$c.$-3\,\hat{i}+2\,\hat{j}$d.$3\,\hat{i}+2\,\hat{j}$Solution: When the dot product of two vectors becomes zero, those vectors are perpendicular (the angle between them is$90^\circ$), to each other. Having the components of a vector, we can write the dot product between them as below $\vec{a}\cdot\vec{b}=a_x b_x+a_y b_y$ In this problem, we must check each choice separately. a. This is false since $(-2)(3)+(3)(3)=3\neq 0$ b. False $(-2)(-1)+(3)(5)=18\neq 0$ c. False $(-2)(-3)+(3)(2)=12 \neq 0$ d. Correct $(-2)(3)+(3)(2)=0$ Hence, the correct answer is (d). Problem (15): Consider the vector$\vec{A}=0.5\,\hat{i}-\frac 23\,\hat{j}$. What is the magnitude of the vector$6\vec{A}$? a. -1 b. +1 c. 5 d.$\sqrt{7}$Solution: The purpose of this problem is to explore the concept of multiplying a vector by a scalar. If a vector with components$\vec{A}=(A_x,A_y)$is given, then the vector$\vec{B}=k\vec{A}$, where$k$is some number, is constructed as below $\boxed{\vec{B}=(kA_x,kA_y)}$ Thus, in this case, we will have $6\vec{A}=6(0.5,-\frac 23)=(3,-4)$ Its magnitude is also determined using the Pythagorean theorem $\sqrt{3^2+(-4)^2}=5$ The correct answer is c Problem (16): Two vectors are given as below: \begin{gather*} \vec{A}=3\,\hat{i}+2\,\hat{j}-\hat{k}\\\\\vec{B}=-2\,\hat{i}+4\,\hat{k}\end{gather*} What is the magnitude of the cross product$\vec{A}\times\vec{B}$? Solution: There are two methods to solve cross-product problems in ap physics exams. 
One is using the definition of cross product as below which only gives us its magnitude $|\vec{A}\times\vec{B}|=AB\sin\theta$ and the next using the determinants which is a bit difficult but, in turn, gives the vector itself. we choose the first method. In the above$|\cdots|$denotes the magnitude of the cross product. To use the definition of the cross product, you must know the angle$\thetabetween the given vectors. In the previous problems, you learned how to find the angle between two arbitrary vectors using the dot product. So, that angle is obtained as \begin{align*} \cos\theta&=\frac{A_xB_x+A_yB_y+A_zB_z}{AB}\\\\&=\frac{(3)(-2)+(2)(0)+(-1)(4)}{\sqrt{14}\sqrt{20}}\\\\ &=\frac{-10}{\sqrt{14\times 20}}\end{align*} Taking the inverse cosine of both sides, get $\boxed{\alpha=126.7^\circ}$ whereA$and$Bare the magnitudes of the given vectors. Now that the angle is known, we can simply use the cross product definition to find its magnitude as \begin{align*} |\vec{A}\times\vec{B}|&=AB\sin\theta\\&=\sqrt{14}\sqrt{20}\sin 126.7^\circ\\&=\boxed{13.4}\end{align*} Problem (17): Three vectors are shown in the figure below. The number next to each vector and between two adjacent vectors represent the magnitude and angle, respectively. Use these information and find the magnitude and direction of the following cross products (a)\vec{A}\times\vec{B}$, (b)$\vec{B}\times \vec{C}$. Solution: The cross product of two vectors$\vec{A}$and$\vec{B}$is another vector at the right angle (or, perpendicular) to both. The magnitude of this vector is calculated using the formula $|\vec{A}\times\vec{B}|=AB\sin\theta$ where$\theta$is the acute angle (the smallest) between the two vectors. The direction of the cross product is found using the right-hand rule. According to this rule, point fingers of your right hand along the first vector$\vec{A}$and turn those to the next vector$\vec{B}$. In this way, your thumb is directed along the direction of$\vec{A}\times \vec{B}$. (a) In this case,$\theta=42^\circ. Using the equation for cross product above, the magnitude is \begin{align*} |\vec{A}\times\vec{B}|&=AB\sin\theta \\ &=(3)(6) \sin 42^\circ\\&=12 \end{align*} According to the right hand rule prescription, the direction is into the page. (b) The angle is77^\circ+42^\circ=119^\circ, so \begin{align*} |\vec{B}\times\vec{C}|&=BC\sin\theta \\ &=(3)(7) \sin 119^\circ\\&=18.3 \end{align*} Its direction is also out of the page Refer to the page below to practice more problems on the right-hand rule. Right-hand rule: example problems Problem (18): Five equal magnitude forces apply to an object. If the magnitude of each force isF$, find the resultant force vector. a. 1F b. 2F c. 3F d. 5F Solution: Always the best method to find the resultant vector of a couple of vectors is to decompose all vectors along the horizontal and vertical direction, then use the rules of vector addition. Of five equal magnitude forces, two of them are directed at an angle of$30^\circ$. So, resolve those into their components. The force vector that is in the first quadrant has the following components \begin{gather*} F_x=F\cos 30^\circ=F\frac{\sqrt{3}}2\\\\ F_y=F\sin 30^\circ=F\left(\frac 12\right)\end{gather*} Similarly, the force in the second quadrant has the same components, but with a small difference. The$x$-component of this vector is to the left, so its correct component is$-F\frac{\sqrt{3}}2$. 
Now that all vectors decomposed along the$x$and$y$axes, in each direction algebraically add them to find the component of the resultant vector in that direction. In the$x$-direction, we have $F\frac{\sqrt{3}}2+F-F\frac{\sqrt{3}}2-F=0$ In the$y$-direction, $F+F\frac 12+F\frac 12=2F$ Therefore, the resultant vector$\vec{R}has the following components $\vec{R}=(0,2F)$ Its magnitude is also \begin{align*}R&=\sqrt{R_x^2+R_y^2}\\&=\sqrt{0^2+(2F)^2}\\&=\boxed{2F}\end{align*} Thus, the correct answer is b Problem (19): What is the resultant of the forces shown in the figure below? a. 28 b.12\sqrt{2}$c.$15\sqrt{2}$d. 20 Solution: First of all, resolve the angled vector into its components along the positive$x$and$y$axes. Recall that a vector of magnitude$A$which makes an angle of$\theta$with the positive$x$axis has a component of$A\cos\theta$along the$x$axis, and a component of$A\sin\theta$along the$y$axis. In this problem, the tilted vector has a magnitude of$5\sqrt{2}$and an angle of$45^\circwith the horizontal. So, its components are \begin{align*} A_x&=5\sqrt{2}\cos 45^\circ\\&=5\sqrt{2}\times \sqrt{2}/2 \\&=5 \\\\ A_y&=5\sqrt{2}\sin 45^\circ \\&=5\end{align*} Now we have several vectors along each direction of the coordinate system. The resultant vector of a couple of vectors is defined as the vector addition of them. To find it, we proceed as below Stage (I): Add vectors lying in each direction Along+x$:$16.5+5=21.5\,{\rm N}$Along$-x$:$5.5\,{\rm N}$Along$+y$:$11.5+5=16.5\,{\rm N}$Along$-y$:$4.5\,{\rm N}$Stage (II): Along each direction, subtract the vectors in the negative direction from the positive one. This gives the components of the resultant (net) vector along that direction. \begin{gather*} R_x = 21.5-5.5=16\,{\rm N} \\ R_y=16.5-4.5=12\,{\rm N}\end{gather*} where we called the resultant's components as$R_x$and$R_y. Thus, the resultant vector is written as $\vec{R}=16\,\hat{i}+12\,\hat{j}$ The magnitude of a vector is also found using the Pythagorean theorem \begin{align*} R&=\sqrt{R_x^2 +R_y^2}\\\\&=\sqrt{16^2+12^2}\\\\&=\boxed{20\,{\rm N}}\end{align*} Hence, the correct answer is d. Problem (20): What is the magnitude and direction of the resultant vector of the following vectors shown in the diagram below? (Take\cos 53^\circ=0.6$and$\sin 53^\circ=0.8$). a.$2\,{\rm N}$due west b.$2\,{\rm N}$due east c.$1\,{\rm N}$due east d.$1\,{\rm N}$due west Solution: There are two vectors directed due east and north, respectively. The$10\,{\rm N}$vector is also directed$53^\circ$west of south. Resolve this tilted vector into its components along the$x$and$ydirections. \begin{align*} A_x&=A\sin\theta \\&=10\,\sin 53^\circ\\&=8 \\\\ A_y&=A\cos\theta\\&=10\,\cos 53^\circ\\&=6\end{align*} Now along each direction, there are two vectors in opposite directions. So subtract them to find the net vector along that direction. \begin{gather*} R_x=10-8=2\,{\rm N}\\\\ R_y=6-6=0\end{gather*} These are the components of the resultant vector whose magnitude gets as $R=\sqrt{R_x^2+R_y^2}=\sqrt{2^2+0^2}=2\,{\rm N}$ The angle that a vector makes with the positivex$-axis is also found by $\alpha=\tan^{-1}\left(\frac{R_y}{R_x}\right)$ So, substituting the components gives us $\alpha=\tan^{-1}\left(\frac{0}{2}\right)=0^\circ$ Hence, the resultant vector lies toward the east and has a magnitude of$2\,{\rm N}$. The correct answer is b. Problem (21): As shown in the free-body diagram below, a body is subjected to three forces. The body is in an equilibrium condition. 
Find the magnitude and direction (the angle$\theta$) of the vector$\vec{F}$Solution: ''equilibrium'' means that the resultant or net force on the object is zero. Three forces are acting on the body. First, resolve the tilted force vectors, find their vector addition (resultant vector), then set it to zero. The vector$\vec{F}$is resolved into its components as below \begin{gather*} F_x=F\cos\theta \\F_y=F\sin\theta\end{gather*} Similarly, the components of$40\sqrt{3}-{\rm N}vector are \begin{align*} A_x&=A\cos 60^\circ\\&=40\sqrt{3}\left(\frac{1}{2}\right)\\&=20\sqrt{3}\,{\rm N}\\\\ A_y&=A\sin 60^\circ\\&=40\sqrt{3}\times \frac{\sqrt{3}}{2}\\&=60\,{\rm N}\end{align*} The algebraic addition of vectors along thex$direction gives the$x$component of the resultant vector $R_x=F\cos\theta-20\sqrt{3}$ Similarly, the$y$component of the resultant vector is also found as $R_y=F\sin\theta+60-80$ Since the object is in equilibrium, the net force on it must be zero. Thus, equate those components to zero \begin{gather*} R_x=0 \Rightarrow F\cos\theta=20\sqrt{3} \\\\\ R_y=0 \Rightarrow F\sin\theta=20\end{gather*} By dividing them, one of the unknowns,$F$, is removed. This way, the other unknown,$\theta$, is found. \begin{gather*} \frac{F\cos\theta}{F\sin\theta}=\frac{20\sqrt{3}}{20}\\\\ \Rightarrow \cot\theta =\sqrt{3} \\\\ \Rightarrow\quad \boxed{\theta=30^\circ}\end{gather*} Now, substitute the obtained angle into one of the equations$R_x=0$or$R_y=0$and solve for$F$. \begin{gather*}R_x=F\cos \theta-20\sqrt{3}=0\\ F\cos 30^\circ=20\sqrt{3} \\\Rightarrow \boxed{F=40\,{\rm N}}\end{gather*} where we set$\cos 30^\circ=\frac{\sqrt{3}}2$. Problem (22): In the figure below, the rope$OB$is horizontal and the tension force in the rope$OA$equals$60\sqrt{2}$. The system is in equilibrium. What is the mass of the hanging object? a.$\sqrt{2}$b.$6$c.$3\sqrt{2}$d.$3$Solution: This setup is in equilibrium, so the net or resultant force on the body must be zero. Three forces are acting on the block. Two tension forces in the ropes and one weight force. The tension force in the rope$OA$,$T_1$, is directed at an angle of$180^\circ-135^\circ=45^\circ$with the horizontal as shown in the free-body diagram below. Resolve this tension into its components, then find the net force vector resulting from these three forces. The components of tension force$T_1is found to be \begin{align*} T_{1x}&=T_1\cos 45^\circ\\&=60\sqrt{2}\times \frac{\sqrt{2}}2\\&=60\,{\rm N}\\\\T_{1y}&=T_1\sin 45^\circ\\&=60\sqrt{2}\times \frac{\sqrt{2}}2\\&=60\,{\rm N}\end{align*} The object does not move vertically, so the net force in this direction must be zero, i.e.,T_{1y}=W$. Therefore, we have $T_{1y}=W \Rightarrow W=60\,{\rm N}$ Using the definition of weight as$W=mg$, the mass of the object is $m=\frac{W}{g}=\frac{60}{10}=6\,{\rm kg}$ Hence, the correct answer is b. (Find the tension in the rope$OB$.) Problem (23): If the resultant of the following three vectors is zero, find$a$and$b$, respectively. \begin{gather*} \vec{A}=3\,\hat{i}+2\,\hat{j}\\ \vec{B}=-5\,\hat{i}+3\,\hat{j}\\\vec{C}=a\,\hat{i}+b\,\hat{j}\end{gather*} Solution: In all ap physics exams, the resultant vector means the addition of vectors. We call it$\vec{R}\$. So, we must find the addition of the three above vectors, then set it to zero and solve the obtained equations for the unknowns. 
$\vec{R}=\vec{A}+\vec{B}+\vec{C}=0$ We summarize each vector as below for simplicity $\vec{A}=(3,2) \, , \vec{B}=(-5,3) \, , \vec{C}=(a,b)$ Adding the corresponding components with each other, we will have \begin{align*} R_x &=3+(-5)+a\\R_y &= 2+3+b \end{align*} These are the components of the resultant vector, setting these to zero yields \begin{gather*} R_x=0 \Rightarrow \boxed{a=2} \\ R_y=0 \Rightarrow \boxed{b=-5} \end{gather*}
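A one-line numerical check of this last result (again a NumPy sketch added for illustration, not part of the original solution):
import numpy as np
# Problem 23: the resultant of A, B and C must vanish, so C = -(A + B).
A = np.array([3.0, 2.0])
B = np.array([-5.0, 3.0])
C = -(A + B)        # -> [ 2., -5.], i.e. a = 2 and b = -5
print(C)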
Author: Dr. Ali Nemati
Page Published: 10/19/2021 | 2022-09-27 14:26:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9865182638168335, "perplexity": 1282.0515087423037}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00665.warc.gz"} |
https://latex.org/forum/viewtopic.php?f=5&t=1115 | ## LaTeX forum ⇒ General ⇒ Annotated APA style bibliography
LaTeX specific issues not fitting into one of the other forums of this category.
davidar
Posts: 1
Joined: Sun Mar 23, 2008 11:41 am
### Annotated APA style bibliography
I've been using LaTeX for a few months now, and the one thing I've never really sorted out is bibliographies. Up until now, I've been using a slightly dodgy system for bibliographies (using the 'note' field to display 'accessed on' dates for websites), but now I need to present bibliographies in annotated APA style (I've been given this page as the standard to be used). I've tried using apacite, but it doesn't seem to be suitable.
So, long story short, what I'm looking for is a way to display APA bibliographies with annotations and retrieval dates. I apologise in advance if the solution is really simple, but I really have no idea what else to try and google doesn't seem to be helping.
David Roberts
Posts: 3
Joined: Tue Jun 24, 2008 12:26 am
Well, I am forced to use APA style as well. After a whole night of work on it, I managed to sort of get it working.
Firstly I tried to install the APA packages, but I did that completely wrong I guess (88 error messages). So I installed the full MiKTeX installation, which I recommend to all new users of LaTeX. Sure, it takes one hour to install, but once you install that, there is no need to install additional packages, which meant that I had only a few common errors.
It works now; however, there are still some problems with it:
The bibliography is not aligned properly according to APA. This means the following:
How it shows:
How it needs to be shown (ignore the interline distance):
I am still figuring it out, but I have no clue how I should do it.
In my TeX document I state:
%-----------begin-preamble---------------------------
\usepackage{graphicx} %to show graphics
\usepackage{color} %to use 6 colors; useage: {\color{name}item} ;name= red, green, blue, cyan, yellow, magenta
\usepackage{float} %to float graphics
\usepackage{a4wide} %wide use of a4 paper
\usepackage{url} %to show url's in url font
\usepackage{amsmath} %extensive math possibilities
\usepackage{algorithm,algorithmic} %use of algorithms
\usepackage[latin1]{inputenc} %to type non-ascii chars
\usepackage{eurosym} %use: \euro
\usepackage{apacite} %bibliography in apa-style
\usepackage{tocbibind} %bibliography in toc
And finally I show the bibliography in this way:
%====================================================
%-----------------references-------------------------
%====================================================
\bibliographystyle{apacite}
\bibliography{biblio}
And I reference it in this way:
And it shows this, which is perfect:
I also include my bib file for completeness, but I think it will not be of much importance:
@article{markov:07,author = {Deslauriers, Alexandre and L'Ecuyer, Pierre and Pichitlamken, Juta and Ingolfsson, Armann and Avramidis, Athanassios N. },title = {Markov chain models of a telephone call center with call blending},journal = {Computers {\&} Operations Research},year = {2007}, volume = {34}, number = {6}, pages = {1616-1645}}

@article{tut:02,author = {Gans, Noah and Koole, Ger and Mandelbaum, Avishai },title = {Telephone Call Centers: Tutorial, review, and Research Prospects},journal = {Manufacturing {\&} Service Operations},year = {2002}, volume = {5}, pages = {79 - 141}}

@book{koole:08,author = {Koole, Ger },title = {Call {Center} {Mathemathics}: {A} scientific method for understanding and improving contact centers},publisher = {Sophia Antipolis},year = {2008}}

@proceedings{jason:03,author = {Mehotra, Vijay and Fama, Jason },title = {Call {C}enter {S}imulation {M}odeling: {M}ethods, {C}hallenges, and {O}pportunities},organization = {Winter {S}imulation {C}onference},year = {2003}}

@proceedings{ld:03,author = {Bruno Woltzenlogel Paleo },title = {An Approximate Gazetteer for GATE based on Levenshtein Distance},organization = {Winter Simulation Conference},year = {2007}}

@intechreport{we:08,author = {Bendermacher, Chantal and Stankiewicz, Jan and Mohnen, Jesper and Opas, Mike and Van Daele, Pascal},title = {{I}nterview with {S}mols {N.},a team coordinator for {T}raffic {\&} {M}anagement {V}odafone},school = {Maastricht University, MICC},year = {2008}}

@article{hd:50,author = {Richard W. Hamming},title={Error Detecting and Error Correcting Codes},journal={Bell System Technical Journal}, year = {1950}, volume= {26}, number={2},pages={147 - 160}}

@url{niel:08,author ={Nielsen, Jakob},title = {Ten Usability Heuristics},year = {2008},URL = {http://www.useit.com/papers/heuristic/heuristic_list.html}}
So, maybe someone has the magic solution for my problem, but it is no hurry as I already handed in my paper. However, I would like to know the mistake I made.
If anyone has a recommendation, I would like to hear it.
GrzzZ,
wtmonroe
Posts: 1
Joined: Fri Sep 12, 2008 3:03 pm
Hello,
Has anyone found a solution for the problem raised here? I would also like a Latex template that would allow the display of APA style annotated bibliographies as they are shown here http://www.library.cornell.edu/olinuris ... htm#sample I am not very well versed in the details of Latex or Bibtex but it appears that the apacite package (that I generally use to format my bibliographies) does not make use of the annote field and thus will not permit the display of annotated bibliographies.
If anyone has found or created an appropriate template or bst style for this I would be very interested. Perhaps there is another way to achieve this formatting that I have overlooked.
Thanks,
Will
diaper
Posts: 1
Joined: Tue May 28, 2019 6:10 pm
Use \bibliographystyle{apacann}, not \bibliographystyle{apacite}. Also, remove \usepackage{tocbibind} because it prevents the hanging indent that APA requires
Ijon Tichy
Posts: 121
Joined: Mon Dec 24, 2018 10:12 am
The most APA conform bibliography can be achieved using »biblatex-apa – BibLaTeX citation and reference style for APA«. It implements citations and references conforming to the APA6 style guide. Unfortunately it does not support annotations. A simple suggestion could be to use note instead of annotation, but another suggestion is to add annotations to all drivers that should support them. This can be done using xpatch. Here is an example:
\begin{filecontents*}{\jobname.bib}
@article { WGW1986,
  author={Waite, L. J. and Goldschneider, F. K. and Witsberger, C.},
  year={1986},
  title={Nonfamily living and the erosion of traditional family orientations among young adults},
  journaltitle={American Sociological Review},
  volume={51},
  pages={541-554},
  annotation={The authors, researchers at the Rand Corporation and Brown University, use data from the National Longitudinal Surveys of Young Women and Young Men to test their hypothesis that nonfamily living by young adults alters their attitudes, values, plans, and expectations, moving them away from their belief in traditional sex roles. They find their hypothesis strongly supported in young females, while the effects were fewer in studies of young males. Increasing the time away from parents before marrying increased individualism, self-sufficiency, and changes in attitudes about families. In contrast, an earlier study by Williams cited below shows no significant gender differences in sex role attitudes as a result of nonfamily living.},
}
\end{filecontents*}

\documentclass[a4paper]{article}% For APA publications apa6 could be the
% better class.
\usepackage[main=USenglish]{babel}
\usepackage{csquotes}
\usepackage[style=apa]{biblatex}
\addbibresource{\jobname.bib}

\usepackage{xpatch}
\xpatchbibdriver{article}{\usebibmacro{finentry}}{%
  \setunit{\adddot\par}\newblock
  \usebibmacro{annotation}%
  \usebibmacro{finentry}%
}{}{\PaTCHFailURE}

\begin{document}
\section{Test}
See also \autocite{WGW1986}.
\printbibliography
\end{document}
Please note: You need to use biber instead of BiBTeX with this example. And you must not load apacite, natbib, cite or another bibliography/cite package.
Note also: You should not use package a4wide (see l2tabu). And almost all LaTeX editors use UTF8 as default encoding, so \usepackage[latin1]{inputenc} is also not recommended.
Last edited by Ijon Tichy on Wed May 29, 2019 8:30 am, edited 1 time in total.
### Who is online
Users browsing this forum: No registered users and 18 guests | 2019-10-21 20:38:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5011797547340393, "perplexity": 7393.648832637034}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987787444.85/warc/CC-MAIN-20191021194506-20191021222006-00469.warc.gz"} |
https://srtklaw.com/thought/in-terms-of-the-voucher-when-sending-a-qme-or-ame-letter-also-include-the-voucher-form-form-dwc-ad-10133-36/ | # In terms of the voucher, when sending a QME or AME letter, also include the voucher form (Form DWC-AD 10133.36).
In terms of the voucher, when sending a QME or AME letter, also include the voucher form (Form DWC-AD 10133.36). This way, the doctor is actually provided the form and if they don’t fill it out, the defendant has an argument to not provide the voucher, since defendant provided all of the documentation needed, but the doctor and Applicant did not follow through with obtaining same, i.e. the defendant didn’t have the information upon which to issue the voucher. | 2023-03-28 04:44:23 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8284335732460022, "perplexity": 6320.978783094276}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948765.13/warc/CC-MAIN-20230328042424-20230328072424-00580.warc.gz"} |
https://socratic.org/questions/how-is-the-empirical-formula-used | # How is the empirical formula used?
Apr 23, 2016
For the determination of molecular formula. How?
#### Explanation:
The empirical formula is the simplest whole number ratio that defines the constituent atoms in a species. The molecular formula is always a whole number multiple of the empirical formula. In some circumstances the empirical formula and the molecular formula are the same.
Combust a (typically) organic compound in a furnace, and you get carbon dioxide and water (and sometimes nitrogen gas). Feed these gases into a chromatograph, and you get a very accurate measurement of the percentage by mass of $C$, $H$, $N$ (measurement of $O$ is not so straightforward; sometimes the percentage of $O$ is simply taken as the balance percentage, i.e. O%=100%-C%-H%-N%). This can be converted into an empirical formula by standard means, and there are many examples of the process here.
Combustion analysis does not give the molecular formula. This must be determined by some other means: mass spectroscopy; molecular mass determination.
Now the molecular formula is always a multiple of the empirical formula.
i.e. $\text{Molecular formula}$ $=$ $\text{(Empirical formula)} \times n$. Of course this multiple, $n$, may be $1$, in some circumstances.
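As a rough sketch of that arithmetic (glucose is an assumed illustration here, not taken from the original answer), the multiple $n$ is just the measured molecular mass divided by the empirical-formula mass:
# A minimal sketch: find n and the molecular formula from the empirical formula CH2O
# and a measured molar mass of ~180 g/mol (glucose); atomic masses are approximate.
masses = {"C": 12.011, "H": 1.008, "O": 15.999}
empirical = {"C": 1, "H": 2, "O": 1}                                  # CH2O
empirical_mass = sum(masses[el] * k for el, k in empirical.items())   # ~30.03 g/mol
measured_molar_mass = 180.16                                          # from, say, mass spectroscopy
n = round(measured_molar_mass / empirical_mass)                       # -> 6
molecular = {el: k * n for el, k in empirical.items()}                # C6H12O6
print(n, molecular)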
Knowledge of the empirical formula plus the molecular mass allows us to simply determine this number $n$, and thus knowledge of the molecular formula is provided | 2020-09-21 15:16:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 12, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8960559368133545, "perplexity": 773.397762808938}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400201826.20/warc/CC-MAIN-20200921143722-20200921173722-00234.warc.gz"} |
http://math.stackexchange.com/questions/215360/coefficent-of-x8-in-a-bionomial-theorem | # Coefficent of x^8 in a bionomial theorem
How would one go about solving this?
This is where I am stuck. I am not even sure if I am on the right track; as you can see, this has to use the nCr concept (Pascal's triangle, I believe).
-
So you need the power of $x$ to be $8$. The power of $x$ is
$$24-3r-r=24-4r$$
Solving $24-4r=8$ Yields $r=4$.
Then, the coefficient is
$$\binom{8}{4}(2.4)^4(1.1)^4$$
which can easily be calculated.
-
so I was half way there? – JackyBoi Oct 17 '12 at 3:39
Each term in $\left(ax^3+\frac{b}x\right)^8$, before you collect terms, is a product of $8$ factors, each of which is either $ax^3$ or $\frac{b}x$. Suppose that in a given term you have $k$ factors of $ax^3$ and therefore $8-k$ factors of $\frac{b}x$; then the term is
$$\left(ax^3\right)^k\left(\frac{b}x\right)^{8-k}=a^kb^{8-k}x^{3k-(8-k)}=a^kb^{8-k}x^{4k-8}\;.$$
You want the coefficient of $x^8$, and $x^{4k-8}=x^8$ if and only if $4k-8=8$, $4k=16$, and $k=4$. In other words, the only terms that give you $x^8$ are those of the form
$$\left(ax^3\right)^4\left(\frac{b}x\right)^4\;.$$
From the binomial theorem you know that $(u+v)^8=\sum_{k=0}^8\binom8ku^kv^{8-k}$. In your problem $u=ax^3$, $v=\frac{b}x$, and you want the $k=4$ term; that’s
$$\binom84u^4v^4=\binom84\left(ax^3\right)^4\left(\frac{b}x\right)^4=\binom84a^4b^4x^8\;,$$ so the coefficient of $x^8$ is $$\binom84a^4b^4=70a^4b^4\;.$$
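A quick symbolic check of this coefficient (a sketch assuming SymPy is installed; the values $a=2.4$, $b=1.1$ are the ones used in the other answer):
import sympy as sp
x, a, b = sp.symbols('x a b', positive=True)
expr = sp.expand((a*x**3 + b/x)**8)
coeff = expr.coeff(x, 8)                 # coefficient of the x**8 term
print(coeff)                             # 70*a**4*b**4
print(coeff.subs({a: 2.4, b: 1.1}))      # roughly 3400.3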
-
just a silly question how did 8 and 4 in the bracket become 70? – JackyBoi Oct 17 '12 at 3:38
@JackyBoi: $\binom84$ is the binomial coefficient $\frac{8!}{4!4!}=\frac{8\cdot7\cdot6\cdot5}{4\cdot3\cdot2\cdot1}=70$. – Brian M. Scott Oct 17 '12 at 3:42 | 2016-06-29 14:52:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8808252811431885, "perplexity": 126.64113190106346}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397748.48/warc/CC-MAIN-20160624154957-00027-ip-10-164-35-72.ec2.internal.warc.gz"} |
https://www.bionicturtle.com/forum/threads/t2-7-kurtosis-of-a-probability-distribution.21738/ | What's new
# YouTube: T2-7 Kurtosis of a probability distribution
#### Nicole Seaman
##### Director of FRM Operations
Staff member
Subscriber
Kurtosis is the standardized fourth central moment and is a measure of tail density; e.g., heavy or fat-tails. Heavy-tailedness also tends to correspond to high peakedness. Excess kurtosis (aka, leptokurtosis) is given by (kurtosis-3). We subtract three because the normal distribution has kurtosis of three; in this way, kurtosis implicitly compares to the normal distribution and "positive excess kurtosis" means "tails are heavier than the normal" or "extreme outcomes are MORE likely than under the normal."
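A small numerical illustration of this convention (a sketch assuming NumPy and SciPy; the Student-t example is our own choice, not taken from the video):
import numpy as np
from scipy.stats import kurtosis
rng = np.random.default_rng(0)
normal_sample = rng.normal(size=1_000_000)
heavy_sample = rng.standard_t(df=5, size=1_000_000)   # heavier tails than the normal
# scipy's kurtosis() reports EXCESS kurtosis by default (fisher=True), i.e. kurtosis - 3
print(kurtosis(normal_sample))   # close to 0
print(kurtosis(heavy_sample))    # clearly positive (theoretical excess kurtosis is 6 for df=5)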
Here is David's XLS: http://trtl.bz/121817-yt-kurtosis-xls | 2021-12-02 04:25:39 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8855544924736023, "perplexity": 7446.883148682514}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964361064.69/warc/CC-MAIN-20211202024322-20211202054322-00125.warc.gz"} |
https://stats.stackexchange.com/questions/448802/probability-that-the-same-r-v-generates-the-rth-order-statistic-in-one-noise-ad/455429 | Probability that the same r.v. generates the rth order statistic in one noise-added set, and the sth order statistic in another noise-added set
(Note: The title is confusing, as I have no idea if a name / short description exists for the setting below. I'm open to pointers and/or suggestions.)
Setting
Let $$X_1, ..., X_N \overset{i.i.d.}{\sim} \mathcal{N}(0, \sigma^2_X)$$ for large but fixed $$N > 50$$. We generate two new sets $$Y_1, ..., Y_N$$ and $$Z_1, ..., Z_N$$ by adding two different levels of random noise to $$X_i$$:
\begin{align} Y_i & = X_i + \epsilon_{Yi} , \quad\epsilon_{Yi} \overset{i.i.d.}{\sim} \mathcal{N}(0, \sigma^2_Y)\\ Z_i & = X_i + \epsilon_{Zi} , \quad\epsilon_{Zi} \overset{i.i.d.}{\sim} \mathcal{N}(0, \sigma^2_Z), \end{align} where $$X_i$$, $$\epsilon_{Yi}$$ and $$\epsilon_{Zi}$$ are mutually independent $$\forall i$$.
We then rank $$Y_i$$ to get the order statistics $$Y_{(1)}, ..., Y_{(N)}$$ (where $$Y_{(1)}$$ represents the smallest of $$Y_i$$), and rank $$Z_i$$ to get a different set of order statistics $$Z_{(1)}, ..., Z_{(N)}$$. The order statistics here can correspond to, say, the result of ranking a set of items under two different noisy estimates of the items' scores.
Note there is a 1-1 relationship between the sets $$\{X_1, ..., X_N\}$$ and $$\{Y_{(1)}, ..., Y_{(N)}\}$$, and another 1-1 relationship between the sets $$\{X_1, ..., X_N\}$$ and $$\{Z_{(1)}, ..., Z_{(N)}\}$$. As a result, two order statistics $$Y_{(r)}$$ (the $$r^{\textrm{th}}$$ ranked $$Y_n$$) and $$Z_{(s)}$$ (the $$s^{\textrm{th}}$$ ranked $$Z_n$$) might actually be generated by the same $$X_i$$. The question is, how likely is that the case?
Question(s)
The question in its general form reads:
What is the probability, in the setting above, that $$Y_{(r)}$$ and $$Z_{(s)}$$ are generated by the same $$X_i$$, for any given $$r, s \leq N$$?
I am looking for an explicit formula that may or may not take $$N$$, $$r$$, $$s$$, $$\sigma^2_X$$, $$\sigma^2_Y$$, and $$\sigma^2_Z$$ into account.
Other questions that are perhaps too tiny and coupled to warrant their own CrossValidated question:
1. Is there a name for the "add noise - rank separately - match back" scenario described above?
I know once you get the ranks $$Y_{(r)}$$/$$Z_{(s)}$$, the corresponding $$X_{Y[r]}$$/$$X_{Z[r]}$$ is known as the concomitant of the order statistics. However, the concomitants in the literature usually go the other way round, i.e. $$X_i$$ is ranked, and the behaviour of the induced ranks for $$Y_i$$/$$Z_i$$ is studied.
2. Are there anything in the literature that deals with the scenario described?
3. What if I replace the distributions for $$X_i$$, $$\epsilon_{Yi}$$, and $$\epsilon_{Zi}$$ with general distributions $$F(\cdot)$$, $$G(\cdot)$$, $$H(\cdot)$$ respectively, with mean and variance $$(\mu_X, \sigma^2_X)$$, $$(\mu_Y, \sigma^2_Y)$$, and $$(\mu_Z, \sigma^2_Z)$$?
Attempts
A natural first answer to the main question is to set the probability as $$\frac{1}{N}$$. This assumes that $$Y_{(r)}$$ is independent to $$Z_{(s)}$$ $$\forall r, s$$, and $$Z_{(s)}$$ can point to any $$X_i$$, one (out of $$N$$) of which happens to have also generated $$Y_{(r)}$$. This attempt is clearly wrong as $$Y_{(r)}$$ and $$Z_{(s)}$$ are not independent --- see e.g. David and Nagaraja (2004) which discuss the covariance between these quantities.
Some intuitive cases showing why the above is wrong: $$Y_{(N)}$$ and $$Z_{(N)}$$ are not always, but still quite likely to be, generated by the same $$X_i$$: if $$Y_i$$ is the largest amongst its set, chances are that the corresponding $$X_i$$ is quite large as well, and thus the corresponding $$Z_i$$ stands a higher-than-random chance of becoming the largest amongst its set.
On the other hand, one will have a hard time convincing others that $$Y_{(1)}$$ and $$Z_{(N)}$$ are generated by the same $$X_i$$ --- you will need $$X_i$$ to be somewhere in the middle, $$\epsilon_{Yi}$$ to be very negative, and $$\epsilon_{Zi}$$ to be very positive for that to happen. The latter two rarely happen together.
Initial simulations
I've also run some simulations to get a feel for how the probabilities behave. This requires reformulating the main problem a bit as:
Fix $$r < N$$, what is the distribution across all $$s \in \{1, ..., N\}$$, where $$X_i$$ generated $$Y_{(r)}$$ and $$Z_i = X_i + \epsilon_{Zi}$$ is the $$s^{\textrm{th}}$$ ranked item?
We can get the answer to the original question above by picking out the probability corresponding to a particular $$s$$.
The following is a Python snippet that generates an empirical probability mass function for the reformulated problem. I've taken one off the ranks to enable the comparison with beta binomial distributions (see below).
import numpy as np
from scipy.stats import rankdata
N = 100; r = 1; n_runs = 100000
s = []
for run in range(0, n_runs):
    Xi = np.random.normal(0, 1, N)
    Yi = Xi + np.random.normal(0, 0.5, N)
    Zi = Xi + np.random.normal(0, 0.4, N)
    Ir = np.argwhere(rankdata(Yi) == r).flatten()[0]
    s.append(rankdata(Zi)[Ir])
s = np.array(s) - 1
The following two figures show the empirical PMF for $$r=1$$ and $$r=50$$ respectively. To me, they look quite like beta-binomial distributions, which might make sense as we are dealing with order statistics. I am able to fit the parameters for the distribution with reasonable accuracy, but I am more interested in how the parameters come about.
• Are you looking for a formula of the form $f(N,\sigma_X, \sigma_Y,\sigma_Z,r,s)$ or of the form $g(N,\sigma_X, \sigma_Y,\sigma_Z,r,s,Y_{(r)},Z_{(s)})$? – Matt F. Mar 2 at 9:41
• @MattF. I'm more inclined to the $f$ variant in your comment. – B.Liu Mar 2 at 10:15
• That seems hard. Even a very simple case like $N=2$, $\sigma_X=3$, $\sigma_Y=2$, $\sigma_Z=1$, $r=1$, $s=1$ has $$f = \int_{\Delta_x=-\infty}^{\infty} \int_{\Delta_y=-\infty}^{\Delta_x} \int_{\Delta_z=-\infty}^{\Delta_x}p d\Delta_x d\Delta_y d\Delta_z + \int_{\Delta_x=-\infty}^{\infty} \int_{\Delta_y=\Delta_x}^{\infty} \int_{\Delta_z=\Delta_x}^{\infty} p d\Delta_x d\Delta_y d\Delta_z$$ $$\text{with } p = \phi\left(\frac{\Delta_x}{3\sqrt{2}}\right)\phi\left(\frac{\Delta_y}{2\sqrt{2}}\right)\phi\left(\frac{\Delta_z}{1\sqrt{2}}\right)$$ and that is too hard for Mathematica to compute exactly. – Matt F. Mar 2 at 11:01
• @MattF. I realise it is not exactly an everyday textbook problem! Maybe the answer is analytically intractable at all, which I am happy to take. – B.Liu Mar 2 at 13:46
As Matt pointed out in the comments - it is indeed a hard problem. The answer below is a more credible attempt that (in my opinion) better approximates the quantity.
A general prior for the distribution
The reformulated question
Fix $$r < N$$, what is the distribution across all $$s \in \{1, ..., N\}$$, where $$X_i$$ generated $$Y_{(r)}$$ and $$Z_i = X_i + \epsilon_{Zi}$$ is the $$s^{\textrm{th}}$$ ranked item?
can be formalised as the probability that exactly $$s-1$$ other (assumed independent) $$Z_k$$ are less than $$Z_i$$, which makes $$Z_i$$ the $$s^\textrm{th}$$ ranked item:
$$\mathbb{P}\left(\sum_{k=1, k\neq i}^{N} \mathbb{I}_{\{Z_k < Z_i\}} = s - 1\right).$$
Let's call the sum of the indicator variables $$C$$ (for count).
To get the probability mass function of $$C$$, we first recognise that $$Z_i$$ is a continuous random variable with probability density $$f_{Z_i}(z)$$. Also for any realisation $$z$$, the probability that an independent $$Z_k$$ is less than $$z$$ is simply $${p = F_{Z_k}(z)}$$, where $$F_{Z_k}$$ is the cumulative density function of $$Z_k$$.
Hence, the probability $$\mathbb{P}(Z_k < Z_i)$$ is a distribution, as opposed to a fixed value, that results from transforming $$Z_i$$ using $$F_{Z_k}$$, with probability density \begin{align} f_{\mathbb{P}(Z_k < Z_i)}(p) = f_{Z_i}(z) \left|\frac{\textrm{d}z}{\textrm{d}p}\right| = \frac{f_{Z_i}(F_{Z_k}^{-1}(p))}{f_{Z_k}(F_{Z_k}^{-1}(p))} . \end{align} The distribution can then be used as a prior for $$C$$, which has a binomial likelihood.
Simplifying the prior to something we can work with
Now the problem here is that the prior above is likely to be intractable, even under the normal assumptions as specified in the original question. This is because $$Z_k$$ is normal, but $$Z_i$$ is almost certainly not as it carries ranking information from $$Y_{(r)}$$.
In order to have something that we can actually work with, we approximate the prior using beta distributions. Beta distributions are a natural choice to me as they are closely related to order statistics, and moreover the beta is a conjugate prior to the binomial likelihood, which eases the computation of the probability masses for $$C$$. The plots in the question seem to support such a hypothesis as well.
The key idea is to find the beta distribution parameters using method of moments. We first need the mean and variance of $$Z_i$$ (before transforming it with $$F_{Z_k}$$), then the mean and variance of $$\mathbb{P}(Z_k < Z_i)$$ (post transformation, denoted $$\mu_{\mathbb{P}}$$ and $$\sigma^2_{\mathbb{P}}$$ respectively), and finally fit the beta parameters.
The mean and variance of $$Z_i$$ (before transforming it with $$F_{Z_k}$$) is:
$$\mathbb{E}(Z_i) \approx \frac{\sigma^2_X}{\sqrt{\sigma^2_X + \sigma^2_Y}} \Phi^{-1}\left(\frac{r - \alpha}{N - 2\alpha + 1}\right) , \quad \alpha \approx 0.4$$
$$\textrm{Var}(Z_i) \approx \frac{\sigma^2_1 \sigma^2_X}{\sigma^2_X + \sigma^2_1} \, + \frac{\sigma^4_X}{\sigma^2_X + \sigma^2_1} \frac{r(N-r+1)}{(N+1)^2 (N+2)} \frac{1}{\big(\phi\big(\Phi^{-1}\big(\frac{r}{N+1}\big)\big)\big)^2} + \sigma^2_2$$ (see this question, this question, [1] and [2] for detailed derivations).
The expected value and variance of $${\mathbb{P}(Z_k < Z_i) = F_{Z_k}(Z_i)}$$ can then be approximated using Taylor series expansion: \begin{align} \mu_{\mathbb{P}} \approx \, & F_{Z_k}\left(\mathbb{E}(Z_i)\right) + \frac{1}{2} f'_{Z_k}\left(\mathbb{E}(Z_i)\right) \cdot \textrm{Var}(Z_i) + \, ... \,,\\ \sigma^2_{\mathbb{P}} \approx \,& \left(f_{Z_k}(\mathbb{E}(Z_i))\right)^2 \cdot \textrm{Var}(Z_i) + \frac{1}{4}\left(f'_{Z_k}(\mathbb{E}(Z_i))\right)^2 \cdot \textrm{Var}\left(\left(Z_i - \mathbb{E}(Z_i)\right)^2\right) + \, ... \,, \end{align}
where $$f_{Z_k}$$ is the PDF of $$Z_k$$ (normal w/ mean 0, variance $$\sigma^2_X + \sigma^2_Z$$), and $$f'_{Z_k}$$ is the derivative of the PDF.
Finally, the beta distribution parameters can be fit as follows (see this question): \begin{align} \alpha_{\mathbb{P}} = \left(\frac{1 - \mu_{\mathbb{P}}}{\sigma^2_{\mathbb{P}}} - \frac{1}{\mu_{\mathbb{P}}} \right) \mu_{\mathbb{P}}^2 \;, \quad \beta_{\mathbb{P}} = \alpha_{\mathbb{P}}\left(\frac{1}{\mu_{\mathbb{P}}} - 1\right) \;. \end{align}
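As a small sketch of this last step (assuming NumPy/SciPy; the use of scipy.stats.betabinom for the resulting count distribution and the numerical moments below are illustrative assumptions, not code from the original answer):
import numpy as np
from scipy.stats import betabinom

def beta_from_moments(mu, var):
    # Method-of-moments fit of Beta(alpha, beta); requires 0 < mu < 1 and var < mu * (1 - mu).
    alpha = ((1 - mu) / var - 1 / mu) * mu**2
    beta = alpha * (1 / mu - 1)
    return alpha, beta

# Hypothetical moments of P(Z_k < Z_i) for some fixed r (these numbers are made up):
mu_P, var_P = 0.3, 0.02
alpha_P, beta_P = beta_from_moments(mu_P, var_P)

N = 100
pmf = betabinom(N - 1, alpha_P, beta_P).pmf(np.arange(N))   # P(C = s-1), i.e. rank s = C + 1
print(alpha_P, beta_P, pmf.sum())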
[1] H. A. David and H. N. Nagaraja (2004) Order statistics. Encyclopedia of Statistical Sciences.
[2] C. H. B. Liu and B. P. Chamberlain (2019) What is the value of experimentation & measurement? ICDM 2019.
• Full disclaimer: [2] is one of my previous work. – B.Liu Mar 23 at 21:49 | 2020-10-26 07:07:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 131, "wp-katex-eq": 0, "align": 4, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9620469212532043, "perplexity": 366.2907889650454}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107890586.57/warc/CC-MAIN-20201026061044-20201026091044-00201.warc.gz"} |
https://forums.developer.nvidia.com/t/how-to-make-the-ic-driver/57851 | # How to make the IC driver
Hi, I’d like to use TC358762 (this IC’s supplier is Toshiba).
I found a source code “Tc358767_dsi2edp” at “/usr/src/kernel/display/drivers/video/tegra/dc/”,
and I made a driver by modifying that source.
The compile succeeded, but I couldn't make a .ko file (module).
When I describe it as "tristate" in Kconfig, a compile error appears.
Is there any way to make the driver?
Hi,
Please try to enable this module in “tegra18_defconfig”
I tried the following action, but a compile error appeared.
I changed a value in …/display/drivers/video/tegra/Kconfig.display
from "bool" to "tristate".
Then I executed the compile and it produced an error.
The error message is following,
In file included from
drivers/video/tegra/…/…/…/…/display/drivers/video/tegra/dc/tc358767_dsi2edp.c:30:0:
drivers/video/tegra/…/…/…/…/display/drivers/video/tegra/dc/dsi.h:476:30:
error: expected identifier or ‘(’ before ‘struct’
#define tegra_dsi2edp_ops (*(struct tegra_dsi_out_ops *)NULL)
In the case of “CONFIG_TEGRA_DSI2EDP_TC358767=y” then compile is succeeded.
I’d like to compile it with “CONFIG_TEGRA_DSI2EDP_TC358767=m”.
Do you have any idea?
If you want to build it as a module, more changes may be required since our driver is intended to be built-in.
After setting CONFIG_TEGRA_DSI2EDP_TC358767 to “m”, those macro in driver should be “CONFIG_TEGRA_DSI2EDP_TC358767_MODULE” instead of original one. | 2020-11-27 15:37:57 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8378505110740662, "perplexity": 10161.752724876684}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141193221.49/warc/CC-MAIN-20201127131802-20201127161802-00508.warc.gz"} |
http://hal.in2p3.fr/in2p3-01244201 | Structure of $^{26}$Na via a Novel Technique Using ($d,p\gamma$) with a Radioactive $^{25}$Na Beam
Abstract : States in 26Na were populated in the (d, pγ) reaction, induced by bombarding deuterium target nuclei with an intense reaccelerated beam of 25Na ions from the ISAC2 accelerator at TRIUMF. Gamma-rays were recorded in coincidence with protons and used to extract differential cross sections for 21 states up to the neutron decay threshold of 5 MeV. Results for levels below 3 MeV are discussed in detail and compared with shell model calculations and with previous work. The angular distributions of decay gamma-rays were measured for individual states and are compared to theoretically calculated distributions, highlighting some issues for future work.
Document type:
Conference paper
49th Zakopane Conference on Nuclear Physics : Extremes of the Nuclear Landscape, Aug 2014, Zakopane, Poland. 46 (3), pp.527-536, 2015, 〈10.5506/APhysPolB.46.527〉
http://hal.in2p3.fr/in2p3-01244201
Contributor: Sandrine Guesnon
Submitted on: Tuesday, December 15, 2015 - 14:57:22
Last modified on: Tuesday, April 24, 2018 - 01:43:27
Citation
W.N. Catford, I.C. Celik, G.L. Wilson, A. Matta, N.A. Orr, et al.. Structure of $^{26}$Na via a Novel Technique Using ($d,p\gamma$) with a Radioactive $^{25}$Na Beam. 49th Zakopane Conference on Nuclear Physics : Extremes of the Nuclear Landscape, Aug 2014, Zakopane, Poland. 46 (3), pp.527-536, 2015, 〈10.5506/APhysPolB.46.527〉. 〈in2p3-01244201〉
Metrics
Consultations de la notice | 2018-05-20 15:18:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5822738409042358, "perplexity": 10316.729206786606}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794863626.14/warc/CC-MAIN-20180520151124-20180520171124-00504.warc.gz"} |
https://math.stackexchange.com/questions/1476371/epsilon-delta-definition-of-a-limit-smaller-epsilon-implies-smaller | # $\epsilon$ - $\delta$ definition of a limit - smaller $\epsilon$ implies smaller $\delta$?
The definition in my book is as follows:
Let $f$ be a function defined on an open interval containing $c$ (except possibly at $c$) and let $L$ be a real number. The statement $$\lim_{x \to c} f(x) = L$$
means that for each $\epsilon>0$ there exists a $\delta>0$ such that if $0<|x-c|<\delta$, then $|f(x)-L|<\epsilon$.
With the definition the way it is, I don't see how choosing a smaller and smaller $\epsilon$ implies a smaller and smaller $\delta$.
To me, in order to produce that implication, we would need to restrict $\epsilon$ to be small enough to force $f(x)$ to be strictly increasing/decreasing on $(L-\epsilon, L+\epsilon)$, and define increasing/decreasing without the use of derivatives. However, that is not part of the definition.
P.S. Please refrain from using too much notation for logic, I am not familiar with most of the symbols such as the upside down A and such.
• There's a "for every $x$ such that" in there as well, at least implicitly. I.e., $|f(x)-L| < \varepsilon$ for every $x$ satisfying $0 < |x-c| < \delta$. See what that gives you in your purported example. – mrf Oct 12 '15 at 13:42
• It is not necessary; consider $f$ defined on $[0,1]$ as the constant function $f(x)=1$. We can ask the limit of $f$ for $x \to 1/2$ (of course, it is $1$). In this case, we can "squeeze" the $\epsilon$ as we want but we are not forced to decrease the $\delta$. – Mauro ALLEGRANZA Oct 12 '15 at 13:44
• Saying ""there exists a δ and as ϵ decreases, so does δ" is not sufficient because it does NOT say that δ goes to 0. And, since the point is to DEFINE "limit", you would have to say precisely what you mean by "goes to 0" without using limits! – user247327 Oct 12 '15 at 14:14
• $f(x)=x\chi_\Bbb Q(x)$ is nowhere continuous except at $x=0$, and also nowhere monotonic, and nevertheless $\lim_{x\to0} f(x)$ exists and is $0$. ($\chi_\Bbb Q(x)$ is $1$ when $x$ is rational, and $0$ otherwise). – Jean-Claude Arbaut Oct 24 '15 at 10:47
First of all, there exists functions that every $\delta$ is sufficient for them. For example constant function $$f(x)=2$$
On the other hand there exists functions that force us to select as small $\delta$ as possible. For example take a strictly increasing function $f(x)=x+1$.
The greatest $\delta$ we can take is such that $f$ passes through the corners of the rectangle created by the four lines $x=c\pm\delta$ and $y=L\pm\varepsilon$. So if $\varepsilon$ shrinks, $\delta$ has to get smaller too.
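For instance, with $f(x)=x+1$ and $L=c+1$ we get $|f(x)-L|=|x-c|$, so the largest workable choice is exactly $\delta=\varepsilon$: halving $\varepsilon$ forces us to halve $\delta$.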
Now, take a function that is not monotonic, for example $g(x)=2x^2+1$.
How small does $\delta$ have to be? Similar situation as above, but now $\delta$ is bounded by upper corners.
First of all, if a bigger $\delta >0$ works, you can always pass to a smaller one (e.g. $\delta_\epsilon = \min\{\delta, \epsilon\}$) to accompany $\epsilon$, so that it kind of matches your intuition that the smaller the $\epsilon$, the smaller the $\delta$.
On the other hand, there are actually good functions around that do not require a smaller $\delta$: for example, if $f$ is a constant function, any $\delta >0$ suffices no matter how small $\epsilon$ is.
Let me point out to you that a function like $x \in (0,+\infty) \mapsto x \sin (1/x)$ vanishes infinitely many times in any neighborhood of zero; it is impossible to make it monotonic by restricting its domain. Despite this, $$0 \leq \left|x \sin \frac{1}{x} \right| \leq |x|$$ and therefore $\lim_{x \to 0+} x \sin (1/x)=0$.
The fact that as $\epsilon$ decreases so does $\delta$ generally follows from the behavior of the function. Note that for almost all interesting functions $\delta$ will have to decrease as $\epsilon$ decreases. The only exception are locally constant functions.
If the function is not locally constant at $a$ (and has a limit $L$ there), then for every $\chi>0$ there is some $x$ and some $\psi>0$ such that $0<|a-x|<\chi$ and $|f(x)-L|>\psi$ (otherwise, for all $x$ and all $\psi>0$ you have $0<|a-x|<\chi\implies |f(x)-L|<\psi$, so $f$ is constantly equal to $L$ on $(a-\chi,a+\chi)\setminus\{a\}$). But that means that if you choose $0<\epsilon<\psi$ then you must take $0<\delta<\chi$.
Just to clarify your comment about $x^2$: you can't have your $\delta$ "land you" around $-2$, since $\delta$ bounds the distance from $x=2$. If your $\delta$ is big enough to get you all the way to $x=-2$ ($\delta\geq 4$), then unless your $\epsilon>4$ the point $x=0$ (which will be within $\delta$ of $x=2$) will be too far from the limit $L=4$.
Topologically speaking, it says that for any vicinity $V(L)$ of $L$ there exists a vicinity $W(c)$ of $c$ such that $f(W(c) \setminus \{c\}) \subseteq V(L)$. From this, topological, perspective, the answer to:
I don't see how choosing a smaller and smaller $\varepsilon$ implies a smaller and smaller $\delta$.
It shouldn't, otherwise we are in trouble defining (e.g.) limits when $x \rightarrow +\infty$ where $W(+\infty)$ is something like $(\delta , +\infty)$
The intuition of a limit is that the closer $x$ gets to $c$, the closer $f(x)$ gets to $L$. The situation isn't symmetrical, it is always possible to find $x$ close to $c$, but maybe not so easy to find $f(x)$ close to $L$.
If you just said, "let me choose $\delta$ and see how small $\epsilon$ can be", you couldn't conclude anything because you'd have no criterion to say that $\epsilon$ is small enough.
So you work the other way, saying "I can make $\epsilon$ as small as I want, and I can still find a $\delta$ that fits".
The exact behavior of the function in the $(\delta,\epsilon)$ neighborhood is irrelevant, it can be as irregular/chaotic/discontinuous as you want, provided it remains bounded. Monotonicity isn't required.
"With the definition the way it is, I don't see how choosing a smaller and smaller ϵ implies a smaller and smaller δ."
Not necessarily so. It does not have to be monotone.
Delta-epsilon just says: at some stage in the ongoing application of the function, the results stay nearer to the limit than any given number ϵ, no matter how small this number might be.
The goal of the limit of a function is to describe the behavior (the values) of a function $f$ as we approach $c$ (while never being equal to $c$).
$\varepsilon$-$\delta$ definition of limit is to define the idea approaching in rigorous sense.
What is the meaning of approaching?
Suppose someone gives you an arbitrarily small value $\varepsilon > 0$ and requires that the distance between $f(x)$ and $L$, where $x$ is a point at which you are approaching $c$, be within $\varepsilon$. (This gives $|f(x) - L| < \varepsilon$.)
You can always find another sufficiently small $\delta > 0$ such that if you approach $c$ within a distance of $\delta$, the above requirement is satisfied. (This gives $0 < |x-c| < \delta$) | 2019-11-12 13:11:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9626286029815674, "perplexity": 157.30375943745477}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665573.50/warc/CC-MAIN-20191112124615-20191112152615-00179.warc.gz"} |
https://www.doubtnut.com/question-answer/a-die-is-formed-so-that-the-probability-of-getting-a-number-i-when-it-is-rolled-is-proportional-to-i-376683119 | # A die is formed so that the probability of getting a number i when it is rolled is proportional to i. (i=1,2,3,4,5,6). The probability of getting an odd number on the die when it is rolled is
Text Solution
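The four answer choices appear to be 1/2, 4/7, 2/7 and 3/7; the correct one is 3/7. A short worked justification (not shown on the original page): since the probability of getting i is proportional to i, write P(i) = ki. The probabilities must add up to 1, so k(1+2+3+4+5+6) = 21k = 1, i.e. k = 1/21. Hence P(odd) = P(1) + P(3) + P(5) = (1+3+5)/21 = 9/21 = 3/7.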
1/24/72/73/7 | 2021-11-28 17:27:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2724582552909851, "perplexity": 3590.7093878948276}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358570.48/warc/CC-MAIN-20211128164634-20211128194634-00538.warc.gz"} |
https://www.tutorialspoint.com/cost-of-5-kg-of-wheat-is-rs-91-50-a-what-will-be-the-cost-of-8-kg-of-wheat-b-what-quantity-of-wheat-can-be-purchased-in-rs-183 | # Cost of 5 kg of wheat is Rs. 91.50.(a) What will be the cost of 8 kg of wheat?(b) What quantity of wheat can be purchased in Rs. 183?
Given:
The cost of $5\ kg$ of wheat is $Rs.\ 91.50$.
To do:
We have to find:
(a) The cost of $8\ kg$ of wheat.
(b) The quantity of wheat that can be purchased for $Rs.\ 183$.
Solution:
(a) The cost of $5\ kg$ wheat $=Rs.\ 91.50$
We know that,
The cost of $1\ kg$ can be obtained by dividing the given amount by the given quantity.
The cost of $1\ kg$ wheat $=\frac{91.50}{5}$
$=Rs.\ 18.30$
Therefore,
The cost of $8\ kg$ wheat $=Rs.\ 18.30\times8$
$= Rs.\ 146.40$
The cost of $8\ kg$ wheat is $Rs.\ 146.40$.
(b) The cost of $5\ kg$ wheat $=Rs.\ 91.50$
We know that,
The quantity of wheat for $Rs.\ 1$ can be obtained by dividing a given quantity by the given amount.
The quantity of wheat for $Rs.\ 1 = \frac{5}{91.50}\ kg$
Therefore,
The quantity of wheat for $Rs.\ 183 = \frac{5}{91.50}\times183$
$= 10\ kg$
The quantity of wheat for $Rs.\ 183$ is $10\ kg$.
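As a quick check, the same unitary-method steps can be written in a few lines of Python (just an illustration, not part of the original solution):

```
cost_5kg = 91.50
cost_per_kg = cost_5kg / 5        # Rs. 18.30 per kg

cost_8kg = 8 * cost_per_kg        # (a) Rs. 146.40
qty_for_183 = 183 / cost_per_kg   # (b) 10.0 kg

print(cost_8kg, qty_for_183)
```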
Updated on 10-Oct-2022 13:36:49 | 2022-11-30 20:22:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3258359432220459, "perplexity": 4817.720528234047}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710771.39/warc/CC-MAIN-20221130192708-20221130222708-00004.warc.gz"} |
https://statkat.com/stattest.php?t=9&t2=10&t3=13 | # Two sample t test - equal variances not assumed - overview
This page offers structured overviews of one or more selected methods.
Two sample $t$ test - equal variances not assumed
Two sample $t$ test - equal variances assumed
Regression (OLS)
Independent/grouping variable | Independent/grouping variable | Independent variables
One categorical with 2 independent groups | One categorical with 2 independent groups | One or more quantitative of interval or ratio level and/or one or more categorical with independent groups, transformed into code variables
Dependent variable | Dependent variable | Dependent variable
One quantitative of interval or ratio level | One quantitative of interval or ratio level | One quantitative of interval or ratio level
Null hypothesis | Null hypothesis | Null hypothesis
H0: $\mu_1 = \mu_2$
Here $\mu_1$ is the population mean for group 1, and $\mu_2$ is the population mean for group 2.
H0: $\mu_1 = \mu_2$
Here $\mu_1$ is the population mean for group 1, and $\mu_2$ is the population mean for group 2.
$F$ test for the complete regression model:
• H0: $\beta_1 = \beta_2 = \ldots = \beta_K = 0$
or equivalenty
• H0: the variance explained by all the independent variables together (the complete model) is 0 in the population, i.e. $\rho^2 = 0$
$t$ test for individual regression coefficient $\beta_k$:
• H0: $\beta_k = 0$
in the regression equation $\mu_y = \beta_0 + \beta_1 \times x_1 + \beta_2 \times x_2 + \ldots + \beta_K \times x_K$. Here $x_i$ represents independent variable $i$, $\beta_i$ is the regression weight for independent variable $x_i$, and $\mu_y$ represents the population mean of the dependent variable $y$ given the scores on the independent variables.
Alternative hypothesis | Alternative hypothesis | Alternative hypothesis
H1 two sided: $\mu_1 \neq \mu_2$
H1 right sided: $\mu_1 > \mu_2$
H1 left sided: $\mu_1 < \mu_2$
H1 two sided: $\mu_1 \neq \mu_2$
H1 right sided: $\mu_1 > \mu_2$
H1 left sided: $\mu_1 < \mu_2$
$F$ test for the complete regression model:
• H1: not all population regression coefficients are 0
or equivalenty
• H1: the variance explained by all the independent variables together (the complete model) is larger than 0 in the population, i.e. $\rho^2 > 0$
$t$ test for individual regression coefficient $\beta_k$:
• H1 two sided: $\beta_k \neq 0$
• H1 right sided: $\beta_k > 0$
• H1 left sided: $\beta_k < 0$
Assumptions | Assumptions | Assumptions
• Within each population, the scores on the dependent variable are normally distributed
• Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2. That is, within and between groups, observations are independent of one another
• Within each population, the scores on the dependent variable are normally distributed
• The standard deviation of the scores on the dependent variable is the same in both populations: $\sigma_1 = \sigma_2$
• Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2. That is, within and between groups, observations are independent of one another
• In the population, the residuals are normally distributed at each combination of values of the independent variables
• In the population, the standard deviation $\sigma$ of the residuals is the same for each combination of values of the independent variables (homoscedasticity)
• In the population, the relationship between the independent variables and the mean of the dependent variable $\mu_y$ is linear. If this linearity assumption holds, the mean of the residuals is 0 for each combination of values of the independent variables
• The residuals are independent of one another
• Variables are measured without error
Also pay attention to:
• Multicollinearity
• Outliers
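Before running either $t$ test, these assumptions are often checked directly. A small illustration in Python (not part of the original overview; the data and variable names are made up, and scipy is assumed to be available):

```
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group1 = rng.normal(50, 10, size=40)   # made-up scores for group 1
group2 = rng.normal(53, 10, size=45)   # made-up scores for group 2

# Normality within each group (Shapiro-Wilk test)
_, p_norm1 = stats.shapiro(group1)
_, p_norm2 = stats.shapiro(group2)

# Equal population variances (Levene's test), the extra assumption of the
# equal-variances-assumed t test
_, p_equal_var = stats.levene(group1, group2)

print(p_norm1, p_norm2, p_equal_var)
```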
Test statistic | Test statistic | Test statistic
$t = \dfrac{(\bar{y}_1 - \bar{y}_2) - 0}{\sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}} = \dfrac{\bar{y}_1 - \bar{y}_2}{\sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}}$
Here $\bar{y}_1$ is the sample mean in group 1, $\bar{y}_2$ is the sample mean in group 2, $s^2_1$ is the sample variance in group 1, $s^2_2$ is the sample variance in group 2, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2. The 0 represents the difference in population means according to the null hypothesis.
The denominator $\sqrt{\frac{s^2_1}{n_1} + \frac{s^2_2}{n_2}}$ is the standard error of the sampling distribution of $\bar{y}_1 - \bar{y}_2$. The $t$ value indicates how many standard errors $\bar{y}_1 - \bar{y}_2$ is removed from 0.
Note: we could just as well compute $\bar{y}_2 - \bar{y}_1$ in the numerator, but then the left sided alternative becomes $\mu_2 < \mu_1$, and the right sided alternative becomes $\mu_2 > \mu_1$.
$t = \dfrac{(\bar{y}_1 - \bar{y}_2) - 0}{s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}} = \dfrac{\bar{y}_1 - \bar{y}_2}{s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}}$
Here $\bar{y}_1$ is the sample mean in group 1, $\bar{y}_2$ is the sample mean in group 2, $s_p$ is the pooled standard deviation, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2. The 0 represents the difference in population means according to the null hypothesis.
The denominator $s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}$ is the standard error of the sampling distribution of $\bar{y}_1 - \bar{y}_2$. The $t$ value indicates how many standard errors $\bar{y}_1 - \bar{y}_2$ is removed from 0.
Note: we could just as well compute $\bar{y}_2 - \bar{y}_1$ in the numerator, but then the left sided alternative becomes $\mu_2 < \mu_1$, and the right sided alternative becomes $\mu_2 > \mu_1$.
$F$ test for the complete regression model:
• \begin{aligned}[t] F &= \dfrac{\sum (\hat{y}_j - \bar{y})^2 / K}{\sum (y_j - \hat{y}_j)^2 / (N - K - 1)}\\ &= \dfrac{\mbox{sum of squares model} / \mbox{degrees of freedom model}}{\mbox{sum of squares error} / \mbox{degrees of freedom error}}\\ &= \dfrac{\mbox{mean square model}}{\mbox{mean square error}} \end{aligned}
where $\hat{y}_j$ is the predicted score on the dependent variable $y$ of subject $j$, $\bar{y}$ is the mean of $y$, $y_j$ is the score on $y$ of subject $j$, $N$ is the total sample size, and $K$ is the number of independent variables.
$t$ test for individual $\beta_k$:
• $t = \dfrac{b_k}{SE_{b_k}}$
• If only one independent variable:
$SE_{b_1} = \dfrac{\sqrt{\sum (y_j - \hat{y}_j)^2 / (N - 2)}}{\sqrt{\sum (x_j - \bar{x})^2}} = \dfrac{s}{\sqrt{\sum (x_j - \bar{x})^2}}$
with $s$ the sample standard deviation of the residuals, $x_j$ the score of subject $j$ on the independent variable $x$, and $\bar{x}$ the mean of $x$. For models with more than one independent variable, computing $SE_{b_k}$ is more complicated.
Note 1: mean square model is also known as mean square regression, and mean square error is also known as mean square residual.
Note 2: if there is only one independent variable in the model ($K = 1$), the $F$ test for the complete regression model is equivalent to the two sided $t$ test for $\beta_1.$
n.a. | Pooled standard deviation | Sample standard deviation of the residuals $s$

-

$s_p = \sqrt{\dfrac{(n_1 - 1) \times s^2_1 + (n_2 - 1) \times s^2_2}{n_1 + n_2 - 2}}$

$\begin{aligned} s &= \sqrt{\dfrac{\sum (y_j - \hat{y}_j)^2}{N - K - 1}}\\ &= \sqrt{\dfrac{\mbox{sum of squares error}}{\mbox{degrees of freedom error}}}\\ &= \sqrt{\mbox{mean square error}} \end{aligned}$

Sampling distribution of $t$ if H0 were true | Sampling distribution of $t$ if H0 were true | Sampling distribution of $F$ and of $t$ if H0 were true

Approximately the $t$ distribution with $k$ degrees of freedom, with $k$ equal to
$k = \dfrac{\Bigg(\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}\Bigg)^2}{\dfrac{1}{n_1 - 1} \Bigg(\dfrac{s^2_1}{n_1}\Bigg)^2 + \dfrac{1}{n_2 - 1} \Bigg(\dfrac{s^2_2}{n_2}\Bigg)^2}$
or $k$ = the smaller of $n_1 - 1$ and $n_2 - 1$. The first definition of $k$ is used by computer programs, the second definition is often used for hand calculations.

$t$ distribution with $n_1 + n_2 - 2$ degrees of freedom

Sampling distribution of $F$:
• $F$ distribution with $K$ (df model, numerator) and $N - K - 1$ (df error, denominator) degrees of freedom
Sampling distribution of $t$:
• $t$ distribution with $N - K - 1$ (df error) degrees of freedom

Significant? | Significant? | Significant?

Two sided: Right sided: Left sided:
Two sided: Right sided: Left sided:
$F$ test:
• Check if $F$ observed in sample is equal to or larger than critical value $F^*$ or
• Find p value corresponding to observed $F$ and check if it is equal to or smaller than $\alpha$
$t$ test two sided: $t$ test right sided: $t$ test left sided:

Approximate $C\%$ confidence interval for $\mu_1 - \mu_2$ | $C\%$ confidence interval for $\mu_1 - \mu_2$ | $C\%$ confidence interval for $\beta_k$ and for $\mu_y$, $C\%$ prediction interval for $y_{new}$

$(\bar{y}_1 - \bar{y}_2) \pm t^* \times \sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}$
where the critical value $t^*$ is the value under the $t_{k}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^* = 2.086$ for a 95% confidence interval when df = 20). The confidence interval for $\mu_1 - \mu_2$ can also be used as a significance test.

$(\bar{y}_1 - \bar{y}_2) \pm t^* \times s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}$
where the critical value $t^*$ is the value under the $t_{n_1 + n_2 - 2}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^* = 2.086$ for a 95% confidence interval when df = 20). The confidence interval for $\mu_1 - \mu_2$ can also be used as a significance test.

Confidence interval for $\beta_k$:
• $b_k \pm t^* \times SE_{b_k}$
• If only one independent variable: $SE_{b_1} = \dfrac{\sqrt{\sum (y_j - \hat{y}_j)^2 / (N - 2)}}{\sqrt{\sum (x_j - \bar{x})^2}} = \dfrac{s}{\sqrt{\sum (x_j - \bar{x})^2}}$
Confidence interval for $\mu_y$, the population mean of $y$ given the values on the independent variables:
• $\hat{y} \pm t^* \times SE_{\hat{y}}$
• If only one independent variable: $SE_{\hat{y}} = s \sqrt{\dfrac{1}{N} + \dfrac{(x^* - \bar{x})^2}{\sum (x_j - \bar{x})^2}}$
Prediction interval for $y_{new}$, the score on $y$ of a future respondent:
• $\hat{y} \pm t^* \times SE_{y_{new}}$
• If only one independent variable: $SE_{y_{new}} = s \sqrt{1 + \dfrac{1}{N} + \dfrac{(x^* - \bar{x})^2}{\sum (x_j - \bar{x})^2}}$
In all formulas, the critical value $t^*$ is the value under the $t_{N - K - 1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^* = 2.086$ for a 95% confidence interval when df = 20).

n.a. | Effect size | Effect size

-

Cohen's $d$: Standardized difference between the mean in group 1 and in group 2:
$d = \dfrac{\bar{y}_1 - \bar{y}_2}{s_p}$
Cohen's $d$ indicates how many standard deviations $s_p$ the two sample means are removed from each other.

Complete model:
• Proportion variance explained $R^2$: Proportion variance of the dependent variable $y$ explained by the sample regression equation (the independent variables):
$\begin{aligned} R^2 &= \dfrac{\sum (\hat{y}_j - \bar{y})^2}{\sum (y_j - \bar{y})^2}\\ &= \dfrac{\mbox{sum of squares model}}{\mbox{sum of squares total}}\\ &= 1 - \dfrac{\mbox{sum of squares error}}{\mbox{sum of squares total}}\\ &= r(y, \hat{y})^2 \end{aligned}$
$R^2$ is the proportion variance explained in the sample by the sample regression equation. It is a positively biased estimate of the proportion variance explained in the population by the population regression equation, $\rho^2$. If there is only one independent variable, $R^2 = r^2$: the correlation between the independent variable $x$ and dependent variable $y$ squared.
• Wherry's $R^2$ / shrunken $R^2$: Corrects for the positive bias in $R^2$ and is equal to
$R^2_W = 1 - \dfrac{N - 1}{N - K - 1}(1 - R^2)$
$R^2_W$ is a less biased estimate than $R^2$ of the proportion variance explained in the population by the population regression equation, $\rho^2$.
• Stein's $R^2$: Estimates the proportion of variance in $y$ that we expect the current sample regression equation to explain in a different sample drawn from the same population. It is equal to
$R^2_S = 1 - \dfrac{(N - 1)(N - 2)(N + 1)}{(N - K - 1)(N - K - 2)(N)}(1 - R^2)$
Per independent variable:
• Correlation squared $r^2_k$: the proportion of the total variance in the dependent variable $y$ that is explained by the independent variable $x_k$, not corrected for the other independent variables in the model
• Semi-partial correlation squared $sr^2_k$: the proportion of the total variance in the dependent variable $y$ that is uniquely explained by the independent variable $x_k$, beyond the part that is already explained by the other independent variables in the model
• Partial correlation squared $pr^2_k$: the proportion of the variance in the dependent variable $y$ not explained by the other independent variables, that is uniquely explained by the independent variable $x_k$

Visual representation | Visual representation | Visual representation

(figures from the original page; the regression panel is captioned "Regression equations with:")

n.a. | n.a. | ANOVA table

- | - | (the ANOVA table is shown as a figure on the original page)

n.a. | Equivalent to | n.a.

-

One way ANOVA with an independent variable with 2 levels ($I$ = 2):
• two sided two sample $t$ test is equivalent to ANOVA $F$ test when $I$ = 2
• two sample $t$ test is equivalent to $t$ test for contrast when $I$ = 2
• two sample $t$ test is equivalent to $t$ test multiple comparisons when $I$ = 2
OLS regression with one categorical independent variable with 2 levels:
• two sided two sample $t$ test is equivalent to $F$ test regression model
• two sample $t$ test is equivalent to $t$ test for regression coefficient $\beta_1$

-
Example context | Example context | Example context
Is the average mental health score different between men and women? | Is the average mental health score different between men and women? Assume that in the population, the standard deviation of mental health scores is equal amongst men and women. | Can mental health be predicted from physical health, economic class, and gender?
SPSS | SPSS | SPSS
Analyze > Compare Means > Independent-Samples T Test...
• Put your dependent (quantitative) variable in the box below Test Variable(s) and your independent (grouping) variable in the box below Grouping Variable
• Click on the Define Groups... button. If you can't click on it, first click on the grouping variable so its background turns yellow
• Fill in the value you have used to indicate your first group in the box next to Group 1, and the value you have used to indicate your second group in the box next to Group 2
• Continue and click OK
Analyze > Compare Means > Independent-Samples T Test...
• Put your dependent (quantitative) variable in the box below Test Variable(s) and your independent (grouping) variable in the box below Grouping Variable
• Click on the Define Groups... button. If you can't click on it, first click on the grouping variable so its background turns yellow
• Fill in the value you have used to indicate your first group in the box next to Group 1, and the value you have used to indicate your second group in the box next to Group 2
• Continue and click OK
Analyze > Regression > Linear...
• Put your dependent variable in the box below Dependent and your independent (predictor) variables in the box below Independent(s)
Jamovi | Jamovi | Jamovi
T-Tests > Independent Samples T-Test
• Put your dependent (quantitative) variable in the box below Dependent Variables and your independent (grouping) variable in the box below Grouping Variable
• Under Tests, select Welch's
• Under Hypothesis, select your alternative hypothesis
T-Tests > Independent Samples T-Test
• Put your dependent (quantitative) variable in the box below Dependent Variables and your independent (grouping) variable in the box below Grouping Variable
• Under Tests, select Student's (selected by default)
• Under Hypothesis, select your alternative hypothesis
Regression > Linear Regression
• Put your dependent variable in the box below Dependent Variable and your independent variables of interval/ratio level in the box below Covariates
• If you also have code (dummy) variables as independent variables, you can put these in the box below Covariates as well
• Instead of transforming your categorical independent variable(s) into code variables, you can also put the untransformed categorical independent variables in the box below Factors. Jamovi will then make the code variables for you 'behind the scenes'
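To make the overview concrete, here is a small Python sketch (not part of the original page) that runs the three methods above on made-up data; it assumes scipy and statsmodels are installed, and every variable name is invented for the example:

```
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)
men = rng.normal(50, 10, size=40)     # mental health scores, group 1
women = rng.normal(53, 12, size=45)   # mental health scores, group 2

# Two sample t test, equal variances not assumed (Welch)
welch = stats.ttest_ind(men, women, equal_var=False)

# Two sample t test, equal variances assumed (Student)
student = stats.ttest_ind(men, women, equal_var=True)

# OLS regression of the scores on a code (dummy) variable for group
y = np.concatenate([men, women])
group = np.concatenate([np.zeros(men.size), np.ones(women.size)])
X = sm.add_constant(group)
ols = sm.OLS(y, X).fit()

print(welch.statistic, welch.pvalue)      # Welch t and p
print(student.statistic, student.pvalue)  # Student t and p
print(ols.fvalue, ols.pvalues[1])         # model F; p for the group coefficient
```

With a single dichotomous predictor, the Student $t$ test and the regression $t$ test for the group coefficient agree, as stated in the "Equivalent to" row above.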
Practice questionsPractice questionsPractice questions | 2022-01-28 20:31:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9099686145782471, "perplexity": 1550.4612496758077}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320306335.77/warc/CC-MAIN-20220128182552-20220128212552-00421.warc.gz"} |
http://www.ams.org/mathscinet-getitem?mr=0310865 | MathSciNet bibliographic data MR310865 55C20 (57C99) Kwun, K. W. Sense-preserving ${\rm PL}$${\rm PL}$ involutions of some lens spaces. Michigan Math. J. 20 (1973), 73–77. Article
For users without a MathSciNet license , Relay Station allows linking from MR numbers in online mathematical literature directly to electronic journals and original articles. Subscribers receive the added value of full MathSciNet reviews. | 2016-06-27 08:42:58 | {"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.998753011226654, "perplexity": 8740.442227824844}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395679.18/warc/CC-MAIN-20160624154955-00110-ip-10-164-35-72.ec2.internal.warc.gz"} |
http://codeforces.com/blog/andreyv | ### andreyv's blog
By andreyv, 5 years ago, translation
As you know, the C++ language assumes that the programmer is always correct. Therefore C++ compilers don't add additional checks to the program, such as checks for null pointer dereference or out-of-bounds array access. This is good, because C++ programs run as fast as possible, and this is bad, because sometimes we may spend a long time debugging some silly mistake. We would like the compiler to find such mistakes automatically. And many compilers can! In this post I will show various GCC options that do this. Previously, zakharvoit already wrote about this here.
All options that will follow should be added to the GCC command line. In various IDEs you can set them in the IDE or compiler settings. Many of the options can also be used with Clang (for example, in Xcode). For MSVC++, I think, there is nothing better than Debug mode and /W4.
By andreyv, 7 years ago, translation
TCO 2013 Algorithm Round 2A will be held today at 20:00 Moscow time. Those who previously advanced in Round 1 may compete in this round. Registration begins three hours before the contest. First 50 places advance to Round 3.
By andreyv, 7 years ago, translation
### Div. 2 A — Dragons
Observe that if Kirito fights a dragon whose strength is less than Kirito's strength, then Kirito does not lose anything — in fact, he even gains a nonnegative strength increase. Taking note of this, let's for each step choose some dragon whose strength is less than Kirito's current strength, and fight it. After performing some amount of these steps we'll eventually end up in one of these two situations: either all dragons are slain (then the answer is "YES"), or only dragons whose strength is not less than Kirito's strength remain (then the answer is "NO"). On each step we can choose a suitable dragon to fight either by searching through all dragons or by sorting the dragons by strength in non-descending order in advance.
The complexity of the solution is O(n^2) or O(n log n). Sample solution: http://pastie.org/4897164
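A minimal sketch of this greedy strategy in Python (an illustration only, not the linked sample solution; reading the input is omitted):

```
def can_slay_all(strength, dragons):
    # dragons: list of (dragon_strength, bonus) pairs
    for power, bonus in sorted(dragons):   # weakest dragons first
        if strength <= power:              # Kirito cannot beat this dragon
            return False
        strength += bonus                  # nonnegative strength increase
    return True

print(can_slay_all(2, [(100, 0), (1, 99)]))  # True  -> "YES"
print(can_slay_all(10, [(100, 100)]))        # False -> "NO"
```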
By andreyv, 7 years ago, translation
Hello!
I wish to continue the discussion on C++ input/output performance, which was started by freopen long ago. freopen compared the speed of the two common input/output methods in C++: the stdio library, inherited from C (<stdio>), and the newer iostreams library (<iostream>/…). However, these tests did not account for the fact that there are several important iostreams optimizations which can be used. They have been mentioned more than once on Codeforces (first, second, third). I have written a program that compares performance of the stdio and iostreams libraries with these optimizations turned on. | 2020-01-22 00:52:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3382086157798767, "perplexity": 2333.4950277913244}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606226.29/warc/CC-MAIN-20200121222429-20200122011429-00096.warc.gz"} |
https://rapidlibunka.web.app/download-torrent-no-tracking-and-seed-xa.html | 19 Oct 2017 Reusing the magnet link does not work however only getting torrent file from AFAIK "stalled" is a status when your programs wants to download, but when you try to download torrent which have low number of seed/peers. 2 Jan 2020 It's not always immediately apparent which content is legal to torrent and which isn't. Download and install a VPN matching the criteria mentioned above. Users connected to the same tracker are called peers, and they fall into As a general rule, it's considered proper pirate etiquette to seed as much
## 4 Dec 2018 You can download the torrent file or magnet URL and open it in your favourite torrent A torrent tracker is a server that keeps track of the number of Solution torrent stuck; Increase the number of peers and seeds for a torrent
17 Mar 2009 A peer that only uploads content is called a seed, while a peer that A client that wants to download a file can learn about other peers that The algorithm ensures that no tracker will see an increase of more than $\tilde{x}$ 4 Jul 2017 Here's how to torrent without seeding, and why you might want to avoid seeding. However, sometimes you may want only to download files. anyone on the torrent network will not be able to trace any activity back to you. According to this article, you can use Amazon S3 as the torrent tracker only by you can download the *.torrent and start seeding from your local PC without 7 Oct 2006 Is it true that you can get banned from certain tracker for freeloading? How do You are not downloading the torrent from the actual site you get it from. So if you download a file that is 1GB then seed back 1GB. That is why it These are the best VPNs for BitTorrent, whether you're a seeder or a leecher on the Advertisers and others on the web will have a harder time tracking your and complete protection, or would rather not have their download interrupted. The torrent file can then be passed on to other users, who can download the file, a progress bar, download speed, how many seeds / leeches, and estimated time The tracker itself does not have a copy of the file, it only tracks the up- and | 2022-07-04 02:53:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22620850801467896, "perplexity": 1668.7730762706376}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104293758.72/warc/CC-MAIN-20220704015700-20220704045700-00424.warc.gz"} |
https://www.gamedev.net/forums/topic/266594-suns-image-loaders/ | This topic is 5020 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.
sometimes (actually almost every second time) I do not get the image width and height when I use Sun's getImage()?? Has anyone had the same problem before? Any good solution for this, or is it best to write my own image loader?
Do you use Toolkit.getDefaultToolkit().getImage(String)? If so, you should also have a MediaTracker that keeps track of whether the image has been loaded or not.
```
MediaTracker track = new MediaTracker(myComponent);
track.addImage(myImage, 1);   // addImage takes (Image, int id)
try {
    track.waitForAll();
} catch (InterruptedException e) {
}
```
yes, I do.. but there is still something quirky going on.. :)
Could we see some code please :)
If you can, use ImageIO.read(getClass().getResource("/yourimage.jpg")) to load images without requiring a MediaTracker. The only constraint here is that for an applet, you might have to put the images in the same JAR file as the code (I'm not sure if I recall this correctly though).
• 41 | 2018-05-27 03:48:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36210355162620544, "perplexity": 2470.2904762293765}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867995.55/warc/CC-MAIN-20180527024953-20180527044953-00465.warc.gz"} |
https://www.expii.com/t/simple-interest-formula-4249 | Expii
# Simple Interest Formula - Expii
The simple interest formula states that interest is equal to the principal (or starting amount) times the rate times the time. I=PRT. | 2021-03-08 00:25:42 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8510984778404236, "perplexity": 3650.0404701313173}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178381230.99/warc/CC-MAIN-20210307231028-20210308021028-00211.warc.gz"} |
https://ay201b.wordpress.com/tag/probes-of-composition/ | # Harvard Astronomy 201b
## CHAPTER: Measuring States in the ISM
In Book Chapter on February 26, 2013 at 3:00 am
(updated for 2013)
There are two primary observational diagnostics of the thermal, chemical, and ionization states in the ISM:
1. Spectral Energy Distribution (SED; broadband low-resolution)
2. Spectrum (narrowband, high-resolution)
#### SEDs
Very generally, if a source’s SED is blackbody-like, one can fit a Planck function to the SED and derive the temperature and column density (if one can assume LTE). If an SED is not blackbody-like, the emission is the sum of various processes, including:
• thermal emission (e.g. dust, CMB)
• synchrotron emission (power law spectrum)
• free-free emission (thermal for a thermal electron distribution)
#### Spectra
Quantum mechanics combined with chemistry can predict line strengths. Ratios of lines can be used to model “excitation”, i.e. what physical conditions (density, temperature, radiation field, ionization fraction, etc.) lead to the observed distribution of line strengths. Excitation is controlled by
• collisions between particles (LTE often assumed, but not always true)
• photons from the interstellar radiation field, nearby stars, shocks, CMB, chemistry, cosmic rays
• recombination/ionization/dissociation
Which of these processes matter where? In class (2011), we drew the following schematic.
A schematic of several structures in the ISM
Key
A: Dense molecular cloud with stars forming within
• $T=10-50~{\rm K};~n>10^3~{\rm cm}^{-3}$ (measured, e.g., from line ratios)
• gas is mostly molecular (low T, high n, self-shielding from UV photons, few shocks)
• not much photoionization due to high extinction (but could be complicated ionization structure due to patchy extinction)
• cosmic rays can penetrate, leading to fractional ionization: $X_I=n_i/(n_H+n_i) \approx n_i/n_H \propto n_H^{-1/2}$, where $n_i$ is the ion density (see Draine 16.5 for details). Measured values for $X_e$ (the electron-to-neutral ratio, which is presumed equal to the ionization fraction) are about $X_e \sim 10^{-6}~{\rm to}~10^{-7}$. (A short derivation of the $n_H^{-1/2}$ scaling is sketched just after this list.)
• possible shocks due to impinging HII region – could raise T, n, ionization, and change chemistry globally
• shocks due to embedded young stars w/ outflows and winds -> local changes in T, n, ionization, chemistry
• time evolution? feedback from stars formed within?
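Where does the $n_H^{-1/2}$ scaling of the ionization fraction come from? A standard back-of-the-envelope argument (not in the original class notes): in steady state the cosmic-ray ionization rate per unit volume, $\zeta n_H$, balances recombination, $\alpha n_e n_i$. With $n_e \approx n_i$ this gives $n_i \approx \sqrt{\zeta n_H/\alpha}$, so $X_I \approx n_i/n_H \propto n_H^{-1/2}$.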
B: Cluster of OB stars (an HII region ionized by their integrated radiation)
• 7000 < T < 10,000 K (from line ratios)
• gas primarily ionized due to photons beyond Lyman limit (E > 13.6 eV) produced by O stars
• elements other than H have different ionization energy, so will ionize more or less easily
• HII regions are often clumpy; this is observed as a deficit in the average value of $n_e$ from continuum radiation over the entire region as compared to the value of $n_e$ derived from line ratios. In other words, certain regions are denser (in ionized gas) than others.
• The above introduces the idea of a filling factor, defined as the ratio of filled volume to total volume (in this case the filled volume is that of ionized gas); a note just after this list shows how the filling factor is estimated
• dust is present in HII regions (as evidenced by observations of scattered light), though the smaller grains may be destroyed
• significant radio emission: free-free (bremsstrahlung), synchrotron, and recombination line (e.g. H76a)
• chemistry is highly dependent on n, T, flux, and time
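How is the filling factor estimated? A standard argument (not spelled out in the original notes): if clumps of electron density $n_c$ fill a fraction $f$ of the volume along a path of length $L$, the emission measure is $\int n_e^2\,dl = f\,n_c^2\,L$, so the rms density inferred from the continuum is $n_{\rm rms}=\sqrt{f}\,n_c$ and therefore $f=(n_{\rm rms}/n_c)^2$, with $n_c$ taken from the line-ratio (clump) density.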
C: Supernova remnant
• gas can be ionized in shocks by collisions (high velocities required to produce high energy collisions, high T)
• e.g. if v > 1000 km/s, T > $10^6$ K
• atom-electron collisions will ionize H, He; produce x-rays; produce highly ionized heavy elements
• gas can also be excited (e.g. vibrational H2 emission) and dissociated by shocks
D: General diffuse ISM
• ne best measured from pulsar dispersion measure (DM), an observable. ${\rm DM} \propto \int n_e dl$ | 2020-01-26 06:43:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 7, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8654639720916748, "perplexity": 7284.056583814305}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251687725.76/warc/CC-MAIN-20200126043644-20200126073644-00016.warc.gz"} |
https://www.projecteuclid.org/euclid.aos/1176345802 | ## The Annals of Statistics
### Nonparametric Estimation in the Presence of Length Bias
Y. Vardi
#### Abstract
We derive the nonparametric maximum likelihood estimate, $\hat{F}$ say, of a lifetime distribution $F$ on the basis of two independent samples, one a sample of size $m$ from $F$ and the other a sample of size $n$ from the length-biased distribution of $F$, i.e. from $G_F(x) = \int^x_0 u dF(u)/\mu, \mu = \int^\infty_0 x dF(x)$. We further show that $(m + n)^{1/2}(\hat{F} - F)$ converges weakly to a pinned Gaussian process with a simple covariance function, when $m + n \rightarrow \infty$ and $m/n \rightarrow$ constant. Potential applications are described.
#### Article information
Source
Ann. Statist., Volume 10, Number 2 (1982), 616-620.
Dates
First available in Project Euclid: 12 April 2007
https://projecteuclid.org/euclid.aos/1176345802
Digital Object Identifier
doi:10.1214/aos/1176345802
Mathematical Reviews number (MathSciNet)
MR653536
Zentralblatt MATH identifier
0491.62034
JSTOR | 2019-10-19 23:35:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4220559895038605, "perplexity": 843.1002968686645}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986700435.69/warc/CC-MAIN-20191019214624-20191020002124-00006.warc.gz"} |
https://hbfs.wordpress.com/2010/05/11/failed-experiment/ | ## Failed Experiment
Experiments do not always work as planned. Sometimes you may invest a lot of time into a (sub)project only to get no, or only moderately interesting results. Such a (moderately) failed experiment is the topic of this week’s blog post.
Some time ago I wrote a CSV exporter for an application I was writing, and among the fields I needed to export were floating point values. The application was developed under Visual Studio 2005 and I really didn't like how VS2005's printf function handled the formats for floats. To export values losslessly, that is, so that you could read back exactly what you wrote to the file, I decided to use the "%e" format specifier for printf. It turned out that it was neither lossless nor minimal!
If you use "%f" as a format specifier, printf uses a readable default format that is friendly only to moderate values, that is, neither too small nor too big. If the value printed is too big or too small, you lose precision or you get an exaggeratedly long representation. Consider:
```printf("%f\n",1e-20f);
printf("%f\n",3.14e20f);
```
you get the output:
```0.000000
314000008403897810944.000000
```
In the first case, you lose all information, in the second you have an extra long representation. However, if you use the engineering notation specifier "%e", you get the more sensible (but not always shorter) output:
```9.99999968e-021
3.14000008e+020
```
With GCC, you get e+20 instead of e+020 but you still get extra non-significant + and zeroes, and, as a bonus, weird rounding artifacts. Indeed, 3.14e20 is a valid and sufficient representation of the same number. Visually, we could reduce the printf format to "%e", but while it would be visually more pleasing, rescanning output values will not always yield the original machine-precision float. In some cases it does, but not always, and that's bad.
So if "%e" isn’t sufficient (it defaults to 6 digits after the point), how many digits do we need? A float in IEEE 754 representation as one sign bit, 8 bits for exponent and 23 bits for the mantissa. Since it also has a virtual most significant bit (which is 1), the mantissa can be seen as 24 bits, with a leading 1. This leaves us with $\log_{10} 2^{24} = 24 \log_{10} 2 \approx 7.225$ digits, which makes ".7e" a bit short, as quick testing shows, and makes ".8e" format necessary.
So the first goal here is to save floats in text losslessly, but we'd also like them to be as short as possible, not so much for data compaction as for human reading; 3.14e20 is still better than 3.1400000e+020. So, naively, I set out to write "clean up" code:
```////////////////////////////////////////
//
// Rules:
//
// floats are of the form:
//
// x.yyyyyyyyye±zzz
//
// Removes sign after e if it's a +
// Removes all non significant zeroes.
// Packs the resulting representation
//
static void fd_strip(char s[])
{
// find e
char * e = strchr(s,'e');
if (e)
{
char * d = e+1;
// remove (redundant) unary +
if (*d=='+') *d='_';
d++;
// strip leading zeroes in exponent
while (*d && (*d=='0')) *d++='_';
if (*d==0) // the exponent is all zeroes?
*e='_'; // remove e as well
// rewind and remove non-significant zeroes
e--;
while ((e>s) && (*e=='0')) *e--='_';
// go forward and pack over _
for (d=e; *e; e++)
if (*e!='_') *d++=*e;
*d=0;
}
else
{
// some kind of nan or denormal,
// don't touch! (like ±inf)
}
}
////////////////////////////////////////
//
// simplistic itoa-style conversion for
// floats (buffer should be at least 17
// chars long: sign, digit, '.', 8 digits,
// 'e', exponent sign, up to 3 exponent digits, and the final \0)
//
void ftoa(float f, char buffer [])
{
snprintf(buffer,17,"%.8e",f); // redundant but safer; 17 covers the worst case, e.g. -3.14000008e+020
fd_strip(buffer);
}
```
As I have said in a previous post, if your function's domain is small enough, you should use exhaustive testing rather than manual testing based on a priori knowledge you have. Floats are a pest to enumerate in strict order (because, for example, the machine-specific epsilon works only for numbers close to 1, and does nothing for 1e20) so I built a (machine-specific) bit-field:
```typedef union
{
float value; // The float's value
unsigned int int_value;
struct
{
unsigned int mantissa:23;
unsigned int exponent:8;
unsigned int sign:1;
} sFloatBits; // internal float structure
} tFloatStruct;
```
that allows me to control every part of a float. Looping through signs, exponents, and mantissas allows us to generate all possible floats, including infinities, denormals, and NaNs.
The main loop looks like:
```
tFloatStruct fx;
for (sign=0;sign<2;sign++)
{
fx.sFloatBits.sign = sign;
for (exponent=0; exponent<256; exponent++)
{
fx.sFloatBits.exponent=exponent;
for (mantissa=0; mantissa< (1u<<24); mantissa++)
{
float new_fx, diff;
char ftoa_buffer[40]; // some room for system-specific behavior
char sprintf_buffer[40]; // ?
fx.sFloatBits.mantissa=mantissa;
if (isnan(fx.value))
{
// we don't really care for
// NaNs, but we should check
// that they decode all right?
//
// but nan==nan is always false!
}
else
{
// once in a while
if ((mantissa & 0xffffu)==0)
{
printf("\r%1x:%02x:%06x %-.8e",sign,exponent,mantissa, fx.value);
fflush(stdout); // prevents display bugs
}
how_many++;
ftoa(fx.value,ftoa_buffer);
sprintf(sprintf_buffer,"%.8e",fx.value);
// gather stats on length
//
ftoa_length+=strlen(ftoa_buffer);
sprintf_length+=strlen(sprintf_buffer);
// check if invertible
//
new_fx = (float)atof( ftoa_buffer );
if (new_fx!=fx.value)
{
diff = (new_fx - fx.value);
printf("\n%e %s %e %e\n", fx.value, ftoa_buffer, new_fx, diff);
}
}
} // for mantissa
} // for exp
} // for sign
printf("\n\n");
printf(" average length for %%-.8e = %f\n", sprintf_length/(float)how_many);
printf(" average length for ftoa = %f\n", ftoa_length/(float)how_many);
printf(" saved: %f\n",(sprintf_length-ftoa_length)/(float)how_many);
```
So, we run this and verify that 1) all floats are transcoded losslessly and 2) the ftoa is much shorter than printf‘s. Or is it?
*
* *
After a few hours of running the program (it takes a little more than 6 hours on my computer at home), the results are somewhat disappointing. First, it does transcode all floats correctly. But it doesn’t shorten the representation significantly.
Using GCC (and ICC), you get that the average representation length out of ".8e" without tweaking is 14.5 digits (including signs and exponents). Using ftoa (and fd_strip), the representation is shortened to 13.53 digits on average, an average saving of 0.96, which is far from fantastic.
With Visual Studio, the savings are a bit better, but clearly not supercalifragilisticexpialidocious either: from an average of 15.5 digits, it reduces to 13.6, an average saving of 1.9 digits.
With doubles, the results are quite similar. On GCC (and ICC) you start with an average length of 23.2, and of 22.5 after “simplification”. For double, you have to use ".16e" to get losslessness.
*
* *
So, it turns out that it was a lot of work for not much. On the bright side, we figured out that 7 digits aren't always enough to save floats in text and get them back losslessly, even though the documentation says seven should be enough. Maybe it's a platform-specific artifact, maybe not; anyway, it's much better to save numbers losslessly than to save them so that they are merely pretty to look at.
Acknowledgements
I would like to thank my friend Mathieu for running the experiments (and help debug) on Visual Studio Express 2008. The help is greatly appreciated, as I didn’t have a Windows+Visual Studio machine at hand at the time of writing.
### 8 Responses to Failed Experiment
1. Chris Chiesa says:
I haven’t read this entire thing — I’m looking at it at work, and it’s long, so I’m saving it on my “to be read later” list — but I do have one thing to say after looking at just your first few examples.
It’s not clear that you will EVER be able to ‘read back into memory exactly what you wrote out,” because the exact conversion between binary and decimal fractions may be nonterminating; 1/5, for example, can be represented exactly in decimal (0.2) but not in binary (0.0011 0011 0011 …). Other examples work oppositely, having finite representations in binary but not in decimal.
Second, the “weird rounding errors” you cite in your first example (3.14e20) are an unavoidable consequence of the usual, “standard,” binary representations of floating-point numbers, i.e. in a mantissa-and-exponent format like IEEE et al. The real numbers are an infinite set, while the numbers representable in a computer are a finite set. If you have an M-bit mantissa, only 2eM discrete values can be represented between 2eN and 2e(N+1), no matter how large N is. Thus, there are the same number of values available between e.g. 2e2 = 4 and 2e3 = 8 as there are between 2e20 = 1,048,576 and 2e21 =2,097,152. Clearly, the exactly-representable values will be 2e18 times as far apart in the latter case as in the first, which leaves room for a vast number of values that CAN’T be represented exactly. So your “weird rounding error” isn’t exactly a “rounding” error, it’s more of a “quantizing” error — when you code “float a = 3.14e20,” the value actually stored in the variable is only the nearest REPRESENTABLE value to 3.14e20 — and that’s what you get, in decimal, when you print it. Clearly the nearest available value to 3.14e20 is different from the exact value by 8,403,897,810,944 — over 8 trillion. That’s not very exact, in my book.
The only real way to “get out what you put in,” and later “get back in what you put out,” is to do all your math using numeric strings. There are packages for doing that. Look up “arbitrary precision arithmetic.”
• Steven Pigeon says:
The weird rounding errors I was thinking about/referring to aren’t this at all (though you make a valid point).
On the x86 platform, the FPU registers are 80 bits long; depending on how the compiler generates code, you may get unpredictable rounding errors—or let’s just say error. What happens is that to get faster code, the compiler will try to keep all computation into the registers, writing them back (and reducing their precision back to 64 or 32 bits) only when absolutely necessary. If you run a program on a platform (CPU, compiler) that enforces strict IEEE 754 compliance, you get the (IEEE) expected result. If you leave the compiler to optimize freely, result may vary.
But what I was thinking of was rather the printf conversion routines themselves. I remember (while I can’t materially demonstrate it right now) that Visual Studio 2003 gave different roundings than gcc, and gcc seemed more accurate back then.
While we can debate, as you point out, what exactly is a rounding error in a given context, I would still expect the standard C library to output predictable representations of floating point numbers, which still isn’t quite the case (cf. e+020 vs e+20 vs e20).
2. Barrie says:
Not all floating point numbers are representable in a finite number of decimal digits. If you haven’t read it, check out “What Every Computer Scientist Should Know About Floating-Point Arithmetic” by David Goldberg.
To be lossless, one approach is to serialize floats as either a device-dependent hex blob or, for device independence, by serializing each of the sFloatBits’s fields as integers in decimal or hex.
– Barrie
• Steven Pigeon says:
You’re mistaking an arbitrary ‘real number’ for a ‘floating point number’ here. First, floats can only represent rational numbers, and only a very small subset of those; the only thing you can control is precision-limited rounding, to make sure the ASCII version of the number decodes back to the original value, within the limitations of rounding (and possibly implementation-specific defects).
Of course, you can’t expect to recover 1/3 exactly (or any fraction that has a periodic binary expansion) but you can make sure through explicit control of rounding that you recover the best-effort representation of 1/3 (or any other rational number allowable by your float) and that, all in all, you load back the same number you saved. The number we meant to save is, of course, irrelevant.
And yes, I have read the paper by Goldberg. And the books by Acton (before somebody suggest that I read those too). And a couple others :p
The binary blob approach is unfortunately not human-readable (which was one of my requirements back then) and may not play well with others—making importing into a spreadsheet impossible, for example. However, there’s a low risk of machine specificity (as ISO/IEC 9899:1999 Annex F imposes IEC 60559 (a.k.a. IEEE 754-1985) as the floating-point representation) and you could write quite portable code. Using existing code is much better, though.
3. Ken says:
Have you seen C99’s hexadecimal floats? I’ve adapted your test loop to run all floats through the hex double formatter %a, and as you might expect given that it uses powers of 2 rather than 10, all of the formatted values mapped back to the original values (neglecting nans). I haven’t tested doubles, but gcc’s documentation suggests this is true for those as well.
Anyway, the test code: http://c.pastebin.com/gvW1sdA4
As it is C99, I don’t know if Visual Studio will accept this even if you do pretend it’s a C++ source.
```// This is C99 code. In GCC, compile with --std=c99
#include <stdio.h>
#include <math.h>
typedef union
{
float value; // The float's value
unsigned int int_value;
struct
{
unsigned int mantissa:23;
unsigned int exponent:8;
unsigned int sign:1;
} sFloatBits; // internal float structure
} tFloatStruct;
int main() {
tFloatStruct fx;
unsigned long count=0, bad=0;
for (int sign=0;sign<2;sign++)
{
fx.sFloatBits.sign = sign;
for (int exponent=0; exponent<256; exponent++)
{
fx.sFloatBits.exponent=exponent;
for (int mantissa=0; mantissa< (1u<<23); mantissa++) // the mantissa bit-field is 23 bits wide, so 2^23 patterns
{
float new_fx, diff;
char sprintf_buffer[128];
fx.sFloatBits.mantissa=mantissa;
count++;
if (isnan(fx.value))
{
// we don't really care for
// NaNs, but we should check
// that they decode all right?
//
// but nan==nan is always false!
}
else
{
sprintf(sprintf_buffer,"%a",fx.value);
// check if invertible
//
sscanf(sprintf_buffer,"%a", &new_fx);
if (new_fx!=fx.value)
{
bad++; // count values that do not survive the text round-trip
diff = (new_fx - fx.value);
printf("BAD: %g != %a by ~%g\n", fx.value, new_fx, diff);
}
}
} // for mantissa
printf("%a\n", fx.value);
} // for exp
} // for sign
printf("%lu bad out of %lu\n", bad, count);
}
```
• Steven Pigeon says:
That’s actually a pretty nifty way of exporting floats losslessly.
I don’t know how far Visual Studio is with supporting C99; I know I still hear friends complaining that it’s missing simple headers such as <stdint.h>. I know that in some cases, you can use third party header files. For %a, I would need to check.
To prevent the paste from expiring, I took the liberty of adding it to your post.
4. pete says:
I’m catching up here.
IEEE floats have an “implied bit” (http://en.wikipedia.org/wiki/Single_precision_floating-point_format), so it’s 23+1 bits = 7.2 digits, hence the need for 8 digits.
But note that even at eight digits, the decimal number is only an approximation: if a rational number described by a floating point number needs n “binary digits” to the right of the “binary point” to represent, then it will also need n decimal digits to the right of the decimal point; for example 1/16 = 0.0001xb = 0.0625, both have four “decimals” and this continues. [I have a more formal, algebraic proof, but informally 0.1xb is 0.5, and if you divide a decimal ending in 5 by 2, then–because it’s odd–you’re guaranteed a digit to the right which is itself 5.] The upshot is the exact single-precision number used to represent the decimal 0.2 takes 24 digits to be written in decimal. Floats are evil.
I looked at the C standard and can’t see a portable way to make guarantees about rounding, beyond %a. Although there are ways to prevent the use of 80-bit numbers. (And in 64-bit mode you should be using the SSE regs, not the x87, so those kinds of artifacts will be a thing of the past.)
• Steven Pigeon says:
That’s 24 bits precision counting the virtual most significant bit. That’s $\log_{10} 2^{24} = 24 \log_{10} 2 \approx 7.225$ digits, therefore 8. Good catch. | 2017-05-26 11:16:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5735480189323425, "perplexity": 1643.5045426035585}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608659.43/warc/CC-MAIN-20170526105726-20170526125726-00351.warc.gz"} |
https://dataspace.princeton.edu/handle/88435/dsp01st74ct39z | Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01st74ct39z
Title: Platonic Matters
Authors: Rosenblatt, Aviv Kalman
Advisors: Wildberg, Christian
Contributors: Classics Department
Subjects: Philosophy; Classical studies
Issue Date: 2020
Publisher: Princeton, NJ : Princeton University
Abstract: Plato’s thought begins with a sharp division between a higher and lower world. But as the lower world is by definition unintellectual, a question arises about how it can link to the higher world: Can sensible things participate in their Forms without having any receptivity to them? And wouldn’t that receptivity prove at least inchoately intellectual in some critical sense? This problem is endemic to Platonism, and felt acutely by Aristotle and Plotinus. Given that the higher principle was largely expected to be immutable, the challenge these philosophers faced was to find a way to endow the lower element with a capacity for relating to the higher element. Examining how the three thinkers address this problem can shed new light on some key aspects of their thought. Plotinian emanation can be read as the product of two Platonic approaches. Appropriating Pythagorean number theory, Plato envisaged a diminishing series of intelligible principles, ending in a final mediating figure that bridged the gap with the sensible world. Plotinian emanation systematizes this insight, also hypostasizing a mediating figure – Soul. Plotinus reserves an important role in emanation for desire and the will, which characterize the highest principle as well as the lowest, unintellectual things. The concept of desire used by Plotinus was born in Plato’s Symposium, where eros occupies an intermediate position between ugliness and beauty, lacking and having, and, as Socrates’ characteristic stance, between ignorance and wisdom. Love can thus be of the higher element, despite (or thanks to) not possessing it. Plato’s Symposium provided inspiration for Aristotle’s attempt to resolve the ontological gap through a teleological hylomorphism which binds together the two worlds. Aristotle and Plotinus’s attempts to bridge the ontological/epistemic gap attenuate the intellectual character of the higher world. Two basic elements of Platonism are thus affected: The connection between the two worlds comes to be based on desire rather than intellect; and the nature of the first principle is associated with actualization or simplicity/unity, rather than intellect.
URI: http://arks.princeton.edu/ark:/88435/dsp01st74ct39z
Alternate format: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: catalog.princeton.edu
Type of Material: Academic dissertations (Ph.D.)
Language: en
Appears in Collections: Classics
Files in This Item:
File Description SizeFormat | 2020-09-28 23:06:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.370604008436203, "perplexity": 6597.76242851712}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401614309.85/warc/CC-MAIN-20200928202758-20200928232758-00772.warc.gz"} |
http://www.acooke.org/cute/ | # C[omp]ute
Welcome to my blog, which was once a mailing list of the same name and is still generated by mail. Please reply via the "comment" links.
Always interested in offers/projects/new ideas. Eclectic experience in fields like: numerical computing; Python web; Java enterprise; functional languages; GPGPU; SQL databases; etc. Based in Santiago, Chile; telecommute worldwide. CV; email.
© 2006-2015 Andrew Cooke (site) / post authors (content).
## [News] 2019: The year revolt went global
From: andrew cooke <andrew@...>
Date: Thu, 12 Dec 2019 19:29:26 -0300
https://thefifthwave.wordpress.com/2019/12/10/2019-the-year-revolt-went-global/
https://www.bloomberg.com/opinion/articles/2019-10-21/protesters-worldwide-are-united-by-something-other-than-politics
Andrew
## Previous Entries
### [Politics] The world's most-surveilled cities
From: andrew cooke <andrew@...>
Date: Thu, 19 Sep 2019 17:23:23 -0300
https://www.comparitech.com/vpn-privacy/the-worlds-most-surveilled-cities
Andrew
### [Bike] Hope Freehub
From: andrew cooke <andrew@...>
Date: Sat, 31 Aug 2019 21:54:18 -0400
Some notes on the Hope specific freehub and related cassettes. In
general, more force is needed than you might expect when manhandling
these things.
* The two-piece cassette is difficult to separate. It helps to cool
the entire cassette (since aluminium has a higher thermal expansion
coeff than steel) and then lever them apart with something wooden.
* Putting them back together is similarly fraught. The spot on the
aluminium spider aligns with the biggest gap in the internal
'teeth' of the steel part. Aligned correctly you can see all teeth
should be OK (any other alignment and some teeth are 'blocked').
With that, place the alloy side down on a flat wooden surface and
then lean on the steel part (I used some stiff gloves to protect my
hand and a fair amount of weight).
* Removing end-caps on the front hub is easily done by pushing them
out with a pencil from the other side.
* Removing the existing freehub was possible by clamping it in a
wooden vice and pulling the hub upwards.
* The replacement freehub goes on easily enough - you need to push
the pawls into place - but the green seal again needs a fair amount
of force from a wooden implement before it clicks inside.
* The funny looking endcap is QR; the shorter normal endcap is
12x135; the longer 12x142.
And Hope are responsive (if not overly effusive) to enquiries at the
Andrew
### [Restaurant] Mama Chau's (Chinese, Providencia)
From: andrew cooke <andrew@...>
Date: Sat, 31 Aug 2019 21:10:22 -0400
https://www.instagram.com/mamachaus/
Really excellent food. A refreshing change.
We went here Friday evening, a belated celebration of Paulina's
birthday. Fairly early, because they close some time around 8pm.
Paulina ordered a selection of dumplings and a bao (stuffed steamed
bread); I ordered a crepe. Sharing, so that we sampled as much as
possible, there was more than enough for us both. Relatively healthy
food with plenty of taste that was still solid enough to leave you
contentedly full.
It has a very small eating area, but also does take-aways. Everyone
else appeared to be half our age. It was very popular, perhaps
because of this recent review -
http://finde.latercera.com/comer/mama-chaus-chino-providencia/ - or
perhaps because it's damn good.
Service is minimal - you order and receive a pinger. When the pinger
pings you go collect your tray of food. There's a fair amount of
packaging, but it's mainly paper-based.
Andrew
### [Politics] Brexit Podcast
From: andrew cooke <andrew@...>
Date: Sat, 31 Aug 2019 21:01:55 -0400
Not dumb.
https://fivethirtyeight.com/features/politics-podcast-is-britain-in-the-middle-of-a-constitutional-crisis/?ex_cid=trump-approval
Andrew
### [Diary] Pneumonia
From: andrew cooke <andrew@...>
Date: Fri, 30 Aug 2019 11:25:00 -0400
I want to make some notes (similar to those on the bike accident) to
help remember the sequence of recent events related to me being
hospitalized for pneumonia.
On Thu Aug 1 we flew to Edinburgh. The day before (or two days
before?) Paulina's brother had stayed in our flat, apparently quite
ill, coughing and vomiting.
In Edinburgh we were in good condition, walking a fair amount (I was /
am still recovering from the broken leg and ensuing problems).
On Tue Aug 6 my sister drove me down to my parents (Paulina stayed in
Edinburgh at a conference). In the car I was coughing a lot.
On Fri Aug 9 I went to meet Paulina at the local train station and
wasn't feeling so good.
The plan was to take the family (including sister) to dinner on Sunday
evening. I spent most of Sunday in bed, hoping I would be well enough
for the meal to go ahead; in the later afternoon I had a temperature
and we cancelled.
The next few days I thought I had the flu - intermittent temperature,
shivers, coughing, etc. At one point I noticed that I was coughing up
phlegm that contained some blood.
On Tue Aug 13 the rest of the family insisted I go see a local doctor.
The doctor sent me directly to the local hospital, where I stayed for
two nights. Initially there was concern I had TB (so I had a
'private' room), but tests showed pneumonia (strep). I was on a drip
for hydration (maybe 24 hours) and antibiotics (48 hours).
I had been taking Ibuprofen-based flu medication to help with MS
symptoms, but apparently this raised the chance of Kidney problems so
I was switched to Paracetamol.
On Thu Aug 15 I was released with oral antibiotics (2 kinds, 6 days).
On Fri Aug 16 Paulina flew to Chile. On the main flight (LHR - GRU)
she had a fever and was placed on a drip in the airport clinic at Sao
Paulo, but later flew on to Santiago. She saw a local doctor on
Sunday, was diagnosed with pneumonia, and was prescribed antibiotics.
One motivation for Paulina returning (apart from work which was the
original reason for the early flight) was that her brother had
disappeared. He was later found in a hospital in the South of Chile.
I do not know what his diagnosis was.
Meantime (sorry, don't have exact dates) my parents were also
diagnosed with bronchitis and given antibiotics. My sister was OK.
I was intending to fly back on Mon Aug 19, but the local doctor felt
until that date. After some discussion with my doctors in Chile we
decided to delay the flight a week and skip the Betaferon (the risk of
an MS outbreak was low and the drug is not commonly available in the
UK).
I increased the spacing of my final two injections, so the final
injection history was:
August 2019
Mo Tu We Th Fr Sa Su
- 2 - 4
- 6 - 8 - 10 -
12 - 14 - - 17 -
- 20 - - - - -
- 27 - 29 - 31
I flew back on the 26th, arriving 27th (injection on arrival).
Currently we are all easily tired, with coughs, but otherwise OK.
Andrew
### [Politics] Britain's Reichstag Fire moment
From: andrew cooke <andrew@...>
Date: Fri, 30 Aug 2019 11:21:39 -0400
https://www.prospectmagazine.co.uk/magazine/britain-proroguing-boris-johnson-parliament-suspension-richard-evans-weimar
Andrew
### [Programming] GCC Sanitizer Flags
From: andrew cooke <andrew@...>
Date: Thu, 16 May 2019 15:35:19 -0400
https://lemire.me/blog/2016/04/20/no-more-leaks-with-sanitize-flags-in-gcc-and-clang/
Andrew
### [GPU, Programming] Per-Thread Program Counters
From: andrew cooke <andrew@...>
Date: Mon, 8 Apr 2019 19:37:14 -0400
https://medium.com/@farkhor/per-thread-program-counters-a-tale-of-two-registers-f2061949baf2
Andrew
### My Bike Accident - Looking Back One Year
From: andrew cooke <andrew@...>
Date: Mon, 11 Mar 2019 09:39:13 -0300
One year ago today, Sunday 11 March 2018, just after breakfast, I was
looking for my favorite cycling shirt, getting ready to ride a route I
hoped to share later in the week with a friend.
That is all I remember.
https://www.strava.com/activities/1458282810/overview
Then, in a warm haze, I am thinking "this could be serious;" "maybe I
should pay attention;" "focus." Sometime around Tuesday or Wednesday,
in Intermediate Care at Clinica Alemana, with nurses and beeping
machines, and Paulina explaining to me (patiently, every hour or so
for the last few days - I had been "conscious but absent" since Monday
morning) that a car had hit me; that I had been injured and had my
leg operated on; that I was now OK.
- Fractured right clavicle (collarbone)
- Exposed, fractured left femur (thigh)
- Fractured metatarsal (hand)
- Fractured right ribs (3rd, 4th, and 5th)
I was in hospital for 7 days. The last few in normal care. Final day
I asked to have a shower. When I saw blood running with the water I
fainted.
The ambulance that took me home had the same crew that had taken me
in. I asked how I had been - the EMT said "in some pain."
Final cost $17.894.596 (CLP - around $30k USD).
Home, Paulina had the bed raised, an extra seat on the toilet, a seat
in the shower, a wheelchair. I remember my first shower - it was a
huge effort to lift my foot over the (4 inch) shower wall, and I
collapsed, twitching, on the seat.
I was high as a kite - even back home - on opioids for a couple of
weeks.
My recovery was slow but steady. A physiotherapist came to visit and
taught me some exercises. After a month or two I was walking with
crutches.
Paulina was exhausted from caring for me while still trying to work.
For a while we had someone visit, a few times a week, to clean and
prepare some food.
On Sundays many roads here are closed to cars, given over to cyclists,
runners, inline skaters. A week after my accident a friend returned
to the intersection. He found a witness, someone who flagged when
traffic could or could not pass through the ciclovia, who said I was
hit by a pickup that had run a red light.
I later learned that the driver stopped (to his credit). Someone
called the police and an ambulance. I was on the ground, dazed,
trying to stand but unable. The police asked me where I lived -
apparently I replied "Europa," which is the name of our street, but
also, literally, "Europe." So they assumed I was a tourist - a
wealthy gringo with travel insurance - and sent me to the best
hospital in town.
An investigation was opened by the police. My medical records include
a blood test showing no alcohol. We informed the investigating
magistrate of the witness but later, when called to the police station
to give evidence, they had not received the information. We gave it
again. By the time it was investigated video records from street
After the accident my bike was in a police compound; Paulina collected
it and I started repairs. The front wheel was tacoed, so I bought a
new rim (which Paulina collected - I am so grateful for all the
legwork she has done over the last year) and spokes, and laced it to
the old hub.
Mounting the new wheel on the bike, I realized that the thru-axle was
bent, so I ordered a new axle. When I received the axle I realized
the hub itself was bent, so I ordered a new hub. Given how Shimano
thru-axle hubs work, I only needed to replace the inner sleeve (so I
didn't need to rebuild the entire wheel).
Mounting the new wheel again, I realized that the fork was bent, so I
ordered a new fork. This was delivered to the UK, because mid August
I felt good enough to travel home and see my parents.
I also replaced the handlebars, although the (slight) damage there was
caused by me over-tightening the brakes, not the accident. In
addition I had to replace the rear light (stolen while in police
custody) and my helmet.
The weekend of September 8/9 I was feeling good enough to travel with
Paulina to La Serena. We wanted to check on my old flat, where a
friend had been living rent-free, to make sure it was OK for Paulina's
father to move there.
The flat was a mess. So bad we did not sleep there, but instead
walked into town and stayed at a hotel. The next day we returned, to
continue cleaning. By the end of the weekend the place wasn't too
bad, but my leg was painful.
That was the high point of my recovery.
Post operation, my thighs were asymmetric - on the left hand side was
a "bulge" which, clearly visible in the X-rays, enclosed the end of
the rod that held my femur together. The rod was "too long." It
appeared to be "rubbing" on the inside of the leg, placing a limit on
how far I could walk. As it became more inflamed, I could walk less
distance. The upper limit was around 3,000 steps a day (a few km).
The day after returning from La Serena (Sept 10) I asked the doctor
what could be done. The answer was: nothing, until the bone had
healed, which takes a year.
On September 11 I attended court. The police claimed that the driver
had illegally run a red light. Chilean law is different to UK law -
for a "light" infraction like this (running a red light and not
killing me) the emphasis is on compensating the victim. In general
terms, either we agree some kind of compensation, or the driver is
prosecuted. The driver has to balance the amount of compensation
against the inconvenience of being prosecuted, the likelihood of being
convicted, and the possibility of any sanction.
To start negotiations over compensation we needed to know the amount
outstanding after (the driver's) accident and (my) medical insurance,
but we still had not been billed by the hospital.
So the case was postponed and we returned home to chase up the
paperwork. Once we had the bill Paulina took it to the driver's
insurers, who agreed to pay $5.854.407. Then she went to my medical insurance, who eventually (December 21) agreed to pay $8.327.938,
leaving a balance of $3.712.251.
And this is where we stand. The case appears to be stalled pending
further police investigation.
Since it was difficult to walk I tried cycling again. This was
clearly better for my health, and I could manage around 20 minutes
without hurting my leg too much. But, perhaps related to this
exercise, a new problem surfaced. The rod appeared to get "caught"
on something (tendon? muscle?). This hurt, I froze and slowly
wiggled my leg to "undo" the blockage. Afraid to walk, I hobbled
slowly round the house. Despite my reduced movements this repeated,
more severely.
Frustrated, and now nearly a year after my operation (February 18,
2019), I returned to the doctor. He was, I think, surprised. The
next day I received a call from the hospital - someone had canceled
an operation, there was a free slot Fri February 22. I agreed
immediately.
The operation to remove the rod went smoothly. I entered theater
late in the day and was kept for observation overnight. The leg had
two dressings - one near the knee (incisions to remove screws) and
another on the upper thigh (more screws and the rod itself). These
were the usual clear plastic sheets, with external padding for
protection, to be left in place as the wound heals.
Thursday, February 28, I was feeling good enough to be sat at the
computer, working, when I felt a drop of liquid hit my leg.
Removing the padding, visible through the dressing, were blisters.
One had burst. Back at the hospital, the dressings were removed,
the skin wiped clean. I was sent back home with basic antibiotics
and anti-histamines.
Life with exposed wounds and stitches is boring and uncomfortable
(although the anti-histamines meant I slept much of the time). The
stitches catch clothing and the wound has to be kept clean and open
to the air, so you're either lying in bed or wandering cold and
naked through the house. It was uncomfortable to be seated for any
length of time, making work difficult (credit to my employers for
their support).
Monday March 4 I returned to hospital. Although I felt things were
improving (no blood / pus stains on the bedsheets on the last night,
for example) it still didn't look good (quite frankly, it looked
terrifying - red, yellow and blistered - but it was not painful and
did not smell). A nurse (a nice nurse - senior and smart and
friendly) thought it looked more like an infection than an allergy,
and the doctor agreed, changing the antibiotic to something more
specific.
The next few days, although still boring and uncomfortable, showed
real improvement. On Wednesday March 6 my stitches were removed.
Since then, the skin has continued to heal. Importantly, the pain
from the rod - at least the worst, when it got "hooked" around
tendons - has gone. There is still some pain when walking, but it
is difficult to know if it is the old soreness, or associated with
the bruising from the operation.
A year after the accident, I still do not know if I will be able to
walk, or cycle, as before.
Andrew
### [Python] Geographic heights are incredibly easy!
From: andrew cooke <andrew@...>
Date: Sun, 13 Jan 2019 20:47:20 -0300
Wanted to add heights to bike rides in Choochoo, given the GPS
coordinates. At first I considered stealing the data from Strava,
but their T&C don't like it and anyway I couldn't find it in the
API. Then I considered Google's Elevation Service, but it's $5 for 1,000
values. Then I considered the free OpenStreetMap equivalent, but that
seemed to be broken. Then I realised it's pretty damn easy to do by
hand!
Turns out that the Space Shuttle scanned the entire Earth (except the
Poles) at a resolution of one arcsecond (about 30m on the equator) and
the data have been publicly released.
The project is called SRTM, and the 30m data are "v3" or SRTM3. More
info at https://wiki.openstreetmap.org/wiki/SRTM and
http://dwtkns.com/srtm30m/ and there's an excellent (although buggy)
blog post on how to parse the data at
code at https://github.com/aatishnn/srtm-python
Because the coords are in the file name there's no need for any kind
of RTree or similar index - you just take your coords and infer what
file you need, read the data, stick it in an array, and return the
array value!
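As a concrete sketch of that lookup (my own illustration, not the
Choochoo code; it assumes SRTM1 ".hgt" tiles of 3601x3601 big-endian
16-bit samples, named after their south-west corner, with rows
running north to south):
```python
import os
import numpy as np

def tile_name(lat, lon):
    # tile files are named after the latitude/longitude of their south-west corner
    ns = 'N' if lat >= 0 else 'S'
    ew = 'E' if lon >= 0 else 'W'
    return '%s%02d%s%03d.hgt' % (ns, abs(int(np.floor(lat))), ew, abs(int(np.floor(lon))))

def elevation(lat, lon, hgt_dir):
    # read the whole tile: 3601 x 3601 big-endian signed 16-bit samples
    path = os.path.join(hgt_dir, tile_name(lat, lon))
    data = np.fromfile(path, dtype='>i2').reshape(3601, 3601)
    # fractional position inside the tile, measured from the south-west corner
    y = lat - np.floor(lat)
    x = lon - np.floor(lon)
    row = int(round((1 - y) * 3600))  # row 0 is the northern edge of the tile
    col = int(round(x * 3600))
    return int(data[row, col])        # nearest sample; the real code interpolates instead
```
(The code linked below does bilinear interpolation over the four
surrounding samples instead of taking the nearest one.)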
My own code to do this is at
https://github.com/andrewcooke/choochoo/blob/master/ch2/sortem/__init__.py
and includes bilinear interpolation (you could cut+paste that code
except for the constructor which is specific to Choochoo - just replace
the messing around w Constant with a literal directory string).
The tests are at
https://github.com/andrewcooke/choochoo/blob/master/tests/test_sortem.py
and from the contours there, which are plotted across 4 tiles, it's
pretty clear that the interpolation is seamless.
So easy! I thought this would take a week of work and I did it all
this afternoon....
Andrew
From: andrew cooke <andrew@...>
Date: Tue, 25 Dec 2018 17:30:46 -0300
125g butter
1 egg
60g sugar (or more?)
45g cocoa
130g flour (w raising powder)
120g chocolate
Mix butter, egg and sugar. Sieve in and mix cocoa and flour. Add a
little water if necessary - want a thick, sticky mix, as solid as
possible, but not powder.
Break the chocolate into pieces and add.
Cool in fridge. Pre-heat oven to 180C.
Place blobs of dough on lightly greased baking paper on tray. Cook
for 15m. Should flatten but not spread much.
Not very sweet, except for the chocolate. Very cocoay and good
texture. Good w ice-cream?
Variations: more sugar? vanilla?
Andrew | 2020-02-19 06:51:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.276816189289093, "perplexity": 10299.33961666168}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144058.43/warc/CC-MAIN-20200219061325-20200219091325-00199.warc.gz"} |
https://brilliant.org/problems/who-asks-that/ | # Who asks that
Geometry Level 3
If the inequality $\sin^{2}x+a\cos x+a^{2} \geq 1 +\cos x$ holds for any $x \in \mathbb{R}$, the number of integral values $a$ cannot take is
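One way to reduce it (a sketch, not the site's official solution): substituting $\sin^2 x = 1 - \cos^2 x$ and writing $t = \cos x \in [-1,1]$, the condition becomes $$-t^2 + (a-1)t + a^2 \ge 0 \quad \text{for all } t \in [-1,1].$$ The left side is concave in $t$, so it suffices to check the endpoints: $$t = 1:\; (a+2)(a-1) \ge 0, \qquad t = -1:\; a(a-1) \ge 0,$$ which together give $a \le -2$ or $a \ge 1$. The inequality therefore fails exactly for $a \in (-2,1)$, i.e. for the integers $-1$ and $0$, so there are two such integral values.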
× | 2017-01-17 21:27:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9587422609329224, "perplexity": 1184.3784306311325}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00573-ip-10-171-10-70.ec2.internal.warc.gz"} |
https://www.nature.com/articles/s41598-018-24352-9/tables/2?error=cookies_not_supported&code=aa1aadf5-0841-41b9-8acb-6e11c560ee4c | # Table 2 Estimated mean concentrations of TSH, FT4, and FT3 in the study individuals divided according to their Ferritin levels.
| | Ferritin <15 (µg/L) | 15–30 | 30–100 | >100 | P for difference | P for trend |
|---|---|---|---|---|---|---|
| TSH (µUI/mL) | 1.98 ± 0.07 | 2.07 ± 0.06 | 2.02 ± 0.04 | 2.05 ± 0.04 | 0.565 | 0.451 |
| FT4 (pmol/L) | 14.83 ± 0.10 | 15.06 ± 0.09 | 15.09 ± 0.06 | 15.33 ± 0.07 | <0.001 | <0.001 |
| FT3 (pmol/L) | 4.91 ± 0.03 | 4.91 ± 0.03 | 5.01 ± 0.02 | 5.11 ± 0.02 | <0.001 | <0.001 |
1. Data are estimated marginal means ± standard errors calculated in a general linear model, adjusted to age, sex, UI, BMI and smoking status. | 2019-07-19 03:43:29 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9267844557762146, "perplexity": 6416.239541472516}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525974.74/warc/CC-MAIN-20190719032721-20190719054721-00243.warc.gz"} |
https://terapia-grupowa.pl/18826_correct/decreasing/reactivity/order/for/metal/in/tajikistan.html | # correct decreasing reactivity order for metal in tajikistan
### Transport Phenomena: Challenges and Opportunities for …
2020/7/15· Metal–organic frameworks (MOFs) are appealing heterogeneous support matrices that can stabilize molecular catalysts for the electrochemical conversion of small molecules. However, moving from a homogeneous environment to a porous film necessitates the transport of both charge and substrate to the catalytic sites in an efficient manner. This presents a significant challenge in the …
### NCERT Exemplar Class 11 Chemistry Unit 13 …
NCERT Class 11 Chemistry Chapter 13 is for Hydrocarbons. The type of questions that will be asked from NCERT Class 11 Chemistry Chapter 13 are displayed in the below provided NCERT Exemplar Class 11 Chemistry Chapter 13. With the help of it, candidates
### Directing the reactivity of metal hydrides for selective …
We use thermodynamic relationships to understand the reactivity of metal hydrides, a branch point in the reactivity for formation of either H2 or HCO2−. We applied our analysis to construct a diagram that defines catalyst parameters for achieving selective CO2 reduction by targeting an appropriate hydricity.
### MCQ Questions for Class 10 Science Metals and Non …
Free PDF Download of CBSE Class 10 Science Chapter 3 Metals and Non-Metals Multiple Choice Questions with Answers. MCQ Questions for Class 10 Science with Answers was Prepared Based on Latest Exam Pattern. Students can solve NCERT Class 10
### Arrange the compounds of each set in order of …
Question 16 Arrange the compounds of each set in order of reactivity towards SN2 displacement: (i) 2-Bromo-2-methylbutane, 1-Bromopentane, 2-Bromopentane (i) An SN2 reaction involves the approach of the nucleophile to the carbon atom to which the leaving group is attached.
### inorganic chemistry - What is the Reactivity order of …
The metal oxide also solids that means their reactivity also should decrease down the group because of their decreasing stability. I am totally confused inorganic-chemistry reactivity alkali-metals
### Explain, not just describe, the relationship between these …
Reactivity that metals exhibit can be elucidated through metal reactivity series. This difference between their reactivity occurs because of the difference in stability due to different electronic
### Metal–Metal Bonds: From Fundamentals to Appliions …
While the study of metal–metal bonds has historically been limited to the transition elements, uranium is an element that has fascinated many researchers. In 1984, Cotton remarked that the outlook for compounds with U–U bonds was “rather dim”, but recent work …
### IB Questionbank
Which statement is correct for the halogens $(\text{F} \to \text{I})$? A. Electronegativity decreases from fluorine to iodine. B. Atomic radius decreases from fluorine to iodine. C. First ionization energy increases from fluorine to iodine. D. Reactivity of the
### Rank the nonmetals in each set from most reactive (1) to …
It is known that the reactivity of group 7 elements decreases down the group. The most reactive element in this group is Flourine with reactivity decreasing down the group. The reason for this decrease in reactivity is that as you go down the group, the distance between the positive nucleus that attracts valence electrons increases, decreasing the electrostatic attraction between the nucleus
### What is the order of reactivity of alcohol towards …
2013/7/20· a) primary< secondary> tertiary b)primary >secondary >tertiary c)primary secondary
### Chemistry
the field produced by the ligand. In accordance with the spectrochemical series, the increasing order of field strength is Thus, is the strong field ligand and will produce highest magnitude of . 52. Arrange these in correct order of decreasing reactivity.
### Solid Copper And Silver Nitrate Lab Answers
(d) one reactions: Copper, Lead (e) no reaction: Silver (Authors: Celine and Junsung) Use the answers from above to list the five metals in order of decreasing reactivity. Chemistry Lab Behavior of Copper in a Solution of Silver Nitrate PROBLEM In this experiment, you will observe the reaction of a weighed quantity of copper wire with a solution of silver nitrate.
### Alkali Metal Reactivity | Chemdemos
Alkali Metal Reactivity In this dramatic demonstration, lithium, sodium, and potassium react with water to produce hydrogen gas and the hydroxides of the metals. Lithium reacts fairly slowly, fizzing. Sodium reacts more quickly, generating enough heat to melt
### Spectrochemical series - Wikipedia
A spectrochemical series is a list of ligands ordered by ligand strength and a list of metal ions based on oxidation number, group and identity. In crystal field theory, ligands modify the difference in energy between the d orbitals (Δ) called the ligand-field splitting parameter for ligands or the crystal-field splitting parameter, which is mainly reflected in differences in color of
### Use of Enthalpy Changes of Metal Reactions - 982 Words …
If the hypothesis is proved correct through experimentation then I can apply it to predict the place of the metals I have investigated in the reactivity series. This will be as simple as arranging the metals investigated in order of decreasing enthalpy change of reaction
### A X D - PMT
What is the correct order of these metals in the reactivity series (most reactive first)? A X, W, Y, Z B X, Y, W, Z C Z, W, Y, X D Z, Y, W, X 5 The diagrams show …
### Electrolysis
The electrochemical series lists the elements in order of their standard electric potentials. Which one of the following is the correct order for decreasing reactivity …
### Metals - Reactivity Series - LinkedIn SlideShare
Metals - Reactivity Series 1. Last Lesson… METALS: The Physical Properties of Metals 2. Physical properties of metals METALS Solid state at room temp Shiny appearance High density Good heat conductors Good conductors of electricity
### Halogens,nonmetal, electron affinity,oxidizing property.
The decreasing order of oxidizing power is F2 > Cl2 > Br2 > I2. 11. Physical State: Fluorine and chlorine are yellow, green gases. Br2 is a volatile red liquid while I2 is a violet solid. 12. Reactivity: Fluorine is the most reactive among all the halogens. Reactivity of
### UNIVERSITY OF CAMBRIDGE INTERNATIONAL EXAMINATIONS …
Choose the one you consider correct and record your choice in soft pencil on the separate Answer Sheet. 30 Below are some metals in decreasing order of reactivity. magnesium zinc iron copper Titanium reacts with acid and cannot be extracted from its
### SECTION A - CBSE Online
The arrangement of metals in a vertical column in the decreasing order of their reactivities is called the reactivity series or activity series of metals. The most reactive metal is at the top position of the reactivity series. The least reactive metal is at the bottom of the
### C3. 1 Periodic Table | Periodic Table Quiz - Quizizz
decreasing reactivity, decreasing atomic mass, decreasing melting point. Tags: Question 13 SURVEY 60 seconds Q. Transition metal have multiple charged ion such as Fe2+ and Fe3+ answer choices True False Tags: Question 16 SURVEY True
### Lab: Activity Series of Metals | Curriki
This lab deals with the activity series of metals. Objectives include 1) Observe the reactivity of common metals. 2) Be able to predict the products of a single replacement reaction between a metal and an acid. 3) Be able to write and
### how to learn the reactivity series - Chemistry - …
The arrangement of metals in the order of decreasing reactivities is called the reactivity series of metals. Answered by Ramandeep | 17th Dec, 2018, 12:44: PM Related Videos | 2021-06-21 23:07:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5822836756706238, "perplexity": 4197.915685186829}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488504838.98/warc/CC-MAIN-20210621212241-20210622002241-00465.warc.gz"} |
https://cstheory.stackexchange.com/questions/24911/can-the-definition-of-ambiguity-of-cfg-be-extended-to-csg | Can the definition of ambiguity of CFG be extended to CSG?
Usually, ambiguity of a grammar is defined for context-free languages and grammars; sometimes it is extended to indexed languages and grammars, but the definition stays the same except that the grammar and language are indexed ones.
Now, let's try to extend the definition to context-sensitive grammars and languages. One definition of a context-sensitive grammar is as follows:
Every production (rewriting rule) is of the form $$\alpha A \beta \rightarrow \alpha \gamma \beta$$ where $\alpha, \gamma, \beta$ are strings over terminal and non-terminal symbols and $A$ is a non-terminal symbol.
Parsing a sentence of a CSL with a CSG, we get a tree or directed graph: the leaves (final vertices) are terminal symbols, the root is the $S$ symbol, and the internal nodes (non-final vertices) are non-terminal symbols. In this way, every sentence corresponds to one or more trees or directed graphs.
Now, the question: is this definition consistent with the one for CFGs and CFLs? If not, any suggestions for modifying it so that it is consistent with the CFG/CFL case?
If they are consistent, then the definition of ambiguity of a CSG is: if some sentence of the CSL, when parsed by the CSG, corresponds to more than one tree or graph, no two of which are isomorphic to each other (with labels), the CSG is ambiguous.
In this way, is there any context-sensitive language that is inherently ambiguous? That is, every CSG of the language is ambiguous.
• intuitively think the concept likely translates. however CFGs are also CSGs. so you probably want to exclude CFGs. – vzn Jun 17 '14 at 15:06 | 2019-07-18 05:00:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8541878461837769, "perplexity": 2155.279011451695}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525500.21/warc/CC-MAIN-20190718042531-20190718064531-00325.warc.gz"} |
https://www.physicsforums.com/threads/shear-stress-at-the-boundary-of-beam.899122/ | # Shear stress at the boundary of beam
1. Jan 3, 2017
### fonseh
1. The problem statement, all variables and given/known data
We all know that $$\tau = VQ/(It)$$
How do we determine the shear stress at **G**?
I'm having a problem finding $A\bar{y}$.
The centroid of the solid that I found earlier is $\bar{y}$ = 98.5 mm.
There's a dashed line from G to the bottom. It's making me confused. For Q, should I consider the area to the left or right of G? Like the red and orange parts?
2. Relevant equations
3. The attempt at a solution
so , my working is
$$Q = A\bar{y} = (250\times 10^{-3})(50\times 10^{-3})(50+125-98)(10^{-3}) = 9.63\times 10^{-4}\ \mathrm{m^3}$$
Is my concept correct?
I'm considering the area above G.
2. Jan 5, 2017
### PhanthomJay
Yes, excellent work, that is the best way to do it. If the flanges were thinner, you could use the orange area and calculate the horizontal shear stress in the lower flange, and get the same result, but that doesn't work well for thick flanges, so your method and answer is correct, except you used 98 instead of 98.5 on your calc, no big deal.
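For reference (my arithmetic, not from the thread), redoing the same calculation with the 98.5 mm centroid gives
$$Q = (250\times 10^{-3})(50\times 10^{-3})(50+125-98.5)(10^{-3}) = 9.56\times 10^{-4}\ \mathrm{m^3}.$$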
3. Jan 5, 2017
### fonseh
why
there's a dashed line over there ? it makes me confused .....
4. Jan 6, 2017
### PhanthomJay
Yes it is confusing. If you use the area to the right of the dashed line and calculate Q using that area times the distance from its centroid to the horizontal neutral axis, you calculate the horizontal shear stress, but this calculation does not include the vertical shear stress in that flange, which, although small, adds to the shear stress value.
5. Jan 6, 2017
### fonseh
It's not stated in the question , right ? How to know what to find ? vertical shear stress or horizontal shear stress ?
6. Jan 6, 2017
### PhanthomJay
The shear stress in the green area is vertical only, with its complimentary longitudinal shear stress. The shear stress in the orange area is both vertical/longitudinal and horizontal/longitudinal, which, when summed properly, should in theory equal the longitudinal shear stress calculated for the green area at G.
7. Jan 6, 2017
### fonseh
Do you mean the sum of both vertical shear stress and horizontal shear stress = vertical shear stress of green part ?
8. Jan 6, 2017
### PhanthomJay
yes, which is the same as saying that the sum of both the longitudinal shear stress (due to vert shear stress in flange) and long shear stress (due to horiz shear stress in flange) is equal to the long shear stress in green part (due to vert shear stress in green part).
9. Jan 6, 2017
### fonseh
How do you know that? Is it a concept or do we need to do the calculation to know it? Is it possible to know it without calculation?
10. Jan 6, 2017
### PhanthomJay
Regardless of calculation of shear stress at G, it should be the same no matter which method is used. But it is a lot easier to use the green area for the calc, as it avoids the complications that you get when using the orange area. If the flange was thin, say 5 mm instead of 50 mm, then either method could be used without complication, because there would essentially be no vert shear stress in flange. But the flange is not thin here, so your original method using the green area is best.
Last edited: Jan 6, 2017
11. Jan 6, 2017
### fonseh
Can you explain why, when the flange is thin, the vertical shear stress in the flange can be ignored?
12. Jan 6, 2017
### PhanthomJay
well, shear stress is (VQ/It), and Q is A(y_bar) ,and since with a thin flange Q is rather small since A is so small, and since t is rather large since you use the flange width, b, for the t value, then the shear stress is very small since A is small and t is large. look at a wide flange I beam for example, you can see how small is the vert shear stress in the flange, it then sharply increases in the web at the web interface since now t is web thickness instead of b flange width. | 2017-10-17 11:52:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8144874572753906, "perplexity": 1049.9016955483685}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187821088.8/warc/CC-MAIN-20171017110249-20171017130249-00323.warc.gz"} |
http://www.ck12.org/algebra/Distributive-Property/lesson/Distributive-Property-MSM6/ |
# Distributive Property
## The distributive property dictates that when multiplying, the multiplication of real numbers distributes over two or more terms in parentheses. Learn more.
Distributive Property
Josh is at the yarn store for their annual sale. Each bundle of yarn is on sale for $3.49. He buys 10 bundles of red yarn, 8 bundles of green, 6 bundles of blue, and another 6 bundles of yellow. How much did Josh spend on yarn?
In this concept, you will learn to use the distributive property to evaluate numerical expressions.
### Distributive Property
To evaluate an expression means to simplify an expression to find the value or quantity. Expressions of the product of a number and a sum can be evaluated using the distributive property.
**Distributive Property**: The distributive property is a property that allows you to multiply a number and a sum by distributing the multiplier outside the parentheses with each addend inside the parentheses. Then you can evaluate the expression by finding the sum of the products.
Here is an expression of the product of a number and a sum. To use the distributive property, take the 4 and distribute the multiplier to the addends inside the parentheses. Then, find the sum of the products. Therefore, the value of the product of 4 times the sum of 3 plus 2 is 20.
Here is another example, this time with a variable. Evaluate the expression using the distributive property. First, distribute the 8 to each addend inside the parentheses. Then, find the sum of the products. This is as far as this expression can be evaluated. If there is a known value for the variable, you can substitute it in to continue evaluating the expression. The value of the product of 8 times the sum of 9 plus the variable, when the variable equals 4, is 104.
### Examples
#### Example 1
Earlier, you were given a problem about Josh at the yarn store. Josh buys 10 bundles of red yarn, 8 bundles of green, 6 bundles of blue, and another 6 bundles of yellow for $3.49 each. Multiply the sum of bundles of yarn by $3.49 to find the total cost of the yarn. First, write an expression to find the total cost of the yarn. Then, distribute the multiplier to each value in the parentheses. Then, find the sum of the products. Josh spent $104.70 on yarn.
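Written out as equations (using $x$ for the variable, which is an assumption since the original expressions are not shown), the worked examples above are:
$$4(3+2) = 4(3) + 4(2) = 12 + 8 = 20$$
$$8(9+x) = 72 + 8x, \qquad 72 + 8(4) = 104 \text{ when } x = 4$$
and for Josh's yarn:
$$3.49(10+8+6+6) = 34.90 + 27.92 + 20.94 + 20.94 = 104.70$$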
Use the distributive property to evaluate the following expressions.
#### Example 2
First, distribute the 4 and multiply it by each value in the parentheses.
Then, find the sum of the products.
The value of the product of 4 times the sum of 9 plus 2 is 44.
#### Example 3
First, distribute the multiplier to each value in the parentheses.
Then, find the sum of the products.
The value of the product of 5 times the sum of 6 plus 3 is 45.
#### Example 4
First, distribute the multiplier to each value in the parentheses.
Then, find the sum of the products.
The value of the product of 2 times the sum of 8 plus 1 is 18.
#### Example 5
First, distribute the multiplier to each value in the parentheses.
Then, find the sum of the products.
The value of the product of 12 times the sum of 3 plus 2 is 60.
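For reference, written as equations (reconstructed from the prose of Examples 2–5 above):
$$4(9+2) = 36 + 8 = 44 \qquad 5(6+3) = 30 + 15 = 45$$
$$2(8+1) = 16 + 2 = 18 \qquad 12(3+2) = 36 + 24 = 60$$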
### Review
Evaluate each expression using the distributive property.
To see the Review answers, open this PDF file and look for section 4.5.
### Vocabulary Language: English
Term Definition
Evaluate To evaluate an expression or equation means to perform the included operations, commonly in order to find a specific value.
Numerical expression A numerical expression is a group of numbers and operations used to represent a quantity.
Product The product is the result after two amounts have been multiplied.
Property A property is a rule that works for a given set of numbers.
Sum The sum is the result after two or more amounts have been added together. | 2017-03-27 10:10:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6729660034179688, "perplexity": 1498.7327001253018}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189466.30/warc/CC-MAIN-20170322212949-00307-ip-10-233-31-227.ec2.internal.warc.gz"} |
http://mathhelpforum.com/advanced-algebra/119936-automorphism-print.html | # Automorphism
• Dec 11th 2009, 02:21 PM
mms
Automorphism
let p be a prime. Show that $Aut\left( {\mathbb{Z}_p } \right) \simeq \mathbb{Z}_{p - 1}$ and show that $Aut\left( {\mathbb{Z}_8 } \right) \simeq \mathbb{Z}_2$.
• Dec 11th 2009, 02:58 PM
Bruno J.
In general, $\mbox{Aut } \mathbb{Z}_n \cong \mathbb{Z}_n^\times$.
It's not trivial at all that $\mathbb{Z}_p^\times$ is cyclic. It's not very difficult to show once you know it, but you are probably not expected to come up with the proof. Instead you are probably expected to use this as a fact.
The second one is false - you have $\mbox{Aut } \mathbb{Z}_8 \cong \mathbb{Z}_8^\times \cong \mathbb{Z}_2 \times \mathbb{Z}_2$.
• Dec 11th 2009, 03:04 PM
mms
thanks for taking the time for explaining...
but when you write the x on top of Zn you mean its the multiplicative group?
also, how could you prove that aut(Z8) is isomorphic to Z2 X Z2?
• Dec 11th 2009, 04:18 PM
Bruno J.
Yes that is what I mean! Sorry for not being more clear.
To see that $\mbox{Aut } \mathbb{Z}_8 \cong \mathbb{Z}_8^\times \cong \mathbb{Z}_2 \times \mathbb{Z}_2$, just notice that every one of the four elements of $\mathbb{Z}_8^\times$ has order 2. There is only one group of order 4 in which every element has order 2; it is the Klein 4-group.
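A quick computational check of that claim, as a short Python sketch (purely illustrative):

units = [a for a in range(1, 8) if a % 2 == 1]   # 1, 3, 5, 7 are the units mod 8
for a in units:
    power, order = a % 8, 1
    while power != 1:
        power = (power * a) % 8
        order += 1
    print(a, order)   # prints order 1 for a = 1 and order 2 for a = 3, 5, 7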
• Dec 11th 2009, 04:56 PM
mms
i see.. thank you
i can't prove the first problem though :/
• Dec 11th 2009, 05:55 PM
Jose27
Notice that a homomorphism $f:G \rightarrow H$, when $G$ is cyclic, is completely determined once you know the image of a generator. Knowing this, $g: \mathbb{Z}_p \rightarrow \mathbb{Z}_p$ is determined by $g(1)$, but if $g$ is to be an automorphism then $g(1)=a$ must be a generator, i.e. a nonzero element, and we have $p-1$ such elements. Now take automorphisms $g,h : \mathbb{Z}_p \rightarrow \mathbb{Z}_p$; if $a=h(1), b=g(1)$ then $gh(1)=ba$, so we can identify an automorphism $g$ with its value at $1$, and this gives an isomorphism $Aut( \mathbb{Z}_p ) \cong \mathbb{Z}_p^{\times }$.
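To see the correspondence concretely, here is a small Python check (purely illustrative) for $p = 7$: every automorphism of $\mathbb{Z}_7$ is a map $x \mapsto ax$ with $a$ nonzero, and composing two of them multiplies the images of $1$:

p = 7
automorphisms = {a: (lambda x, a=a: (a * x) % p) for a in range(1, p)}
for a in range(1, p):
    for b in range(1, p):
        # composing x -> b*x and then x -> a*x sends 1 to a*b (mod p)
        assert automorphisms[a](automorphisms[b](1)) == (a * b) % p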
• Dec 11th 2009, 06:14 PM
Bruno J.
Yes, that is how one goes about showing that $\mbox{Aut } \mathbb{Z}_p \cong \mathbb{Z}_p^\times$. However, it is showing that $\mathbb{Z}_p^\times \cong \mathbb{Z}_{p-1}$ which is difficult (and which I doubt has to be proven by mms - it is probably to be assumed).
• Dec 11th 2009, 06:26 PM
mms
For every $a \in \mathbb{Z}_p^\times$, let $\sigma_a : \mathbb{Z}_p \rightarrow \mathbb{Z}_p$ be the automorphism $x \mapsto ax$ (prove that it is an automorphism). Then the map $\pi : \mathbb{Z}_p^\times \rightarrow \mbox{Aut }\mathbb{Z}_p$ defined by $a \mapsto \sigma_a$ is the isomorphism you are looking for. (Prove it!) | 2016-12-10 03:56:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 27, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9645954370498657, "perplexity": 386.2416741544053}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542938.92/warc/CC-MAIN-20161202170902-00162-ip-10-31-129-80.ec2.internal.warc.gz"} |
https://pykeen.readthedocs.io/en/latest/_modules/pykeen/losses.html | # Source code for pykeen.losses
# -*- coding: utf-8 -*-
r"""Loss functions integrated in PyKEEN.
Rather than re-using the built-in loss functions in PyTorch, we have elected to re-implement
some of the code from :mod:torch.nn.modules.loss in order to encode the three different
kinds of loss functions accepted by PyKEEN in a class hierarchy. This allows for PyKEEN to more
dynamically handle different kinds of loss functions as well as share code. Further, it gives
more insight to potential users.
Throughout the following explanations of pointwise loss functions, pairwise loss functions, and setwise
loss functions, we will assume the set of entities $\mathcal{E}$, set of relations $\mathcal{R}$, set of possible
triples $\mathcal{T} = \mathcal{E} \times \mathcal{R} \times \mathcal{E}$, set of possible subsets of possible triples
$2^{\mathcal{T}}$ (i.e., the power set of $\mathcal{T}$), set of positive triples $\mathcal{K}$, set of negative
triples $\mathcal{\bar{K}}$, scoring function (e.g., TransE) $f: \mathcal{T} \rightarrow \mathbb{R}$ and labeling
function $l:\mathcal{T} \rightarrow \{0,1\}$ where a value of 1 denotes the triple is positive (i.e., $(h,r,t) \in \mathcal{K}$) and a value of 0 denotes the triple is negative (i.e., $(h,r,t) \notin \mathcal{K}$).
.. note::
In most realistic use cases of knowledge graph embedding models, you will have observed a subset of positive
triples $\mathcal{T_{obs}} \subset \mathcal{K}$ and no observations over negative triples. Depending on the training
assumption (sLCWA or LCWA), this will mean negative triples are generated in a variety of patterns.
.. note::
Following the open world assumption (OWA), triples $\mathcal{\bar{K}}$ are better named "not positive" rather
than negative. This is most relevant for pointwise loss functions. For pairwise and setwise loss functions,
triples are compared as being more/less positive and the binary classification is not relevant.
Pointwise Loss Functions
------------------------
A pointwise loss is applied to a single triple. It takes the form of $L: \mathcal{T} \rightarrow \mathbb{R}$ and
computes a real-value for the triple given its labeling. Typically, a pointwise loss function takes the form of
$g: \mathbb{R} \times \{0,1\} \rightarrow \mathbb{R}$ based on the scoring function and labeling function.
.. math::
L(k) = g(f(k), l(k))
Examples
~~~~~~~~
.. table::
:align: center
:widths: auto
============================= ============================================================
Pointwise Loss Formulation
============================= ============================================================
Square Error $g(s, l) = \frac{1}{2}(s - l)^2$
Binary Cross Entropy $g(s, l) = -(l*\log (\sigma(s))+(1-l)*(\log (1-\sigma(s))))$
Pointwise Hinge $g(s, l) = \max(0, \lambda -\hat{l}*s)$
Pointwise Logistic (softplus) $g(s, l) = \log(1+\exp(-\hat{l}*s))$
============================= ============================================================
For the pointwise logistic and pointwise hinge losses, $\hat{l}$ has been rescaled from $\{0,1\}$ to $\{-1,1\}$.
The logistic sigmoid function is defined as $\sigma(z) = \frac{1}{1 + e^{-z}}$.
Batching
~~~~~~~~
The pointwise loss of a set of triples (i.e., a batch) $\mathcal{L}_L: 2^{\mathcal{T}} \rightarrow \mathbb{R}$ is
defined as the arithmetic mean of the pointwise losses over each triple in the subset $\mathcal{B} \in 2^{\mathcal{T}}$:
.. math::
\mathcal{L}_L(\mathcal{B}) = \frac{1}{|\mathcal{B}|} \sum \limits_{k \in \mathcal{B}} L(k)
Pairwise Loss Functions
-----------------------
A pairwise loss is applied to a pair of triples - a positive and a negative one. It is defined as $L: \mathcal{K} \times \mathcal{\bar{K}} \rightarrow \mathbb{R}$ and computes a real value for the pair. Typically,
a pairwise loss is computed as a function $g$ of the difference between the scores of the positive and negative
triples that takes the form $g: \mathbb{R} \times \mathbb{R} \rightarrow \mathbb{R}$.
.. math::
L(k, \bar{k}) = g(f(k), f(\bar{k}))
Examples
~~~~~~~~
Typically, $g$ takes the following form in which a function $h: \mathbb{R} \rightarrow \mathbb{R}$
is used on the differences in the scores of the positive an the negative triples.
.. math::
g(f(k), f(\bar{k})) = h(f(k) - f(\bar{k}))
In the following examples of pairwise loss functions, the shorthand is used: $\Delta := f(k) - f(\bar{k})$. The
pairwise logistic loss can be considered as a special case of the soft margin ranking loss where $\lambda = 0$.
.. table::
:align: center
:widths: auto
=============================== ==============================================
Pairwise Loss Formulation
=============================== ==============================================
Pairwise Hinge (margin ranking) $h(\Delta) = \max(0, \Delta + \lambda)$
Soft Margin Ranking $h(\Delta) = \log(1 + \exp(\Delta + \lambda))$
Pairwise Logistic $h(\Delta) = \log(1 + \exp(\Delta))$
=============================== ==============================================
Batching
~~~~~~~~
The pairwise loss for a set of pairs of positive/negative triples $\mathcal{L}_L: 2^{\mathcal{K} \times \mathcal{\bar{K}}} \rightarrow \mathbb{R}$ is defined as the arithmetic mean of the pairwise losses for each pair of
positive and negative triples in the subset $\mathcal{B} \in 2^{\mathcal{K} \times \mathcal{\bar{K}}}$.
.. math::
\mathcal{L}_L(\mathcal{B}) = \frac{1}{|\mathcal{B}|} \sum \limits_{(k, \bar{k}) \in \mathcal{B}} L(k, \bar{k})
Setwise Loss Functions
----------------------
A setwise loss is applied to a set of triples which can be either positive or negative. It is defined as
$L: 2^{\mathcal{T}} \rightarrow \mathbb{R}$. The two setwise loss functions implemented in PyKEEN,
:class:pykeen.losses.NSSALoss and :class:pykeen.losses.CrossEntropyLoss are both widely different
in their paradigms, but both share the notion that triples are not strictly positive or negative.
.. math::
L(k_1, ... k_n) = g(f(k_1), ..., f(k_n))
Batching
~~~~~~~~
The setwise loss for a set of sets of triples $\mathcal{L}_L: 2^{2^{\mathcal{T}}} \rightarrow \mathbb{R}$
is defined as the arithmetic mean of the setwise losses for each set of
triples $\mathcal{b}$ in the subset $\mathcal{B} \in 2^{2^{\mathcal{T}}}$.
.. math::
\mathcal{L}_L(\mathcal{B}) = \frac{1}{|\mathcal{B}|} \sum \limits_{\mathcal{b} \in \mathcal{B}} L(\mathcal{b})
"""
from typing import Any, Callable, ClassVar, Mapping, Optional, Set, Type, Union
import torch
from class_resolver import Resolver, normalize_string
from torch.nn import functional
from torch.nn.modules.loss import _Loss
__all__ = [
# Base Classes
'Loss',
'PointwiseLoss',
'PairwiseLoss',
'SetwiseLoss',
# Concrete Classes
'BCEAfterSigmoidLoss',
'BCEWithLogitsLoss',
'CrossEntropyLoss',
'MarginRankingLoss',
'MSELoss',
'NSSALoss',
'SoftplusLoss',
'has_mr_loss',
'has_nssa_loss',
# Utils
'loss_resolver',
]
_REDUCTION_METHODS = dict(
mean=torch.mean,
sum=torch.sum,
)
class Loss(_Loss):
"""A loss function."""
synonyms: ClassVar[Optional[Set[str]]] = None
#: The default strategy for optimizing the model's hyper-parameters
hpo_default: ClassVar[Mapping[str, Any]] = {}
def __init__(self, size_average=None, reduce=None, reduction: str = 'mean'):
super().__init__(size_average=size_average, reduce=reduce, reduction=reduction)
self._reduction_method = _REDUCTION_METHODS[reduction]
class PointwiseLoss(Loss):
"""Pointwise loss functions compute an independent loss term for each triple-label pair."""
@staticmethod
def validate_labels(labels: torch.FloatTensor) -> bool:
"""Check whether labels are in [0, 1]."""
return labels.min() >= 0 and labels.max() <= 1
class PairwiseLoss(Loss):
"""Pairwise loss functions compare the scores of a positive triple and a negative triple."""
class SetwiseLoss(Loss):
"""Setwise loss functions compare the scores of several triples."""
class BCEWithLogitsLoss(PointwiseLoss):
r"""A module for the binary cross entropy loss.
For label function :math:l:\mathcal{E} \times \mathcal{R} \times \mathcal{E} \rightarrow \{0,1\} and interaction
function :math:f:\mathcal{E} \times \mathcal{R} \times \mathcal{E} \rightarrow \mathbb{R},
the binary cross entropy loss is defined as:
.. math::
L(h, r, t) = -(l(h,r,t) \cdot \log(\sigma(f(h,r,t))) + (1 - l(h,r,t)) \cdot \log(1 - \sigma(f(h,r,t))))
where :math:\sigma represents the logistic sigmoid function
.. math::
\sigma(x) = \frac{1}{1 + \exp(-x)}
Thus, the problem is framed as a binary classification problem of triples, where the interaction functions' outputs
are regarded as logits.
.. warning::
This loss is not well-suited for translational distance models because these models produce
a negative distance as score and cannot produce positive model outputs.
.. seealso:: :class:torch.nn.BCEWithLogitsLoss
"""
synonyms = {'Negative Log Likelihood Loss'}
def forward(
self,
scores: torch.FloatTensor,
labels: torch.FloatTensor,
) -> torch.FloatTensor: # noqa: D102
return functional.binary_cross_entropy_with_logits(scores, labels, reduction=self.reduction)
class MSELoss(PointwiseLoss):
"""A module for the mean square error loss.
.. seealso:: :class:torch.nn.MSELoss
"""
synonyms = {'Mean Square Error Loss', 'Mean Squared Error Loss'}
def forward(
self,
scores: torch.FloatTensor,
labels: torch.FloatTensor,
) -> torch.FloatTensor: # noqa: D102
assert self.validate_labels(labels=labels)
return functional.mse_loss(scores, labels, reduction=self.reduction)
MARGIN_ACTIVATIONS: Mapping[str, Callable[[torch.FloatTensor], torch.FloatTensor]] = {
'relu': functional.relu,
'softplus': functional.softplus,
}
class MarginRankingLoss(PairwiseLoss):
"""A module for the margin ranking loss.
.. seealso:: :class:torch.nn.MarginRankingLoss
"""
synonyms = {"Pairwise Hinge Loss"}
hpo_default: ClassVar[Mapping[str, Any]] = dict(
margin=dict(type=int, low=0, high=3, q=1),
)
def __init__(
self,
margin: float = 1.0,
margin_activation: Union[str, Callable[[torch.FloatTensor], torch.FloatTensor]] = 'relu',
reduction: str = 'mean',
):
r"""Initialize the margin loss instance.
:param margin:
The margin by which positive and negative scores should be apart.
:param margin_activation:
A margin activation. Defaults to 'relu', i.e. $h(\Delta) = max(0, \Delta + \lambda)$, which is the
default "margin loss". Using 'softplus' leads to a "soft-margin" formulation as discussed in
https://arxiv.org/abs/1703.07737.
:param reduction:
The name of the reduction operation to aggregate the individual loss values from a batch to a scalar loss
value. From {'mean', 'sum'}.
"""
super().__init__(reduction=reduction)
self.margin = margin
if isinstance(margin_activation, str):
self.margin_activation = MARGIN_ACTIVATIONS[margin_activation]
else:
self.margin_activation = margin_activation
def forward(
self,
pos_scores: torch.FloatTensor,
neg_scores: torch.FloatTensor,
) -> torch.FloatTensor: # noqa: D102
return self._reduction_method(self.margin_activation(
neg_scores - pos_scores + self.margin,
))
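# Illustrative sketch, not part of the original pykeen module: applying the margin
# ranking loss above to arbitrary toy score tensors.
def _example_margin_ranking_usage():
    loss_fn = MarginRankingLoss(margin=1.0, margin_activation='relu')
    pos_scores = torch.tensor([0.9, 0.7])   # scores of positive triples
    neg_scores = torch.tensor([0.2, 0.8])   # scores of paired negative triples
    return loss_fn(pos_scores, neg_scores)  # mean of relu(neg - pos + margin)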
class SoftplusLoss(PointwiseLoss):
"""A module for the softplus loss."""
def __init__(self, reduction: str = 'mean') -> None:
super().__init__(reduction=reduction)
self.softplus = torch.nn.Softplus(beta=1, threshold=20)
def forward(
self,
logits: torch.FloatTensor,
labels: torch.FloatTensor,
) -> torch.FloatTensor:
"""Calculate the loss for the given scores and labels."""
assert 0. <= labels.min() and labels.max() <= 1.
# scale labels from [0, 1] to [-1, 1]
labels = 2 * labels - 1
loss = self.softplus((-1) * labels * logits)
loss = self._reduction_method(loss)
return loss
class BCEAfterSigmoidLoss(PointwiseLoss):
"""A module for the numerically unstable version of explicit Sigmoid + BCE loss.
.. seealso:: :class:torch.nn.BCELoss
"""
def forward(
self,
logits: torch.FloatTensor,
labels: torch.FloatTensor,
**kwargs,
) -> torch.FloatTensor: # noqa: D102
post_sigmoid = torch.sigmoid(logits)
return functional.binary_cross_entropy(post_sigmoid, labels, **kwargs)
class CrossEntropyLoss(SetwiseLoss):
"""A module for the cross entopy loss that evaluates the cross entropy after softmax output.
.. seealso:: :class:torch.nn.CrossEntropyLoss
"""
def forward(
self,
logits: torch.FloatTensor,
labels: torch.FloatTensor,
**kwargs,
) -> torch.FloatTensor: # noqa: D102
# cross entropy expects a proper probability distribution -> normalize labels
p_true = functional.normalize(labels, p=1, dim=-1)
# Use numerically stable variant to compute log(softmax)
log_p_pred = logits.log_softmax(dim=-1)
# compute cross entropy: ce(b) = sum_i p_true(b, i) * log p_pred(b, i)
sample_wise_cross_entropy = -(p_true * log_p_pred).sum(dim=-1)
return self._reduction_method(sample_wise_cross_entropy)
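# Illustrative sketch, not part of the original pykeen module: the labels are
# normalized to a probability distribution, so rows with several positives are allowed.
def _example_cross_entropy_usage():
    ce = CrossEntropyLoss()
    logits = torch.tensor([[2.0, 0.5, -1.0]])   # scores for three candidate triples
    labels = torch.tensor([[1.0, 1.0, 0.0]])    # normalized internally to [0.5, 0.5, 0.0]
    return ce(logits, labels)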
class NSSALoss(SetwiseLoss):
"""An implementation of the self-adversarial negative sampling loss function proposed by [sun2019]_."""
hpo_default: ClassVar[Mapping[str, Any]] = dict(
margin=dict(type=int, low=3, high=30, q=3),
)
def __init__(self, margin: float = 9.0, adversarial_temperature: float = 1.0, reduction: str = 'mean') -> None:
"""Initialize the NSSA loss.
:param margin: The loss's margin (also written as gamma in the reference paper)
:param adversarial_temperature: The negative sampling temperature (also written as alpha in the reference paper)
:param reduction:
The name of the reduction operation to aggregate the individual loss values from a batch to a scalar loss
value. From {'mean', 'sum'}.
.. note:: The default hyperparameters are based the experiments for FB15K-237 in [sun2019]_.
"""
super().__init__(reduction=reduction)
self.margin = margin
self.adversarial_temperature = adversarial_temperature
def forward(
self,
pos_scores: torch.FloatTensor,
neg_scores: torch.FloatTensor,
) -> torch.FloatTensor:
"""Calculate the loss for the given scores.
.. seealso:: https://github.com/DeepGraphLearning/KnowledgeGraphEmbedding/blob/master/codes/model.py
"""
neg_score_weights = functional.softmax(neg_scores * self.adversarial_temperature, dim=-1).detach()
neg_distances = -neg_scores
weighted_neg_scores = neg_score_weights * functional.logsigmoid(neg_distances - self.margin)
neg_loss = self._reduction_method(weighted_neg_scores)
pos_distances = -pos_scores
pos_loss = self._reduction_method(functional.logsigmoid(self.margin - pos_distances))
loss = -pos_loss - neg_loss
if self._reduction_method is torch.mean:
loss = loss / 2.
return loss
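# Illustrative sketch, not part of the original pykeen module: self-adversarial
# weighting of toy negative scores (the values are arbitrary and would normally
# come from a translational interaction function).
def _example_nssa_usage():
    nssa = NSSALoss(margin=9.0, adversarial_temperature=1.0)
    pos_scores = torch.tensor([-1.0, -2.0])
    neg_scores = torch.tensor([-7.0, -8.0, -9.0])
    return nssa(pos_scores, neg_scores)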
_LOSS_SUFFIX = 'Loss'
_LOSSES: Set[Type[Loss]] = {
MarginRankingLoss,
BCEWithLogitsLoss,
SoftplusLoss,
BCEAfterSigmoidLoss,
CrossEntropyLoss,
MSELoss,
NSSALoss,
}
losses_synonyms: Mapping[str, Type[Loss]] = {
normalize_string(synonym, suffix=_LOSS_SUFFIX): cls
for cls in _LOSSES
if cls.synonyms is not None
for synonym in cls.synonyms
}
loss_resolver = Resolver(
_LOSSES,
base=Loss,
default=MarginRankingLoss,
suffix=_LOSS_SUFFIX,
synonyms=losses_synonyms,
)
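# Illustrative sketch, not part of the original pykeen module: selecting a loss by
# name via the resolver (assumes the lookup/make helpers of the class_resolver package).
def _example_resolver_usage():
    loss_cls = loss_resolver.lookup('softplus')               # expected: SoftplusLoss
    loss_fn = loss_resolver.make('marginranking', margin=2.0) # expected: MarginRankingLoss(margin=2.0)
    return loss_cls, loss_fn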
def has_mr_loss(model) -> bool:
"""Check if the model has a marging ranking loss."""
return isinstance(model.loss, MarginRankingLoss)
def has_nssa_loss(model) -> bool:
"""Check if the model has a NSSA loss."""
return isinstance(model.loss, NSSALoss) | 2021-05-09 06:31:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9029585123062134, "perplexity": 14435.127375224953}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988961.17/warc/CC-MAIN-20210509062621-20210509092621-00619.warc.gz"} |
http://mathonline.wikidot.com/polynomial-functions-as-functions-of-bounded-variation | Polynomial Functions as Functions of Bounded Variation
# Polynomial Functions as Functions of Bounded Variation
Recall from the Continuous Differentiable-Bounded Functions as Functions of Bounded Variation page that if $f$ is continuous on the interval $[a, b]$, $f'$ exists, and $f'$ is bounded on $(a, b)$ then $f$ is of bounded variation on $[a, b]$.
We will now apply this theorem to show that all polynomial functions are of bounded variation on any interval $[a, b]$.
Theorem 1: Let $f$ be a polynomial function. Then $f$ is of bounded variation on any interval $[a, b]$.
• Proof: Let $f$ be a polynomial function of degree $n$. Then for $a_0, a_1, ..., a_n \in \mathbb{R}$ with $a_n \neq 0$ we have that:
(1)
\begin{align} \quad f(x) = a_0 + a_1x + a_2x^2 + ... + a_nx^n \end{align}
• Since $f$ is a polynomial, we have that $f$ is continuous on any interval $[a, b]$. Now consider the derivative of $f$, which is:
(2)
\begin{align} \quad f'(x) = a_1 + 2a_2x + ... + na_nx^{n-1} \end{align}
• Notice that $f'$ is itself a polynomial. Therefore $f'$ exists and is continuous on any interval $[a, b]$. Since $f'$ is continuous on the closed and bounded interval $[a, b]$ we must have by the Boundedness Theorem that $f'$ is bounded on $[a, b]$ and hence bounded on $(a, b)$.
• Hence $f$ is continuous on $[a, b]$, $f'$ exists, and $f'$ is bounded on $(a, b)$, so by the theorem referenced earlier, we must have that $f$ is of bounded variation on $[a, b]$. $\blacksquare$ | 2017-10-18 05:50:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9848155379295349, "perplexity": 61.205025447181434}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822747.22/warc/CC-MAIN-20171018051631-20171018071631-00620.warc.gz"} |
http://www.math.kochi-u.ac.jp/docky/kogi/kogi2016_1/chushodaisu/witt10/index.html | # , , and the ring of Witt vectors
No.10:
LEMMA 10.1 Let be a commutative ring. Then:
1. For any , we have
2. If satisfies for some positive integer , then we have
3. Let be a positive integer. If satisfies
such that
then we have
such that
Recall that the ring of p-adic Witt vectors is a quotient of the ring of universal Witt vectors. We therefore have a projection . But in the following we intentionally omit writing it.
PROPOSITION 10.2 Let be a prime number. Let be a ring of characteristic. Then:
1. Every element of is written uniquely as
2. For any , we have
3. A map
is a ring homomorphism from to .
4. .
5. An element is invertible in if and only if is invertible in .
COROLLARY 10.3 If is a field of characteristic , then is a local ring with the residue field . If furthermore the field is perfect (that means, every element of has a p-th root in ), then every non-zero element of may be written as
(i.e. $x$:invertible)
Since any integral domain can be embedded into a perfect field, we deduce the following
COROLLARY 10.4 Let be an integral domain of characteristic . Then is an integral domain of characteristic 0 .
PROOF.. is always an injection when is.
ARRAY(0x35e8850)ARRAY(0x35e8850) | 2018-12-15 14:19:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.988675594329834, "perplexity": 845.7179535184445}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826856.91/warc/CC-MAIN-20181215131038-20181215153038-00581.warc.gz"} |
http://www.eoearth.org/view/article/153111/ | # Great Transition: Where Are We Headed
The original publication can be found on the Great Transition Initiative website
In the past, new historical eras emerged organically and gradually out of the crises and opportunities presented by the dying epoch. In the planetary transition, reacting to historical circumstance is insufficient. With the knowledge that our actions can endanger the well-being of future generations, humanity faces an unprecedented challenge—to anticipate the unfolding crises, envision alternative futures and make appropriate choices. The question of the future, once a matter for dreamers and philosophers, has moved to the center of the development and scientific agendas.
## Many Futures
How do scientific forecasters predict the future of a national economy, local weather or other systems? The key steps are description, analysis and modeling—data are gathered on current conditions, factors are identified that drive change, and future behavior is represented as a set of mathematical variables that evolves smoothly over time. This is a powerful approach when the system under study is well understood and the time horizon is limited. But predictive modeling is inadequate for illuminating the long-range future of our stunningly complex planetary system.
Global futures cannot be predicted due to three types of indeterminacy—ignorance, surprise and volition. First, incomplete information on the current state of the system and the forces governing its dynamics leads to a statistical dispersion over possible future states. Second, even if precise information were available, complex systems are known to exhibit turbulent behavior, extreme sensitivity to initial conditions and branching behaviors at critical thresholds—the possibilities for novelty and emergent phenomena render prediction impossible. Finally, the future is unknowable because it is subject to human choices that have not yet been made.
In the face of such indeterminacy, how can we think about the global future in an organized manner? Scenario analysis offers a means of exploring a variety of long-range alternatives. In the theater, a scenario is a summary of a play. Analogously, development scenarios are stories with a logical plot and narrative about how the future might play out. Scenarios include images of the future—snapshots of the major features of interest at various points in time—and an account of the flow of events leading to such future conditions. Global scenarios draw on both science—our understanding of historical patterns, current conditions and physical and social processes—and the imagination to articulate alternative pathways of development and the environment. While we cannot know what will be, we can tell plausible and interesting stories about what could be.
Rather than prediction, the goal of scenarios is to support informed and rational action by providing insight into the scope of the possible. They illuminate the links between issues, the relationship between global and regional development and the role of human actions in shaping the future. Scenarios can provide a broader perspective than model-based analyses, while at the same time making use of various quantitative tools. The qualitative scenario narrative gives voice to important non-quantifiable aspects such as values, behaviors and institutions. Where modeling offers structure, discipline and rigor, narrative offers texture, richness and insight. The art is in the balance.
## Global Scenarios
What global futures could emerge from the turbulent changes shaping our world? To organize thinking, we must reduce the immense range of possibilities to a few stylized story lines that represent the main branches. To that end, we consider three classes of scenarios—Conventional Worlds, Barbarization and Great Transitions. These scenarios are distinguished by, respectively, essential continuity, fundamental but undesirable social change, and fundamental and favorable social transformation.
Conventional Worlds assume the global system in the twenty-first century evolves without major surprise, sharp discontinuity, or fundamental transformation in the basis of human civilization. The dominant forces and values currently driving globalization shape the future. Incremental market and policy adjustments are able to cope with social, economic and environmental problems as they arise. Barbarization foresees the possibilities that these problems are not managed. Instead, they cascade into self-amplifying crises that overwhelm the coping capacity of conventional institutions. Civilization descends into anarchy or tyranny. Great Transitions, the focus of this essay, envision profound historical transformations in the fundamental values and organizing principles of society. New values and development paradigms ascend that emphasize the quality of life and material sufficiency, human solidarity and global equity, and affinity with nature and environmental sustainability.
For each of these three scenario classes, we define two variants, for a total of six scenarios. In order to sharpen an important distinction in the contemporary debate, we divide the evolutionary Conventional Worlds into Market Forces and Policy Reform. In Market Forces, competitive, open and integrated global markets drive world development. Social and environmental concerns are secondary. By contrast, Policy Reform assumes that comprehensive and coordinated government action is initiated for poverty reduction and environmental sustainability. The pessimistic Barbarization perspective also is partitioned into two important variants, Breakdown and Fortress World. In Breakdown, conflict and crises spiral out of control and institutions collapse. Fortress World features an authoritarian response to the threat of breakdown, as the world divides into a kind of global apartheid with the elite in interconnected, protected enclaves and an impoverished majority outside.
The two Great Transitions variants are referred to as Eco-communalism and New Sustainability Paradigm. Eco-communalism is a vision of bio-regionalism, localism, face-to-face democracy and economic autarky. While popular among some environmental and anarchistic subcultures, it is difficult to visualize a plausible path from the globalizing trends of today to Eco-communalism, that does not pass through some form of Barbarization. In this essay, Great Transition is identified with the New Sustainability Paradigm, which would change the character of global civilization rather than retreat into localism. It validates global solidarity, cultural cross-fertilization and economic connectedness while seeking a liberatory, humanistic and ecological transition. The six scenario variants are illustrated in Figure 4, which shows rough sketches of the time behavior of each for selected variables.
Figure 4. Scenario Structure with Illustrative Patterns. (Source: Gallopín et al. (1997)[1])
The scenarios are distinguished by distinct responses to the social and environmental challenges. Market Forces relies on the self-correcting logic of competitive markets. Policy Reform depends on government action to seek a sustainable future. In Fortress World it falls to the armed forces to impose order, protect the environment and prevent a collapse into Breakdown. Great Transitions envision a sustainable and desirable future emerging from new values, a revised model of development and the active engagement of civil society.
Table 2. Archetypal Worldviews
| Worldview | Antecedents | Philosophy | Motto |
| --- | --- | --- | --- |
| Conventional Worlds: Market Forces | Smith | Market optimism; hidden & enlightened hand | Don't worry, be happy |
| Conventional Worlds: Policy Reform | Keynes; Brundtland | Policy stewardship | Growth, environment, equity through better technology & management |
| Barbarization: Breakdown | Malthus | Existential gloom; population/resource catastrophe | The end is coming |
| Barbarization: Fortress World | Hobbes | Social chaos; nasty nature of man | Order through strong leaders |
| Great Transitions: Eco-communalism | Morris & social utopians; Gandhi | Pastoral romance; human goodness; evil of industrialism | Small is beautiful |
| Great Transitions: New Sustainability Paradigm | Mill | Sustainability as progressive global social evolution | Human solidarity, new values, the art of living |
| Muddling Through | Your brother-in-law (probably) | No grand philosophies | Que será, será |
The premises, values and myths that define these social visions are rooted in the history of ideas (Table 2). The Market Forces bias is one of market optimism, the faith that the hidden hand of well-functioning markets is the key to resolving social, economic and environmental problems. An important philosophic antecedent is Adam Smith[2], while contemporary representatives include many neo-classical economists and free market enthusiasts. In Policy Reform, the belief is that markets require strong policy guidance to address inherent tendencies toward economic crisis, social conflict and environmental degradation. John Maynard Keynes, influenced by the Great Depression, is an important predecessor of those who hold that it is necessary to manage capitalism in order to temper its crises[3]. With the agenda expanded to include environmental sustainability and poverty reduction, this is the perspective that underlay the seminal Brundtland Commission report[4] and much of the official discourse since on environment and development.
The dark belief underlying the Breakdown variant is that the world faces an unprecedented calamity in which unbridled population and economic growth leads to ecological collapse, rampaging conflict and institutional disintegration. Thomas Malthus[5], who projected that geometrically increasing population growth would outstrip arithmetically increasing food production, is an influential forerunner of this grim prognosis. Variations on this worldview surface repeatedly in contemporary assessments of the global predicament [6]. The Fortress World mindset was foreshadowed by the philosophy of Thomas Hobbes[7], who held a pessimistic view of the nature of man and saw the need for powerful leadership. While it is rare to find modern Hobbesians, many people in their resignation and anguish believe that some kind of a Fortress World is the logical outcome of the unattended social polarization and environmental degradation they observe.
The forebears of the Eco-communalism belief system lie with the pastoral reaction to industrialization of William Morris and the nineteenth-century social utopians[8]; the small-is beautiful philosophy of Schumacher[9]; and the traditionalism of Gandhi[10]. This anarchistic vision animates many environmentalists and social visionaries today[11]. The worldview of New Sustainability Paradigm has few historical precedents, although John Stuart Mill, the nineteenth century political economist, was prescient in theorizing a post-industrial and post-scarcity social arrangement based on human development rather than material acquisition[12]. Indeed, the explication of the new paradigm is the aim of the present treatise.
Another worldview—or more appropriately anti-worldview—is not captured by this typology. Many people, if not most, abjure speculation, subscribing instead to a Muddling Through bias, the last row of Table 2[13]. This is a diverse coterie, including the unaware, the unconcerned and the unconvinced. They are the passive majority on the grand question of the global future.
## Driving Forces
While the global trajectory may branch in very different directions, the point of departure for all scenarios is a set of driving forces and trends that currently condition and change the system:
### Demographics
Populations are growing larger, more crowded and older. In typical projections, global population increases by about 50 percent by 2050, with most of the additional three billion people in developing countries. If urbanization trends continue, there will be nearly four billion new city dwellers, posing great challenges for infrastructure development, the environment and social cohesion. Lower fertility rates will lead gradually to an increase in average age and an increase in the pressure on productive populations to support the elderly. A Great Transition would accelerate population stabilization, moderate urbanization rates and seek more sustainable settlement patterns.
### Economics
Product, financial and labor markets are becoming increasingly integrated and interconnected in a global economy. Advances in information technology and international agreements to liberalize trade have catalyzed the process of globalization. Huge transnational enterprises more and more dominate a planetary marketplace, posing challenges to the traditional prerogatives of the nation-state. Governments face greater difficulty forecasting or controlling financial and economic disruptions as they ripple through an interdependent world economy. This is seen directly in the knock-on effects of regional financial crises, but also indirectly in the impacts of terrorist attacks and health scares, such as mad cow disease in Europe. In a Great Transition, social and environmental concerns would be reflected in market-constraining policies, a vigilant civil society would foster more responsible corporate behavior and new values would change consumption and production patterns.
### Social Issues
Increasing inequality and persistent poverty characterize the contemporary global scene. As the world grows more affluent for some, life becomes more desperate for those left behind by global economic growth. Economic inequality among nations and within many nations is growing. At the same time, the transition to market-driven development erodes traditional support systems and norms, leading to considerable social dislocation and scope for criminal activity. In some regions, infectious disease and drug-related criminal activity are important social factors affecting development. A central theme of a Great Transition is to make good on the commitments in the 1948 Universal Declaration on Human Rights to justice and a decent standard of living for all, in the context of a plural and equitable global development model.
### Culture
Globalization, information technology and electronic media foster consumer culture in many societies. This process is both a result and a driver of economic globalization. Ironically, the advance toward a unified global marketplace also triggers nationalist and religious reaction. In their own ways, both globalization, which leaves important decisions affecting the environment and social issues to transnational market actors, and religious fundamentalist reaction to globalization pose challenges to democratic institutions[14]. The 9/11 attacks on the United States left no doubt that global terrorism has emerged as a significant driving force in world development. It appears to have contradictory causes—too much modernism and too little. Its hardcore militants seem energized by utopian dreams of a pan-Islamic rejection of Western-oriented global culture. Its mass sympathy seems rooted in the anger and despair of exclusion from opportunity and prosperity. In the clamor for consumerism or its negation, it is sometimes difficult to hear the voices for global solidarity, tolerance and diversity. Yet, they are the harbinger of the ethos that lies at the heart of a Great Transition.
### Technology
Technology continues to transform the structure of production, the nature of work and the use of leisure time. The continued advance of computer and information technology is at the forefront of the current wave of technological innovation. Also, biotechnology could significantly affect agricultural practices, pharmaceuticals and disease prevention, while raising a host of ethical and environmental issues. Advances in miniaturized technologies could revolutionize medical practices, material science, computer performance and many other applications. A Great Transition would shape technological development to promote human fulfillment and environmental sustainability.
### Environment
Global environmental degradation is another significant transnational driving force. International concern has grown about human impacts on the atmosphere, land and water resources, the bioaccumulation of toxic substances, species loss and the degradation of ecosystems. The realization that individual countries cannot insulate themselves from global environmental impacts is changing the basis of geo-politics and global governance. A core element of a new sustainability paradigm would be the understanding of humanity as part of the web of life with responsibility for the sustainability of nature.
### Governance
There is a significant trend toward democratization and decentralization of authority. On an individual level, there is increased emphasis on “rights,” such as women’s rights, indigenous rights and human rights broadly conceived. In the private sector, it is reflected in “flatter” corporate structures and decentralized decision-making. Some entities, such as the Internet or NGO networks, have no formal authority structure. The emergence of civil society as an important voice in decision-making is a notable development. A Great Transition would see the emergence of a nested governance structure from the local to the global that balances the need to sustain global social and environmental values with the desire for diversity in cultures and strategies.
## Market-driven Development and its Perils
In the Market Forces scenario, dominant forces and trends continue to shape the character of global development in the coming decades. The tendencies supporting a sustainability transition remain secondary forces. This is the tacit assumption of “business-as-usual” scenarios. But it should be underscored that, like all scenarios, Market Forces is a normative vision of the future. Its success requires policy activism, and it will not be easy. Comprehensive initiatives will be required to overcome market barriers, create enabling institutional frameworks and integrate the developing world into the global economic system. This is the program of the IMF, WTO and the so-called “Washington consensus”—we call it the conventional development paradigm.
An earlier study analyzed the Market Forces scenario in depth for each global region[15]. A thumbnail sketch of selected global indicators is shown in Figure 5. The use of energy, water and other natural resources grows far less rapidly than GDP. This “dematerialization” is due both to structural shifts in the economy—from industry to the less resource-intensive service sector—and to market-induced technological change. But despite such reductions, the pressures on resources and the environment increase as the growth in human activity overwhelms the improved efficiency per unit of activity. The “growth effect” outpaces the “efficiency effect.”
Among the projections in the Market Forces scenario:
Figure 5. Global Indicators in Market Forces Scenario. (Source: Great Transitions)
• Between 1995 and 2050, world population increases by more than 50 percent, average income grows over 2.5 times and economic output more than quadruples.
• Food requirements almost double, driven by growth in population and income.
• Nearly a billion people remain hungry as growing populations and continuing inequity in the sharing of wealth counterbalance the poverty-reducing effects of general economic growth.
• Developing region economies grow more rapidly than the average, but the absolute difference in incomes between industrialized and other countries increases from an average of about $20,000 per capita now to $55,000 in 2050, as incomes soar in rich countries.
• Requirements for energy and water increase substantially.
• Carbon dioxide emissions continue to grow rapidly, further undermining global climate stability, and risking serious ecological, economic and human health impacts.
• Forests are lost to the expansion of agriculture and human settlement areas and other land-use changes.
A Market Forces future would be a risky bequest to our twenty-first century descendants. Such a scenario is not likely to be either sustainable or desirable. Significant environmental and social obstacles lie along this path of development. The combined effects of growth in the number of people, the scale of the economy and the throughput of natural resources increase the pressure that human activity imposes on the environment. Rather than abating, the unsustainable process of environmental degradation that we observe in today’s world would intensify. The danger of crossing critical thresholds in global systems would increase, triggering events that could radically transform the planet’s climate and ecosystems.
The increasing pressure on natural resources is likely to cause disruption and conflict. Oil would become progressively scarcer in the next few decades, prices would rise and the geopolitics of oil would return as a major theme in international affairs. In many places, rising water demands would generate discord over the allocation of scarce fresh water both within and between countries—and between human uses and ecosystem needs. To feed a richer and larger population, forests and wetlands would continue to be converted to agriculture, and chemical pollution from unsustainable agro-industrial farming practices would pollute rivers and aquifers. Substantial expansion of built-up areas would contribute significantly to land cover changes. The expansion of irrigated farming would be constrained sharply by water shortage and lack of suitable sites. Precious ecosystems—coastal reefs, wetlands, forests and numerous others—would continue to degrade as a result of land change, water degradation and pollution. Increasing climate change is a wild card that could further complicate the provision of adequate water and food, and the preservation of ecosystem goods, services and amenities.
The social and economic stability of a Market Forces world would be compromised. A combination of factors—persistence of global poverty, continued inequity among and within nations and degradation of environmental resources—would undermine social cohesion, stimulate migration and weaken international security. Market Forces is a precarious basis for a transition to an environmentally sustainable future. It may also be an inconsistent one. The economic costs and social dislocation of increasing environmental impacts could undermine a fundamental premise of the scenario—perpetual global economic growth.
Fraught with such tensions and contradictions, the long-term stability of a Market Forces world is certainly not guaranteed. It could persist for many decades, reeling from one environmental, social and security crisis to the next. Perhaps its very instability would spawn powerful and progressive initiatives for a more sustainable and just development vision. But it is also possible that its crises would reinforce, amplify and spiral out of control.
## Barbarization and the Abyss
Barbarization scenarios explore the alarming possibility that a Market Forces future veers toward a world of conflict in which the moral underpinnings of civilization erode. Such grim scenarios are plausible. For those who are pessimistic about the current drift of world development, they are probable. We explore them to be forewarned, to identify early warning signs and to motivate efforts that counteract the conditions that could initiate them.
The initial driving forces propelling this scenario are the same as for all scenarios. But the momentum for sustainability and a revised development agenda, which seemed so compelling at the close of the twentieth century, collapses. The warning bells—environmental degradation, climate change, social polarization and terrorism—are rung, but not heeded. The conventional paradigm gains ascendancy as the world enters the era of Market Forces. But instead of rectifying today’s environmental and socio-economic tensions, a multi-dimensional crisis ensues.
As the crisis unfolds, a key uncertainty is the reaction of the remaining powerful institutions—country alliances, transnational corporations, international organizations, armed forces. In the Breakdown variant, their response is fragmented as conflict and rivalry amongst them overwhelm all efforts to impose order. In Fortress World, powerful regional and international actors comprehend the perilous forces leading to Breakdown. They are able to muster a sufficiently organized response to protect their own interests and to create lasting alliances. The forces of order view this as a necessary intervention to prevent the corrosive erosion of wealth, resources and governance systems. The elite retreat to protected enclaves, mostly in historically rich nations, but in favored enclaves in poor nations, as well. A Fortress World story is summarized in this narrative.
The stability of the Fortress World depends on the organizational capacity of the privileged enclaves to maintain control over the disenfranchised. The scenario may contain the seeds of its own destruction, although it could last for decades. A general uprising of the excluded population could overturn the system, especially if rivalry opens fissures in the common front of the dominant strata. The collapse of the Fortress World might lead to a Breakdown trajectory or to the emergence of a new, more equitable world order.
## On Utopianism and Pragmatism
The Market Forces worldview embraces both an ambitious vision and a cosmic gamble. The vision is to forge a globally integrated free market by eliminating trade barriers, building market-enabling institutions and spreading the Western model of development. The colossal gamble is that the global market will not succumb to its internal contradictions—planetary environmental degradation, economic instability, social polarization and cultural conflict.
As environments degrade, it is true that some automatic correction acts through the subtle guidance of the “hidden hand” of the market. Environmental scarcity will be reflected in higher prices that reduce demand, and in business opportunities that promote technological innovation and resource substitution. This is why environmental economics draws attention to the critical importance of “internalizing the externalities”—ensuring that the costs of the degradation of environmental resources are monetarized and borne by the producers and consumers who impose such costs. Will such self-correcting mechanisms provide adjustments of sufficient rapidity and scale? To believe so is a matter of faith and optimism with little foundation in scientific analysis or historical experience. There is simply no insurance that the Market Forces path would not compromise the future by courting major ecosystem changes and unwelcome surprises.
Another article of faith is that the Market Forces development strategy would deliver the social basis for sustainability. The hope is that general economic growth would reduce the ranks of the poor, improve international equity and reduce conflict. But again, the theoretical and empirical foundations for such a salutary expectation are weak. Rather, the national experience in industrial countries over the last two centuries suggests that directed social welfare programs are required to ameliorate the dislocation and impoverishment induced by market-driven development. In this scenario, global poverty would likely persist as population growth and skewed income distributions combine to negate the poverty-reducing effect of growth in average income.
Even if a Market Forces future were able to deliver a stable global economic system—itself a highly uncertain hypothesis—the scenario offers no compelling basis for concluding that it would meet the ethical imperatives to pass on a sustainable world to future generations and to sharply reduce human deprivation. Economic and social polarization could compromise social cohesion and make liberal democratic institutions more fragile. Resource and environmental degradation would magnify domestic and international tensions. The unfettered market is important for economic efficiency, but only a fettered market can deliver on sustainability. Environment, equity and development goals are supra-market issues that are best addressed through democratic political processes based on widely shared ethical values and informed by scientific knowledge.
The dream of a Market Forces world is the impulse behind the dominant development paradigm of recent years. As the tacit ideology of influential international institutions, politicians and thinkers, it often appears both reasonable and the only game in town. But drifting into the complexity of a global future by relying on such old mind-sets is the sanctuary for the complacent and the sanguine. Ensuring a transition to a sustainable global future requires an alternative constellation of policies, behaviors and values. “Business-as-usual” is a utopian fantasy—forging a new social vision is a pragmatic necessity.
## Notes
1. ^ Gallopín, G. A. Hammond, P. Raskin and R. Swart. 1997. Branch Points: Global Scenarios and Human Choice. Stockholm, Sweden: Stockholm Environment Institute. PoleStar Series Report No. 7.
2. ^ Smith, A. (1776). 1991. The Wealth of Nations. Amherst, NY: Prometheus.
3. ^ Keynes, J. M. 1936. The General Theory of Employment, Interest, and Money. London: MacMillan.
4. ^ WCED (World Commission on Environment and Development). 1987. Our Common Future. Oxford: Oxford University Press.
5. ^ Malthus, T. (1798). 1983. An Essay on the Principle of Population. U.S.: Penguin.
6. ^ Ehrlich, P. 1968. The Population Bomb. NY: Ballantine.
– Meadows, D. H., D. L. Meadows, J. Randers and W. W. Behrens. 1972. Limits to Growth. New York: Universe Books.
– Kaplan, R. 2000. The Coming Anarchy. NY: Random House.
7. ^ Hobbes, T. (1651). 1977. The Leviathan. NY: Penguin.
8. ^ Thompson, P. 1993. The Work of William Morris. Oxford: Oxford University Press.
9. ^ Schumacher, E. F. 1972. Small is Beautiful. London: Blond and Briggs.
10. ^ Gandhi, M. 1993. The Essential Writings of Mahatma Gandhi. NY: Oxford University Press.
11. ^ Sales, K. 2000. Dwellers in the Land. The Bioregional Vision. Athens, GA: University of Georgia Press.
– Bossel, H. 1998. Earth at a Crossroads. Paths to a Sustainable Future. Cambridge, UK: Cambridge University Press.
12. ^ Mill, J. S. (1848). 1998. Principles of Political Economy. Oxford, UK: Oxford University Press.
13. ^ Lindblom, C. 1959. “The science of ‘Muddling Through’” Public Administration Review XIX: 79–89.
14. ^ Barber, B. 1995. Jihad vs. McWorld. New York: Random House.
15. ^ Raskin, P., G. Gallopín, P. Gutman, A. Hammond and R. Swart 1998. Bending the Curve: Toward Global Sustainability. Stockholm, Sweden: Stockholm Environment Institute. PoleStar Series Report No. 8.
This is a chapter from Great Transition: The Promise and Lure of the Times Ahead (e-book). Previous: Where Are We? | Table of Contents | Next: Where Do We Want To Go?
### Citation
Initiative, G., Raskin, P., Banuri, T., Gallopín, G., Gutman, P., Hammond, A., Kates, R., & Swart, R. (2008). Great Transition: Where Are We Headed. Retrieved from http://www.eoearth.org/view/article/153111 | 2014-09-16 19:28:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18639609217643738, "perplexity": 5419.8476694096835}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657119220.53/warc/CC-MAIN-20140914011159-00197-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"} |
https://codereview.stackexchange.com/questions/97681/memoizing-decorator-that-can-retry | # Memoizing decorator that can retry
I have some tasks that I'd like to memoize because they connect to a rather slow network and have to wait for the data. Unfortunately this network can be a little finicky and we get occasional connection issues. Thus I'd like my memoizer to know to retry a certain number of times.
import functools

# Python 3.3
def enum(*sequential, **named):
    enums = dict(zip(sequential, range(len(sequential))), **named)
    return type('Enum', (), enums)

RetryTypes = enum("TRUE", "FALSE", "REMEMBER_EXCEPTION")

class Memoizer:
    def __init__(self, retry=RetryTypes.FALSE, retry_times=0, exception_types=Exception):
        self.retry = retry
        self.retry_times = retry_times
        self.exception_types = exception_types

    def __call__(self, function):
        d = {}
        # Retry but don't store exceptions
        if self.retry is RetryTypes.TRUE and self.retry_times > 0:
            @functools.wraps(function)
            def wrapper(*args, forget=False):
                if args not in d or forget:
                    # Try n-1 times and suppress exceptions
                    for i in range(self.retry_times-1):
                        try:
                            d[args] = function(*args)
                        except self.exception_types:
                            continue
                        else:
                            break
                    else:
                        # If we have to try n times don't suppress any exception
                        d[args] = function(*args)
                return d[args]
        # Retry and store any exceptions
        elif self.retry is RetryTypes.REMEMBER_EXCEPTION:
            @functools.wraps(function)
            def wrapper(*args, forget=False):
                if args not in d or forget:
                    # If we're retrying at all, try n-1 times and suppress exceptions
                    if self.retry_times > 1:
                        for i in range(self.retry_times-1):
                            try:
                                d[args] = function(*args)
                            except self.exception_types:
                                continue
                            else:
                                break
                        else:
                            # If we have to try n times catch the exception and store it
                            try:
                                d[args] = function(*args)
                            except Exception as e:
                                d[args] = e
                    # If we're not retrying, just catch any exception and store it
                    else:
                        try:
                            d[args] = function(*args)
                        except Exception as e:
                            d[args] = e
                if isinstance(d[args], Exception):
                    raise d[args]
                else:
                    return d[args]
        # Don't retry
        else:
            @functools.wraps(function)
            def wrapper(*args, forget=False):
                if args not in d or forget:
                    d[args] = function(*args)
                return d[args]
        return wrapper
Tests:
import math
import unittest

from memoizer import Memoizer, RetryTypes


class TestNoRetry(unittest.TestCase):
    def test_no_retry_no_error(self):
        @Memoizer()
        def f(a):
            return 1745*a

        result = f(19)
        self.assertIs(result, f(19))

    def test_no_retry_with_error(self):
        @Memoizer()
        def f(a):
            raise TypeError

        with self.assertRaises(TypeError):
            f(17)

    def test_retry_no_error(self):
        @Memoizer(retry=RetryTypes.TRUE)
        def f(a):
            return 123987/a

        result = f(245)
        self.assertIs(result, f(245))

    def test_retry_no_error_retry_times(self):
        @Memoizer(retry=RetryTypes.TRUE, retry_times=2)
        def f(a):
            return 123987/a

        result = f(245)
        self.assertIs(result, f(245))

    def test_retry_error_suppressed(self):
        global time
        time = 0
        times = 2

        @Memoizer(retry=RetryTypes.TRUE, retry_times=times)
        def f(a):
            global time
            time += 1
            if time < times:
                raise TypeError
            else:
                return math.pow(a, a)

        result = f(13)
        self.assertIs(result, f(13))

    def test_retry_other_error_not_suppressed(self):
        global time
        time = 0
        times = 2

        @Memoizer(retry=RetryTypes.TRUE, retry_times=times, exception_types=AttributeError)
        def f(a):
            global time
            time += 1
            if time < times:
                raise OSError
            else:
                return math.pow(a, a)

        with self.assertRaises(OSError):
            f(13)

    def test_retry_too_many_errors(self):
        @Memoizer(retry=RetryTypes.TRUE, retry_times=2)
        def f(a):
            raise OSError

        with self.assertRaises(OSError):
            f(13)

    def test_no_retry_cache_errors_no_error(self):
        @Memoizer(retry=RetryTypes.REMEMBER_EXCEPTION)
        def f(a):
            return 129384716/a

        result = f(245)
        self.assertIs(result, f(245))

    def test_retry_cache_errors_no_error(self):
        @Memoizer(retry=RetryTypes.REMEMBER_EXCEPTION, retry_times=2)
        def f(a):
            return 129384716/a

        result = f(245)
        self.assertIs(result, f(245))

    def test_no_retry_cache_errors_error(self):
        @Memoizer(retry=RetryTypes.REMEMBER_EXCEPTION)
        def f(a):
            raise OSError

        error = None
        try:
            f(245)
        except OSError as e:
            error = e
        self.assertIsNot(error, None)

        try:
            f(245)
        except OSError as e:
            self.assertIs(e, error)

    def test_retry_cache_errors_error(self):
        @Memoizer(retry=RetryTypes.REMEMBER_EXCEPTION, retry_times=2)
        def f(a):
            raise OSError

        error = None
        try:
            f(245)
        except OSError as e:
            error = e
        self.assertIsNot(error, None)

        try:
            f(245)
        except OSError as e:
            self.assertIs(e, error)

    def test_retry_cache_errors_eventual_success(self):
        global time
        time = 0
        times = 2

        @Memoizer(retry=RetryTypes.REMEMBER_EXCEPTION, retry_times=2)
        def f(a):
            global time
            time += 1
            if time < times:
                raise OSError
            else:
                import random
                return a & random.getrandbits(64)

        result = f(213125)
        self.assertIs(result, f(213125))


if __name__ == '__main__':
    unittest.main()
Everything works as expected, but I'm not a huge fan of the __call__ implementation, and I feel as though I'm missing some test cases. I'm also not positive which, if any, of the test values would be cached or interned anyway. I also wonder if this might be well served by making d a weakref.WeakValueDictionary.
• Sorry, I must be missing something obvious in your code, but I'm not sure what it is. (1) Why do you try n times regardless of the outcome of your function call? Wouldn't you want to exit the loop early if you had no exception? Also one critique: shouldn't you put forget as the first in your A or B tests to take advantage of lazy evaluation in cases where you set forget=True? – sunny Jul 24 '15 at 21:52
• @sunny look at the else case of the try-except blocks; I do exit the loop early. You're right - putting forget first will short circuit the boolean, but I tend to assume that I'm not very likely to set forget=True, and short circuiting the boolean isn't going to give me that much of a performance boost anyway, considering that dictionary lookup is O(1)* anyway – Dannnno Jul 24 '15 at 22:51
# Naming
• self.retry_times
You retry retry_times - 1 times, in all your loops.
Consider renaming this or removing the - 1's to make it more understandable.
• forget
This remembers, always. It just doesn't lookup.
Consider using a temporary variable or renaming this.
• RetryTypes
This is a constant, not a class. This should be RETRY_TYPES.
# PEP8
• Lines are to be a maximum of 79 characters.
The exception to this are comments at 72.
• Surround operators with one space on both sides. self.retry_times - 1.
The exception to this is to show precedence, 2 + 2*2.
• Keep try statements as small as possible. try: a = fn(). Not try: d['a'] = fn(). This is in case the d['a'] = assignment itself raises an error.
• Constants use UPPER_SNAKE_CASE.
Classes use CamelCase.
Everything else normally uses snake_case.
Your code is really good actually.
# Code
Consider changing your if self.retry is RetryTypes.TRUE and self.retry_times > 0:.
• Remove the and self.retry_times > 0:. or;
• Change it to self.retry_times > 1:.
Two small changes. The former adds readability, but makes the code worse. The latter improves the code, as if you try once then it will use the quicker wrapper.
I'm more in favour of the latter.
The second wrapper is too complicated. There is no need for the outer if-else statement.

if self.retry_times > 1:
    # ...
else:
    # Duplicate code.

The else doesn't run if the for loop breaks. If the for loop never runs, it can never break. Remove the if-else statement to get smaller and simpler code, just like the first wrapper.
The first two wrappers are quite alike. The difference is the else statement, and the return/raise. Also if you ignore the for loop, all the wrappers are roughly the same.
You can make four functions, and select two that are needed.
# In the __call__
def plain_else(args):
    return function(*args)

def exception_else(args):
    try:
        return function(*args)
    except Exception as e:
        return e

def plain_return(r_value):
    return r_value

def exception_return(r_value):
    if isinstance(r_value, Exception):
        raise r_value
    return r_value
This allows you to change the way the function works, with no expensive if/else statement.
As my previous solution used the root of all evil, premature optimisation, this one will only use one main loop.
First we select the correct function to use. We then need to make it so if RetryTypes.FALSE is passed, it will never loop. Then we will make a simple main.
if self.retry is RetryTypes.REMEMBER_EXCEPTION:
    wrapper_else = exception_else
    wrapper_return = exception_return
else:
    wrapper_else = plain_else
    wrapper_return = plain_return

self.retry_times -= 1
if self.retry is RetryTypes.FALSE:
    self.retry_times = 0

@functools.wraps(function)
def wrapper(*args, forget=False):
    if args not in d or forget:
        for _ in range(self.retry_times):
            try:
                d[args] = function(*args)
            except self.exception_types:
                continue
            else:
                break
        else:
            d[args] = wrapper_else(args)
    return wrapper_return(d[args])
This allows the use of only one main wrapper. It handles if you want to handle errors at the end, and allows you to force it to not loop with RetryTypes.FALSE.
It uses the fact that list(range(0)) == [], this means that the for loop will never run, however the else will.
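For readers unfamiliar with Python's for/else, here is a tiny standalone demonstration of the behaviour relied on above (added as an aside, not part of the original answer):

# With range(0) the loop body never runs, so it can never break,
# and the else clause always executes.
for _ in range(0):
    print("loop body")              # never reached
else:
    print("else clause runs")       # printed

# With a non-empty range, a break skips the else clause entirely.
for i in range(3):
    if i == 1:
        break
else:
    print("never printed, because the loop broke")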
It may be better to force self.retry_times to handle if the function will retry, rather than be a state of the module, RetryTypes.{TRUE|FALSE}. Also passing 1 to the function, via self.retry_times, makes it work the same way as passing 0 or less. (Even without my changes). And so it would be confusing if the end user wants to retry once, and the function executes once.
Removing the self.retry_times -= 1 statement removes this problem, as then if I want to retry once if my initial try fails I can.
You can merge RetryTypes.FALSE and RetryTypes.TRUE so your function is less confusing, by retrying if retry_times is set. Then if you want it to save errors pass RetryTypes.REMEMBER_EXCEPTION.
If I were to change your code to get all the 'fixes'.
RETRY_TYPES = enum("DEFAULT", "REMEMBER_EXCEPTION")

class Memoizer:
    def __init__(self,
                 retry=RETRY_TYPES.DEFAULT,
                 retry_times=0,
                 exception_type=Exception):
        self.retry = retry
        self.retry_times = retry_times
        self.exception_type = exception_type

    def __call__(self, function):
        d = {}

        def plain_else(args):
            return function(*args)

        def exception_else(args):
            try:
                return function(*args)
            except Exception as e:
                return e

        def plain_return(r_value):
            return r_value

        def exception_return(r_value):
            if isinstance(r_value, Exception):
                raise r_value
            return r_value

        if self.retry is RETRY_TYPES.REMEMBER_EXCEPTION:
            wrapper_else = exception_else
            wrapper_return = exception_return
        else:
            wrapper_else = plain_else
            wrapper_return = plain_return

        @functools.wraps(function)
        def wrapper(*args, overwrite=False):
            if args not in d or overwrite:
                for _ in range(self.retry_times):
                    try:
                        tmp = function(*args)
                    except self.exception_type:
                        continue
                    else:
                        break
                else:
                    tmp = wrapper_else(args)
                d[args] = tmp
            return wrapper_return(d[args])
        return wrapper
It is simpler, and is easier to use / understand.
• If you want to retry pass the amount of times you want to retry.
• If you want to overwrite the memoized variable, pass overwrite=True.
Forget is ambiguous to what you are forgetting.
If you want to handle **kwargs at a later date, you may want to change it to overwrite_memoized, or an alternate less common name.
• Less code, and duplication of code.
• Proper retry_times, rather than max_trys. max_trys = retry_times + 1, the initial try is not a retry.
• You only handle one exception, and so exception_types was renamed to exception_type.
• Use a tmp variable, as dict set item is $O(n)$ worst case.
• Prevents the function being $O(n^2)$.
• Follows PEP8 on simple and small try statements.
• Actually, I try up to self.retry_times, always (the last time is outside of the loop, and any failure is not suppressed). How would you suggest renaming forget? It is supposed to indicate that any previously memoized value should be forgotten - is that unclear? Why would you suggest changing the check to self.retry_times > 1? I'm in complete agreement over the second wrapper being too complicated, but I don't get your comment "The else doesn't run if the for loop breaks. If it never runs, ...". I like your suggestion about using a main function to reduce duplication. – Dannnno Jul 24 '15 at 18:25
I don't like the fact that an enumeration with three elements has two named TRUE and FALSE. It sounds like the enumeration is simply a boolean, and a third element is not expected. I think they should be called NEVER and ALWAYS or something similar. Those names allude to the possibility of more elements.
The RetryTypes enum is actually not intuitive at all. When the type is REMEMBER_EXCEPTION or TRUE, retry_times controls whether or not to do retries. When the type is FALSE the retry_times attribute doesn't do anything. This is what led to my confusion. I suggest you remove RetryTypes and separate its meaning into two arguments/attributes: retry_times (zero means do not retry) and remember_exception, which is a boolean.
If you make these changes, __call__ could be simplified a lot. You can define just one wrapper function that handles all cases. It just needs an if self.remember_exception statement, and the retry loop should simply be skipped if self.retry_times is less than one.
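A minimal sketch of what that simplification might look like (my own illustration, not code from either post; the names SimpleMemoizer, remember_exception and overwrite are assumptions):

import functools

class SimpleMemoizer:
    def __init__(self, retry_times=0, remember_exception=False,
                 exception_types=Exception):
        self.retry_times = retry_times          # 0 means "do not retry"
        self.remember_exception = remember_exception
        self.exception_types = exception_types

    def __call__(self, function):
        cache = {}

        @functools.wraps(function)
        def wrapper(*args, overwrite=False):
            if args not in cache or overwrite:
                # Up to retry_times attempts with exceptions suppressed...
                for _ in range(self.retry_times):
                    try:
                        result = function(*args)
                    except self.exception_types:
                        continue
                    else:
                        break
                else:
                    # ...then one last (or only) attempt; optionally remember the exception.
                    if self.remember_exception:
                        try:
                            result = function(*args)
                        except Exception as e:
                            result = e
                    else:
                        result = function(*args)
                cache[args] = result
            if isinstance(cache[args], Exception):
                raise cache[args]
            return cache[args]
        return wrapper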
Document your code! I assumed that REMEMBER_EXCEPTION was only meant for use with retry turned on. It was not obvious that it's also designed to be used when retry_times is zero. Use docstrings at the top of modules, classes, methods and functions to make use of your code clear.
Don't import a module inside a nested function somewhere in a file (like in test_retry_cache_errors_eventual_success). Import all modules at the top of your files to be consistent and to provide an overview of what modules a given file makes use of.
Add a test that checks the return value of a decorated function against the return value of the same function not decorated.
def f():
    # ...

df = Memoizer()(f)
self.assertIs(f(), df())
https://tex.stackexchange.com/questions/357224/solutions-of-exercises-produced-with-tcolorbox-numbering-titles | # Solutions of exercises produced with tcolorbox-Numbering titles
Using the brilliant macros from the tcolorbox package, I write in a book of probability, in every chapter, exercises with solutions. Because I want to write all the solutions together in a chapter, I use the really ingenious macro developed in my question here:
Solutions of exercises produced with tcolorbox
Now, I use "literal" numbering of the chapters, by using the following macro:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% MACRO FOR LITERAL NUMBERING CHAPTERS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newcommand\words[1]{\expandafter\xwords\csname c@#1\endcsname}
\def\xwords#1{\ifcase#1\or
one\or
two\or
three\or
four\or
five\or
six\or
seven\or
eight\or
nine\or
ten
\else
I need more words\fi}
\usepackage{etoolbox} %% comment if 'etoolbox' have been loaded before
\makeatletter
\makeatother
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% END MACRO FOR LITERAL NUMBERING %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
and I want to use the same numbering for the titles of the solutions, i.e. something like "Solutions of the exercises of the chapter one", "Solutions of the exercises of the chapter two", etc.
How can I do this?
I give here the .tex file used:
\documentclass{book}
\usepackage[most]{tcolorbox}
\usepackage{xpatch}
% Formatting command as a 'headline' of the solutions of chapter X
\NewDocumentCommand{\solutionchapterformat}{m}{%
\noindent \bgroup\bfseries Solutions of the exercises of the chapter #1\egroup%
}
\makeatletter
\xpretocmd{\chapter}{%
\begingroup
%% \ifnum\value{chapter}>0\relax
%% \tcbrecord{\string\clearpage}% Write a clearpage after the first chapter for each new chapter
%% \fi %% Uncomment these 3 lines if you want a pagebreak between the solutions of 2 chapters
\c@chapter \numexpr\c@chapter+1% Increase the count register \@chapter by one to trick \thechapter using the 'correct' chapter number
\tcbrecord{%
\solutionchapterformat{\thechapter}}%
\endgroup
}{}{}
\NewDocumentCommand{\extrasolutioncontent}{+m}{%
%%\tcbrecord{Extra solution stuff\par}% Remove this later on!
\tcbrecord{\detokenize{#1}}%
}
\newcommand{\fetchsolutions}{%
%For the first chapter
\begingroup
\c@chapter1%
\solutionchapterformat{\thechapter}%
\endgroup% Now get the rest of the stuff
\tcbinputrecords
}
\makeatother
\NewTColorBox[auto counter,number within=chapter]{exercise}{m+O{}}{%
enhanced,
colframe=green!20!black,
colback=yellow!10!white,
coltitle=green!40!black,
fonttitle=\bfseries,
underlay={\begin{tcbclipinterior}
(interior.north west) circle (2cm);
\draw[help lines,step=5mm,yellow!80!black,shift={(interior.north west)}]
(interior.south west) grid (interior.north east);
\end{tcbclipinterior}},
title={Exercise~ \thetcbcounter:},
label={exercise:#1},
after upper={\par\hfill\textcolor{green!40!black}%
{\itshape Solution on page~\pageref{solution:#1}}},
lowerbox=ignored,
savelowerto=solutions/exercise-\thetcbcounter.tex,
record={\string\solution{#1}{solutions/exercise-\thetcbcounter.tex}},
#2
}
\NewTotalTColorBox{\solution}{mm}{%
enhanced,
colframe=red!20!black,
colback=yellow!10!white,
coltitle=red!40!black,
fonttitle=\bfseries,
underlay={\begin{tcbclipinterior}
(interior.north west) circle (2cm);
\draw[help lines,step=5mm,yellow!80!black,shift={(interior.north west)}]
(interior.south west) grid (interior.north east);
\end{tcbclipinterior}},
title={Solution of Exercise~\ref{exercise:#1} on page~\pageref{exercise:#1}:},
phantomlabel={solution:#1},
attach title to upper=\par,
}{\input{#2}}
\tcbset{no solution/.style={no recording,after upper=}}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% MACRO FOR LITERAL NUMBERING CHAPTERS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newcommand\words[1]{\expandafter\xwords\csname c@#1\endcsname}
\def\xwords#1{\ifcase#1\or
one\or
two\or
three\or
four\or
five\or
six\or
seven\or
eight\or
nine\or
ten
\else
I need more words\fi}
\usepackage{etoolbox} %% comment if 'etoolbox' have been loaded before
\makeatletter
\makeatother
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% END MACRO FOR LITERAL NUMBERING %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\usepackage{geometry}
\geometry{paperwidth=180mm, paperheight=180mm, top=-10mm, bottom=25mm}
\begin{document}
\chapter{The first chapter}
\tcbstartrecording
\begin{exercise}{Ex1}[coltitle=cyan!80!black]
Compute the derivative of the following function:
\begin{equation*}
f(x)=\sin((\sin x)^2)
\end{equation*}
\tcblower
The derivative is:
\begin{align*}
f'(x) &= \left( \sin((\sin x)^2) \right)'
=\cos((\sin x)^2) 2\sin x \cos x.
\end{align*}
\end{exercise}
\chapter{The second chapter}
\begin{exercise}{Ex2}[coltitle=cyan!80!black]
Compute the derivative of the following function:
\begin{equation*}
f(x)=(x^2+1) \sqrt{x^4+1}
\end{equation*}
\tcblower
The derivative is:
\begin{align*}
f'(x) &= \left( (x^2+1) \sqrt{x^4+1} \right)'
= 2x\sqrt{x^4+1} + \frac{2x^3(x^2+1)}{\sqrt{x^4+1}}.
\end{align*}
\end{exercise}
We can write any code between two exercises, directly...
\extrasolutioncontent{We can write too any code between two solutions, by using the ingenious $\backslash$extrasolutioncontent macro developed in the answer...}
\begin{exercise}{Ex3}[coltitle=cyan!80!black]
Compute the derivative of the following function:
\begin{equation*}
f(x)=(x^2+1) \sqrt{x^4+1}
\end{equation*}
\tcblower
The derivative is:
\begin{align*}
f'(x) &= \left( (x^2+1) \sqrt{x^4+1} \right)'
= 2x\sqrt{x^4+1} + \frac{2x^3(x^2+1)}{\sqrt{x^4+1}}.
\end{align*}
\end{exercise}
\tcbstoprecording
\newpage
\chapter{Solutions of the exercices}
\fetchsolutions
\end{document}
and the compilation of some "interesting" pages...
Edit. I am editing my question to include what I had originally posted as an "answer" (using the pgfplots package); the moderators of the site said that I must edit it into my question instead, so that is what I have done!
The following commands, in the preamble:
\def\mywordscounter{{"zero","one","two","three"}} %%% You can go far from..
%% Counters of pgf begins always by 0...
%% The non-numeric-values must be between " "
% Formatting command as a 'headline' of the solutions of chapter X
\NewDocumentCommand{\solutionchapterformat}{m}{%
\section{Solutions of the exercises of the chapter \protect\pgfmathparse{\mywordscounter[#1]}\pgfmathresult}%
}
does the job. Note the \protect added before \pgfmathparse inside the \solutionchapterformat command, on the suggestion of @ChristianHupfer, which gives the suitable output.
Unfortunately, there is still a problem: the table of contents is wrong! Here is the .tex file used after the update:
\documentclass{book}
\usepackage[most]{tcolorbox}
\usepackage{xpatch}
\usepackage{pgfplots}
\def\mywordscounter{{"zero","one","two","three"}} %%% You can go far from..
%% Counters of pgf begins always by 0...
%% The non-numeric-values must be between " "
% Formatting command as a 'headline' of the solutions of chapter X
\NewDocumentCommand{\solutionchapterformat}{m}{%
\section{Solutions of the exercises of the chapter \protect\pgfmathparse{\mywordscounter[#1]}\pgfmathresult}%
}
\makeatletter
\xpretocmd{\chapter}{%
\begingroup
%% \ifnum\value{chapter}>0\relax
%% \tcbrecord{\string\clearpage}% Write a clearpage after the first chapter for each new chapter
%% \fi %% Uncomment these 3 lines if you want a pagebreak between the solutions of 2 chapters
\c@chapter \numexpr\c@chapter+1% Increase the count register \@chapter by one to trick \thechapter using the 'correct' chapter number
\tcbrecord{%
\solutionchapterformat{\thechapter}}%
\endgroup
}{}{}
\NewDocumentCommand{\extrasolutioncontent}{+m}{%
%%\tcbrecord{Extra solution stuff\par}% Remove this later on!
\tcbrecord{\detokenize{#1}}%
}
\newcommand{\fetchsolutions}{%
%For the first chapter
\begingroup
\c@chapter1%
\solutionchapterformat{\thechapter}%
\endgroup% Now get the rest of the stuff
\tcbinputrecords
}
\makeatother
\NewTColorBox[auto counter,number within=chapter]{exercise}{m+O{}}{%
enhanced,
colframe=green!20!black,
colback=yellow!10!white,
coltitle=green!40!black,
fonttitle=\bfseries,
underlay={\begin{tcbclipinterior}
(interior.north west) circle (2cm);
\draw[help lines,step=5mm,yellow!80!black,shift={(interior.north west)}]
(interior.south west) grid (interior.north east);
\end{tcbclipinterior}},
title={Exercise~ \thetcbcounter:},
label={exercise:#1},
after upper={\par\hfill\textcolor{green!40!black}%
{\itshape Solution on page~\pageref{solution:#1}}},
lowerbox=ignored,
savelowerto=solutions/exercise-\thetcbcounter.tex,
record={\string\solution{#1}{solutions/exercise-\thetcbcounter.tex}},
#2
}
\NewTotalTColorBox{\solution}{mm}{%
enhanced,
colframe=red!20!black,
colback=yellow!10!white,
coltitle=red!40!black,
fonttitle=\bfseries,
underlay={\begin{tcbclipinterior}
(interior.north west) circle (2cm);
\draw[help lines,step=5mm,yellow!80!black,shift={(interior.north west)}]
(interior.south west) grid (interior.north east);
\end{tcbclipinterior}},
title={Solution of Exercise~\ref{exercise:#1} on page~\pageref{exercise:#1}:},
phantomlabel={solution:#1},
attach title to upper=\par,
}{\input{#2}}
\tcbset{no solution/.style={no recording,after upper=}}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% MACRO FOR LITERAL NUMBERING CHAPTERS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newcommand\words[1]{\expandafter\xwords\csname c@#1\endcsname}
\def\xwords#1{\ifcase#1\or
one\or
two\or
three
\else
I need more words\fi}
\usepackage{etoolbox} %% comment if 'etoolbox' have been loaded before
\makeatletter
\makeatother
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% END MACRO FOR LITERAL NUMBERING %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
\tableofcontents
\chapter{The first chapter}
\tcbstartrecording
\begin{exercise}{Ex1}[coltitle=cyan!80!black]
Compute the derivative of the following function:
\begin{equation*}
f(x)=\sin((\sin x)^2)
\end{equation*}
\tcblower
The derivative is:
\begin{align*}
f'(x) &= \left( \sin((\sin x)^2) \right)'
=\cos((\sin x)^2) 2\sin x \cos x.
\end{align*}
\end{exercise}
\chapter{The second chapter}
\begin{exercise}{Ex2}[coltitle=cyan!80!black]
Compute the derivative of the following function:
\begin{equation*}
f(x)=(x^2+1) \sqrt{x^4+1}
\end{equation*}
\tcblower
The derivative is:
\begin{align*}
f'(x) &= \left( (x^2+1) \sqrt{x^4+1} \right)'
= 2x\sqrt{x^4+1} + \frac{2x^3(x^2+1)}{\sqrt{x^4+1}}.
\end{align*}
\end{exercise}
We can write any code between two exercises, directly...
\extrasolutioncontent{We can write too any code between two solutions, by using the ingenious $\backslash$extrasolutioncontent macro developed in the answer...}
\begin{exercise}{Ex3}[coltitle=cyan!80!black]
Compute the derivative of the following function:
\begin{equation*}
f(x)=(x^2+1) \sqrt{x^4+1}
\end{equation*}
\tcblower
The derivative is:
\begin{align*}
f'(x) &= \left( (x^2+1) \sqrt{x^4+1} \right)'
= 2x\sqrt{x^4+1} + \frac{2x^3(x^2+1)}{\sqrt{x^4+1}}.
\end{align*}
\end{exercise}
\tcbstoprecording
\newpage
\fetchsolutions
\end{document}
which gives bad output in the table of contents: "Solutions of the exercises of the chapter 0.017"!
What is the problem?
• I am quite occupied right now (my paid daily job at school prevents unpaid support round the clock, unfortunately ;-)) others will do the job ;-)
– user31729
Mar 7, 2017 at 6:45
• Luckily I found some time slot
– user31729
Mar 7, 2017 at 9:16
• Many thanks for your availability. TeXifying is a real pleasure with you and the whole team. Mar 7, 2017 at 10:27
Use the fmtcount package and its \numberstring and \numberstringnum macros to produce the word correspondence of numbers.
\documentclass{book}
\usepackage{fmtcount}
\usepackage[most]{tcolorbox}
\usepackage{xpatch}
% Formatting command as a 'headline' of the solutions of chapter X
\NewDocumentCommand{\solutionchapterformat}{m}{%
\noindent \bgroup\bfseries Solutions of the exercises of the chapter \numberstringnum{#1}\egroup%
}
\makeatletter
\xpretocmd{\chapter}{%
\begingroup
%% \ifnum\value{chapter}>0\relax
%% \tcbrecord{\string\clearpage}% Write a clearpage after the first chapter for each new chapter
%% \fi %% Uncomment these 3 lines if you want a pagebreak between the solutions of 2 chapters
\c@chapter \numexpr\c@chapter+1% Increase the count register \@chapter by one to trick \thechapter using the 'correct' chapter number
\tcbrecord{%
\solutionchapterformat{\thechapter}}%
\endgroup
}{}{}
\NewDocumentCommand{\extrasolutioncontent}{+m}{%
%%\tcbrecord{Extra solution stuff\par}% Remove this later on!
\tcbrecord{\detokenize{#1}}%
}
\newcommand{\fetchsolutions}{%
%For the first chapter
\begingroup
\c@chapter1%
\solutionchapterformat{\thechapter}%
\endgroup% Now get the rest of the stuff
\tcbinputrecords
}
\makeatother
\NewTColorBox[auto counter,number within=chapter]{exercise}{m+O{}}{%
enhanced,
colframe=green!20!black,
colback=yellow!10!white,
coltitle=green!40!black,
fonttitle=\bfseries,
underlay={\begin{tcbclipinterior}
(interior.north west) circle (2cm);
\draw[help lines,step=5mm,yellow!80!black,shift={(interior.north west)}]
(interior.south west) grid (interior.north east);
\end{tcbclipinterior}},
title={Exercise~ \thetcbcounter:},
label={exercise:#1},
after upper={\par\hfill\textcolor{green!40!black}%
{\itshape Solution on page~\pageref{solution:#1}}},
lowerbox=ignored,
savelowerto=solutions/exercise-\thetcbcounter.tex,
record={\string\solution{#1}{solutions/exercise-\thetcbcounter.tex}},
#2
}
\NewTotalTColorBox{\solution}{mm}{%
enhanced,
colframe=red!20!black,
colback=yellow!10!white,
coltitle=red!40!black,
fonttitle=\bfseries,
underlay={\begin{tcbclipinterior}
(interior.north west) circle (2cm);
\draw[help lines,step=5mm,yellow!80!black,shift={(interior.north west)}]
(interior.south west) grid (interior.north east);
\end{tcbclipinterior}},
title={Solution of Exercise~\ref{exercise:#1} on page~\pageref{exercise:#1}:},
phantomlabel={solution:#1},
attach title to upper=\par,
}{\input{#2}}
\tcbset{no solution/.style={no recording,after upper=}}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% MACRO FOR LITERAL NUMBERING CHAPTERS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newcommand\words[1]{\expandafter\xwords\csname c@#1\endcsname}
\def\xwords#1{\ifcase#1\or
one\or
two\or
three
\else
I need more words\fi}
\renewcommand{\words}[1]{\numberstring{chapter}}
\usepackage{etoolbox} %% comment if 'etoolbox' have been loaded before
\makeatletter
\makeatother
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% END MACRO FOR LITERAL NUMBERING %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\usepackage{geometry}
\geometry{paperwidth=180mm, paperheight=180mm, top=-10mm, bottom=25mm}
\begin{document}
\chapter{The first chapter}
\tcbstartrecording
\begin{exercise}{Ex1}[coltitle=cyan!80!black]
Compute the derivative of the following function:
\begin{equation*}
f(x)=\sin((\sin x)^2)
\end{equation*}
\tcblower
The derivative is:
\begin{align*}
f'(x) &= \left( \sin((\sin x)^2) \right)'
=\cos((\sin x)^2) 2\sin x \cos x.
\end{align*}
\end{exercise}
\chapter{The second chapter}
\begin{exercise}{Ex2}[coltitle=cyan!80!black]
Compute the derivative of the following function:
\begin{equation*}
f(x)=(x^2+1) \sqrt{x^4+1}
\end{equation*}
\tcblower
The derivative is:
\begin{align*}
f'(x) &= \left( (x^2+1) \sqrt{x^4+1} \right)'
= 2x\sqrt{x^4+1} + \frac{2x^3(x^2+1)}{\sqrt{x^4+1}}.
\end{align*}
\end{exercise}
We can write any code between two exercises, directly...
\extrasolutioncontent{We can write too any code between two solutions, by using the ingenious $\backslash$extrasolutioncontent macro developed in the answer...}
\begin{exercise}{Ex3}[coltitle=cyan!80!black]
Compute the derivative of the following function:
\begin{equation*}
f(x)=(x^2+1) \sqrt{x^4+1}
\end{equation*}
\tcblower
The derivative is:
\begin{align*}
f'(x) &= \left( (x^2+1) \sqrt{x^4+1} \right)'
= 2x\sqrt{x^4+1} + \frac{2x^3(x^2+1)}{\sqrt{x^4+1}}.
\end{align*}
\end{exercise}
\tcbstoprecording
\newpage
\chapter{Solutions of the exercices}
\fetchsolutions
\end{document}
• Ingenious macro as usual. Thanks for this happy TeXifying Mar 7, 2017 at 10:28
• Unfortunately... this was too beautiful! I did not know exactly what the fmtcount package was; when I read the documentation, I found this: Version 1.02 of the fmtcount package now has limited multilingual support. The following languages are implemented: English, Spanish, Portuguese, French, French (Swiss) and French (Belgian). German support was added in version 1.1.2. Italian support was added in version 1.31. But my book is written in Arabic, and I thought there would be no language problem, but there is! Mar 7, 2017 at 15:03
• So do you have another solution based on my own word counter? Mar 7, 2017 at 15:05
• @FaouziBellalouna: Well, there's not a single word in your question that you need support for Arabic language. How many chapters do you plan? If it is limited to 10 etc, the \words macro can be used anyway, you have to replace the English expressions by the Arabic ones (including the fonts) -- I neither speak Arabic nor can I read the Arabic 'alphabet' ...
– user31729
Mar 7, 2017 at 15:52
• OK, I had not said that I use the Arabic language because, strictly speaking, almost everything works the same way apart from a few details, and I am constantly switching between right-to-left and left-to-right modes; but more and more beautiful macros are being developed for this task... Mar 7, 2017 at 16:11
I can solve my question by using the pgfplots package, which makes it possible to define "literal" counters. The following commands in the preamble of my question:
\usepackage{pgfplots}
\def\mywordscounter{{"zero","one","two","three"}} %%% You can go far from..
%% Counters of pgf begins always by 0...
%% The non-numeric-values must be between " "
% Formatting command as a 'headline' of the solutions of chapter X
\NewDocumentCommand{\solutionchapterformat}{m}{%
\noindent\bgroup\bfseries Solutions of the exercises of the chapter \pgfmathparse{\mywordscounter[#1]}\pgfmathresult\egroup%
}
give the suitable behaviour. Thanks to Christian Hupfer for his original beautiful macro.
But there is one final problem! I want the solutions title to be produced as a section. If I use instead the command:
\NewDocumentCommand{\solutionchapterformat}{m}{%
\section{Solutions of the exercises of the chapter \pgfmathparse{\mywordscounter[#1]}\pgfmathresult}%
}
I obtain the error message:
! Incomplete \iffalse; all text was ignored after line ...
What am I doing wrong here ?
• You should ask a follow-up question for your final problem and edit your answer. It shouldn't ask questions but should rather be a self-contained answer to the above question. (I accidentally flagged this as not an answer. Sorry…) Mar 8, 2017 at 13:37
• You need to protect the command \pgfmathparse, i.e. \section{....\protect\pgfmathparse{...}} etc.
– user31729
Mar 8, 2017 at 15:52 | 2022-07-06 04:34:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9773387312889099, "perplexity": 10323.896622819515}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104660626.98/warc/CC-MAIN-20220706030209-20220706060209-00141.warc.gz"} |
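Putting the advice from the comments into code, here is a hedged, untested sketch of a sectioned solutions title. It combines the fmtcount-based answer with the \protect suggestion; it is an illustration only, not a verified fix, and may need further adjustment for the table of contents:

\NewDocumentCommand{\solutionchapterformat}{m}{%
  % guard the fragile number-to-word command inside the moving argument
  \section{Solutions of the exercises of chapter \protect\numberstringnum{#1}}%
}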
http://www.lofoya.com/Solved/2199/the-figure-below-shows-two-concentric-circle-withcentre-o-pqrs | # Difficult Geometry & Mensuration Solved QuestionAptitude Discussion
Q. The figure below shows two concentric circles with centre O. PQRS is a square inscribed in the outer circle. It also circumscribes the inner circle, touching it at points B, C, D and A. What is the ratio of the perimeter of the outer circle to that of polygon ABCD?
✖ A. $\pi / 4$ ✖ B. $3 \pi / 2$ ✔ C. $\pi / 2$ ✖ D. $\pi$
Solution:
Option(C) is correct
A, B, C and D must be the mid-points of PS, PQ, QR and RS and ABCD will thus be a square.
Let PQ be $r$. Then, the radius of the outer circle,
$= \dfrac{r\sqrt{2}}{2}\\ = \dfrac{r}{\sqrt{2}}$
The diameter of the inner circle is equal to the side of the outer square, that is $r$. The diameter of the inner circle is equal to the diagonal of the inner square. So, the diagonal of the inner square is $r$.
Hence, the side of the inner square is $r/\sqrt{2}$. $\therefore$ Ratio of perimeter of outer circle to that of
Polygon ABCD,
$=\left(\dfrac{2\pi \dfrac{r}{\sqrt2}}{4\dfrac{r}{\sqrt2}}\right)\\ =\dfrac{\pi}{2}$
Thus option (C) is the right choice.
Edit: As pointed out by KARTIK, corrected a typo in the solution. (Changed PO $=r$ to PQ $=r$)
## (3) Comment(s)
KARTIK: typo error, let PO is $r$, should be PQ is $r$. :)

Deepak: Thank you for letting me know the anomaly. Corrected it.

KARTIK: No problem ;) I really like this site.. and that counter running on top is the best feature. So any improvement would help me and people like me. Great work team lofoya. :D
https://socratic.org/questions/how-do-you-solve-x-2-10x-24-0-using-completing-the-square | # How do you solve x^2 + 10x + 24 = 0 using completing the square?
Jun 14, 2015
$0 = {x}^{2} + 10 x + 24 = {\left(x + 5\right)}^{2} - 1$
Hence $x = - 5 \pm \sqrt{1} = - 5 \pm 1$
#### Explanation:
${\left(x + 5\right)}^{2} = {x}^{2} + 10 x + 25$
So we have:
$0 = {x}^{2} + 10 x + 24 = {\left(x + 5\right)}^{2} - 1$
Add $1$ to both ends to get:
${\left(x + 5\right)}^{2} = 1$
So:
$x + 5 = \pm \sqrt{1} = \pm 1$
Subtract $5$ from both sides to get:
$x = - 5 \pm 1$
That is $x = - 6$ or $x = - 4$
Jun 14, 2015
Factor $y = {x}^{2} + 10 x + 24$ by completing the square
#### Explanation:
$y = {x}^{2} + 10 x + \left(25 - 25\right) + 24 = 0$
$y = {\left(x + 5\right)}^{2} - 1 = 0$
${\left(x + 5\right)}^{2} = 1$ -> $x + 5 = \pm 1$
x = -5 + 1 = -4
x = -5 - 1 = -6
Jun 14, 2015
Create a perfect square trinomial on the left side of the equation, then factor it and solve for $x$. The general equation for a perfect square trinomial is ${a}^{2} + 2 a b + {b}^{2} = {\left(a + b\right)}^{2}$.
#### Explanation:
${x}^{2} + 10 x + 24 = 0$
We are going to create a perfect square trinomial on the left side of the equation, then solve for $x$. The general equation for a perfect square trinomial is ${a}^{2} + 2 a b + {b}^{2} = {\left(a + b\right)}^{2}$.
Subtract $24$ from both sides.
${x}^{2} + 10 x = - 24$
Divide the coefficient of the $x$ term by $2$, then square the result, and add it to both sides.
$\frac{10}{2} = 5$ ; $5^2 = 25$
${x}^{2} + 10 x + 25 = - 24 + 25$
${x}^{2} + 10 x + 25 = 1$
We now have a perfect square trinomial on the left side, in which $a = x \mathmr{and} b = 5$. Factor the trinomial, then solve for $x$.
${\left(x + 5\right)}^{2} = 1$
Take the square root of both sides.
$x + 5 = \pm \sqrt{1}$
$x = \pm \sqrt{1} - 5$
$x = \sqrt{1} - 5 = 1 - 5 = - 4$
$x = - \sqrt{1} - 5 = - 1 - 5 = - 6$
$x = - 4$
$x = - 6$ | 2019-08-24 05:18:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 38, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8042401671409607, "perplexity": 445.87845957266677}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027319724.97/warc/CC-MAIN-20190824041053-20190824063053-00289.warc.gz"} |
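A quick check, added for completeness: substituting both roots back into the original quadratic gives zero.
$\left(- 4\right)^{2} + 10 \left(- 4\right) + 24 = 16 - 40 + 24 = 0$
$\left(- 6\right)^{2} + 10 \left(- 6\right) + 24 = 36 - 60 + 24 = 0$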
http://www.reference.com/browse/Solid+tori | Definitions
Nearby Words
# Geometrization conjecture
Thurston's geometrization conjecture states that compact 3-manifolds can be decomposed into submanifolds that have geometric structures. The geometrization conjecture is an analogue for 3-manifolds of the uniformization theorem for surfaces. It was proposed by William Thurston in 1982, and implies several other conjectures, such as the Poincaré conjecture and Thurston's elliptization conjecture.
Thurston's geometrization theorem, or hyperbolization theorem, states that Haken manifolds satisfy the conclusion of geometrization conjecture. Thurston announced a proof in the 1980s and since then several complete proofs have appeared in print. Grigori Perelman sketched a proof of the full geometrization conjecture in 2003 using Ricci flow with surgery, which (as of 2008) appears to be essentially correct. See the Solution of the Poincaré conjecture for a discussion of the proof.
## The conjecture
A 3-manifold is called closed if it is compact and has no boundary.
Every closed 3-manifold has a prime decomposition: this means it is the connected sum of prime three-manifolds (this decomposition is essentially unique except for a small problem in the case of non-orientable manifolds). This reduces much of the study of 3-manifolds to the case of prime 3-manifolds: those that cannot be written as a non-trivial connected sum.
Here is a statement of Thurston's conjecture:
Every oriented prime closed 3-manifold can be cut along tori, so that the interior of each of the resulting manifolds has a geometric structure with finite volume.
There are 8 possible geometric structures in 3 dimensions, described in the next section. There is a unique minimal way of cutting an irreducible oriented 3-manifold along tori into pieces that are Seifert manifolds or atoroidal called the JSJ decomposition, which is not quite the same as the decomposition in the geometrization conjecture, because some of the pieces in the JSJ decomposition might not have finite volume geometric structures. (For example, the mapping torus of an Anosov map of a torus has a finite volume sol structure, but its JSJ decomposition cuts it open along one torus to produce a product of a torus and a unit interval, and the interior of this has no finite volume geometric structure.)
For non-oriented manifolds the easiest way to state a geometrization conjecture is to first take the oriented double cover. It is also possible to work directly with non-orientable manifolds, but this gives some extra complications: it may be necessary to cut along projective planes and Klein bottles as well as spheres and tori, and manifolds with a projective plane boundary component usually have no geometric structure so this gives a minor extra complication.
In 2 dimensions the analogous statement says that every surface (without boundary) has a geometric structure consisting of a metric with constant curvature; it is not necessary to cut the manifold up first.
## The eight Thurston geometries
A model geometry is a simply connected smooth manifold X together with a transitive action of a Lie group G on X with compact stabilizers.
A model geometry is called maximal if G is maximal among groups acting smoothly and transitively on X with compact stabilizers. Sometimes this condition is included in the definition of a model geometry.
A geometric structure on a manifold M is a diffeomorphism from M to X/Γ for some model geometry X, where Γ is a discrete subgroup of G acting freely on X. If a given manifold admits a geometric structure, then it admits one whose model is maximal.
A 3-dimensional model geometry X is relevant to the geometrization conjecture if it is maximal and if there is at least one compact manifold with a geometric structure modelled on X. Thurston classified the 8 model geometries satisfying these conditions; they are listed below and are sometimes called Thurston geometries. (There are also uncountably many model geometries without compact quotients.)
There is some connection with the Bianchi groups: the 3-dimensional Lie groups. Most Thurston geometries can be realized as a left invariant metric on a Bianchi group. However S2×R cannot be, Euclidean space corresponds to two different Bianchi groups, and there are an uncountable number of solvable non-unimodular Bianchi groups, most of which give model geometries with no compact representatives.
### Spherical geometry S3
The point stabilizer is O3(R), and the group G is the 6-dimensional Lie group O4(R), with 2 components. The corresponding manifolds are exactly the closed 3-manifolds with finite fundamental group. Examples include the 3-sphere, the Poincaré homology sphere, Lens spaces. This geometry can be modeled as a left invariant metric on the Bianchi group of type IX. Manifolds with this geometry are all compact, orientable, and have the structure of a Seifert fiber space (often in several ways). The complete list of such manifolds is given in the article on Spherical 3-manifolds. Under Ricci flow manifolds with this geometry collapse to a point in finite time.
### Euclidean geometry E3
The point stabilizer is O3(R), and the group G is the 6-dimensional Lie group R3.O3(R), with 2 components. Examples are the 3-torus, and more generally the mapping torus of a finite order automorphism of the 2-torus; see torus bundle. There are exactly 10 finite closed 3-manifolds with this geometry, 6 orientable and 4 non-orientable. This geometry can be modeled as a left invariant metric on the Bianchi groups of type I or VII0. Finite volume manifolds with this geometry are all compact, and have the structure of a Seifert fiber space (sometimes in two ways). The complete list of such manifolds is given in the article on Seifert fiber spaces. Under Ricci flow manifolds with Euclidean geometry remain invariant.
### Hyperbolic geometry H3
The point stabilizer is O3(R), and the group G is the 6-dimensional Lie group O1,3(R)+, with 2 components. There are enormous numbers of examples of these, and their classification is not completely understood. The example with smallest known volume is the Weeks manifold. Other examples are given by the Seifert-Weber space, or "sufficiently complicated" Dehn sugeries on links, or most Haken manifolds. The geometrization conjecture implies that a closed 3-manifold is hyperbolic if and only if it is irreducible, atoroidal, and has infinite fundamental group. This geometry can be modeled as a left invariant metric on the Bianchi group of type V. Under Ricci flow manifolds with hyperbolic geometry expand.
### The geometry of S2×R
The point stabilizer is O2(R) × Z/2Z, and the group G is O3(R) × R.Z/2Z, with 4 components. The four finite volume manifolds with this geometry are: S2×S1, the mapping torus of the antipode map of S2, the connected sum of two copies of 3 dimensional projective space, and the product of S1 with two-dimensional projective space. The first two are mapping tori of the identity map and antipode map of the 2-sphere, and are the only examples of 3-manifolds that are prime but not irreducible. The third is the only example of a non-trivial connected sum with a geometric structure. This is the only model geometry that cannot be realized as a left invariant metric on a 3-dimensional Lie group. Finite volume manifolds with this geometry are all compact and have the structure of a Seifert fiber space (often in several ways). Under normalized Ricci flow manifolds with this geometry converge to a 1-dimensional manifold.
### The geometry of H2×R
The point stabilizer is O2(R) × Z/2Z, and the group G is O1,2(R)+ × R.Z/2Z, with 4 components. Examples include the product of a hyperbolic surface with a circle, or more generally the mapping torus of an isometry of a hyperbolic surface. Finite volume manifolds with this geometry have the structure of a Seifert fiber space if they are orientable. (If they are not orientable the natural fibration by circles is not necessarily a Seifert fibration: the problem is that some fibers may "reverse orientation"; in other words their neighborhoods look like fibered solid Klein bottles rather than solid tori.) The classification of such (oriented) manifolds is given in the article on Seifert fiber spaces. This geometry can be modeled as a left invariant metric on the Bianchi group of type III. Under normalized Ricci flow manifolds with this geometry converge to a 2-dimensional manifold.
### The geometry of the universal cover of SL2(R)
$\widetilde{\mathrm{SL}}_2(\mathbb{R})$ is the universal cover of SL2(R), which fibers over $\mathbb{H}^2$. The point stabilizer is O2(R). The group G has 2 components. Its identity component has the structure $(\mathbb{R}\times\widetilde{\mathrm{SL}}_2(\mathbb{R}))/\mathbb{Z}$. Examples of these manifolds include: the manifold of unit vectors of the tangent bundle of a hyperbolic surface, and more generally the Brieskorn homology spheres (excepting the 3-sphere and the Poincaré dodecahedral space). This geometry can be modeled as a left invariant metric on the Bianchi group of type VIII. Finite volume manifolds with this geometry are orientable and have the structure of a Seifert fiber space. The classification of such manifolds is given in the article on Seifert fiber spaces. Under normalized Ricci flow manifolds with this geometry converge to a 2-dimensional manifold.
### Nil geometry
This fibers over E2, and is the geometry of the Heisenberg group. The point stabilizer is O2(R). The group G has 2 components, and is a semidirect product of the 3-dimensional Heisenberg group by the group O2(R) of isometries of a circle. Compact manifolds with this geometry include the mapping torus of a Dehn twist of a 2-torus, or the quotient of the Heisenberg group by the "integral Heisenberg group". This geometry can be modeled as a left invariant metric on the Bianchi group of type II. Finite volume manifolds with this geometry are compact and orientable and have the structure of a Seifert fiber space. The classification of such manifolds is given in the article on Seifert fiber spaces. Under normalized Ricci flow compact manifolds with this geometry converge to R2 with the flat metric.
### Sol geometry
This geometry fibers over the line with fiber the plane, and is the geometry of the identity component of the group G. The point stabilizer is the dihedral group of order 8. The group G has 8 components, and is the group of maps from 2-dimensional Minkowski space to itself that are either isometries or multiply the metric by −1. The identity component has a normal subgroup R2 with quotient R, where R acts on R2 with 2 (real) eigenspaces, with distinct real eigenvalues of product 1. This is the Bianchi group of type VI0 and the geometry can be modeled as a left invariant metric on this group. All finite volume manifolds with sol geometry are compact. The compact manifolds with sol geometry are either the mapping torus of an Anosov map of the 2-torus (an automorphism of the 2-torus given by an invertible 2 by 2 matrix whose eigenvalues are real and distinct, such as $\begin{pmatrix}2 & 1\\ 1 & 1\end{pmatrix}$), or quotients of these by groups of order at most 8. The eigenvalues of the automorphism of the torus generate an order of a real quadratic field, and the sol manifolds could in principle be classified in terms of the units and ideal classes of this order, though the details do not seem to be written down anywhere. Under normalized Ricci flow compact manifolds with this geometry converge (rather slowly) to R1.
## Uniqueness
A closed 3-manifold has a geometric structure of at most one of the 8 types above, but finite volume non-compact 3-manifolds can occasionally have more than one type of geometric structure. (However a manifold can have many different geometric structures of the same type; for example, a surface of genus at least 2 has a continuum of different hyperbolic metrics.) More precisely, if M is a manifold with a finite volume geometric structure, then the type of geometric structure is almost determined as follows, in terms of the fundamental group π1(M):
• If π1(M) is finite then the geometric structure on M is spherical, and M is compact.
• If π1(M) is virtually cyclic but not finite then the geometric structure on M is S2×R, and M is compact.
• If π1(M) is virtually abelian but not virtually cyclic then the geometric structure on M is Euclidean, and M is compact.
• If π1(M) is virtually nilpotent but not virtually abelian then the geometric structure on M is nil geometry, and M is compact.
• If π1(M) is virtually solvable but not virtually nilpotent then the geometric structure on M is sol geometry, and M is compact.
• If π1(M) has an infinite normal cyclic subgroup but is not virtually solvable then the geometric structure on M is either H2×R or the universal cover of SL2(R). The manifold M may be either compact or non-compact. If it is compact, then the 2 geometries can be distinguished by whether or not π1(M) has a finite index subgroup that splits as a semidirect product of the normal cyclic subgroup and something else. If the manifold is non-compact, then the fundamental group cannot distinguish the two geometries, and there are examples (such as the complement of a trefoil knot) where a manifold may have a finite volume geometric structure of either type.
• If π1(M) has no infinite normal cyclic subgroup and is not virtually solvable then the geometric structure on M is hyperbolic, and M may be either compact or non-compact.
Infinite volume manifolds can have many different types of geometric structure: for example, R3 can have 6 of the different geometric structures listed above, as 6 of the 8 model geometries are homeomorphic to it. Moreover if the volume does not have to be finite there are an infinite number of new geometric structures with no compact models; for example, the geometry of almost any non-unimodular 3-dimensional Lie group.
There can be more than one way to decompose a closed 3-manifold into pieces with geometric structures. For example:
• Taking connected sums with several copies of S3 does not change a manifold.
• The connected sum of two projective 3-spaces has a S2×R geometry, and is also the connected sum of two pieces with S3 geometry.
• The product of a surface of negative curvature and a circle has a geometric structure, but can also be cut along tori to produce smaller pieces that also have geometric structures. There are many similar examples for Seifert fiber spaces.
It is possible to choose a "canonical" decomposition into pieces with geometric structure, for example by first cutting the manifold into prime pieces in a minimal way, then cutting these up using the smallest possible number of tori. However this minimal decomposition is not necessarily the one produced by Ricci flow; in fact, the Ricci flow can cut up a manifold into geometric pieces in many inequivalent ways, depending on the choice of initial metric.
## History
The Fields Medal was awarded to Thurston in 1982 partially for his proof of the geometrization conjecture for Haken manifolds.
The case of 3-manifolds that should be spherical has been slower, but provided the spark needed for Richard Hamilton to develop his Ricci flow. In 1982, Hamilton showed that given a closed 3-manifold with a metric of positive Ricci curvature, the Ricci flow would collapse the manifold to a point in finite time, which proves the geometrization conjecture for this case as the metric becomes "almost round" just before the collapse. He later developed a program to prove the geometrization conjecture by Ricci flow with surgery. The idea is that the Ricci flow will in general produce singularities, but one may be able to continue the Ricci flow past the singularity by using surgery to change the topology of the manifold. Roughly speaking, the Ricci flow contracts positive curvature regions and expands negative curvature regions, so it should kill off the pieces of the manifold with the "positive curvature" geometries S3 and S2×R, while what is left at large times should have a thick-thin decomposition into a "thick" piece with hyperbolic geometry and a "thin" graph manifold.
In 2003 Grigori Perelman sketched a proof of the geometrization conjecture by showing that the Ricci flow can indeed be continued past the singularities, and has the behavior described above. The main difficulty in verifying Perelman's proof of the Geometrization conjecture was a critical use of his Theorem 7.4 in the Ricci flow with surgery preprint. This theorem was stated by Perelman without proof. There are now three different proofs of Perelman's Theorem 7.4. There is the method of Shioya and Yamaguchi [T. Shioya and T. Yamaguchi, 'Volume collapsed three-manifolds with a lower curvature bound,' Math. Ann. 333 (2005), no. 1, 131-155.] that uses Perelman's stability theorem [V. Kapovitch, 'Perelman's Stability Theorem', preprint arXiv:math/0703002, 2007.] and a fibration theorem for Alexandrov spaces [T. Yamaguchi. A convergence theorem in the geometry of Alexandrov spaces. In Actes de la Table Ronde de Geometrie Differentielle (Luminy, 1992), volume 1 of Semin. Congr., pages 601-642. Soc. Math. France, Paris, 1996.]. There is the method of Bessières et al. [L. Bessières, G. Besson, M. Boileau, S. Maillot, J. Porti, 'Weak collapsing and geometrization of aspherical 3-manifolds,' preprint arXiv:math/0706:2065, 2007.], which uses Thurston's hyperbolization theorem for Haken manifolds [J-P. Otal, 'Thurston's hyperbolization of Haken manifolds,' Surveys in differential geometry, Vol. III, Cambridge, MA, 77-194, Int. Press, Boston, MA, 1998.] and Gromov's norm for 3-manifolds [M. Gromov. Volume and bounded cohomology. Inst. Hautes Etudes Sci. Publ. Math., (56):5-99 (1983), 1982.]. Finally there is the method of Morgan and Tian [J. Morgan and G. Tian, 'Completion of the Proof of the Geometrization Conjecture', arXiv:math/0809.4040, 2008.] that only uses Ricci flow. From Perelman's Theorem 7.4, the Geometrization conjecture "quickly" follows.
## References
Search another word or see Solid torion Dictionary | Thesaurus |Spanish | 2013-05-21 12:11:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7953881025314331, "perplexity": 399.4310274175027}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699977678/warc/CC-MAIN-20130516102617-00090-ip-10-60-113-184.ec2.internal.warc.gz"} |
http://www.fightfinance.com/?q=3,32,142,150,209,262,422,502,755,954 | # Fight Finance
The following equation is called the Dividend Discount Model (DDM), Gordon Growth Model or the perpetuity with growth formula: $$P_0 = \frac{ C_1 }{ r - g }$$
What is $g$? The value $g$ is the long term expected:
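As a quick numeric illustration of the formula above (the figures are assumptions chosen only for the example): with $C_1 = 1.00$, $r = 10\%$ pa and $g = 5\%$ pa,

$$P_0 = \frac{1.00}{0.10 - 0.05} = 20.00,$$

and in this model both the dividend and the share price then grow at the rate $g$ each year.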
You really want to go on a backpacking trip to Europe when you finish university. Currently you have $1,500 in the bank. Bank interest rates are 8% pa, given as an APR compounding per month. If the holiday will cost $2,000, how long will it take for your bank account to reach that amount?
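For reference, the time for a lump sum to grow to a target under monthly compounding is found by solving $FV = PV(1 + r/12)^n$ for $n$; with the figures in this question (shown only as a sketch of the mechanics),

$$n = \frac{\ln(2{,}000/1{,}500)}{\ln(1 + 0.08/12)} \approx 43.3 \text{ months} \approx 3.6 \text{ years}.$$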
When using the dividend discount model to price a stock:
$$p_{0} = \frac{d_1}{r - g}$$
The growth rate of dividends (g):
A share just paid its semi-annual dividend of $10. The dividend is expected to grow at 2% every 6 months forever. This 2% growth rate is an effective 6 month rate. Therefore the next dividend will be $10.20 in six months. The required return of the stock is 10% pa, given as an effective annual rate.
What is the price of the share now?
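One standard way to handle the mismatched compounding periods (a sketch of the mechanics, not necessarily the listed answer choice): convert the effective annual rate to an effective 6-month rate and then apply the perpetuity-with-growth formula per 6-month period,

$$r_\text{6mo} = (1 + 0.10)^{1/2} - 1 \approx 0.04881, \qquad P_0 = \frac{10.20}{0.04881 - 0.02} \approx 354.$$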
Find Piano Bar's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013.
Piano Bar Income Statement for year ending 30th June 2013

| | $m |
|---|---|
| Sales | 310 |
| COGS | 185 |
| Operating expense | 20 |
| Depreciation | 15 |
| Interest expense | 10 |
| Income before tax | 80 |
| Tax at 30% | 24 |
| Net income | 56 |

Piano Bar Balance Sheet as at 30th June

| | 2013 ($m) | 2012 ($m) |
|---|---|---|
| Current assets | 240 | 230 |
| PPE at cost | 420 | 400 |
| Accumulated depreciation | 50 | 35 |
| PPE carrying amount | 370 | 365 |
| Total assets | 610 | 595 |
| Current liabilities | 180 | 190 |
| Non-current liabilities | 290 | 265 |
| Retained earnings | 90 | 90 |
| Contributed equity | 50 | 50 |
| Total L and OE | 610 | 595 |

Note: all figures are given in millions of dollars ($m).
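One common textbook decomposition of CFFA is operating cash flow less net capital spending less the change in net working capital; conventions for handling interest and tax differ, so treat the following only as a sketch under that assumption. With the figures above,

$$OCF = EBIT + \text{Depreciation} - \text{Tax} = (80 + 10) + 15 - 24 = 81,$$
$$\text{Net capital spending} = (370 - 365) + 15 = 20, \qquad \Delta NWC = (240 - 180) - (230 - 190) = 20,$$
$$CFFA = 81 - 20 - 20 = 41,$$

that is, about 41 million dollars under these assumptions.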
A 90-day $1 million Bank Accepted Bill (BAB) was bought for $990,000 and sold 30 days later for $996,000 (at t=30 days). What was the total return, capital return and income return over the 30 days it was held? Despite the fact that money market instruments such as bills are normally quoted with simple interest rates, please calculate your answers as compound interest rates, specifically, as effective 30-day rates, which is how the below answer choices are listed: $r_\text{total}$, $r_\text{capital}$, $r_\text{income}$.

Acquirer firm plans to launch a takeover of Target firm. The firms operate in different industries and the CEO's rationale for the merger is to increase diversification and thereby decrease risk. The deal is not expected to create any synergies. An 80% scrip and 20% cash offer will be made that pays the fair price for the target's shares. The cash will be paid out of the firms' cash holdings; no new debt or equity will be raised.

Firms Involved in the Takeover

| | Acquirer | Target |
|---|---|---|
| Assets ($m) | 6,000 | 700 |
| Debt ($m) | 4,800 | 400 |
| Share price ($) | 40 | 20 |
| Number of shares (m) | 30 | 15 |
Ignore transaction costs and fees. Assume that the firms' debt and equity are fairly priced, and that each firm's debt risk, yield and value remain constant. The acquisition is planned to occur immediately, so ignore the time value of money.
Calculate the merged firm's share price and total number of shares after the takeover has been completed.
An investor owns an empty block of land that has local government approval to be developed into a petrol station, car wash or car park. The council will only allow a single development so the projects are mutually exclusive.
All of the development projects have the same risk and the required return of each is 10% pa. Each project has an immediate cost and once construction is finished in one year the land and development will be sold. The table below shows the estimated costs payable now, expected sale prices in one year and the internal rates of return (IRRs).
Mutually Exclusive Projects

| Project | Cost now ($) | Sale price in one year ($) | IRR (% pa) |
|---|---|---|---|
| Petrol station | 9,000,000 | 11,000,000 | 22.22 |
| Car wash | 800,000 | 1,100,000 | 37.50 |
| Car park | 70,000 | 110,000 | 57.14 |
Which project should the investor accept?
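For mutually exclusive projects of the same risk, the usual textbook criterion is to compare NPVs at the required return rather than IRRs. As an illustration at $r = 10\%$ pa,

$$NPV_\text{petrol} = \frac{11{,}000{,}000}{1.1} - 9{,}000{,}000 = 1{,}000{,}000,$$
$$NPV_\text{wash} = \frac{1{,}100{,}000}{1.1} - 800{,}000 = 200{,}000, \qquad NPV_\text{park} = \frac{110{,}000}{1.1} - 70{,}000 = 30{,}000,$$

so ranking by IRR and ranking by NPV disagree here, which is the point of the question.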
A firm wishes to raise $50 million now. They will issue 7% pa semi-annual coupon bonds that will mature in 6 years and have a face value of $100 each. Bond yields are 5% pa, given as an APR compounding every 6 months, and the yield curve is flat.
How many bonds should the firm issue?
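A sketch of the standard fixed-coupon bond pricing behind this style of question: with semi-annual payments there are 12 periods, a periodic yield of 2.5% and a coupon of 3.50 per period, so

$$P_0 = 3.50 \times \frac{1 - (1.025)^{-12}}{0.025} + \frac{100}{(1.025)^{12}} \approx 110.26,$$
$$N \approx \frac{50{,}000{,}000}{110.26} \approx 453{,}000 \text{ bonds}.$$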
Question 954 option, at the money option
If a put option is at-the-money, then the spot price ($S_0$) is than, than or to the put option's strike price ($K_T$)? | 2021-04-16 22:40:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23291631042957306, "perplexity": 4004.8838497672255}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038092961.47/warc/CC-MAIN-20210416221552-20210417011552-00119.warc.gz"} |
https://jira.lsstcorp.org/browse/DM-7581 | # Use box annotations to indicate missing jobs
#### Details
• Type: Story
• Status: Done
• Resolution: Done
• Fix Version/s: None
• Component/s: None
• Labels:
None
• Story Points:
2.5
• Team:
SQuaRE
#### Activity
Angelo Fausti added a comment -
Box annotations provide a nice way to visualize failing jobs, the start and end times to draw the box annotations are obtained from the job status.
Angelo Fausti added a comment - - edited
Minimal working example:
from bokeh.io import curdoc
from datetime import datetime
import time
from bokeh.models import BoxAnnotation
from bokeh.plotting import figure

plot = figure(width=700, height=250, x_axis_type='datetime')

# Bokeh datetime axes expect timestamps in milliseconds
start = time.mktime(datetime(2016, 9, 1, 0, 0, 0, 0).timetuple()) * 1000
end = time.mktime(datetime(2016, 9, 10, 21, 0, 0, 0).timetuple()) * 1000

plot.line(x=[start, end], y=[1, 1])

# Shaded box annotation marking the interval with missing jobs
box = BoxAnnotation(left=start, right=end, fill_alpha=0.1, fill_color='red')
plot.add_layout(box)
curdoc().add_root(plot)
Angelo Fausti added a comment -
The result can be verified in my test environment: https://angelo-squash-squash.lsst.codes/dashboard
It draws a red rectangle each time a job is missing; you can follow that by looking at the CI IDs of each measurement. You can think about the visual result in production with lots of data.
Angelo Fausti added a comment - - edited
Added title and a second line description for the selected metric as part of this ticket:
https://angelo-squash-squash.lsst.codes/dashboard
#### People
Assignee:
Angelo Fausti
Reporter:
Angelo Fausti
Watchers:
Angelo Fausti | 2022-01-28 06:03:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49055883288383484, "perplexity": 10288.01005902008}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305420.54/warc/CC-MAIN-20220128043801-20220128073801-00045.warc.gz"} |
http://gdkl.puntoprato.it/pattern-multiplication-in-antenna-array.html | 11, 2012 A Beam-Switching Antenna Array With Shaped Radiation Patterns Han Wang, Zhijun Zhang, Senior Member, IEEE, and Zhenghe Feng, Fellow, IEEE Abstract—In this letter, a novel beam-switching array, which uses four patch elements to generate two anti-symmetric-shaped patterns, is. Because the number of the elements of antenna array in 60GHz increases, the beam patterns generated are more than common BF, and it causes long set-up time during beam patterns searching. , are also discussed. 4 - see online Help) Introduction According to the principle of pattern multiplication, the radiation pattern of an array of identical. Instead, the field radiated by each element must be evaluated separately for the observation directions rˆ defined in the global coordinate system. Wire Antennas 223 4. arrays with a small number of antenna elements e. Radiation Mechanisms of Linear Wire and Loop antennas, Aperture antennas, Reflector antennas, Microstrip antennas and Frequency independent antennas, Design considerations and applications. (b) find the efficiency of antenna if radation resistance is 72Ω and loss resistance is 8Ω [L4][CO1][6M] 10. Introduction to Antennas 2. Antenna Arrays Antennas with a given radiation pattern may be arranged in a pattern (line, circle, plane, etc. Radiation is the term used to represent the emission or reception of wave front at the antenna, specifying its strength. This theorem states that the combined pattern of N. 3 Pattern Multiplication Principle. Antenna array - a configuration of multiple antennas (elements) arranged to achieve a given radiation pattern. Thus we find that the radiation pattern of an array is the product of the function of the individual element with the array pattern function. Use the theory of receiving antennas vii. The classical, simplified approach to the synthesis, which is based on the pattern multiplication rule [1], can be successfully used only for large antenna arrays or when the antenna radiation. Serkan Aksoy - 2008 These lecture notes are heavily based on the book of Antenna Theory and Design by W. According to the pattern multiplication principle, the antenna array total electrical field can be expressed as: E TOT = E m × AF θ ϕ E3 In the case of URPA, the array factor equation is very similar to the ULA with the only difference that is designed by considering two dimensions [ 15 , 22 ]:. shown in Table. Time-Modulation in Array Antenna Synthesis for New Generation Communication Systems A modern antenna system should be able to maintain high quality communication links through a suitable modification of its operating conditions (i. BTL 4 Analyzing 2. This effect is often referred to as mutual coupling. The array pattern is a function of the location of the antennas in the array and their relative complex excitation amplitudes. Antenna Fundamentals Training, covers in some detail antennas, antenna theory, antenna characteristics, antenna specifications, antenna applications in wireless communications and military systems and other important key topics. 1 Polynomial Representation 220 5. Arthur David Snider, Ph. Antenna arrays are one of the key features of all modern communication and industrial systems employing RF, microwave, and mm-wave frequencies. capability of steering the antenna radiation pattern to track the transmit/receive antenna. Examples of use of antenna arrays. 3, March 2008 721. Linear Array. antennas”, IEEE Trans. 
4 µm spacing with a shifted bolometer to. The pattern multiplication theorem in array theory states that the far-field radiation pattern of an array is the product of the individual element pattern and the array factor. patternMultiply(array,frequency) plots the 3-D radiation pattern of the array object over a specified frequency. Figure 3 H-plane substrate-side power pattern for dual-dipole antenna array with bolometer centered Figure 4 H-plane substrate-side power pattern for dual-dipole antenna array with bolometer shifted to the left DOI 10. Array Factor and Pattern Multiplication. 5 and 6 are presenting the results of the radi- ation pattern of the linear array by using LMS (Fig. It covers types of driven array antennas including collinear antenna. 22, top, the total antenna pattern is for no grating lobes. Applications of this spe- cialized pattern-multiplication theorem to the analysis and design. In particular, we demonstrate that pattern multiplication (assuming the behavior of single antennas embedded in the array is the same as those same antennas by themselves) does not generate reliable estimates of SEFD. For example, AM broadcast radio antennas consisting of multiple mast radiators fed so as to create a specific radiation pattern are also called "phased arrays". 4GHz are plotted in Fig. The number of elements should be N = 8 and they are 180 mm spaced apart. Antenna arrays can be designed to perform beam steering and null steering. In addition the beam efficiency and the dipole multiplication patterns for different lengths were also. What are the field components of loop antenna? PART – B 1. Question 45. In this video, i have explained Pattern Multiplication of four Point Sources. We took 3 receivers as Rx-1, Rx-2, and Rx-3, whereas each receiver has virtual array of 75 receiving antennas with 0. • Design of antenna arrays: principle of pattern multiplication, broadside and endfire arrays, array synthesis, coupling effects and mutual impedance, parasitic elements, Yagi-Uda antenna • Design of aperture-type antennas: rectangular aperture, circular aperture, horn antenna, reflector antennas, microstrip patch antennas. Effect of Earth on the Radiation Pattern 65. - The HAARP antenna array is similar or identical to many other types of directive antenna types in use for both military and civilian applications including air | PowerPoint PPT presentation | free to view. Limitations of the pattern multiplication technique for uniformly spaced linear antenna arrays Abstract: Linear antenna arrays find wide applications in wireless communication systems. (2) All questions carry equal marks. A workflow of using electromagnetic field simulation for phased array design including the effects of mutual coupling will be presented. Design and analysis of high gain array antenna for wireless communication applications Sri J. The number, geometrical arrangement, and relative amplitudes and phases of the array elements depend on the angular pattern that must be achieved. The concept of the generalized radiation pattern of an element of a phased array is introduced and analyzed. The program was able to synthesize the pattern for isotropic elements, short dipoles, and half-wave dipoles in a planar array above a ground plane. The array elements can be any antenna, how-ever. 1 Polynomial Representation 220 5. control of the sar pattern within an interstitial microwave array through variation of antenna driving phase. 
• The total pattern, therefore, can be controlled via the single-element pattern or via the AF of an array can be obtained by. In this study a new approach to active-element pattern analysis, for large phased array antennas, was created using Floquet’s theorem. Symmetric Array. A linear antenna element, say along the z-direction, has an omnidirectional pattern with respect to the azimuthal angle φ. Unlike a single antenna whose radiation pattern is fixed, an antenna array's radiation pattern, called the array pattern, can be changed upon exciting its elements with different currents (both current magnitudes and current phases). It is a graph which shows the variation in actual field strength of the EM wave at all points which are at equal distance from the antenna. Uniform linear arrays: visibility windows, radiation pattern, beamwidth, phased beam, broadside and endfire arrays, electronic beam scanning, greating lobes, arrays of dipoles, beamforming networks (tree and bus). The radiation efficiency of a certain antenna is 95%. 101 Figure4. Show how the radiation pattern of a straight travelling wave wire antenna of arbitrary length can be obtained using the principle of pattern multiplication. The Yagi array, owing to the directive The effect of spacing of elements on directivity of a Yagi array is governed by similar considerations as in dipole arrays. The program, written in MATLAB, allows the user to study the two-way antenna pattern for different subarray architectures. Isotropic Characteristic of an Array of Four Antennas. ADD COMMENT • link. Array design suitable for mobile communications is an active area of the research, and many prototypes have been built, analysed, and tested. The pattern characteristics and other important antenna parameters. View Full Document. Freedom Electronics, LLC, is a remanufacturer and sells a portfolio of over 2300 electronics components used in fuel dispensers, retail point-of-sale systems, and automatic tank gauges. 1 Antenna Arrays Arrays of antennas are used to direct radiated power towards a desired angular sector. There are three basic types of driven array antennas viz. Broadside and end fire patterns, Balun. What is the principle of pattern multiplication? In case of isotropic antenna arrays the total field of the antenna array is simply the vector sum of those of individual radiating sources. For a given angle , determine the point of intersection of a radial line from the origin with the perimeter of the circle. patternMultiply(array,frequency) plots the 3-D radiation pattern of the array object over a specified frequency. Analyze electrically small antennas vi. On the other hand, in case of the propsoed. For large phased array antennas, it is likely that H and V copolar pattern shapes will be well matched for boresight directions (i. Explain how the pattern multiplication principle is used to compute the radiation pattern of antenna array. change the array radiation pattern 2. The pattern multiplication theorem in array theory states that the far-field radiation pattern of an array is the product of the individual element pattern and the array factor. 3 Di erent Array Excitations One of the biggest advantages of antenna arrays as that they allow many di erent array patterns to be synthesized. Linear transmitter power amplifier. The thesis also presents improvements to the modified MUSIC and APPR algorithms. 
change the matching characteristic of the antenna elements (change the input impedances) We will mainly study the first two effects in this chapter. This theorem states that the combined pattern of N. This situation changes dramatically for a higher number of array elements, e. 2 Total Field1. This design uses a combination of. It follows the classical approach of cor-recting the array output vector by multiplication by a decoupling matrix to achieve a more or less unper-turbed array response. Use sparameters to calculate the S-parameter coupling matrix of an array in Antenna Toolbox™. • The total pattern, therefore, can be controlled via the single-element pattern or via the AF of an array can be obtained by. Simulation of Smart Antenna Array Parameters To study the effect of number of antenna elements and the inter-spacing element on the pattern shaping, we consider a linear array of N. The former will allow the antenna array to direct its radiation beam into a pre-specified direction. In fact, as illustrated in Fig. Fixed-Site AntennasIntroduction. Link to Planar Phased Array Antenna Calculator has been recently reported as not working and has been temporarily delisted from our categories Check related resources in Antennas/Antenna Calculators This calculator computes the far zone radiation power pattern for a planar phased array antenna. In this video, i have explained Pattern Multiplication of four Point Sources. An 8x8 array antenna has been designed for a frequency band of 8-8. The A148-3S is the low priced quality leader for Packet, FM or even portable use. This chapter presents essential concepts in antenna arrays and beamforming. Horn linear and planar array patterns are plotted in the present work in both rectangular and polar forms. It only depend on the $\theta, \phi, \alpha, d$. The radiation characteristics of an array are given by the pattern multiplication principle i. gaindBEfar, you can compute the far-field gain of the antenna array. have derived an equation for compensated excitation voltages, which allows the radiation pattern of an antenna array to be predicted accurately using the principle of pattern multiplication. However, when an antenna gets deployed into an array, its radiation pattern is modified by its neighboring elements. , radiation pattern and frequency band) to avoid jammings and interference signals from external sources. Since the feeding network in this project is fixed on Butler Matrix, the radiation pattern of the antenna array then could be. Port, surface, and field analysis; embedded pattern, pattern multiplication Perform port, surface, and field (space around the antenna) analysis of antennas and arrays. The program was able to synthesize the pattern for isotropic elements, short dipoles, and half-wave dipoles in a planar array above a ground plane. The overall radiation pattern of the antenna array is equal to F(θ,φ )w(θi,φ i) which follows from the pattern multiplication rule [13]. 4 - see online Help) Introduction According to the principle of pattern multiplication, the radiation pattern of an array of identical. Expanding on this theme, the radiation pattern of an array can be generated from the multiplication of the element pattern and the array factor. Analysis of the huge-scale array antenna is important to estimate the radiation property of the array antenna, but a full-wave analysis requires too much computer memory and excessive CPU time. Array Factor and Pattern Multiplication. 
A comprehensive study of 2x2 planar phased array of rectangular rnicrostrip antenna on Ni-Co based ferrite substrate at 10 GHz is presented. One can simply understand the function and directivity of an antenna by. The feeding network may limit the. 1 Antenna Arrays Arrays of antennas are used to direct radiated power towards a desired angular sector. Use plane waves to excite an antenna to simulate a receiving antenna. 5km in diameter we have decided that the most effective method would be to design an antenna array. 3, March 2008 721. 1 TWO-DIMENSIONAL ARRAY ANTENNA PATTERNS (Note: some details slightly outdated for version 1. By multiplying the array factor equation to the antenna far-field gain variable, emw. Antenna theorems - Applicability and proofs for equivalence of directional characteristics, Loop antennas : Small loops - Field components, Comparison of far fields of small loop and short dipole, Concept of short magnetic dipole, D and R relations for small loops. The number, geometrical arrangement, and relative amplitudes and phases of the array elements depend on the angular pattern that must be achieved. However, if more than one stage of multiplication occurs between the antenna element and the output of the array (as was the case in the example of the foul' element correlation array) the cross-product terms will OCCUl' as low-frequency a-c terms and also as d-c terms. 23: Array Design Methods. , a binomial array) and two-dimensional arrays (1,2). Advantage: It helps to sketch the radiation pattern of array antennas rapidly from the simple product of element pattern and array pattern. On the other hand, in case of the propsoed. Array antenna radiation patterns are extremely useful in both airborne as well as ground based applications. For example, AM broadcast radio antennas consisting of multiple mast radiators fed so as to create a specific radiation pattern are also called "phased arrays". excitations of its antenna elements. It allows the user to set a target polar angle (theta) and azimuthal angle (phi) and will then adjust the linear phase across the antenna array to target that direction for the 0 th order emitted by the array. The directive gain of an isotropic or omnidirectional antenna (an antenna that pattern of a linear array of the three isotropic sources spaces apart. This brings us to the important principle of pattern multiplication which can be stated as : The total field pattern of an array of nonisotropic but similar elements is the product of the individual element pattern. The methods developed in this study are applied to the design of several five-element linear microstrip antenna arrays with comparisons made to full-wave electromagnetic simulations. 2 Chebyshev Array Synthesis 232 Exercises 240 CHAPTER 6 Special Antennas 242 Introduction 242. change the matching characteristic of the antenna elements (change the input impedances) We will mainly study the first two effects in this chapter. , we assume that mutual coupling effect among the elements of the array are neglected. Antennas and Propagation Slide 4 Chapter 4 General Array Assume we have N elements pattern of ith antenna Total pattern Identical antenna elements "Pattern Multiplication". In Figure 2b and c, the directivity patterns of the Yagi-Uda 2×2 array and 3×3 array (green and blue curve) and the patterns of the corresponding dipole arrays (black curves) are plotted. 16 HFSS Schematic of implementation of microstrip line fed patch antenna array 104 Figure4. Linear Array. 
In order to find the position of the defective element in the array, it is necessary to measure the degraded pattern of array having one or more faulty element(s). Classify antenna arrays. The array is designed on RT duroid substrate at 9. In Figure 2b and c, the directivity patterns of the Yagi-Uda 2×2 array and 3×3 array (green and blue curve) and the patterns of the corresponding dipole arrays (black curves) are plotted. The interconnection between elements, called the feed network, can provide fixed phase to each element or. An array antenna is a directive antenna made of individual elements. If the phase of the currents is equal to the physical spacing of the elements, then the array radiates from its end. Series-Fed Aperture-Coupled Microstrip Antennas and Arrays by Bojana Zivanovic A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy Department of Electrical Engineering College of Engineering University of South Florida Major Professor: Thomas M. The roundness of the pattern without. Yagi-Uda antenna, Micro strip patch array, Aperture array, Slotted wave guide array Used for very high gain applications, mostly when needs to control the radiation pattern Let us discuss the above-mentioned types of antennas in detail, in the coming chapters. For free materials of different engineering subjects use my android application named Engineering Funda with following. antenna arrays. Binomial array. - A free PowerPoint PPT presentation (displayed as a Flash slide show) on PowerShow. Method of pattern multiplication. An antenna array is a set of a combination of two or more antennas in order to achieve improved performance over a single antenna. Array antenna radiation patterns are extremely useful in both airborne as well as ground based applications. Use sparameters to calculate the S-parameter coupling matrix of an array in Antenna Toolbox™. Symmetric Array. (a) E-plane. Yagi-Uda antenna, Micro strip patch array, Aperture array, Slotted wave guide array Used for very high gain applications, mostly when needs to control the radiation pattern Let us discuss the above-mentioned types of antennas in detail, in the coming chapters. The talk begins with a brief history of phased array antennas followed by its basic. For a general non-linear array of sensors, (i. This is done in the IEEE 802. 11, 2012 A Beam-Switching Antenna Array With Shaped Radiation Patterns Han Wang, Zhijun Zhang, Senior Member, IEEE, and Zhenghe Feng, Fellow, IEEE Abstract—In this letter, a novel beam-switching array, which uses four patch elements to generate two anti-symmetric-shaped patterns, is. An array of dipoles can be fed to achieve an end-fire pattern. Typically, arrangements with uniform spacing between the array elements are applied to gain highly directive radiation patterns. According to system requirements, vertical pattern shape is a modified cosecant pattern. KEYWORDS: Antenna Array, Array Synthesis, Amplitude Distributions, Multiplication Pattern INTRODUCTION An antenna array is group of antennas connected and arranged in a regular structure to form a. Isotropic Linear Arrays 63. This property of fixed or reconfigurable array of an antenna can be achieved by using switched parasitic elements. To Plot the Radiation Pattern of a Directional Antenna. 
give a desired beam pattern The synthesis of the antenna array pattern is a procedure in which an array of antennas is exploited to achieve maximum reception in a specified direction by estimating the signal of arrival from a desired direction (in the presence of noise) while signals of the same. The principle of pattern multiplication states that "the radiation pattern of an array is the product of the pattern of the individual antenna with the array pattern of isotropic point sources each located at the phase centre of the individual source. Antenna Definition,Principles of Radiation, Basic antenna parameters, Retarded Vector Magnetic Potential, Radiation field from Current element. Analysis of the huge-scale array antenna is important to estimate the radiation property of the array antenna, but a full-wave analysis requires too much computer memory and excessive CPU time. 4, December 23, 2002. This effect is sometimes called pattern multiplication. Antenna and Antenna Array Modeling Do you need to chose which antenna or antenna array to use among many types, very diverse, infinite configurations? Do you need to estimate the impact of the antenna early in your development cycle, and you are not an antenna expert? Do you need to integrate your antenna in your wireless. The problem of antenna array synthesis for radiation pattern defined on a planar surface will be considered in this chapter. 6 Vertical Array D9. The off-diagonal terms capture the mutual coupling between the ports of the antenna. 2 Comparison between AEP and Pattern Multiplication Techniques The array pattern of dipole antenna array has been calculated using AEP and pattern multiplication method for comparison. Use plane waves to excite an antenna to simulate a receiving antenna. Array of microstrip antennas. Tokan and F. In Figure 2b and c, the directivity patterns of the Yagi-Uda 2×2 array and 3×3 array (green and blue curve) and the patterns of the corresponding dipole arrays (black curves) are plotted. Hence pattern multiplication by an array factor, commonly used for planar arrays, is no longer valid. Complete array pattern and pattern multiplication. 6) generated according to (1). Calculate gain, directivity, and solid angle of antenna beam patterns and use constructive and destructive interference and pattern multiplication to design beam patterns of array antennas (1,2) Calculate and plot array beam patterns in 3D using scientific computing tools ; B. The array factor of the antenna array under design, which is probably being worked for a difierent inter-element spacing, is later equated to the array factor of the already synthesized virtual array. Mutual Coupling Compensation in Non-Uniform Antenna Arrays using Inter-Element Spacing Restrictions F. The antenna arrays were fabricated at low cost, on a single layer of microstrip, with good agreement between measurement and simulation. A linear antenna element, say along the z-direction, has an omnidirectional pattern with respect to the azimuthal angle φ. Use sparameters to calculate the S-parameter coupling matrix of an array in Antenna Toolbox™. From the results obtained for the radiation pattern plotted as array factor shows that when the distance between the two consecutive antenna elements is equal to 0. The user interface of an antenna array simulation application with an 8×8 virtual array, electric field distribution, and 3D far-field radiation pattern view. 9m high from the ground were examined. 
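To make the pattern-multiplication principle concrete, here is a minimal Python sketch using NumPy (the element type, the 8-element count, and the half-wavelength spacing are arbitrary choices for illustration): it forms the normalized array factor of a uniform linear array and multiplies it by a half-wave-dipole element pattern for a collinear arrangement, which is exactly the element-pattern times array-factor product described above.

import numpy as np

def array_factor(theta, n, d_over_lambda, beta=0.0):
    """Normalized array factor of an n-element uniform linear array.

    theta is the angle from the array axis (radians), d_over_lambda the
    element spacing in wavelengths, beta the progressive phase shift.
    """
    psi = 2.0 * np.pi * d_over_lambda * np.cos(theta) + beta
    num = np.sin(n * psi / 2.0)
    den = n * np.sin(psi / 2.0)
    # At psi -> 2*pi*m both numerator and denominator vanish; the limit is 1.
    return np.abs(np.where(np.abs(den) < 1e-12, 1.0, num / den))

def halfwave_dipole(theta):
    """Normalized pattern of a half-wave dipole lying along the array axis."""
    sin_t = np.sin(theta)
    return np.where(np.abs(sin_t) < 1e-12, 0.0,
                    np.abs(np.cos(0.5 * np.pi * np.cos(theta)) / sin_t))

theta = np.linspace(0.001, np.pi - 0.001, 721)
af = array_factor(theta, n=8, d_over_lambda=0.5)   # broadside array: beta = 0
total = halfwave_dipole(theta) * af                # pattern multiplication
# To steer the main beam to an angle theta0, set beta = -2*pi*d_over_lambda*np.cos(theta0).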
By multiplying the array factor equation to the antenna far-field gain variable, emw. The array factors for each of the two array settings can be seen in Figure 6. The basis of the array theory is the pattern multiplication theorem. and Verification > Antenna and Array Analysis > array factor dipole element pattern pattern. Module-II. Horn linear and planar array patterns are plotted in the present work in both rectangular and polar forms. An antenna array is a set of a combination of two or more antennas in order to achieve improved performance over a single antenna. 101 Figure4. Second, it is shown how the NFFT can be combined with the Macro Basis Function method to express the embedded element patterns as a series of pattern multiplication problems, each term being related to a macro basis function. Antenna Arrays Antennas with a given radiation pattern may be arranged in a pattern (line, circle, plane, etc. An array of antenna elements is a spatially extended collection of N similar radiators or elements, where N is a countable number bigger than 1, and the term "similar radiators" means that all the elements have the same polar radiation patterns, orientated in the same direction in 3-d space. An element spacing. This shapes the radiation pattern and steers the beam of an antenna array to control the input signal and address angular coverage issues. 4D antenna arrays have been used to generate multiple beams [12, 23], and, more importantly, these beams are formed at different sidebands and point in different directions. The result demonstrates correct radiation characteristics of an antenna array system. Analysis of Rhombic antenna. Linear Antenna Array with N elements. This in turn results in mutual coupling. It approximates the pattern of a complicated array without making lengthy computations. Link to Planar Phased Array Antenna Calculator has been recently reported as not working and has been temporarily delisted from our categories Check related resources in Antennas/Antenna Calculators This calculator computes the far zone radiation power pattern for a planar phased array antenna. 17 Reflection curve (S11) of four element microstrip antenna array. ADD COMMENT • link. Those cells can be combined through a process called pattern multiplication, to form a larger cell or array of cells, increasing directivity or gain. (2) All questions carry equal marks. INTRODUCTION Vivaldi antenna is a planar antenna with a light weight, low profile, wide bandwidth, directional radiation pattern, and high gain [1-4] Vivaldi antenna can be applied for through-wall detection [5],. - This antenna produces a highly directional radiation pattern that is broadside or perpendicular to the plane of the array. Hence, by taking the field pattern of one element of an array (consider two stacked elements, i. they are equal to the antenna element factor multiplied by the array factor. element pattern and discrete-element aperture available to illumination taper, can ensure low sidelobes (FUKAO et al. Arthur David Snider, Ph. monopoles, small loop antennas. On the Antenna Gain Formula Alade Olusope Michael Department of Pure and Applied Physics Ladoke Akintola University of Technology P. Effective microwave breast phantom imaging system with an array of 16 antipodal antennas is designed where one antenna works as a transmitter and rest of the antenna works as a receiver in turn. 4, December 23, 2002. Mention the factors on which the resultant pattern of array depends. 
Chapter 5 is devoted to the study of antenna arrays. s =95 mm and the width of W. , n = 2, where one element has a 30° half-power beam width in the stacking plane) and multiplying by the various array factors corresponding to the spacings chosen, a series of patterns results as shown in Fig. Then, by the principle of pattern multiplication, the far field pattern of the helix is the product of the pattern of one turn and the pattern of an array of n isotropic point sources, where the spac ing, S, between sources is equal to the turn spacing. AV-848 Microwave Design This course presents advanced techniques applicable to the design of RF amplifiers and oscillators and emphasizes advanced theory and design techniques. to enroll in courses, follow best educators, interact with the community and track your progress. So far, only isotropic point sources have been considered as elements in the arrays. The thesis also presents improvements to the modified MUSIC and APPR algorithms. change the array radiation pattern 2. Total Radiation Pattern of Array of Known Antenna Element Radiation Pattern = Element Pattern x Array Factor The pattern multiplication principle neglects the effect of mutual coupling (mutual impedance) between elements, i. ∙ Antenna polarization should be considered to have constructive interference. Expression (1-8) states the following pattern multiplication principle: An array consist-ing of identical and identically oriented elements has a pattern, which can be expressed as the product of the element pattern and the array factor. the return loss, the pattern and the antenna gain. For transmit configurations, applying the gain tapers Between Feed Network and RF Link causes the array antenna pattern shape to change as the amplifiers start to compress, while applying the gain tapers Between RF Link and Antenna does not significantly modify the shape of the array antenna pattern, only the gain levels. 3 shows the normalized gain of the omni-directional polyhedron array with and without coupling compensation. The design of an array involves mainly first the. distribution and Radiation patterns of center-fed dipoles of length D, 3D/2 and 2 D. Uniform linear array. Introduction, Two-element array; Example problems, Pattern multiplication concept; N-element array, Uniform array , Array factor; Broad-side and end-fire arrays, Phased array; Directivity and pattern characteristic of linear uniform array; Non-uniform array, Binomial array; Dolph-Chebyshev array concept. Linear antenna array: uniformly excited equally spaced arrays. The array factors for each of the two array settings can be seen in Figure 6. What Is The Advantage Of Pattern Multiplication? Answer : Useful tool in designing antenna. Index Terms — Antenna Array, Beamforming, Radio Astronomy. Two-dimensional array of microstrip patch antennas. Antennas can be formed into arrays through use of elements or cells, each with a pattern. 1 2 Figure 1. 1 to each section of the array. It is a set of individual antennas used for transmitting and receiving radio waves. Step-by-step solution: Chapter: CH1 CH2 CH3 CH4 CH5 CH6 CH7 CH8 CH9 CH10 Problem: 1CQ 1E 1P 2CQ 2E 2P 3CQ 3E 3P 4CQ 4E 4P 5CQ 5E 5P 6CQ 6E 6P 7CQ 7E 7P 8CQ 8E 8P 9CQ 9E 9P 10CQ 10E 10P 11CQ 11E 11P 12CQ 12E 12P 13CQ 13E 13P 14CQ 14E 14P. The classic approach to finding active-element patterns uses a full array simulation that can become slow and produce patterns that are specific to certain. The roundness of the pattern without. 
2), the principle of pattern multiplication applies. Wire Antennas 223 4. Therefore, a great flexibility for tests and measurements is achieved. This paper investigates the beam steering technique using the active element pattern of dipole antenna array. the result for an array of isotropic radiatiors (i. 1 Pattern multiplication For arrays of non-isotropic but similar point sources (case a) of § 4. Given an antenna array of identical elements, the radiation pattern of the antenna array may be found according to the pattern multiplication theorem [4]: Array pattern = Array element pattern x Array factor where Array element pattern is the pattern of the individual. It requires including an influ- ence of the internal (by a supplying slots waveguide) and the external (through the open space) mutual coupling between radiating slots on a radiation pattern. Use the theory of receiving antennas vii. of pattern multiplication, the combined response will be that which would be obtained from a two-dimensional matrix antenna array composed of as many rows as there are elements in one arm of the cross and as many columns as there are elements in the other arm of the cross. Explain the. If the phase of the current on each element is equal, the array will have maximum radiation broadside to the array. Use of polynomial. Use plane waves to excite an antenna to simulate a receiving antenna. Use of method of images for antennas above ground. By replicating the antenna element along the x-ory-directions, the azimuthal symmetry is broken. Antenna Arrays Page 4 from ˇto ˇ, we trace out the projected array pattern inside the circle as follows. Antenna array - a configuration of multiple antennas (elements) arranged to achieve a given radiation pattern. The design of the slotted waveguide array antenna is a fairly complicated task. , commonly a group of antenna elements, called an array antenna, or simply array, is used. standard pattern multiplication method. Linear array - antenna elements arranged along a straight line. pattern of a center-fed dipole. However, we showed that, within the MBF approximation, the embedded element patterns, as well as any array pattern, can be computed as a finite series of pattern multiplication problems, which then enables the exploitation of the FFT, as well as the recuperation of MBF patterns computed in the course of the multipole approach. Principle of Pattern Multiplication123 Array with n-isotropic Point Sources of Equal Amplitude and Linear Spacing124 Broadside Array125 End-fire Array 128 Electronic Phased Array129 Effect of Earth on Vertical Patterns130 Comparison of Methods131 Dolph–Tchebyscheff or Chebyshev Array132 Tchebyscheff Polynomial132 Dolph Pattern Method of Obtaining Optimum Pattern Using Tchebyscheff Polynomial134. The array pattern is a function of the location of the antennas in the array and their relative complex excitation amplitudes. the wide-coverage array antenna with three dual-beams. Two-dimensional array of microstrip patch antennas. Log-periodic antennas and frequency independent antennas; Aperture-type antennas; Fourier transforms and the radiated field; Horn antennas; Reflector and lens antennas; Slot antennas and Babinet's principle; Microstrip antennas; Antenna arrays; Basic concept; Isotropic linear arrays; Pattern multiplication principle; Element mutual coupling. This chapter presents essential concepts in antenna arrays and beamforming. 
The program, written in MATLAB, allows the user to study the two-way antenna pattern for different subarray architectures. 2 Chebyshev Array Synthesis 232 Exercises 240 CHAPTER 6 Special Antennas 242 Introduction 242. The purpose of this research was to develop a program based on the principle of pattern multiplication to synthesize and access the two-way antenna pattern for DSAs. Antenna Arrays: electric Field due to 2 element arrays, 3 element Arrays; Pattern Multiplication; Uniform Linear Array: End fire and Broad side; Phased array. 4 - see online Help) Introduction According to the principle of pattern multiplication, the radiation pattern of an array of identical. Linear array - antenna elements arranged along a straight line. Element spacing and the relative amplitudes and phases of the element excitation determine the arrays radiative properties. Define broadside array. Examples of use of antenna arrays. The reverse is true. What Is The Advantage Of Pattern Multiplication? Answer : Useful tool in designing antenna. ∙ Antenna polarization should be considered to have constructive interference. The array elements can be any antenna, how-ever. It starts with the pattern multiplication principle and goes on to explain various pattern prop-erties using a two-element array as an example. The field pattern of an array of non-isotropic but similar point sources is the multiplication of the pattern of the individual source and the pattern of an array of isotropic point sources and having the same locations, relative amplitudes and phases as the non-isotropic point sources. The radiation pattern of an antenna array depends not only on their geometrical arrangement, but also on the phase of the signal that each antenna is fed with. excitations of its antenna elements. use Schelkunoff's method. This gives us a freedom to choose (or design) a certain desired array. Define end fire array. Recall that what is actually radiated by an antenna array is the product (pattern multiplication) of the Array Factor and the radiation pattern of the antennas that make up the array. ----- sales arenaV Early EPA investigations into downward radiation from FM and TV arrays prompted a sponsored program of radiation pattern measurement of selected single element FM types. Concentric ring array pattern To be able to steer the beam, without mechanically steering an antenna that would be around 2. Radiation is the term used to represent the emission or reception of wave front at the antenna, specifying its strength. the beam width. View Notes - Antenna-ch4-2 from ELECTRICAL 321 at Vietnam National University, Ho Chi Minh City. The antenna array was simulated using full-wave simulations for S parameters and the individual antenna patterns computed were subjected to pattern post-processing to test the beam-switching scheme. My core experience is on acoustic micro electro mechanical systems (MEMS), with emphasis on monolithic and heterogeneous integration of RF MEMS, ultrasonic MEMS, CMOS, RFSOI, and other integrated circuit (IC) technologies. The array pattern is also simulated by the principle of pattern multiplication. Two-dimensional array of microstrip patch antennas. Active impedance. Antenna arrays: array factor, uniformly excited equally spaced arrays, pattern multiplication principles, nonuniformly excited arrays, phased arrays. principle of pattern multiplication, the overall radiation pattern is found as the product of the individual element radiation pattern with its array factor [8]. 
Evaluate several types of antennas including wire, microstrip, and aperture antennas ix. When we say array-factor, we are talking about an array of isotropic elements (since, element pattern is already separated). Also, the ideal pattern of the tetrahedron array is computed by the pattern multiplication principle, while the actual one is derived by FEKO. Mutual coupling in antenna arrays. 5 N-Element Linear Array: Uniform Amplitude and Spacing Two-element, N. | 2019-12-12 19:24:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2705092132091522, "perplexity": 1584.0127965855181}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540545146.75/warc/CC-MAIN-20191212181310-20191212205310-00451.warc.gz"} |
https://lavelle.chem.ucla.edu/forum/viewtopic.php?p=110095 | ## Numbers to memorize
$E=hv$
Soyoung Park 1H
Posts: 30
Joined: Sat Sep 29, 2018 12:15 am
### Numbers to memorize
Do we have to memorize numbers such as c = 3.00x10^8 m/s and Planck's constant (h = 6.63x10^-34 J·s) for the test?
Brandon Sanchez 3J
Posts: 30
Joined: Wed Oct 03, 2018 12:16 am
### Re: Numbers to memorize
I think they give us a sheet that has all the constants and equations along with the test so no.
Jacqueline Duong 1H
Posts: 61
Joined: Fri Sep 28, 2018 12:28 am
### Re: Numbers to memorize
Yeah, before the test we're given an equation sheet that has those numbers. So luckily, we don't need to memorize them!
Ariana Morales
Posts: 29
Joined: Fri Sep 28, 2018 12:15 am
### Numbers to memorize
Which equations should we know and understand before the test?
Rami_Z_AbuQubo_2K
Posts: 89
Joined: Thu Jun 07, 2018 3:00 am
### Re: Numbers to memorize
Most of the equations will be given on the exam sheet during the exam which you can find on the website if you want to make sure. Rather than memorize them, just make sure you understand each part and know how to manipulate them because that is much more important for exam 2.
Jordan Y4D
Posts: 25
Joined: Fri Sep 28, 2018 12:27 am
### Re: Numbers to memorize
Most of the equations and numbers should be on the formula sheet. I do remember Dr. Lavelle saying that the Ek =(½)mv^2 would not be on it, but I believe all other equations and numbers should be on there.
Mikka Hoffman 1C
Posts: 62
Joined: Fri Sep 28, 2018 12:23 am
Been upvoted: 2 times
### Re: Numbers to memorize
I think this is the sheet that we will be given with all the constants and equations: https://lavelle.chem.ucla.edu/wp-conten ... ations.pdf
405021651
Posts: 46
Joined: Tue Nov 14, 2017 3:03 am
### Re: Numbers to memorize
We will be given a formula sheet that will include those numbers!!
Michael Torres 4I
Posts: 92
Joined: Thu May 10, 2018 3:00 am
Been upvoted: 1 time
### Re: Numbers to memorize
I recommend memorizing whichever constants and formulas you can, but don't stress about it too much. I can recall hearing that a formula sheet would be provided, but you should still play around with all the formulas just to make sure you understand everything before the exam.
Hilda Sauceda 3C
Posts: 76
Joined: Fri Sep 28, 2018 12:24 am
### Re: Numbers to memorize
Constants and equations will always be provided at the beginning of the test.
Maxwell S 3E
Posts: 15
Joined: Fri Sep 28, 2018 12:24 am
### Re: Numbers to memorize
No, those will be part of a formula sheet added to the test. A few equations, or at least the conversions from a given equation to one that is not given, may be more prudent to remember. Almost all constants will be provided.
allisoncarr1i
Posts: 60
Joined: Fri Sep 28, 2018 12:15 am
### Re: Numbers to memorize
They are given on the exams but it might be helpful to familiarize yourself with some of the more frequently used constants just as a time saving technique.
Jordan Lo 2A
Posts: 85
Joined: Fri Sep 28, 2018 12:25 am
### Re: Numbers to memorize
Do you know how many significant figures we should use when solving problems, or should we just match the given information?
allisoncarr1i wrote:They are given on the exams but it might be helpful to familiarize yourself with some of the more frequently used constants just as a time saving technique.
Desiree1G
Posts: 62
Joined: Fri Sep 28, 2018 12:16 am
Been upvoted: 2 times
### Re: Numbers to memorize
I think it is possible they may provide numbers we do not know on the test; however, once we do practice problems I think you will memorize them. Currently, the speed of light and Planck's constant are ones we will gradually remember.
Posts: 30
Joined: Fri Sep 28, 2018 12:24 am
### Re: Numbers to memorize
I agree with the replies most of the equations and constants are given on the sheet. It was recommended that we memorize the constants and the basic equations to go through the test faster.
harperlacroix1a
Posts: 43
Joined: Fri Sep 28, 2018 12:19 am
### Re: Numbers to memorize
Our first test had a formula sheet on the front. I assume that is the same sheet we will be given all quarter
Dayna Pham 1I
Posts: 98
Joined: Fri Sep 28, 2018 12:16 am
Been upvoted: 3 times
### Re: Numbers to memorize
Hello!
I agree that there will be a formula sheet in the front. Common equations, such as E=hv and c=lambda•v, will be given. However, rearrangements of common equations that are needed for some problems, such as E=hc/lambda, will have to be worked out by the test taker. All constants should be on the formula sheet though.
Hope this helped!
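As a small worked example of how those constants combine (the 500 nm wavelength is just an assumed value for illustration):

$$E = \frac{hc}{\lambda} = \frac{(6.626 \times 10^{-34}\ \text{J·s})(3.00 \times 10^{8}\ \text{m/s})}{500 \times 10^{-9}\ \text{m}} \approx 3.98 \times 10^{-19}\ \text{J}$$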
Cienna Henry 1J
Posts: 61
Joined: Fri Sep 28, 2018 12:15 am
Been upvoted: 1 time
### Re: Numbers to memorize
jordan_lo_3k wrote:Do you know how many significant figures we should use when solving problems, or should we just match the given information?
allisoncarr1i wrote:They are given on the exams but it might be helpful to familiarize yourself with some of the more frequently used constants just as a time saving technique.
Match the given information
Posts: 32
Joined: Fri Sep 28, 2018 12:23 am
### Re: Numbers to memorize
While these numbers will be given on a formula sheet, I have learned that learning the numbers and when to use them gives me a greater understanding of the concept. Plus, by the time I know the numbers, I have become familiar with the process used in the types of problems.
josephperez_2C
Posts: 70
Joined: Wed Nov 15, 2017 3:04 am
### Re: Numbers to memorize
The formula sheet that was given on the first test is the same one that will be given on the second test and third test as well.
Jaedyn_Birchmier3F
Posts: 30
Joined: Fri Sep 28, 2018 12:26 am
### Re: Numbers to memorize
No, constants do not need to be memorized. The equation sheet with the constants given during the test can be found on Dr. Lavelle's class website.
ArielKim3C
Posts: 30
Joined: Fri Sep 28, 2018 12:25 am
### Re: Numbers to memorize
Constants like the examples you provided do not need to be memorized for they are given on an equation sheet with the test, however my TA recommended that we do memorize them just so that we don't have to constantly be flipping back to the equation sheet during the test.
Jack Hewitt 2H
Posts: 67
Joined: Fri Sep 28, 2018 12:27 am
### Re: Numbers to memorize
Soyoung Park 1H wrote:Do we have to memorize numbers such as c=3.00x10^8 m/s and planck's constant (h=6.63x10^-34Js) for the test?
No, you do not; we are given a constants and equations sheet. However, it may be helpful to memorize them to some extent so you can do problems faster.
205458163
### Re: Numbers to memorize
No, we don't. They will give them to us when we have a test.
http://math.stackexchange.com/questions/216669/seminorms-in-locally-convex-spaces | # Seminorms in locally convex spaces
This is a theorem in Rudin's functional analysis:
Theorem. Suppose $\mathcal{P}$ is a separating family of seminorms on a real vector space $X$. Associate to each $p\in \mathcal{P}$ and to each $n\in \mathbb{N}$ the set $$V(p,n)=\{x\in X: p(x)<\frac{1}{n}\}.$$ Let $\mathcal{B}$ be the collection of all finite intersections of the sets $V(p,n)$. Then $\mathcal{B}$ is a convex balanced local base for a topology $\tau$ on X, which turns $X$ into locally convex space such that every $p\in \mathcal{P}$ is continuous.
Rudin declared that $A\subseteq X$ is open iff $A$ is a union of translates of members of $\mathcal{B}$. Does this mean that $$\tau = \{A \subseteq X: \forall x \in A, \exists y\in X \mbox{ and } B \in \mathcal{B} \mbox{ with }x \in y+B \subseteq A\}?$$ If that is so, then how can we prove that $\tau$ is closed under finite intersection?
$\mathcal{B}$ forms a base for a system of neighbourhoods of 0. The set of translates of $\mathcal{B}$ form a base for a topology $\tau$ on $X$. Does this help? – Vobo Oct 19 '12 at 7:53
Rudin is right, you are too. Yes, your assumption on the definition of $\tau$ is correct, but in addition you can show $$\tau = \{ A \subseteq X : \forall x \in A \exists B \in \mathcal{B} \mbox{ with } x+B \subseteq A\}$$ which is obviously a topology.
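For completeness, here is a sketch (my own wording, not Rudin's) of why the two descriptions agree, which makes closure under finite intersections immediate. Suppose $x\in y+B\subseteq A$ with $B=\bigcap_{i=1}^m V(p_i,n_i)$. Since $p_i(x-y)<\frac{1}{n_i}$ for each $i$, pick integers $m_i$ with $$\frac{1}{m_i}<\frac{1}{n_i}-p_i(x-y),$$ and set $B'=\bigcap_{i=1}^m V(p_i,m_i)\in\mathcal{B}$. For $z\in B'$ the triangle inequality gives $p_i(x+z-y)\le p_i(x-y)+p_i(z)<\frac{1}{n_i}$, so $x+B'\subseteq y+B\subseteq A$. Hence every set open in the first sense is open in the second sense; the converse is trivial (take $y=x$). With the second description, if $x\in A_1\cap A_2$ and $x+B_1\subseteq A_1$, $x+B_2\subseteq A_2$, then $B_1\cap B_2\in\mathcal{B}$ and $x+(B_1\cap B_2)\subseteq A_1\cap A_2$, so $\tau$ is closed under finite intersections.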
I prefer the latter $\tau$. I have no problem showing it is a topology. Many thanks. – juniven Oct 19 '12 at 22:11
https://repository.upenn.edu/dissertations/AAI9321357/ | # A performance analysis of sparse neural associative memory
#### Abstract
According to one of the folk tenets, neural associative memories are robust, i.e. computation in them is not substantially affected by damage to network components, whereas the other tenet says that dense interconnectivity is a sine qua non for efficient usage of resources. In this thesis we analyse the validity of these folk tenets. We show that the second tenet is invalid, at least for the case of recurrent networks used as associative memory. We show that a special kind of sparse recurrent architecture, which we call the block architecture, where neurons are partitioned into a number of fully interconnected blocks, makes the second tenet invalid. We prove that when the size of the blocks is $\Omega(\log n)$ we can construct codes of exponential size for which the network can correct errors from a $\rho n$ ball (for some $\rho < 1/8$) around every codeword with probability almost 1. We show, however, that there are other codes for which the performance of the network deteriorates. Next we examine the validity of the first folk tenet. We have seen in the previous paragraph that there are codes for which the performance of the block architecture deteriorates. We show this deterioration is not substantial provided the size of the blocks is $\Omega(\log n)$. We investigate the truthfulness of this result for another architecture which we call the randomly sparsed architecture. It is our conjecture that for random sparsity, too, we can have error correction from an arbitrary $\rho n$ ball ($\rho < 1/2$) provided the degree of the interconnection graph is $\Omega(\log n)$. We investigate the effect of sparsity in one non-neural paradigm of associative memory and obtain the same behaviour of the performance. We summarily conclude that sparse recurrent networks (associative memories in general) have good performance provided the interconnection graph has degree $\Omega(\log n)$, and sometimes sparsity can become a boon, i.e., taking advantage of sparsity we can construct smart codes to obtain an exponential increase in the efficiency of usage of a neuron.
#### Subject Area
Electrical engineering
#### Recommended Citation
Biswas, Sanjay, "A performance analysis of sparse neural associative memory" (1993). Dissertations available from ProQuest. AAI9321357.
https://repository.upenn.edu/dissertations/AAI9321357
https://infoscience.epfl.ch/record/64476?ln=en | ## From a Static Impossibility to an Adaptive Lower Bound: the Complexity of Early Deciding Set Agreement
Published in: Proceedings of the 37th ACM Symposium on Theory of Computing (STOC'05), 714–722
Year: 2005
https://www.ias.ac.in/listing/bibliography/jgen/PRABIR_K._BHATTACHARYYA | • PRABIR K. BHATTACHARYYA
Articles written in Journal of Genetics
• Identification and analysis of low light tolerant rice genotypes in field conditions and their SSR-based diversity in various abiotic stress tolerant lines
The yield potential of kharif rice is not completely used even under a well-irrigated agro-ecosystem, mainly due to low irradiance from overcast cloud throughout the growing season in eastern India. We observed more than 50% yield reduction compared to the performance of 100 high-yield genotypes for three consecutive years both under open and 30–35% reduced light intensity, mainly by 34%, 25% and 12% reductions of panicle number, grains per panicle and test weight. As per the analysis of variance, genotypic variance explained 39% of the total yield variation under shade with 58% heritability. Overall, the maintenance of an equal panicle number per plant in both open and shade has the highest association with shade tolerance. Purnendu, Sashi and Pantdhan19 showed less than 28% yield reduction by maintenance of, or even increases in, grain numbers under shade and test weight. On the other hand, maintenance of an equal number of panicles under both situations was the key to the tolerance of Bhasamanik, Sasarang, Rudra and Swarnaprabha. As compared to open, we noticed an improvement of chlorophyll a and b under shade but a poor correlation with the shade tolerance index. Comparing the net photosynthesis rate (Pn) in eight genotypes, we found the best tolerant line ranked last with the least Pn at low light intensity (<400 $\mu$mol m$^{-2}$ s$^{-1}$). We also identified diverse parental combinations between newly identified shade tolerant and abiotic stress tolerant high-yielding rice lines following diversity analysis using 54 simple-sequence repeats. Thus, the selected tolerant lines from a large set of genotypes with different adjustment abilities to keep up high yield under low light intensity can be used for physiological and molecular analysis as well as pyramiding of traits.
• Yield-enhancing SPIKE allele from the aus-subtype indica rice and its allele specific codominant marker
Improving spikelet number without limiting panicle number is an important strategy to increase rice productivity. In this study, a spikelet-number-enhancing SPIKE allele was identified from the aus subtype indica rice, cv. Bhutmuri, which has an identical japonica-like corresponding sequence including a retrotransposon sequence, usually absent in indica genotypes like IR64. An allele-specific single-tube PCR-based codominant marker targeting an A/G single-nucleotide polymorphism (SNP) at the 3′UTR was identified for easier genotyping. The yield enhancing ability of the Bhutmuri-SPIKE allele carrying RILs and NILs over the IR64-SPIKE allele carrying lines was due to an increased number of filled grains per panicle. More than three times higher abundance of SPIKE transcripts was observed in Bhutmuri and NILs carrying this allele compared with IR64 and its allele carrying NILs. A higher rate of photosynthesis at more than 900 $\mu$mol m$^{-2}$ s$^{-1}$ light intensity and more than six small vascular bundles between the two large vascular bundles in the flag leaves of Bhutmuri and its allele carrying NILs were also observed. The identified SPIKE allele and the marker associated with it will be useful for increasing the productivity of rice by marker-assisted breeding.
https://stats.stackexchange.com/questions/191377/granger-causality-and-non-linear-regression/191753 | # Granger causality and non-linear regression
I’m new to Granger Causality concept. I know that the “Granger causality” is a statistical concept of causality that is based on prediction. According to Granger causality, if a time series X "Granger-causes" (or "G-causes") a time series Y, then past values of X should contain information that helps predict Y above and beyond the information contained in past values of Y alone.
Suppose we have two time series;
X={1,2,3,4.5,5,6,7.2}; and Y={2,4,5,6.5,7.5,9,13}.
The table below shows the samples of X and Y over time:

    t   1   2   3   4     5     6   7
    X   1   2   3   4.5   5     6   7.2
    Y   2   4   5   6.5   7.5   9   13
I would like to estimate the causality (or causality ratio) using non-linear regression model. Can anyone helps me to find if X "Granger-causes" Y using non-linear regression.
• Can you explain why you prefer a nonlinear model over a standard VAR? – Christoph Hanck Jan 19 '16 at 13:18
• In reality, of course, many causal relationships are more or less nonlinear, raising some doubts as to the applicability and usefulness of purely linear methods. So, I'm intended to estimate the causality ratio and feedback ratio using 10th order regression model, namely k = 10. – Omar14 Jan 19 '16 at 13:35
• With seven observations? – Christoph Hanck Jan 19 '16 at 13:38
• That's slightly off-topic, but by "10th order regression" do you mean including regressors $x,x^2,\dotsc,x^{10}$? That is asking for trouble unless there is a good subject-matter motivation for such high powers. @ChristophHanck, would there be any trouble carrying the notion of Granger causality over to models other than VAR? Say, we just stick to the basic principle that a model allows for a contribution of lagged $x$ towards predicting $y$ (besides lagged $y$'s own contribution), and that is testable. If $x$'s contribution is rejected, Granger causality is rejected. What about that? – Richard Hardy Jan 19 '16 at 20:33
• Why I said "asking for trouble": because the estimation variance and model sensitivity will likely be very high and you will be very likely to pick up noise in place of signal when estimating the model. – Richard Hardy Jan 19 '16 at 20:39
A non-linear Granger causality test was implemented by Diks and Panchenko (2006). The code can be found here and it is implemented in C. The test works as follows:
Suppose we want to infer about the causality between two variables $X$ and $Y$ using $q$ and $p$ lags of those variables, respectively. Consider the vectors $X_t^q = (X_{t-q+1}, \cdots, X_t)$ and $Y_t^p = (Y_{t-p+1}, \cdots, Y_t)$, with $q, p \geq 1$. The null hypothesis that $X_t^q$ does not contain any additional information about $Y_{t+1}$ is expressed by
$$H_0 : Y_{t+1}|(X_t^q;Y_t^p) \sim Y_{t+1}|Y_t^p$$
This null hypothesis is a statement about the invariant distribution of the vector of random variables $W_t = (X_t^q, Y_t^p, Z_t)$, where $Z_t=Y_{t+1}$. If we drop the time indexes, the joint probability density function $f_{X,Y,Z}(x,y,z)$ and its marginals must satisfy the following relationship:
$$\frac{f_{X,Y,Z}(x,y,z)}{f_Y(y)} = \frac{f_{X,Y}(x,y)}{f_Y(y)} \cdot \frac{f_{Y,Z}(y,z)}{f_Y(y)}$$
for each vector $(x,y,z)$ in the support of $(X,Y,Z)$. Diks and Panchenko (2006) show that, for a proper choice of weight function, $g(x,y,z)=f_Y^2(y)$, this is equivalent to
\begin{align} q = E[f_{X,Y,Z}(X,Y,Z)f_Y(Y) - f_{X,Y}(X,Y)f_{Y,Z}(Y,Z)]. \end{align}
They proposed the following estimator for $q$:
\begin{align} T_n(\varepsilon) = \frac{(n-1)}{n(n-2)} \sum_i (\hat{f}_{X,Y,Z}(X_i,Y_i,Z_i) \hat{f}_Y(Y_i) - \hat{f}_{X,Y}(X_i,Y_i) \hat{f}_{Y,Z}(Y_i,Z_i)) \end{align}
where $n$ is the sample size, and $\hat{f}_W$ is a local density estimator of a $d_W$-variate random vector $W$ at $W_i$ based on indicator functions $I_{ij}^W = I(\|W_i - W_j\| < \varepsilon)$, denoted by
\begin{align} \hat{f}_W(W_i) = \frac{(2 \varepsilon)^{-d_W}}{n-1} \sum_{j,j \neq i} I_{ij}^W. \end{align}
In the case of bivariate causality, the test is consistent if the bandwidth $\varepsilon$ is given by $\varepsilon_n = Cn^{-\beta}$, for any positive constant $C$ and $\beta \in (\frac{1}{4}, \frac{1}{3})$. The test statistic is asymptotically normally distributed in the absence of dependence between the vectors $W_i$. For the choice of the bandwidth, Diks and Panchenko (2006) suggest $\varepsilon_n = \max(Cn^{-2/7}, 1.5)$, where $C$ can be calculated based on the ARCH coefficient of the series.
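To make the estimator concrete, here is a minimal Python sketch of $T_n(\varepsilon)$ for $q=p=1$. This is my own simplification, not the authors' C implementation; a complete test would also need their asymptotic variance estimator to standardize the statistic, and the function names are mine.

```python
import numpy as np

def local_density(W, eps):
    """Indicator-kernel local density estimates f_hat_W(W_i), one per row of W."""
    n, d = W.shape
    dist = np.max(np.abs(W[:, None, :] - W[None, :, :]), axis=2)  # sup-norm distances
    I = (dist < eps).astype(float)
    np.fill_diagonal(I, 0.0)                                      # exclude j = i
    return (2.0 * eps) ** (-d) * I.sum(axis=1) / (n - 1)

def diks_panchenko_tn(x, y, eps):
    """T_n(eps) with lags q = p = 1, i.e. W_i = (X_i, Y_i, Z_i) and Z_i = Y_{i+1}."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    X, Y, Z = x[:-1], y[:-1], y[1:]
    n = len(Z)
    f_xyz = local_density(np.column_stack([X, Y, Z]), eps)
    f_y   = local_density(Y.reshape(-1, 1), eps)
    f_xy  = local_density(np.column_stack([X, Y]), eps)
    f_yz  = local_density(np.column_stack([Y, Z]), eps)
    return (n - 1) / (n * (n - 2)) * np.sum(f_xyz * f_y - f_xy * f_yz)

X = [1, 2, 3, 4.5, 5, 6, 7.2]
Y = [2, 4, 5, 6.5, 7.5, 9, 13]
print(diks_panchenko_tn(X, Y, eps=1.5))   # with 7 observations this is purely illustrative
```

Keep in mind that the asymptotic normal approximation and the bandwidth rule above are meaningless for a series of seven points; the sketch only shows how the pieces of the formula fit together.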
There are other tests of non-linear Granger causality such as in Hiemstra and Jones (1994), but this test in particular suffers from lack of power and over-rejection problems, as stated by Diks and Panchenko here.
As pointed out by @RichardHardy, you should be careful about using local density estimation in small samples. Since Diks and Panchenko showed that in samples smaller than 500 observations their test may under-reject, it would be wise to make further investigations in case the test does not reject the null hypothesis.
# References
Diks, C., & Panchenko, V. (2006). A new statistic and practical guidelines for nonparametric Granger causality testing. Journal of Economic Dynamics and Control, 30(9–10), 1647–1669.
Hiemstra, C., & Jones, J. D. (1994). Testing for Linear and Nonlinear Granger Causality in the Stock Price- Volume Relation. The Journal of Finance, 49(5), 1639–1664.
• Since the test involves density evaluation, I presume it is suited for relatively large samples. Would it make sense to use it for small samples as well? (The OP has samples of sizes between 15 and 100.) Also, what do you mean by the ARCH coefficient? Is it the $\alpha_1$ (a.k.a. "ARCH") coefficient from an ARCH(1) model? – Richard Hardy Jan 21 '16 at 17:21
Exactly, it is the $\alpha_1$ coefficient. Since you have two series, you can estimate two ARCHs and take the mean of the alphas as a proxy for C. For series smaller than 500 obs the test may under-reject. There are some simulations in the Diks and Panchenko paper. – Regis A. Ely Jan 21 '16 at 17:41
Transferring the notion of Granger causality from VAR to other models does not seem to have intrinsic contradictions. The basic principle when testing for Granger non-causality is to build a model that allows for a contribution of lagged $x$ towards predicting $y$ (besides lagged $y$'s own contribution), and testing the null hypothesis that the contribution is zero (by checking its statistical significance). If $x$'s contribution is too large to be written off to pure chance (is found to be statistically significantly different from zero), Granger non-causality is rejected; otherwise it is not.
The answer by @RegisA.Ely provides a very comprehensive way of testing, and I appreciate it. However, I have some reservations over how effective it might be when applied in small samples. My approach is more general and therefore should be applicable in small samples, too. That is, for a small sample, a "small" model would be used to avoid estimating complicated structures that need large sample size for decent estimation precision.
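If one wants a concrete version of this principle, a small sketch under my own assumptions could look as follows (the function names and the choice of squared lags of $x$ as the "nonlinear" terms are mine, and statsmodels is assumed to be available):

```python
import numpy as np
import statsmodels.api as sm

def granger_f_test(x, y, p=1, q=1, nonlinear=False):
    """F-test of H0: lagged x adds nothing to predicting y beyond lagged y.

    With nonlinear=True, squared lags of x are added as a crude nonlinear term."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    m = max(p, q)
    Y = y[m:]
    ylags = np.column_stack([y[m - i: -i] for i in range(1, p + 1)])
    xlags = np.column_stack([x[m - i: -i] for i in range(1, q + 1)])
    if nonlinear:
        xlags = np.column_stack([xlags, xlags ** 2])
    restricted   = sm.OLS(Y, sm.add_constant(ylags)).fit()
    unrestricted = sm.OLS(Y, sm.add_constant(np.column_stack([ylags, xlags]))).fit()
    fstat, pval, _ = unrestricted.compare_f_test(restricted)
    return fstat, pval

X = [1, 2, 3, 4.5, 5, 6, 7.2]
Y = [2, 4, 5, 6.5, 7.5, 9, 13]
print(granger_f_test(X, Y, p=1, q=1))  # seven observations: illustrative only
```

Rejecting the F-test is then interpreted as rejecting Granger non-causality within that particular (possibly nonlinear) model, which is exactly the caveat discussed above: the conclusion is only as good as the chosen functional form and the sample size allow.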
I agree, but the problem of the test in small samples is under-rejection, so I would say use the Diks and Panchenko test. If the test is significant, there is probably a nonlinear relationship. If the test is not significant, you need further investigation. – Regis A. Ely Jan 21 '16 at 18:40
@RegisA.Ely, OK, I did not intend to pick on Diks and Panchenko too much. I am just sceptical about local density estimation in small samples (and especially in more than one dimension) in general. – Richard Hardy Jan 21 '16 at 18:43
https://www.degruyter.com/view/j/snde.2016.20.issue-1/snde-2013-0134/snde-2013-0134.xml | Show Summary Details
More options …
# Studies in Nonlinear Dynamics & Econometrics
Ed. by Mizrach, Bruce
Online ISSN: 1558-3708
Volume 20, Issue 1
# Are US real house prices stationary? New evidence from univariate and panel data
Jing Zhang, Robert de Jong and Donald Haurin
Published Online: 2015-06-05 | DOI: https://doi.org/10.1515/snde-2013-0134
## Abstract
Many papers in the housing literature treat the intertemporal evolution of the logarithm of US real house prices as a unit root process. They also study the cointegration relationship among the logarithm of real house prices and fundamental economic variables such as income and they apply an error correction specification for modeling and forecasting real house prices. This paper argues that the logarithm of US real house price is not a unit root process. Instead, the evidence from a 120-year national dataset and metro area level and state level panel data sets supports the notion that US house prices are trend stationary. One result of this conclusion is that the validity of analyses of US house prices based on cointegration and error correction models needs to be reconsidered.
This article offers supplementary material which is provided at the end of the article.
## 1 Introduction
Understanding real house price dynamics is an important issue because the housing market is an important economic sector. Many papers in the housing literature apply cointegration analysis and error correction specifications for modeling US real house prices (Abraham and Hendershott 1996; Malpezzi 1999; Capozza, Hendershott, and Mack 2004; Gallin 2006; Mikhed and Zemcik 2009; Holly, Pesaran, and Yamagata 2010). For cointegration and error correction analysis to be valid, real house price should be a unit root process.1
In a regression equation the stationarity of the variables ensures that hypothesis tests are valid. If a series is not stationary but trend-stationary, it can be transformed into a stationary series by subtracting the deterministic trend. If a nonstationary series has a unit root, a stationary series can be generated by differencing. Moreover, in a context with multiple series with unit roots, if they have a long run equilibrium relationship (or more formally, if there exists a linear combination among them that is stationary), they are said to be cointegrated. In the cointegration case, variables adjust to discrepancies from the long run relationship, and hence an error correction specification is appropriate to capture the impact of the deviation from the long run equilibrium on the short-run dynamics. Therefore, a prerequisite for applying the error correction model is the existence of a cointegrating relationship, and a prerequisite for the cointegration analysis is that the variables contain unit roots.
Determining whether there is a unit root in US real house prices also sheds light on the appropriateness of theoretical urban models that explain real house prices. If real income has a unit root and real house price is trend stationary as our results suggest, then the models such as the one by Capozza and Helsley (1989, 1990) that suggest an equilibrium relationship between real house price and real income are puzzling.
Our paper studies a fundamental and important question: do US real house prices contain a unit root? Surprisingly, the literature has not taken a careful look at this question. To answer the question of whether US real house prices have a unit root, we apply unit root tests to national data, to state level panel data, and to MSA (Metropolitan Statistical Area) panel data. We first study a national real home price index over 120 years constructed by Robert Shiller. This is a long time series that is of particular interest because the literature typically uses data sets that begin around 1975. A central feature of the data is that there appear to be structural breaks, and therefore we apply unit root tests that explicitly allow for such breaks. Whether there is a national housing market has been debated and thus we next apply the Pesaran (2007) panel unit root test to a data set of 48 states and Washington, DC and to one that contains 363 MSAs for the 1975–2011 period. Because areas with an inelastic supply of housing usually have more severe house price cycles, we conduct separate tests in set of MSAs where supply is most inelastic and most elastic.
To understand why our results differ from results found in the literature, we restrict our data to the areas and years in the samples used in the most cited papers that apply panel unit root tests and cointegration tests to US real house prices. We find that these paper’s results regarding whether real house price has a unit root change when the sample period is extended to the most recent data.
We also employ nonlinear unit root tests to consider the possibility that house prices might be nonlinear, and a sequential panel selection method to select house price series with stronger evidence of stationarity from a panel.
The remainder of this paper is organized as follows. Section 2 discusses the related literature. Section 3 provides unit root tests results for Shiller’s 120-year home price index. Section 4 reports the panel unit root test results for the state and MSA panels as well as for supply elastic and inelastic subgroups. The second half of Section 4 returns to the literature and determines what happens to the results in the literature when the time period of analysis is extended to include recent data, and then discusses the implications of our results for theoretical house price models. Section 5 presents the results for nonlinear unit root tests. Section 6 provides the Sequential Panel Selection Method and the results. Section 7 concludes.
## 2 Literature review
Error correction models are widely used in the housing literature to model US house price dynamics. Examples include Abraham and Hendershott (1996), Malpezzi (1999), and Capozza, Hendershott, and Mack (2004). Abraham and Hendershott (1996) use annual data for 30 metropolitan areas over the period 1977–1992 to investigate the determinants of real house price appreciation. The explanatory variables consist of three parts: the change in fundamental price, the lagged real house price appreciation with its coefficient called the “bubble builder,” and the deviation of house price from its equilibrium level in the previous period with its coefficient called the “bubble burster.” They find a positive bubble builder coefficient and a negative bubble burster coefficient. Moreover, the absolute values of these coefficients are higher for the coastal city group than for the inland city group.
Capozza, Hendershott, and Mack (2004) empirically study which variables explain the significant difference in the geographic patterns of the bubble builder and the bubble burster coefficients found in Abraham and Hendershott’s (1996) error correction specification. Using a panel dataset of 62 MSAs from 1979 to 1995, they find that higher real income growth, higher population growth, and a higher level of real construction costs increase the bubble builder coefficient, while higher real income growth, a larger population, and a lower level of real construction costs increase the absolute value of the bubble burster coefficient.
Abraham and Hendershott (1996) and Capozza, Hendershott, and Mack (2004) do not provide formal unit root and cointegration tests results to justify the existence of a long run equilibrium relationship among house prices and fundamental variables, which should be a prerequisite for the validity of error correction models.
Malpezzi (1999) uses a dataset which includes 133 MSAs and covers 18 years from 1979 through 1996 and states that short run real house price changes are well modeled by an error correction formulation. The panel unit root test of Levin, Lin, and Chu (2002, LLC test) is applied to real house price changes, the house-price-to-income ratio, and the residuals of the regression of real house prices on real per capita incomes. The first two are the dependent variables in Malpezzi’s error correction model. A unit root is rejected for price changes, but cannot be rejected for the price-to-income ratio. Moreover, a unit root is rejected for the residuals of the regression of real house prices on real per capita incomes, and hence Malpezzi concludes that real house prices and real incomes are cointegrated. This cointegration test procedure suffers from several shortcomings. First, before applying the cointegration test, Malpezzi does not examine if real house prices and income have a unit root, respectively. Second, critical values of the LLC panel unit root test have not been shown to work for residuals from the first stage regression, so the claim that house prices and incomes are cointegrated based on the LLC critical values could be misleading. Third, the LLC test does not allow cross-sectional dependence in the regression errors, hence the test result may be biased.
The 2000s’ US housing boom, which reached its peak in 2006, raises the question of whether US real house prices are supported by fundamentals; that is, whether real house prices and the fundamental economic variables such as income have a long run equilibrium relationship. The housing literature formalizes this argument by discussing the cointegration relationship among real house prices and the fundamental variables. Thus far, the results are mixed. Some papers such as Gallin (2006) and Mikhed and Zemcik (2009) apply cointegration tests and claim that there is no long run equilibrium relationship, which cast doubts on the validity of applying error correction models to US real house prices. Other papers argue that there is a cointegration relationship, such as Holly, Pesaran, and Yamagata (2010).
Gallin (2006) tests for the existence of a long run relationship among US house prices and economic fundamental variables by applying cointegration tests to both national level data and city level panel data. The augmented Engle-Granger cointegration test is applied to national level house prices, per capita income, population, construction wage, user cost of housing, and the Standard and Poor’s 500 stock index. No cointegration relationship is found. He also applies panel cointegration tests to city level house prices, per capita income, and population, for 95 MSAs over 23 years from 1978 to 2000. The panel cointegration tests he uses are Pedroni (1999) and Maddala and Wu (1999). He also applies a bootstrapped version of the tests to take into account cross-sectional dependence. The null hypothesis of no cointegration cannot be rejected, neither by the original tests nor the bootstrapped version. To test for a unit root, Gallin applies the ADF unit root test to the national level real house prices and a unit root is not rejected. But he does not provide panel unit root test result for the city level panel data.
Mikhed and Zemcik (2009) also examines if US house price and fundamental factors are cointegrated. The innovation in their paper is that they include more fundamental variables to avoid the possibility that the omission of potential demand and supply shifters cause the lack of cointegration relationships. The fundamentals included are house rent, a building cost index, per capita income, population, mortgage rate, and the Standard and Poor’s 500 stock index. Their sample includes 22 MSAs over 1978–2007, and they examine several different time periods (1978–2007, 1978–2006, 1978–2005, 1997–2007, 1978–1996). They apply the CIPS panel unit root tests to real house prices and the fundamental variables for all periods, setting the time lag in the CIPS test to 1 year. For real house prices, a unit root is rejected at the 5% level for 1978–2007 and rejected at the 10% level for 1978–2006, but cannot be rejected for other periods. The authors interpret this as a correction of the house price bubble around 2006. They further investigate the cointegration relationship of house prices and fundamentals for periods prior to 2005 when a unit root cannot be rejected in house prices. They apply the Pedroni (1999, 2004) panel cointegration test and bootstrap the critical values for possible cross-sectional dependence. No evidence of cointegration relationships is found in any of the cases. Hence they claim that US real house price dynamics are not explained by fundamentals, and this is evidence of housing bubbles in those subsamples.
Holly, Pesaran, and Yamagata (2010) study the determination of US real house prices using a panel of state level data (48 states, excluding Alaska and Hawaii and they include the District of Columbia) over 29 years from 1975 to 2003. Unlike Gallin (2006) and Mikhed and Zemcik (2009), Holly et al. find that real house prices and real income are cointegrated. The innovation of Holly et al. is that they apply the common correlated effects (CCE) estimators of Pesaran (2006) to study a panel of real house prices. They first use an asset non-arbitrage model to show that the ratio of real house price and real income should be stationary, hence the log of real house price and the log of real income should be cointegrated with a cointegration vector of (1, −1). Then they empirically find such a cointegration relationship and estimate a panel error correction model for the dynamics of adjustment of real house prices to real incomes.
Most of the above papers use time periods that end before 2006 and thus they do not include the recent US housing bust period in their sample. An exception is Mikhed and Zemcik (2009), which reports unit root tests for periods ending after 2006, and a unit root is rejected at the 5% level over 1978–2007 for real house prices. The authors then apply cointegration tests to the subperiods ending before 2006 where a unit root is not rejected. This is an ad hoc solution that does not address the issue of whether real house prices have a unit root or are stationary.
## 3.1 Data
The periods examined in the literature usually start at or after 1975 because reliable US house price data are available since then, both for national and regional house prices. However, if the periods of the recent house price bubble and bust are omitted, as occurs in many papers in the literature, then only 20–30 years of data remain. Therefore, we examine the full period to increase the power of the unit root tests.
We study the US real home price index constructed by Shiller (2005).2 It is a national level index that begins in 1890. Shiller constructed this index “by linking together various available series that were designed to provide estimates of the price of a standard, unchanging, house…” He also created an index for the period 1934–1953. Before 1953 there are only annual data, and therefore we use the logarithm of the annual data for this analysis. Figure 1 shows the path of this index. Real house prices appear to be relatively stable except for the period during the two world wars and the Great Depression, and a sharp spike during the 2000s.
Figure 1:
Log of Shiller’s Real Home Price Index, 1890–2011.
Note: The contruction of Shiller’s home price index is described in Shiller (2005).
## 3.2 Methodology: univariate unit root tests
We apply the Lee and Strazicich (2003) (L-S) minimum LM endogenous two breaks unit root test, the multiple breaks unit root tests in Carrion-i-Silvestre, Kim, and Perron (2009) (CKP), the augmented Dickey-Fuller (ADF) unit root test, and the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test to the 120 year time series of house prices.
Because two breaks clearly appear in Figure 1, with one break around World War I (1914–1918) and the second around World War II (1939–1945), we apply the Lee-Strazicich minimum LM endogenous two structural breaks unit root test. The L-S test estimates the points of structural break by searching for the two break points where the unit root t-statistic is minimized, and tests the null of a unit root against the alternative of trend stationarity. It allows for breaks under both the null and alternative hypotheses. If breaks are not included in the null, as in Lumsdaine and Papell (1997), then a rejection of the null may imply a unit root process with breaks and not necessarily a trend stationary series. Model C in the Lee and Strazicich paper is adopted here, which is the most general case that allows for changes in both level and slope.
We apply the unit root test statistic in Lee and Strazicich (2003), which is obtained from the following regression:
$$\Delta y_t = \delta' \Delta Z_t + \phi \tilde{S}_{t-1} + \sum_{i=1}^{k} \gamma_i \Delta \tilde{S}_{t-i} + u_t, \qquad (1)$$

where $\tilde{S}_t = y_t - \tilde{\Psi}_x - Z_t\tilde{\delta}$, $t = 2, \ldots, T$; $\tilde{\Psi}_x = y_1 - Z_1\tilde{\delta}$; $\tilde{\delta}$ are the regression coefficients from a regression of $\Delta y_t$ on $\Delta Z_t$; $Z_t = [1, t, D_{1t}, D_{2t}, DT_{1t}, DT_{2t}]'$, where $D_{jt} = 1$ and $DT_{jt} = t - T_{Bj}$ for $t \geq T_{Bj} + 1$ and 0 otherwise, $j = 1, 2$, and $T_{Bj}$ is the time of a break point. The lagged terms $\Delta\tilde{S}_{t-i}$ are included to correct for serial correlation, and the number of lags $k$ is selected following the general-to-specific procedure described in their paper. The LM test statistic is given by

$$\tilde{\tau} = t\text{-statistic testing the null hypothesis } \phi = 0. \qquad (2)$$

The two breaks ($\lambda_j = T_{Bj}/T$, $j = 1, 2$) are determined endogenously by a grid search over the time span $[0.1T, 0.9T]$ (to avoid end-point issues).
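To illustrate the mechanics, a stripped-down Python sketch of the statistic for fixed break dates is given below. It is not the authors' published code: the lag order is held fixed rather than chosen general-to-specific, and the resulting minimum-LM statistic still has to be compared with the critical values tabulated in Lee and Strazicich (2003).

```python
import numpy as np
import statsmodels.api as sm

def ls_lm_tstat(y, tb1, tb2, k=2):
    """LM t-statistic on phi in eq. (1), Model C, for given break dates tb1 < tb2."""
    y = np.asarray(y, float)
    T = len(y)
    t = np.arange(1, T + 1)
    D1, D2 = (t >= tb1 + 1).astype(float), (t >= tb2 + 1).astype(float)
    DT1 = np.where(t >= tb1 + 1, t - tb1, 0.0)
    DT2 = np.where(t >= tb2 + 1, t - tb2, 0.0)
    Z = np.column_stack([t, D1, D2, DT1, DT2])       # intercept absorbed in psi below
    dy, dZ = np.diff(y), np.diff(Z, axis=0)
    delta = np.linalg.lstsq(dZ, dy, rcond=None)[0]   # delta-tilde from dy on dZ
    psi = y[0] - Z[0] @ delta                        # psi-tilde_x
    S = y - psi - Z @ delta                          # detrended series S-tilde_t
    dS = np.diff(S)
    lhs, rhs = [], []
    for j in range(k, len(dy)):                      # j indexes Delta y at time t = j + 2
        lags = [dS[j - i] for i in range(1, k + 1)]  # Delta S-tilde_{t-i}
        rhs.append(np.concatenate([dZ[j], [S[j]], lags]))
        lhs.append(dy[j])
    fit = sm.OLS(np.array(lhs), np.array(rhs)).fit()
    return fit.tvalues[dZ.shape[1]]                  # t-ratio on S-tilde_{t-1}

def ls_two_break_test(y, k=2, trim=0.1):
    """Grid search for the break pair that minimises the LM t-statistic."""
    T = len(y)
    lo, hi = int(trim * T), int((1 - trim) * T)
    stats = {(tb1, tb2): ls_lm_tstat(y, tb1, tb2, k)
             for tb1 in range(lo, hi) for tb2 in range(tb1 + 2, hi)}
    breaks = min(stats, key=stats.get)
    return stats[breaks], breaks
```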
Because of the potential existence of three breaks during 1890–2011 due to the two world wars and the 2000s' housing bubble, we also apply Carrion-i-Silvestre, Kim, and Perron's (2009) unit root tests that allow for multiple structural breaks. They extend the unit root tests of Elliott, Rothenberg, and Stock (1996) and Ng and Perron (2001) to allow for multiple breaks (we use their tests with three breaks), and provide five test statistics $(P_T^{GLS}, MZ_{\alpha}^{GLS}, MSB^{GLS}, MZ_t^{GLS}, MP_T^{GLS})$ to test the null of a unit root against the alternative of trend stationarity. The tests considered in Carrion-i-Silvestre, Kim, and Perron (2009) estimate the break points, allow for breaks under both the null and the alternative hypotheses, and allow for breaks in both the level and the slope.
We also apply the ADF and KPSS tests because they are popular in the literature that tests the stationarity of a series. The ADF test tests the null of a unit root against the alternative of trend stationarity, while the KPSS test tests the null of trend stationarity against the alternative of a unit root.
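For the standard ADF and KPSS tests, off-the-shelf implementations exist; a minimal sketch using statsmodels (with a synthetic stand-in series, since the Shiller index itself is not reproduced here) is:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, kpss

# Stand-in for the log real home price index; replace with the actual series.
rng = np.random.default_rng(0)
y = 0.001 * np.arange(120) + 0.02 * np.cumsum(rng.standard_normal(120))

adf_stat, adf_p, used_lag, nobs, adf_crit, _ = adfuller(y, regression="ct", autolag="BIC")
kpss_stat, kpss_p, kpss_lags, kpss_crit = kpss(y, regression="ct", nlags="auto")

print(f"ADF  (H0: unit root):          stat = {adf_stat:.3f}, p = {adf_p:.3f}")
print(f"KPSS (H0: trend stationarity): stat = {kpss_stat:.3f}, p = {kpss_p:.3f}")
```

Neither routine allows for the structural breaks discussed above, which is exactly why the break-aware tests are preferred for the full 1890–2011 sample.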
## 3.3 Empirical results
Table 1 reports the results of the Lee and Strazicich (2003) two breaks test. A unit root cannot be rejected for the period 1890–2011. Break points are found in 1916 and 1949. These values closely correspond to the visual inspection of Figure 1. However, it is possible that a third break occurs during the most recent housing bubble, hence biasing the test towards non-rejection of a unit root. If the period of the recent housing bubble is excluded, a unit root is rejected at the 1% level over 1890–1998. The estimated break points are 1916 and 1947 for this time span. We also apply this two-break test to the 1920–2011 period to avoid the structural break at World War I, thus allowing for the break at World War II and a possible break during the recent housing bubble. A unit root again is rejected at the 1% level. The estimated break points are 1946 and 2000. Therefore, a unit root is rejected for both 1890–1998 and 1920–2011.
Table 1
Lee and Strazicich (2003) minimum LM endogenous two breaks unit root test for Shiller’s 120-year log real home price index.
Table 2 reports the tests statistics for three breaks in Carrion-i-Silvestre, Kim, and Perron (2009), for different numbers of lags. The estimated break points are 1917, 1944, and 1998, which are similar to the above results. The rejection of a unit root is very robust to the number of lags. None of the statistics can reject a unit root when the number of lags equals zero; three of them reject a unit root at the 10% significance level when one lag is included; all of them reject a unit root at the 5% or 1% level when more than one lag is included. The selected number of lags is two using the Bayesian information criterion (BIC) and is zero using the modified Akaike information criterion (MAIC) by Ng and Perron (2001) and Perron and Qu (2007). Because real house prices and real house price changes are documented in the literature as having strong serial correlation, we think the number of lags selected based on BIC is more reliable. Therefore, the three breaks tests provide strong evidence of trend stationarity with three breaks over 1890–2011.
Table 2
Carrion-i-Silvestre, Kim, and Perron (2009) three breaks unit root tests for Shiller’s 120-year log real home price index.
Results for the ADF and the KPSS tests are reported in Table 3. For the period 1890–2011, the ADF test cannot reject a unit root and the KPSS test rejects stationarity, which is not surprising because ignoring the possibility of structural breaks will bias these test statistics towards the unit root hypothesis. To avoid the effects of dramatic changes due to the world wars and the Great Depression, we restrict the test period to 1950–2011, and the unit root hypothesis is rejected at the 5% level by the ADF test, suggesting that real house price is trend stationary during this 60-year period. Moreover, to examine the possible effect of the recent bubble on the unit root test results, we examine the period 1950–2006. The ADF test cannot reject a unit root and the KPSS test rejects trend stationarity at the 5% level. When we look at the series that ends at 1998 (excluding the recent bubble), the ADF test rejects a unit root at the 1% level and KPSS test cannot reject trend stationarity, implying strong evidence of trend stationarity over 1950–1998. Therefore, including or excluding the entire recent bubble-bust cycle results in trend stationarity, but including only the boom period of this bubble will result in a unit root conclusion.3 To further emphasize the potential problem in the literature when only a subperiod is studied, we apply the ADF test and the KPSS test to periods starting at 1975 for both annual data and quarterly data. A unit root is weakly rejected at the 10% level for 1975–2011, but cannot be rejected for 1975–2006 and 1975–2002. This result coincides with the findings in the literature which generally claim that real house prices have a unit root.
Table 3
ADF and KPSS unit root test for Shiller’s 120-year log real home price index.
Table 4 presents a summary of the findings of the ADF test, the L-S two breaks test and the CKP three breaks test for different time periods. Our preferred method is the CKP three breaks test for the full sample period, 1890–2011, resulting in a conclusion that house prices are trend stationary with breaks.
Table 4
Summary of unit root test results for log of Shiller’s real home price index.
## 4.1 Data
We consider two panel data sets: the first includes house prices for 363 MSAs in the US and the second reports data for 48 states and Washington, DC. Both data sets cover the 1975–2011 period. We use the Freddie Mac House Price Index (FMHPI), which is a monthly nominal repeated-sales index estimated using data on house price transactions (including refinanced) on one-family detached and townhome properties whose mortgage has been purchased by Freddie Mac or Fannie Mae.4 We deflate the nominal house price index by the CPI-U for each month and then average over a year to obtain annual data and then take logarithms.5
## 4.2 Methodology: panel unit root test
In the literature, early specifications of panel unit root tests did not allow for cross-sectional dependence (Levin, Lin, and Chu 2002; Im, Pesaran, and Shin 2003). More recently, panel unit root tests allow for cross-sectional dependence (Bai and Ng 2004; Moon and Perron 2004; Pesaran 2007).
We apply the Pesaran CIPS test (2007). There are several reasons for this choice. First, the CIPS test allows for cross-sectional dependence, which is an important consideration for house prices because house prices in different areas are very likely to be affected by common effects such as changes in interest rates or technology. Second, it allows for cross-sectional heterogeneity in the intercept, trend, and autoregressive coefficients. The third reason is its favorable size and power for large N and small T, which is the case for our sample. In particular, Moon and Perron (2004) and Bai and Ng (2004) both assume that N/T→0 as N and T→∞ when deriving the asymptotic properties of the test, while Pesaran (2007) assumes N/T→k (where k is a finite positive constant) as N and T→∞.
To justify the use of Pesaran’s (2007) CIPS test, we need to establish the presence of cross-sectional dependence. We use the CD (cross-section dependence) test proposed in Pesaran (2004) for this purpose. This test statistic asymptotically converges in distribution to a standard normal distribution as T and N→∞ in any order under the null of no cross-sectional dependence,6 and is defined as
$$CD = \sqrt{\frac{2T}{N(N-1)}}\left(\sum_{i=1}^{N-1}\sum_{j=i+1}^{N}\hat{\rho}_{ij}\right), \qquad (3)$$

where $\hat{\rho}_{ij} = \mathrm{Corr}(\hat{\varepsilon}_i, \hat{\varepsilon}_j)$ and $\hat{\varepsilon}_i$ is the estimated residual from the ADF($p$) regression that estimates the model

$$\Delta y_{it} = a_{i0} + a_{i1}t + a_{i2}y_{i,t-1} + \sum_{j=1}^{p}\delta_{ij}\Delta y_{i,t-j} + \varepsilon_{it}. \qquad (4)$$
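A small Python sketch of the CD statistic (my own code, assuming the panel is a balanced $T \times N$ array with no missing values; in practice unbalanced panels need extra care) is:

```python
import numpy as np
import statsmodels.api as sm

def adf_residuals(y, p=1):
    """Residuals of the ADF(p) regression (4) for one series (intercept and trend included)."""
    y = np.asarray(y, float)
    dy = np.diff(y)
    nobs = len(dy) - p
    X = [np.ones(nobs), np.arange(nobs, dtype=float), y[p:-1]]     # const, trend, y_{t-1}
    X += [dy[p - j: len(dy) - j] for j in range(1, p + 1)]         # Delta y_{t-j}
    return sm.OLS(dy[p:], np.column_stack(X)).fit().resid

def pesaran_cd(panel, p=1):
    """CD statistic of eq. (3); panel is a T x N array, columns are units."""
    resid = np.column_stack([adf_residuals(panel[:, i], p) for i in range(panel.shape[1])])
    T, N = resid.shape
    corr = np.corrcoef(resid, rowvar=False)
    iu = np.triu_indices(N, k=1)
    return np.sqrt(2.0 * T / (N * (N - 1))) * corr[iu].sum()       # ~ N(0,1) under H0
```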
Pesaran’s (2007) CIPS panel unit root test is based on a cross-sectionally augmented ADF (CADF) regression, which filters out the cross-sectional dependence by augmenting the ADF regressions with the lagged cross-section mean and the lagged first differences of the cross-sectional mean. The CADF regression estimates the model
$$\Delta y_{it} = a_{i0} + a_{i1}t + a_{i2}y_{i,t-1} + a_{i3}\bar{y}_{t-1} + \sum_{j=0}^{p}d_{ij}\Delta\bar{y}_{t-j} + \sum_{j=1}^{p}\delta_{ij}\Delta y_{i,t-j} + v_{it}. \qquad (5)$$

Let $\tilde{t}_i$ denote the $t$-ratio for $a_{i2}$ in the above regression. Then the CIPS statistic is

$$CIPS = N^{-1}\sum_{i=1}^{N}\tilde{t}_i. \qquad (6)$$

The null and alternative hypotheses for the CIPS test are

$$H_0: a_{i2} = 0 \quad \text{for } i = 1, 2, \ldots, N, \qquad (7)$$

and

$$H_1: \begin{cases} a_{i2} = 0 & \text{for } i = 1, 2, \ldots, N_1, \\ a_{i2} < 0 & \text{for } i = N_1 + 1, N_1 + 2, \ldots, N. \end{cases} \qquad (8)$$
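A bare-bones Python sketch of the CADF regressions and the resulting CIPS average (again my own code; the CIPS statistic must be compared with the critical values tabulated in Pesaran (2007), not with normal ones) is:

```python
import numpy as np
import statsmodels.api as sm

def cadf_tstat(y_i, y_bar, p=1):
    """t-ratio on y_{i,t-1} in the cross-sectionally augmented ADF regression (5)."""
    dy, dybar = np.diff(y_i), np.diff(y_bar)
    nobs = len(dy) - p
    X = [np.ones(nobs), np.arange(nobs, dtype=float), y_i[p:-1], y_bar[p:-1]]
    X += [dybar[p - j: len(dybar) - j] for j in range(0, p + 1)]   # Delta ybar_{t-j}, j = 0..p
    X += [dy[p - j: len(dy) - j] for j in range(1, p + 1)]         # Delta y_{i,t-j}
    return sm.OLS(dy[p:], np.column_stack(X)).fit().tvalues[2]     # coefficient a_{i2}

def cips(panel, p=1):
    """CIPS statistic of eq. (6); panel is a T x N array of log real house prices."""
    panel = np.asarray(panel, float)
    y_bar = panel.mean(axis=1)                                     # cross-section mean
    return np.mean([cadf_tstat(panel[:, i], y_bar, p) for i in range(panel.shape[1])])
```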
## 4.3 Empirical results
To confirm the presence of cross-sectional dependence, we carry out CD tests using 1, 2 and 3 lags and using both the MSA and state data sets. We also consider subsamples of the top 20 and 50 most supply inelastic and elastic MSAs. These MSAs are selected based on the supply elasticity estimated in Saiz (2010), who provides housing supply elasticity measures for 269 metro areas. After matching his data with the MSA sample used here, supply elasticity measures for 254 MSAs are available.
The cross-sectional dependence is confirmed by the CD test statistics in Table 5, which are statistically highly significant. This implies that the panel unit root tests that do not allow for cross-sectional dependence are inappropriate.
Table 5
Pesaran (2004) cross-section dependence test (CD test) for log real house prices, 1975–2011.
Table 6 reports the CIPS statistics with an intercept and a linear time trend included for varying augmented orders and for the state and MSA sample and the MSA subsamples. We examine the periods 1975–2011, 1975–2006, 1975–2002, and 1975–1998 to determine if and how the recent housing bubble affects unit root test results. For the 363 MSAs group, the period 1975–2011 strongly rejects a unit root at the 1% level, but a unit root cannot be rejected for all other periods except 1975–1998 which rejects a unit root at the 10% level when the augmented order is one. This result indicates that we can reject a unit root if we examine the period that includes both the boom and bust years of the recent bubble, but a unit root cannot be rejected if we exclude the bust. For the state sample, the periods 1975–2011 and 1975–2006 both reject a unit root, but 1975–2002 and 1975–1998 cannot reject.7
Table 6
Pesaran (2007) CIPS panel unit root test for log real house prices.
The CIPS test statistics for the supply elastic and inelastic MSA groups also are reported in Table 6. The recent literature has identified that housing supply elasticity plays an important role in house price dynamics. For example, Glaeser, Gyourko, and Saiz (2008) find, both theoretically and empirically, that places with more elastic housing supply have smaller house price increases and shorter house price bubbles. Also, we can see from Figure 2 that the average of the log real house price index for the top 20 supply inelastic MSAs is much more volatile and has much bigger and prolonged cycles than that for the top 20 supply elastic MSAs.
Figure 2:
Average log real house price index, 1975–2011.
Note: House prices are based on the Freddie Mac house price index. Housing supply elasticity measure is provided in Saiz (2010).
Table 6 shows that the top 50 supply inelastic MSA group strongly rejects a unit root over 1975–2011 with the test statistics much higher in absolute value than that for the 363 MSA group, while all other periods cannot reject a unit root. The top 20 supply inelastic MSA group reveals even stronger evidence for trend stationarity – a unit root is very strongly rejected for all time periods when a 1-year lag is selected. In contrast, the supply elastic groups reveal little evidence of trend stationarity. The top 50 supply elastic MSA group cannot reject a unit root at the 5% level for all periods when we select a 1-year lag, and the top 20 supply elastic group cannot reject a unit root even at the 10% level. Therefore, it appears that the test statistics are impacted by the house supply elasticity of the areas under consideration.8
## 4.4 Panel unit root tests for samples studied in the literature
Our panel unit root test results indicate trend stationarity if the data extends through 2011. This result differs from many studies in the literature, which find that real house prices have a unit root process. This raises the question as to the causes of the difference in results. Possible explanations are (1) the different time periods covered, (2) the different areas included, and (3) the different unit root tests that are applied. To answer this question, we apply the CIPS panel unit root test to the area samples studied in the most cited papers that examine panel unit root tests and cointegration tests for US real house prices.
## 4.4.1 Malpezzi (1999)
We apply the CIPS panel unit root test to a subsample that matches the MSAs and years in Malpezzi’s paper. Of the 133 MSAs studied by Malpezzi, 124 of them are available in our dataset. The first row of Table 7 presents the CIPS statistics. A unit root cannot be rejected over the years studied in Malpezzi’s sample 1979–1996, and this result is robust to the varying augmented lags, and thus we replicate his result. The second row of Table 7 presents the CIPS statistics for the same group of MSAs over 1975–2011, and a unit root is rejected at the 1% level, regardless of the number of the augmented lags. This change of the test results is clearly a result of the different time periods available in his and our samples.
Table 7
Panel unit root test for log real house prices of the MSAs in Malpezzi (1999).
## 4.4.2 Gallin (2006)
Of the 95 MSAs in Gallin’s sample, 83 of them can be matched with our data and we apply the CIPS test to this group of MSAs. The first row of Table 8 reports the CIPS statistics over 1978–2000, the years examined in Gallin’s sample. A unit root is not rejected. But if the data are extended to cover the 1975–2011 period, a unit root is rejected as reported in the second row. The results are robust to changing the number of lags. Therefore, the 1975–2011 period data contain new information that may invalidate the cointegration analysis in Gallin’s paper.
Table 8
Panel unit root test for log real house prices of the MSAs in Gallin (2006).
## 4.4.3 Mikhed and Zemcik (2009)
We apply the CIPS test to the 22 MSAs in Mikhed and Zemcik’s paper that are in our sample. The first and second rows of Table 9 report the CIPS statistics over 1978–2005 and 1978–2007, their sample periods. Our results are very similar to theirs when one augmented lag is selected (which is the case in their paper). Specifically, a unit root is rejected at the 5% level for 1978–2007, but cannot be rejected for 1978–2005. The third row of Table 9 reports the test statistics for the entire period 1975–2011 and a unit root is rejected at the 1% level when the lag is set to one year. The non-rejection of a unit root for periods before 2005 could be an indication of less test power caused by the limited number of time periods. As before, the application of cointegration methodology in Mikhed and Zemcik’s paper may well have been inappropriate.
Table 9
Panel unit root test for log real house prices of the MSAs in Mikhed and Zemcik (2009).
## 4.4.4 Holly, Pesaran, and Yamagata (2010)
Table 10 reports the CIPS statistics using our sample for 48 states and Washington, DC. The house price data in Holly et al. are the house price index from the Office of Federal Housing Enterprise Oversight and are deflated by a state level consumer price index, while we use the Freddie Mac house price index deflated by the national consumer price index. The first row shows the test statistics for the time period 1975–2003 studied in Holly et al., and a unit root cannot be rejected. This agrees with their unit root test result. However, the second row shows that, if the data are extended to also cover the 2004–2011 period, a unit root is strongly rejected. Again, this change in unit root test results indicates that the cointegration analysis and the error correction model discussed in Holly et al.’s paper may well have been inappropriate.
Table 10
Panel unit root test for log real house prices of the states in Holly, Pesaran, and Yamagata (2010).
For each of these studies, we find that a unit root cannot be rejected if we consider samples similar to those in the original study and thus we confirm their results. However, if these samples are extended to 1975–2011, a unit root is always rejected. Therefore, the reason for our disagreement with the literature is the difference in the time span covered in the analysis. In particular, stopping an analysis midway through the 1998–2011 housing cycle yields different results than including the complete cycle.
## 4.5 Implications of the results
Both the panel unit root tests and the univariate unit root tests provide supportive evidence that real house price is a trend stationary series with structural breaks, rather than a unit root process. This result presents a challenge to urban models that aim to find the determinants of real house prices. As mentioned in the introduction, house price models usually identify real income as one of the most important factors determining real house prices. But our study suggests some income-house-price “puzzles.” Real house prices and real income are very likely two processes with different stationarity properties. Real house prices reject a unit root based on findings in our paper, while there is strong evidence that real income is a unit root process. Table 11 reports the CIPS panel unit root test statistics when applied to log of real per capita personal income, for the all-MSA group, top-20-supply-inelastic-MSA group, and top-20-supply-elastic-MSA group, and covering different time periods: 1975–2009, 1975–2006, and 1975–1998.9 None of the test statistics are significant. How a unit root real income process determines a trend stationary real house price is a challenge for urban models.
Table 11
Pesaran (2007) CIPS panel unit root test for log real per capita income.
In addition to the stationarity puzzle, there is also an issue related to house price trends. Figure 1 indicates that there is hardly any trend for the national level real home price index, but Figure 3 shows that there is an obvious upward trend for real income. A possible explanation is to include housing supply because if housing supply is elastic, house price trends will follow trends in construction costs. As documented in Gyourko and Saiz (2006), national level real construction costs trended down over 1980–2003. Therefore, in supply elastic MSAs, one would expect real house prices to decline even when real income is increasing. In contrast, if housing supply is inelastic, then real house prices will rise as income increases. This explanation is supported by Figure 2. Real house prices have an upward trend for the top 20 supply inelastic MSAs, and a downward trend for the top 20 supply elastic MSAs. In contrast, Figure 3 shows that, for real income, there are obvious upward trends for all the three groups, and the three paths are almost parallel.
Figure 3:
Average log real per capita personal income, 1975–2009.
Note: Personal income is from the Bureau of Economic Analysis. Housing supply elasticity measure is provided in Saiz (2010).
## 5.1 Unit root test with nonlinear ESTAR alternative
To consider the possibility that house price series might be nonlinear,10 we implement a nonlinear unit root test proposed in Kapetanios, Shin, and Snell (2003) (hereafter, KSS test), which tests the null of a unit root process against the alternative of a globally stationary exponential smooth transition autoregressive (ESTAR) process. We carry out this test for Shiller’s national house price series, as well as the 49 individual states.
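For readers who want the mechanics, the following is a minimal sketch of the KSS auxiliary regression as we understand it: the demeaned or detrended series is regressed in first differences on the cubed lagged level plus lagged differences, and the t-statistic on the cubed term is compared with the nonstandard critical values tabulated in Kapetanios, Shin, and Snell (2003). This is our illustration, not the code behind Table 12, and the function and argument names are ours.

```python
import numpy as np

def kss_tstat(y, lags=1, detrend=True):
    """t-statistic on the cubic term in the KSS auxiliary regression (sketch only)."""
    y = np.asarray(y, dtype=float)
    t_idx = np.arange(len(y), dtype=float)
    # remove a constant (and trend) first, as in the demeaned/detrended variants
    Z = np.column_stack([np.ones(len(y)), t_idx]) if detrend else np.ones((len(y), 1))
    y_star = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    dy = np.diff(y_star)
    rows, lhs = [], []
    for t in range(lags, len(dy)):
        rows.append([y_star[t] ** 3] + [dy[t - j] for j in range(1, lags + 1)])
        lhs.append(dy[t])
    X, z = np.asarray(rows), np.asarray(lhs)
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta
    sigma2 = resid @ resid / (len(z) - X.shape[1])
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta[0] / se[0]   # compare with the critical values tabulated in KSS (2003)
```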
Table 12 presents the results. For Shiller’s series from 1950 to 2011, therefore excluding the two World Wars and the Great Depression, the KSS test rejects a unit root at the 1% or 5% level for lag lengths varying from one through three. If we include the time period prior to 1950 and examine the period of 1890–2011, the test does not reject a unit root with one lag, but rejects a unit root at the 5% level with two lags and at the 10% level with three lags. For the 49 individual states, the test with one lag rejects a unit root at the 10% or higher level for 34 states. The 15 states for which the test cannot reject a unit root are mostly inland areas.
Table 12
Kapetanios, Shin, and Snell (2003) nonlinear unit root test.
The above results suggest that the evidence of stationarity for house prices is very strong under the KSS nonlinear unit root testing procedure: A unit root can be rejected for Shiller’s series and almost 70% (34 out of 49) of the states.
## 5.2 Unit root test with smooth breaks
Arguably, structural breaks in house prices could be smooth and gradual. We consider this issue by implementing a unit root test proposed in Enders and Lee (2012). This test uses the low frequency components of a Fourier expansion to approximate smooth structural breaks of unknown forms. Enders and Lee suggest pre-testing for a deterministic nonlinear trend using an F-statistic before carrying out their unit root test. They define the F-statistic in their equation (10) using a single frequency component, and suggest a grid-search method to select the frequency value k that minimizes the sum of squared residuals from the unit root test regression equation (and thus maximizes the F-statistic). We implement the F-test for Shiller’s series from 1890 to 2011, for lag lengths varying from one through three, and experiment with one through five for the frequency value k. For each lag length, the k that maximizes the F-statistic is always k=1, and the corresponding F-statistic is always insignificant.11 This result indicates that a single frequency component Fourier expansion is not a good approximation for the trend in Shiller’s house price series. Therefore, Enders and Lee’s (2012) unit root test may have low test power in our case, and we do not proceed in applying their unit root test.
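To illustrate the pre-test idea, the grid search can be organized as below. This is a generic sketch only: Enders and Lee's equation (10) embeds the Fourier terms in the unit root test regression itself and uses nonstandard critical values, so the snippet is not a substitute for their procedure, and `k_grid` and the variable names are our choices.

```python
import numpy as np

def _ssr(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

def fourier_pretest(y, k_grid=(1, 2, 3, 4, 5)):
    """Grid-search the Fourier frequency k and form an F-statistic for the trig terms."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    t_idx = np.arange(1, T + 1)
    base = np.column_stack([np.ones(T), t_idx])      # constant + linear trend only
    ssr_restricted = _ssr(base, y)
    best = None
    for k in k_grid:
        trig = np.column_stack([np.sin(2 * np.pi * k * t_idx / T),
                                np.cos(2 * np.pi * k * t_idx / T)])
        X = np.column_stack([base, trig])
        ssr = _ssr(X, y)
        F = ((ssr_restricted - ssr) / 2) / (ssr / (T - X.shape[1]))
        if best is None or ssr < best[2]:            # keep the k that minimizes the SSR
            best = (k, F, ssr)
    return best[:2]   # (k*, F); compare F with the tabulated nonstandard critical values
```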
## 6 Sequential panel selection method
To investigate which series in the panel have stronger evidence of stationarity, we employ the Sequential Panel Selection Method (SPSM) of Chortareas and Kapetanios (2009) in this section.
The SPSM separates a panel into two groups by the following steps. A panel unit root test is first applied to the entire panel. If a unit root is not rejected, the null that all series are non-stationary is accepted. If a unit root is rejected, the most stationary series (measured as the minimum individual CADF statistic in our case of the CIPS test) is excluded from the panel, and the panel unit root test is re-run on the remaining series. This process is repeated until the null is no longer rejected. The panel is thereby separated into a group of series with evidence for stationarity and a group of series without evidence for stationarity.
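A compact way to express these steps is sketched below. This is a schematic illustration, not the code used for our results: `cips_panel` is assumed to be a helper like the one sketched at the end of Section 4.4 that returns both the panel CIPS statistic and the individual CADF statistics, and `crit_value` is the appropriate CIPS critical value (rejection occurs when the statistic falls below it).

```python
import numpy as np

def spsm(panel, names, cips_panel, crit_value, lags=1):
    """Sequential Panel Selection Method: peel off the most stationary series until
    the CIPS test no longer rejects a unit root for the remaining panel."""
    remaining = list(range(panel.shape[1]))
    stationary = []
    while len(remaining) > 1:
        stat, cadf_stats = cips_panel(panel[:, remaining], lags)
        if stat >= crit_value:                         # CIPS no longer rejects: stop
            break
        drop = remaining[int(np.argmin(cadf_stats))]   # most stationary series (smallest CADF)
        stationary.append(names[drop])
        remaining.remove(drop)
    return stationary, [names[i] for i in remaining]
```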
We apply this SPSM to the panel of state house price series, using the CIPS panel unit root test with one lag. At the 10% significance level, 19 out of the 49 states are selected as areas with evidence for stationarity: FL MA NY AL PA UT OK NH NJ MD TN RI MO CT CO NC TX NV ID. A pattern observed is that 10 of these 19 states are along the Atlantic Ocean.
A related question is how the stationarity properties of the disaggregated house price series affect the stationarity of the aggregated national series. Intuitively, states with large populations and expensive housing, such as the big coastal states, have higher weights in the construction of the national-level index, and the stronger evidence of stationarity in these areas contributes to the stationarity of the national-level index.
## 7 Conclusion
This paper examines a fundamental and important question regarding the time series properties of US real house prices: specifically, do US real house prices contain a unit root? This stationarity property is of vital importance for analyzing univariate and panel time series data, modeling and forecasting the dynamics of the series, and conducting tests such as cointegration tests. Choosing an inappropriate model because of a misunderstanding of whether a unit root exists could invalidate the usual statistical tests, such as t-tests and F-tests.
We first examine Shiller’s national level real home price index over a 120-year period. We apply the multiple breaks unit root tests in Carrion-i-Silvestre, Kim, and Perron (2009) (CKP), the Augmented Dickey-Fuller (ADF) unit root test, and the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test. The CKP three breaks test rejects a unit root for 1890–2011. When the ADF and the KPSS tests are applied to periods beginning at 1950, whether the recent bubble is included or excluded affects the results. Selecting a period that includes the recent house price boom, but not the bust, results in the ADF test being unable to reject a unit root. Moreover, if the period typically used in the literature (starting in 1975) is examined, the ADF test generally cannot reject a unit root at the 5% level. However, these results are reversed if the complete price cycle is included.
Arguably, the US housing market is not a single national market, but is composed of state or MSA level markets. Thus, we also conduct the Pesaran (2007) CIPS panel unit root test using samples of states and MSAs. We also conduct this test in supply elastic and inelastic MSAs given that house price cycles differ between them. We find that, first, real house prices are trend stationary if both the boom and bust periods of the recent price bubble are included, both for the 363 MSA group and the state group. But not including the bust period of this bubble results in non-rejection of a unit root for the 363 MSA group. For the state group, a unit root in real house prices can be rejected if the period ends at 2006, but a unit root cannot be rejected if the period ends at 2002. Second, housing supply elasticity affects unit root test results; specifically, there is stronger evidence of rejecting a unit root in supply inelastic MSAs than in supply elastic MSAs.
We then apply the CIPS panel unit root test to the samples studied in the literature. When our data are restricted to the same areas and time periods, we verify these studies’ results. In nearly all cases the result is that a unit root cannot be rejected. But if the dataset is extended to 2011, a unit root is always rejected at a highly significant level. Thus, the difference in unit root test results between the literature and this paper occurs because of the smaller number of time periods studied in the literature.
We also implement the nonlinear unit root test of Kapetanios, Shin and Snell (2003) to consider the case that house prices might be nonlinear, and the test in Enders and Lee (2012) which uses a Fourier expansion to approximate smooth breaks. The evidence of stationarity is still strong under the nonlinear unit root testing procedure. We then carry out the Sequential Panel Selection Method of Chortareas and Kapetanios (2009) to obtain more insights on which individual house price series have stronger evidence of stationarity in the panel.
The unit root tests results in this study have important implications, both empirically and theoretically. Empirically, if US real house prices are indeed trend stationary, then the cointegration analysis and the error correction models widely used in modeling and forecasting US house price dynamics need to be reconsidered. Theoretically, a model should be able to explain the following issues. First, the model should explain the relationships between real income, which has a unit root, and US real house prices, which does not have a unit root. Second, it should explain that the evidence in favor of trend stationarity in real house prices for supply inelastic cities is stronger than for supply elastic cities. Third, although areas with inelastic and elastic housing supply have similar trends in real income, the former group has an upward trend and the latter group has a negative trend in real house prices.
## Appendix on Shiller’s house price series
Shiller constructs the extended US time series of real house prices by linking together existing house price data from 1890 to 1934 that was derived from Grebler, Blank, and Winnick (1956) with the home-purchase component of the CPI-U from 1953 to 1975. The series next uses the Office of Federal Housing Enterprise Oversight (OFHEO) data from 1975 to 1987 and thereafter uses the Case-Shiller-Weiss index. In the above time series, there is a gap between 1934 and 1953. Shiller constructs an index of house prices for this period using data on the sales price of houses reported in newspapers from five large cities.
The data from Grebler, Blank, and Winnick (1956) is viewed as the best source of data for the 1890–1934 period. It has been used to construct the house price series reported in the Historical Statistics of the United States, Colonial Times to 1970, Part 2, Tables 259–260 (1975). Grebler, Blank, and Winnick (1956) used survey information from 22 cities in the U.S. Department of Commerce’s 1937 Financial Survey of Urban Housing. At least two cities were located in each of the nine census regions except the East South Central. The survey data included the value of property in 1934, the year of acquisition by the current owner, and the original purchase price. Price increases (measured for the median property) were then determined for the sample period for single family owner-occupied properties. The results were then adjusted for annual depreciation and for structural additions (net depreciation was calculated to be 1.375 percent). Grebler, Blank, and Winnick (1956) compared their time series of house prices to construction costs and found a close conformation of the two indexes.
The most careful evaluation of the Shiller data is by Davis and Heathcote (2007). They construct house price indexes and compare their results with Shiller for the periods 1930–1950, 1950–1960, and 1960–1970. Only in 1970–1980 is there a disagreement as the differences in the earlier periods were only 0.1 percent through 1960 and 0.5 percent from 1960 to 1970. From 1980 on the differences in indexes again were relatively small. They argue that “Shiller’s series underestimates true house price growth, especially during the 1970s.” However, the estimates of structural breaks in the Shiller price series occur in periods many years from the period of disputed data.
## Acknowledgment:
We thank the editor and anonymous referee for their valuable comments and suggestions.
Disclaimer: The views expressed in this paper are those of the authors and should not be attributed to Moody’s Analytics.
## References
• Abraham, J., and P. Hendershott. 1996. “Bubbles in Metropolitan Housing Markets.” Journal of Housing Research 7 (2): 191–207.
• Bai, J., and S. Ng. 2004. “A PANIC Attack on Unit Roots and Cointegration.” Econometrica 72 (4): 1127–1177.
• Canarella, G., S. M. Miller, and S. K. Pollard. 2012. “Unit Roots and Structural Change: An Application to US House-Price Indices.” Urban Studies 49 (3): 757–776.
• Capozza, D., and R. Helsley. 1989. “The Fundamentals of Land Prices and Urban Growth.” Journal of Urban Economics 26 (3): 295–306.
• Capozza, D., and R. Helsley. 1990. “The Stochastic City.” Journal of Urban Economics 28 (2): 187–203.
• Capozza, D., P. Hendershott, and C. Mack. 2004. “An Anatomy of Price Dynamics in Illiquid Markets: Analysis and Evidence from Local Housing Markets.” Real Estate Economics 32 (1): 1–32.
• Carrion-i-Silvestre, J. L., D. Kim, and P. Perron. 2009. “GLS-based Unit Root Tests with Multiple Structural Breaks both under the Null and the Alternative Hypotheses.” Econometric Theory 25 (06): 1754–1792.
• Chortareas, G., and G. Kapetanios. 2009. “Getting PPP Right: Identifying mean-Reverting Real Exchange Rates in Panels.” Journal of Banking and Finance 33: 390–404.
• Davis, Morris A., and Jonathan Heathcote. 2007. “The Price and Quantity of Residential Land in the United States.” Journal of Monetary Economics 54 (4): 2595–2620.
• Elliott, G., T. J. Rothenberg, and J. H. Stock. 1996. “Efficient Tests for an Autoregressive Unit Root.” Econometrica 64 (4): 813–836.
• Enders, W., and J. Lee. 2012. “A Unit Root Test Using a Fourier Series to Approximate Smooth Breaks.” Oxford Bulletin of Economics and Statistics 74 (4): 574–599.
• Gallin, J. 2006. “The Long-run Relationship between House Prices and Income: Evidence from Local Housing Markets.” Real Estate Economics 34 (3): 417–438.
• Glaeser, E., J. Gyourko, and A. Saiz. 2008. “Housing Supply and Housing Bubbles.” Journal of Urban Economics, 64 (2): 198–217.
• Grebler, Leo, David M. Blank, and Louis Winnick. 1956. Capital Formation in Residential Real Estate: Trends and Prospects. Princeton, NJ: Princeton University Press.
• Gyourko, J., and A. Saiz. 2006. “Construction Costs and the Supply of Housing Structure.” Journal of Regional Science 46 (4): 661–680.
• Historical Statistics of the United States, Colonial Times to 1970, Part 2. 1975. U.S. Bureau of the Census: Washington, DC.
• Holly, S., M. H. Pesaran, and T. Yamagata. 2010. “A Spatio-temporal Model of House Prices in the US.” Journal of Econometrics 158 (1): 160–173.
• Im, K., M. H. Pesaran, and Y. Shin. 2003. “Testing for Unit Roots in Heterogeneous Panels.” Journal of Econometrics 115 (1): 53–74.
• Kapetanios, G., Y. Shin, and A. Snell. 2003. “Testing for Cointegration in Nonlinear Smooth Transition Error Correction Models.” Econometric Theory 22: 279–303.
• Lee, J., and M. Strazicich. 2003. “Minimum Lagrange Multiplier Unit Root Test with Two Structural Breaks.” Review of Economics and Statistics 85 (4):1082–1089.
• Levin, A., C. Lin, and C. Chu. 2002. “Unit Root Tests in Panel Data: Asymptotic and Finite Sample Properties.” Journal of Econometrics 108 (1): 1–24.
• Lumsdaine, R. L., and D. H. Papell. 1997. “Multiple Trend Breaks and the Unit Root Hypothesis.” Review of Economics and Statistics 79 (2): 212–218.
• Maddala, G., and S. Wu. 1999. “A Comparative Study of Unit Root Tests with Panel Data and a New Simple Test.” Oxford Bulletin of Economics and Statistics 61 (S1): 631–652.
• Malpezzi, S. 1999. “A Simple Error Correction Model of House Prices.” Journal of Housing Economics 8 (1): 27–62.
• Mikhed, V., and P. Zemcik. 2009. “Do House Prices Reflect Fundamentals? Aggregate and Panel Data Evidence.” Journal of Housing Economics 18 (2): 140–149.
• Moon, H. R., and B. Perron. 2004. “Testing for a Unit Root in Panels with Dynamic Factors.” Journal of Econometrics 122 (1): 81–126.
• Ng, S., and P. Perron. 2001. “Lag Length Selection and the Construction of Unit Root Tests with Good Size and Power.” Econometrica 69 (6): 1519–1554.
• Pedroni, P. 1999. “Critical Values for Cointegration Tests in Heterogeneous Panels with Multiple Regressors.” Oxford Bulletin of Economics and Statistics 61 (S1): 653–670.
• Pedroni, P. 2004. “Panel Cointegration: Asymptotic and Finite Sample Properties of Pooled Time Series Tests with an Application to the PPP Hypothesis.” Econometric Theory 20 (3): 597–625.
• Perron, P., and Z. Qu. 2007. “A Simple Modification to Improve the Finite Sample Properties of Ng and Perron’s Unit Root Tests.” Economics Letters 94 (1): 12–19.
• Pesaran, M. H. 2004. “General Diagnostic Tests for Cross Section Dependence in Panels.” CESifo Working Papers, no. 1233.
• Pesaran, M. H. 2006. “Estimation and Inference in Large Heterogeneous Panels with a Multifactor Error Structure.” Econometrica 74 (4): 967–1012.
• Pesaran, M. H. 2007. “A Simple Panel Unit Root Test in the Presence of Cross Section Dependence.” Journal of Applied Econometrics 22 (2): 265–312.
• Pesaran, M. H. 2015. “Testing Weak Cross-sectional Dependence in Large Panels.” Econometric Reviews 34 (6–10): 1089–1117.
• Pesaran, M. H., L. V. Smith, and T. Yamagata. 2013. “Panel Unit Root Tests in the Presence of A Multifactor Error Structure.” Journal of Econometrics 175 (2): 94–115.
• Saiz, A. 2010. “The Geographic Determinants of Housing Supply.” The Quarterly Journal of Economics 125 (3): 1253–1296.
• Shiller, R. 2005. Irrational Exuberance, 2nd ed., Princeton University Press, Princeton, NJ, USA.
## Supplemental Material:
The online version of this article (DOI: 10.1515/snde-2013-0134) offers supplementary material, available to authorized users.
## Footnotes
• Following conventions in the literature, all series under consideration in this paper are in logarithms, and we take statements such as “unit root in real house prices” to mean “unit root in the log of real house prices.”
• Shiller’s home price index can be found at http://www.econ.yale.edu/~shiller/data.htm. See the Appendix for a more thorough description of the construction of the series and a discussion of its reliability. Detailed descriptions of each of the sources for constructing the index are also available in Shiller’s book Irrational Exuberance, 2nd ed.: 234–235.
• As a robustness check, we also apply unit root tests to the period 1950–1989 and 1950–1982, which, respectively, ends at the peak and the start of the housing boom prior to the 2000s bubble, and find evidence of trend stationarity for both periods: a unit root is rejected by the ADF test at the 5% level and the 1% level, respectively. Therefore, of the examined five periods since 1950, 1950–2006 is the only one for which we cannot reject a unit root.
• The FMHPI series can be found at http://www.freddiemac.com/finance/fmhpi/.
• The CPI-U series can be found at http://www.bls.gov/cpi/#data.
• Pesaran (2015) shows robustness of the CD test to the null of weak cross-sectional dependence.
• We also apply the CIPS test to monthly Freddie Mac house price data for the state panel, and set the maximum number of lags to 24 months. A unit root is rejected when the number of lags equals 11, 13, or is greater than 14. This result is consistent with the finding from the annual data.
• Pesaran, Smith, and Yamagata (2013) propose a CIPSM test which is an extension of the CIPS test. The CIPS test allows for only one common factor affecting the cross-sectional dependence, while the CIPSM test allows for multiple common factors. In the CIPSM test, an additional variable is included when there are two common factors, and the lagged cross-sectional mean and the lagged first differences of the cross-sectional mean of that additional variable are used to filter out the cross-sectional dependence created by the second common factor. Similarly, in the case of three common factors, two additional variables are included in the CIPSM test. We carry out the CIPSM test with the lag order set to one year, and consider three cases of the additional variables included: income, population, and income plus population. (By income we mean log real per capita income, and population means log population.) For the 363 MSAs group over 1975–2011, a unit root is rejected at the 5% level when income is the additional variable included, but it cannot be rejected if the additional variable(s) included is(are) population or income plus population. For the top 20 supply inelastic MSAs group and the top 20 supply elastic MSAs group, results are very similar to the one common factor CIPS test: the former group strongly rejects a unit root in all the cases of the additional variables included, and the latter group has little evidence of rejecting a unit root.
• The per capita personal income is annual data from Bureau of Economic Analysis, and we convert this nominal data to real terms by deflating it by CPI-U.
• Canarella, Miller, and Pollard (2012) examine stationarity for the house price indices included in the S&P/Case-Shiller Composite-10 index. They implement a number of unit root tests including linear tests, nonlinear tests, and tests with structural breaks. The series they examine are the log difference of city house prices, and the ratio of the log city house price index to the log Composite-10 house price index, but they do not examine any house price series itself. They find different results for different unit root test procedures applied and different series examined.
• The F-statistic is 3.28 for lag=1, 3.77 for lag=2, and 3.37 for lag=3.
Corresponding author: Jing Zhang, ECCA, Moody’s Analytics, 121 N Walnut St, West Chester, PA 19380, USA, e-mail:
Published Online: 2015-06-05
Published in Print: 2016-02-01
Citation Information: Studies in Nonlinear Dynamics & Econometrics, Volume 20, Issue 1, Pages 1–18, ISSN (Online) 1558-3708, ISSN (Print) 1081-1826,
Export Citation | 2018-04-27 04:39:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 14, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5456019639968872, "perplexity": 2096.4354198101237}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125949036.99/warc/CC-MAIN-20180427041028-20180427061028-00140.warc.gz"} |
https://2ground.com/products/attenuator-dg20 | Attenuator DG20
Gigahertz Solutions
\$58.00 USD
Extend the Maximum Power of your RF Meter - Increase the RF measurement range by 100x or 20 dB - Compatible with: Standard Log-Per antenna of the HF32D, HF35C and the HF38B
Product Summary
Increase the signal strength measuring range of your RF Meter with the DG20 Attenuator (100x High Frequency Signal Reducer)
If your RF Meter's measurement range is set to maximum and the display indicates ("1..."), this attenuator allows the HF-Analyzer (HF Analyser) to accurately display higher signal strength measurements and extends the range of your RF Meter.
• Extends the measuring range of the RF Meters
• Allows for 20dB or 100x higher field strength measurements in the frequency range of 10 MHz – 3 GHz
How it Works:
1) Attach the DG20 attenuator to a compatible RF Analyzer (RF Analyser)
2) Attach the LogPer directional antenna to the DG20
3) Power on the RF Analyzers (RF Analysers) and configure appropriately
4) Hold the RF Analyzer with an outstretched arm
5) The analyzer will display the intensity of RF radiation on its LCD display
6) Simply multiply the measured value by 100 to obtain the actual value
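A worked example with a hypothetical reading: if the analyzer shows 4.7 in its power flux density units (typically µW/m² on these meters) with the DG20 and LogPer antenna attached, the actual level is approximately 4.7 × 100 = 470 in the same units.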
Compatible with:
HF32D, HF35C and the HF38B
Know for sure what RF levels you are being exposed to…
Product Details
Supplied with:
DG20 Attenuator: Qty=1 | 2017-11-18 21:10:44 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8756029605865479, "perplexity": 8409.258259191276}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805049.34/warc/CC-MAIN-20171118210145-20171118230145-00021.warc.gz"} |
https://stats.stackexchange.com/questions/82579/which-to-believe-kolmogorov-smirnov-test-or-q-q-plot/82584 | # Which to believe: Kolmogorov-Smirnov test or Q-Q plot?
I'm trying to determine if my dataset of continuous data follows a gamma distribution with parameters shape $=$ 1.7 and rate $=$ 0.000063.
The problem is when I use R to create a Q-Q plot of my dataset $x$ against the theoretical distribution gamma (1.7, 0.000063), I get a plot that shows that the empirical data roughly agrees with the gamma distribution. The same thing happens with the ECDF plot.
However when I run a Kolmogorov-Smirnov test, it gives me an unreasonably small $p$-value of $<1\%$.
Which should I choose to believe? The graphical output or the result from KS-test?
• can you also provide the density distribution plots you obtain ? – Scratch Jan 17 '14 at 11:34
• The test and the diagnostic plot aren't inconsistent. The distribution is similar to the theoretical one, as the QQ plot shows. The sample size is large enough that you are likely to pick up even small differences from the theoretical one. – Glen_b Jan 17 '14 at 13:21
I don't see any sense in not "believing" the Q-Q plot (if you've produced it properly); it's just a graphical representation of the reality of your data, juxtaposed with the definitional distribution. Clearly it's not a perfect match, but if it's good enough for your purposes, that may be more or less the end of the story. You may want to check out this related question: Is normality testing 'essentially useless'?
The $p$-value from the KS test is basically telling you that your sample size is large enough to give strong evidence against the null hypothesis that your data belong to exactly the same distribution as your reference distribution (I assume you referenced the gamma distribution; you may want to double-check that you did). That seems clear enough from the Q-Q plot as well (i.e., there are some small but seemingly systematic patterns of deviation), so I don't think there's truly any conflicting information here.
Whether your data are too different from a gamma distribution for your intended purposes is another question. The KS test alone can't answer it for you (because its outcome will depend on your sample size, among other reasons), but the Q-Q plot might help you decide. You might also want to look into robust alternatives to any other analyses you plan to run, and if you're particularly serious about minding the sensitivity of any subsequent analyses to deviations from the gamma distribution, you might want to consider doing some simulation testing too.
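To see this numerically, here is a small simulation added for illustration (it is not part of the original answer; the reference parameters echo the question and the "true" shape of 1.8 is an arbitrary choice). A large sample from a gamma distribution only slightly different from the reference one produces an essentially zero KS p-value, even though the two distributions are quite similar.

```python
import numpy as np
from scipy import stats

shape_ref, rate_ref = 1.7, 0.000063      # reference distribution from the question
# data actually drawn from a gamma with a slightly different shape
x = stats.gamma.rvs(a=1.8, scale=1 / rate_ref, size=20000, random_state=0)

# KS test against the *fixed* reference distribution (parameters not re-estimated)
stat, p = stats.kstest(x, 'gamma', args=(shape_ref, 0, 1 / rate_ref))
print(stat, p)   # p is tiny, purely because n is large enough to detect the small gap
```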
What you could do is create multiple samples from your theoretical distribution and plot those on the background of your QQ-plot. That will give you an idea of what kind of variability you can reasonably expect from just sampling.
You can extend that idea to create an envelope around the theoretical line, using the example from pages 86-89 of :
Venables, W.N. and Ripley, B.D. 2002. Modern applied statistics with S. New York: Springer.
This will be a point-wise envelope. You can extend that idea even further to create an overall envelope using the ideas from pages 151-154 of:
Davison, A.C. and Hinkley, D.V. 1997. Bootstrap methods and their application. Cambridge: Cambridge University Press.
However, for basic exploration I think just plotting a couple of reference samples in the background of your QQ-plot will be more than enough.
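A minimal Python sketch of that first suggestion (our illustration: the gamma parameters come from the question, `mydata.txt` is a placeholder for however you load your observations, and 20 reference samples is an arbitrary choice):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

shape, rate = 1.7, 0.000063              # theoretical parameters from the question
x = np.loadtxt("mydata.txt")             # placeholder for the observed data
n = len(x)
probs = (np.arange(1, n + 1) - 0.5) / n
theo_q = stats.gamma.ppf(probs, a=shape, scale=1 / rate)

for _ in range(20):                      # reference samples show the sampling variability
    sim = np.sort(stats.gamma.rvs(a=shape, scale=1 / rate, size=n))
    plt.plot(theo_q, sim, color="grey", alpha=0.3, lw=0.5)

plt.plot(theo_q, np.sort(x), "b.", ms=3)  # observed Q-Q points
plt.plot(theo_q, theo_q, "r-")            # 45-degree reference line
plt.xlabel("theoretical quantiles")
plt.ylabel("sample quantiles")
plt.show()
```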
• Good idea! Remind me to upvote this in 11 hours (used up all my votes on cartoons)...I particularly like bootstrapping the ECDF as a way of enriching that kind of plot. – Nick Stauner Jan 17 '14 at 12:39
• Also have a look at CRAN package sfsmisc, which has the function ecdf.ksCI draweing a confidence band on the ecdf plot. The same idea could be used to draw a confidence band on the QQ plot ... – kjetil b halvorsen Apr 8 '15 at 12:41
The KS test assumes particular parameters of your distribution. It tests the hypothesis "the data are distributed according to this particular distribution". You might have specified these parameters somewhere. If not, some non-matching defaults may have been used. Note that the KS test will become conservative if the estimated parameters are plugged into the hypothesis.
However, most goodness-of-fit tests are used the wrong way round. If the KS test had not shown significance, this would not mean that the model you wanted to prove is appropriate. That's what @Nick Stauner said about too small a sample size. This issue is similar to the one with point hypothesis tests and equivalence tests.
So in the end: Only consider the QQ-plots.
A QQ plot is an exploratory data analysis technique and should be treated as such - so are all other EDA plots. They are only meant to give you preliminary insights into the data on hand. You should never decide or stop your analysis based on EDA plots like the QQ plot. It is wrong advice to consider only QQ plots. You should definitely go by quantitative techniques like the KS test. Suppose you have another QQ plot for a similar data set: how would you compare the two without a quantitative tool? The right next step for you, after the EDA and the KS test, is to find out why the KS test is giving a low p-value (in your case, it could even be due to some error).
EDA techniques are NOT meant to serve as decision making tools. In fact, I would say even inferential statistics are meant to be only exploratory. They give you pointers as to which direction your statistical analysis should proceed. For example, a t-test on a sample would only give you a confidence level that the sample may (or may not) belong to the population; you may still proceed further based on that insight as to what distribution your data belongs to, what its parameters are, etc. In fact, some state that even the techniques implemented as part of machine learning libraries are exploratory in nature!!! I hope they mean it in this sense...!
To base statistical decisions on plots or visualization techniques is to make a mockery of the advances made in statistical science. If you ask me, you should use these plots as tools for communicating the final conclusions based on your quantitative statistical analysis.
• This forbids me from doing something that I do often and regard as sensible, make a decision given an exploratory plot and stop before a more formal significance test. No mockery is entailed. This is a repetitive and dogmatic comment that doesn't add anything useful to existing excellent, and much more nuanced, answers. It's very easy to compare QQ plots... – Nick Cox Feb 11 at 9:05
• I have not read other answers but if they also encourage quantitative methods, I am fine. For the question asked, I had given my answer. But, I am curious, it does not take much time to do formal quant tests (just few more minutes to do KS test) with now available packages like R, so why would anybody stop at EDA plots? Just after validating KS test results of R with bootstrapping, I noticed in several places where it was mentioned as not preferable to use etc.,.. Is it due to a general suspicion about traditional stat methods? This is the rationale behind my strong comments..not to offend any – Murugesan Narayanaswamy Feb 11 at 9:58
• You really should read other answers before posting. The implication of posting is that you have something different (as well as defensible) to say. Your comment is puzzling in implying that Q-Q plots are not "quantitative methods". A Q-Q plot shows in principle all the quantitative information relevant in assessing distribution fit. In contrast a test like Kolmogorov-Smirnov gives a one-dimensional reduction and gives little help on what to do next. – Nick Cox Feb 11 at 10:23
• A Q-Q plot compares a theoretical distribution with the given test data and provides a visual representation, but the KS test does the same thing in a much more rigorous way using statistical concepts and finally gives a probability value. You cannot compare two QQ plots, but you will get a quantitative difference when you use the KS test. It is a misnomer that the KS test p-value is wrong. It is also wrong that an empirical data set cannot be used to extract distribution parameters. I have personally done bootstrapping and verified the p values with both tables and a manually calculated Kolmogorov distribution. – Murugesan Narayanaswamy Feb 11 at 12:52
• There is much shadow boxing in your comment. Who is arguing where that you can't use empirical data to get parameter estimates? That is what we should all agree is being done here. You'll have to forgive me for not wanting to pursue a discussion. I stand by my reaction to your answer. – Nick Cox Feb 11 at 13:02
Would you like to answer one of these unanswered questions instead? | 2019-06-25 10:32:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6502947211265564, "perplexity": 775.1033983023144}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999817.30/warc/CC-MAIN-20190625092324-20190625114324-00186.warc.gz"} |
http://sdk.numxl.com/help/ndk-diff | # NDK_DIFF
int __stdcall NDK_DIFF ( double * X, size_t N, size_t S, size_t D )
Returns an array of cells for the differenced time series (i.e. (1-L^S)^D).
Returns
status code of the operation
Return values
NDK_SUCCESS: Operation successful.
NDK_FAILED: Operation unsuccessful. See Macros for full list.
Parameters
[in,out] X: the univariate time series data (a one dimensional array).
[in] N: the number of observations in X.
[in] S: the lag order (e.g. k=0 (no lag), k=1 (1st lag), etc.).
[in] D: the number of repeated differencing operations (e.g. d=0 (none), d=1 (difference once), d=2 (difference twice), etc.).
Remarks
1. The time series are homogeneous or equally spaced.
2. The two time series have an identical number of observations and time order, or the second series contains a single value.
3. In the case where the two time series are identically sized, the second series is subtracted from the first point-by-point: $\left[z_t\right] = \left[x_t\right] - \left[y_t\right]$ Where:
• $$\left[z_t\right]$$ is the difference time series.
• $$\left[x_t\right]$$ is the first time series.
• $$\left[y_t\right]$$ is the second time series.
4. In the case where the second time series is passed as a single value ($$\alpha$$), this constant is subtracted from all points in the first time series: $\left[z_t\right] =\left[x_t\right] - \left[\alpha\right]$ Where:
• $$\left[z_t\right]$$ is the difference time series.
• $$\left[x_t\right]$$ is the first time series.
• $$\alpha$$ is a constant value.
5. The returned array has the same size and time order as the first input time series.
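For illustration only, here is a plain numpy sketch of the transformation the summary line describes, (1-L^S)^D, i.e. lag-S differencing applied D times. It is not the NumXL implementation (which operates on X in place and returns a status code), and the handling of S=0 simply mirrors the "no lag" wording in the parameter description.

```python
import numpy as np

def diff(x, s=1, d=1):
    """Apply (1 - L^s)^d to a series: lag-s differencing repeated d times."""
    z = np.asarray(x, dtype=float)
    for _ in range(d):
        if s <= 0:
            continue                      # s = 0 is documented as "no lag": leave the series unchanged
        out = np.full_like(z, np.nan)     # keep the same length; leading values are undefined
        out[s:] = z[s:] - z[:-s]          # z_t - z_{t-s}
        z = out
    return z                              # same size and time order as the input
```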
Requirements
Examples
Namespace: NumXLAPI, Class: SFSDK, Scope: Public, Lifetime: Static
int NDK_DIFF ( double[] data, UIntPtr nSize, UIntPtr nLag, UIntPtr nDifference )
Returns an array of cells for the differenced time series (i.e. (1-L^S)^D).
Returns
status code of the operation
Return values
NDK_SUCCESS: Operation successful.
NDK_FAILED: Operation unsuccessful. See Macros for full list.
Parameters
[in,out] data: the univariate time series data (a one dimensional array).
[in] nSize: the number of observations in data.
[in] nLag: the lag order (e.g. k=0 (no lag), k=1 (1st lag), etc.).
[in] nDifference: the number of repeated differencing operations (e.g. d=0 (none), d=1 (difference once), d=2 (difference twice), etc.).
Remarks
1. The time series are homogeneous or equally spaced.
2. The two time series have an identical number of observations and time order, or the second series contains a single value.
3. In the case where the two time series are identically sized, the second series is subtracted from the first point-by-point: $\left[z_t\right] = \left[x_t\right] - \left[y_t\right]$ Where:
• $$\left[z_t\right]$$ is the difference time series.
• $$\left[x_t\right]$$ is the first time series.
• $$\left[y_t\right]$$ is the second time series.
4. In the case where the second time series is passed as a single value ($$\alpha$$), this constant is subtracted from all points in the first time series: $\left[z_t\right] =\left[x_t\right] - \left[\alpha\right]$ Where:
• $$\left[z_t\right]$$ is the difference time series.
• $$\left[x_t\right]$$ is the first time series.
• $$\alpha$$ is a constant value.
5. The returned array has the same size and time order as the first input time series.
Exceptions
Exception Type Condition
None N/A
Requirements
Namespace: NumXLAPI, Class: SFSDK, Scope: Public, Lifetime: Static, Library: NumXLAPI.DLL
Examples
References
* Hamilton, J .D.; Time Series Analysis , Princeton University Press (1994), ISBN 0-691-04289-6
* Tsay, Ruey S.; Analysis of Financial Time Series John Wiley & SONS. (2005), ISBN 0-471-690740
* D. S.G. Pollock; Handbook of Time Series Analysis, Signal Processing, and Dynamics; Academic Press; Har/Cdr edition(Nov 17, 1999), ISBN: 125609906
* Box, Jenkins and Reisel; Time Series Analysis: Forecasting and Control; John Wiley & SONS.; 4th edition(Jun 30, 2008), ISBN: 470272848 | 2017-10-24 05:31:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6263771653175354, "perplexity": 3748.4293237878846}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187828178.96/warc/CC-MAIN-20171024052836-20171024072836-00088.warc.gz"} |
https://www.mysciencework.com/publication/show/weak-chaos-melting-transition-confined-microplasma-system-d3d4d7bd | # Weak Chaos and the "Melting Transition" in a Confined Microplasma System
Authors
Type
Published Article
Publication Date
Submission Date
Identifiers
DOI: 10.1103/PhysRevE.81.016211
Source
arXiv
We present results demonstrating the occurrence of changes in the collective dynamics of a Hamiltonian system which describes a confined microplasma characterized by long--range Coulomb interactions. In its lower energy regime, we first detect macroscopically, the transition from a "crystalline--like" to a "liquid--like" behavior, which we call the "melting transition". We then proceed to study this transition using a microscopic chaos indicator called the \emph{Smaller Alignment Index} (SALI), which utilizes two deviation vectors in the tangent dynamics of the flow and is nearly constant for ordered (quasi--periodic) orbits, while it decays exponentially to zero for chaotic orbits as $\exp(-(\lambda_{1}-\lambda_{2})t)$, where $\lambda_{1}>\lambda_{2}>0$ are the two largest Lyapunov exponents. During the "melting phase", SALI exhibits a peculiar, stair--like decay to zero, reminiscent of "sticky" orbits of Hamiltonian systems near the boundaries of resonance islands. This alerts us to the importance of the $\Delta\lambda=\lambda_{1}-\lambda_{2}$ variations in that regime and helps us identify the energy range over which "melting" occurs as a multi--stage diffusion process through weakly chaotic layers in the phase space of the microplasma. Additional evidence supporting further the above findings is given by examining the $GALI_{k}$ indices, which generalize SALI (=$GALI_{2}$) to the case of $k>2$ deviation vectors and depend on the complete spectrum of Lyapunov exponents of the tangent flow about the reference orbit. | 2018-02-18 18:38:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8018145561218262, "perplexity": 1491.9885514959499}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812247.50/warc/CC-MAIN-20180218173208-20180218193208-00166.warc.gz"} |
https://blog.coupon-addict.fr/rphwt/keto-spinach-chicken-casserole-fd30fb | • If λ = eigenvalue, then x = eigenvector (an eigenvector is always associated with an eigenvalue) Eg: If L(x) = 5x, 5 is the eigenvalue and x is the eigenvector. :5/ . Question: If λ Is An Eigenvalue Of A Then λ − 7 Is An Eigenvalue Of The Matrix A − 7I; (I Is The Identity Matrix.) 2. In case, if the eigenvalue is negative, the direction of the transformation is negative. This eigenvalue is called an infinite eigenvalue. Observation: det (A – λI) = 0 expands into a kth degree polynomial equation in the unknown λ called the characteristic equation. If x is an eigenvector of the linear transformation A with eigenvalue λ, then any vector y = αx is also an eigenvector of A with the same eigenvalue. In fact, together with the zero vector 0, the set of all eigenvectors corresponding to a given eigenvalue λ will form a subspace. B = λ I-A: i.e. Therefore, λ 2 is an eigenvalue of A 2, and x is the corresponding eigenvector. Here is the most important definition in this text. An application A = 10.5 0.51 Given , what happens to as ? If λ = 1, the vector remains unchanged (unaffected by the transformation). Eigenvalues so obtained are usually denoted by λ 1 \lambda_{1} λ 1 , λ 2 \lambda_{2} λ 2 , …. The eigenvalue λ is simply the amount of "stretch" or "shrink" to which a vector is subjected when transformed by A. The first column of A is the combination x1 C . then λ is called an eigenvalue of A and x is called an eigenvector corresponding to the eigen-value λ. detQ(A,λ)has degree less than or equal to mnand degQ(A,λ) | 2022-05-27 06:57:10 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9429680109024048, "perplexity": 519.2167065463433}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662636717.74/warc/CC-MAIN-20220527050925-20220527080925-00575.warc.gz"} |
http://gmatclub.com/forum/strategy-for-exponents-and-roots-138017.html#p1116665 | Find all School-related info fast with the new School-Specific MBA Forum
# Strategy for Exponents and Roots
Intern
Joined: 15 Aug 2011
Posts: 4
Followers: 0
Kudos [?]: 1 [0], given: 0
Strategy for Exponents and Roots [#permalink]
### Show Tags
28 Aug 2012, 14:02
I am preparing for my third run on the GMAT (590 and 620) and I'm definitely working toward a 700+ score.
What other strategy book besides the MGMAT (Number Properties) would any of you recommend to strengthen the section on exponents and roots?
Patrick
Veritas Prep GMAT Instructor
Joined: 16 Oct 2010
Posts: 6830
Location: Pune, India
Followers: 1924
Kudos [?]: 11943 [3] , given: 221
Re: Strategy for Exponents and Roots [#permalink]
### Show Tags
28 Aug 2012, 21:41
Patheinemann wrote:
I am preparing for my third run on the GMAT (590 and 620) and I'm definitely working toward a 700+ score.
What other strategy book besides the MGMAT (Number Properties) would any of you recommend to strengthen the section on exponents and roots?
Patrick
I have discussed exponents and roots in some of my posts. The first two are pretty basic and then we move on to more difficult stuff.
http://www.veritasprep.com/blog/2011/07 ... eparation/
http://www.veritasprep.com/blog/2011/07 ... ration-ii/
http://www.veritasprep.com/blog/2011/07 ... s-applied/
http://www.veritasprep.com/blog/2011/08 ... -the-gmat/
http://www.veritasprep.com/blog/2011/08 ... exponents/
http://www.veritasprep.com/blog/2011/08 ... -question/
http://www.veritasprep.com/blog/2012/02 ... exponents/
You can also check out your high school Math book for exponents and roots theory.
_________________
Karishma
Veritas Prep | GMAT Instructor
My Blog
Get started with Veritas Prep GMAT On Demand for $199
Veritas Prep Reviews
Intern
Joined: 15 Aug 2011
Posts: 4
Followers: 0
Kudos [?]: 1 [0], given: 0
Re: Strategy for Exponents and Roots [#permalink]
### Show Tags
29 Aug 2012, 08:35
Thank you Karishma.
Math Expert
Joined: 02 Sep 2009
Posts: 34449
Followers: 6271
Kudos [?]: 79581 [0], given: 10022
Re: Strategy for Exponents and Roots [#permalink]
### Show Tags
29 Aug 2012, 08:45
Patheinemann wrote:
I am preparing for my third run on the GMAT (590 and 620) and I'm definitely working toward a 700+ score.
What other strategy book besides the MGMAT (Number Properties) would any of you recommend to strengthen the section on exponents and roots?
Thanks in advance
Patrick
For theory on exponents and roots check: math-number-theory-88376.html
For practice on exponents and roots check:
tough-and-tricky-exponents-and-roots-questions-125967.html (DS problems)
tough-and-tricky-exponents-and-roots-questions-125956.html (PS problems)
Hope it helps.
_________________
GMAT Club Legend
Joined: 09 Sep 2013
Posts: 11081
Followers: 511
Kudos [?]: 134 [0], given: 0
Re: Strategy for Exponents and Roots [#permalink]
### Show Tags
09 Oct 2014, 09:15
Hello from the GMAT Club BumpBot!
Thanks to another GMAT Club member, I have just discovered this valuable topic, yet it had no discussion for over a year. I am now bumping it up - doing my job. I think you may find it valuable (esp those replies with Kudos).
Want to see all other topics I dig out? Follow me (click follow button on profile). You will receive a summary of all topics I bump in your profile area as well as via email.
_________________
GMAT Club Legend
Joined: 09 Sep 2013
Posts: 11081
Followers: 511
Kudos [?]: 134 [0], given: 0
Re: Strategy for Exponents and Roots [#permalink]
### Show Tags
28 Apr 2016, 10:26
Hello from the GMAT Club BumpBot!
Thanks to another GMAT Club member, I have just discovered this valuable topic, yet it had no discussion for over a year. I am now bumping it up - doing my job. I think you may find it valuable (esp those replies with Kudos).
Want to see all other topics I dig out? Follow me (click follow button on profile). You will receive a summary of all topics I bump in your profile area as well as via email.
_________________
Manager
Joined: 08 Dec 2015
Posts: 204
GMAT 1: 600 Q44 V27
Followers: 1
Kudos [?]: 5 [0], given: 33
Strategy for Exponents and Roots [#permalink]
### Show Tags
28 Apr 2016, 11:16
VeritasPrepKarishma Great post, thank you a lot!
Do you have any new posts on exponentials/roots? Since this post is 4 years old maybe something new was published
Veritas Prep GMAT Instructor
Joined: 16 Oct 2010
Posts: 6830
Location: Pune, India
Followers: 1924
Kudos [?]: 11943 [1] , given: 221
Re: Strategy for Exponents and Roots [#permalink]
### Show Tags
28 Apr 2016, 20:02
1
This post received
KUDOS
Expert's post
iliavko wrote:
VeritasPrepKarishma Great post, thank you a lot!
Do you have any new posts on exponentials/roots? Since this post is 4 years old maybe something new was published
Here is a relatively new one: http://www.veritasprep.com/blog/2016/01 ... -property/
_________________
Karishma
Veritas Prep | GMAT Instructor
My Blog
Get started with Veritas Prep GMAT On Demand for $199
Veritas Prep Reviews
Manager
Joined: 08 Dec 2015
Posts: 204
GMAT 1: 600 Q44 V27
Followers: 1
Kudos [?]: 5 [0], given: 33
Re: Strategy for Exponents and Roots [#permalink]
### Show Tags
29 Apr 2016, 03:02
Thank you very much
Powered by phpBB © phpBB Group and phpBB SEO Kindly note that the GMAT® test is a registered trademark of the Graduate Management Admission Council®, and this site has neither been reviewed nor endorsed by GMAC®. | 2016-08-26 23:30:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2602609097957611, "perplexity": 13163.156432194226}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982296721.55/warc/CC-MAIN-20160823195816-00173-ip-10-153-172-175.ec2.internal.warc.gz"} |
http://semparis.lpthe.jussieu.fr/list?type=seminars&key=12707 | Status: Confirmed
Series: IPN-X
Domains: hep-ph
Date: Thursday 8 November 2018
Time: 11:00
Institute: CPHT
Room: Salle de Conferences Louis Michel, bat. 6, CPHT, Ecole polytechnique
Speaker's last name: Pham
Speaker's first name: Tri Nang
Speaker's email address:
Speaker's institution: CPHT, Ecole polytechnique
Title: Heavy to Light Meson Semileptonic Decays Form Factors
Abstract: Like the two-photon and two-gluon decays of the $P$-wave $\chi_{c0,2}$ and $\chi_{b0,2}$ states, for which the Born term produces a very simple decay amplitude in terms of an effective Lagrangian with a two-quark local operator, the Born term for the processes $c\bar{d}\to(\pi,K)\ell\nu$ and $b \bar{d}\to(\pi, K)\ell\nu$ could also produce the $D$ and $B$ meson semileptonic decays with the light meson $\pi, K$ in the final state. In this approach to heavy-light meson form factors, with the $\pi, K$ meson treated as a Goldstone boson, a simple expression is found for the decay form factors, given as: $f_{+}(0)/(1-q^{2}/(m_{H}^{2}+m_{\pi}^{2}))$, with $H=D,B$ for the $D,B\to\pi$ form factors, and $f_{+}(0)/(1-q^{2}/(m_{H}^{2}+ m_{K}^{2}))$ for the $B, D\to K$ form factor. In this talk, I would like to present a recent work showing that this expression for the form factors could describe rather well the $q^{2}$-behaviour observed in the BaBar, Belle and BESIII measurements and in lattice simulations. In particular, the $D\to K$ form factors are in good agreement with the measured values in the whole range of $q^{2}$, showing evidence for $SU(3)$ breaking through the presence of the $m_{K}^{2}$ term in the quark propagator, but some corrections to the Born term are needed at large $q^{2}$ for the $D, B\to\pi$ form factors.
arXiv preprint number:
Comments:
Attached files: 2018-11-08_slides.pdf (99443 bytes)
[ Annonces ] [ Abonnements ] [ Archive ] [ Aide ] [ JavaScript requis ] [ English version ] | 2019-03-21 18:41:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6758386492729187, "perplexity": 5397.610636578419}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202530.49/warc/CC-MAIN-20190321172751-20190321194751-00482.warc.gz"} |
https://proofwiki.org/wiki/Equivalence_of_Definitions_of_Integral_Dependence | # Equivalence of Definitions of Integral Dependence
## Theorem
Let $A$ be an extension of a commutative ring with unity $R$.
For $x \in A$, the following are equivalent:
$(1):\quad$ $x$ is integral over $R$
$(2):\quad$ The $R$-module $R \left[{x}\right]$ is finitely generated
$(3):\quad$ There exists a subring $B$ of $A$ such that $x \in B$, $R \subseteq B$ and $B$ is a finitely generated $R$-module
$(4):\quad$ There exists a subring $B$ of $A$ such that $x B \subseteq B$ and $B$ is finitely generated over $R$
$(5):\quad$ There exists a faithful $R \left[{x}\right]$-module $B$ that is finitely generated as an $R$-module
## Proof
### $(1) \implies (2)$
By hypothesis, there exist $r_0, \ldots, r_{n-1} \in R$ such that:
$x^n + r_{n-1} x^{n-1} + \cdots + r_1 x + r_0 = 0$
So the powers $x^k$, $k \ge n$ can be written as an $R$-linear combination of:
$\left\{ {1, \ldots, x^{n-1} }\right\}$
Therefore this set generates $R \left[{x}\right]$.
$\Box$
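As a concrete illustration of this step: $x = \sqrt 2 \in \mathbb R$ is integral over $\mathbb Z$, since it satisfies the monic equation $x^2 - 2 = 0$; correspondingly $\mathbb Z \left[{\sqrt 2}\right]$ is generated as a $\mathbb Z$-module by the finite set $\left\{ {1, \sqrt 2}\right\}$.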
### $(2) \implies (3)$
$B = R \left[{x}\right]$ trivially satisfies the required conditions.
$\Box$
### $(3) \implies (4)$
By $(3)$ we have an $R$-module $B$ such that $R \subseteq B$, $B$ is finitely generated over $R$.
Also, $x \in B$, so $x B \subseteq B$ as required.
$\Box$
### $(4) \implies (5)$
By $(4)$ there is a subring $B$ of $A$ with $x B \subseteq B$ which is finitely generated as an $R$-module; since $x B \subseteq B$, the ring $B$ is an $R \left[{x}\right]$-module.
Let $y$ lie in the annihilator $\operatorname{Ann}_{R \left[{x}\right]} \left({B}\right)$.
Because $B$ is a subring of $A$, we have $1 \in B$.
Then in particular $y \cdot 1 = 0$, and so $y = 0$.
Therefore $B$ is faithful over $R \left[{x}\right]$.
$\Box$
### $(5) \implies (1)$
Let $B$ be as in $(5)$, say generated by $m_1, \ldots, m_n \in B$.
Then there are $r_{i j} \in R$, $i, j = 1,\ldots, n$ such that:
$\displaystyle x \cdot m_i = \sum_{j \mathop = 1}^n r_{i j} m_j$
Let $b_{i j} = x \delta_{i j} - r_{i j}$ where $\delta_{i j}$ is the Kronecker delta.
Then:
$\displaystyle \sum_{j \mathop = 1}^n b_{i j} m_j = 0, \quad i = 1, \ldots, n$
So, let $M = \left({b_{i j} }\right)_{1 \le i, j \le n}$.
Then, multiplying by the adjugate matrix of $M$ and using $\operatorname{adj} \left({M}\right) M = \left({\det M}\right) I_n$ (Cramer's Rule):
$\left({\det M}\right) m_i = 0$, $i = 1, \ldots, n$
Since $\det M \in R \left[{x}\right]$, also $\det M \in \operatorname{Ann}_{R \left[{x}\right]} \left({B}\right)$.
So $\det M = 0$, since $B$ is faithful over $R \left[{x}\right]$.
But $\det M$, expanded as a polynomial in $x$, is monic of degree $n$ with coefficients in $R$, so $\det M = 0$ is an equation of integral dependence for $x$.
Thus $x$ is integral over $R$.
$\blacksquare$ | 2019-07-23 13:55:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9829491376876831, "perplexity": 97.37719644061194}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195529406.97/warc/CC-MAIN-20190723130306-20190723152306-00076.warc.gz"} |
https://en.wikipedia.org/wiki/De_Laval_nozzle | # de Laval nozzle
Diagram of a de Laval nozzle, showing approximate flow velocity (v), together with the effect on temperature (T) and pressure (p)
A de Laval nozzle (or convergent-divergent nozzle, CD nozzle or con-di nozzle) is a tube that is pinched in the middle, making a carefully balanced, asymmetric hourglass shape. It is used to accelerate a hot, pressurized gas passing through it to a higher supersonic speed in the axial (thrust) direction, by converting the heat energy of the flow into kinetic energy. Because of this, the nozzle is widely used in some types of steam turbines and rocket engine nozzles. It also sees use in supersonic jet engines.
Similar flow properties have been applied to jet streams within astrophysics.[1]
## History
Longitudinal section of RD-107 rocket engine (Tsiolkovsky State Museum of the History of Cosmonautics)
The nozzle was developed (independently) by German engineer and inventor Ernst Körting in 1878 and Swedish inventor Gustaf de Laval in 1888 for use on a steam turbine.[2][3][4][5]
This principle was first used in a rocket engine by Robert Goddard. Most modern rocket engines that employ hot gas combustion use de Laval nozzles.
## Operation
Its operation relies on the different properties of gases flowing at subsonic, sonic, and supersonic speeds. The speed of a subsonic flow of gas will increase if the pipe carrying it narrows because the mass flow rate is constant. The gas flow through a de Laval nozzle is isentropic (gas entropy is nearly constant). In a subsonic flow sound will propagate through the gas. At the "throat", where the cross-sectional area is at its minimum, the gas velocity locally becomes sonic (Mach number = 1.0), a condition called choked flow. As the nozzle cross-sectional area increases, the gas begins to expand and the gas flow increases to supersonic velocities where a sound wave will not propagate backward through the gas as viewed in the frame of reference of the nozzle (Mach number > 1.0).
As the gas exits the throat, the increasing cross-sectional area allows it to continue expanding from high to low pressure; this essentially isentropic expansion pushes the velocity of the mass flow beyond sonic speed.
Although the nozzle of a rocket engine and that of a jet engine look different at first glance, the same essential features appear in the corresponding geometric cross-sections: the combustion chamber of a jet engine must likewise narrow to a "throat" in the direction of the gas outlet, the turbine wheel of the first stage is positioned immediately behind that narrowing, and any further turbine stages sit at the larger outlet cross-section of the nozzle, where the flow accelerates.
## Conditions for operation
A de Laval nozzle will only choke at the throat if the pressure and mass flow through the nozzle are sufficient to reach sonic speeds; otherwise no supersonic flow is achieved, and it will act as a Venturi tube. This requires the entry pressure to the nozzle to be significantly above ambient at all times (equivalently, the stagnation pressure of the jet must be above ambient).
In addition, the pressure of the gas at the exit of the expansion portion of the exhaust of a nozzle must not be too low. Because pressure cannot travel upstream through the supersonic flow, the exit pressure can be significantly below the ambient pressure into which it exhausts, but if it is too far below ambient, then the flow will cease to be supersonic, or the flow will separate within the expansion portion of the nozzle, forming an unstable jet that may "flop" around within the nozzle, producing a lateral thrust and possibly damaging it.
In practice, ambient pressure must be no higher than roughly 2–3 times the pressure in the supersonic gas at the exit for supersonic flow to leave the nozzle.
## Analysis of gas flow in de Laval nozzles
The analysis of gas flow through de Laval nozzles involves a number of concepts and assumptions:
• For simplicity, the gas is assumed to be an ideal gas.
• The gas flow is isentropic (i.e., at constant entropy). As a result, the flow is reversible (frictionless and no dissipative losses), and adiabatic (i.e., there is no heat gained or lost).
• The gas flow is constant (i.e., steady) during the period of the propellant burn.
• The gas flow is along a straight line from gas inlet to exhaust gas exit (i.e., along the nozzle's axis of symmetry)
• The gas flow behaviour is compressible since the flow is at very high velocities (Mach number > 0.3).
## Exhaust gas velocity
As the gas enters a nozzle, it is moving at subsonic velocities. As the cross-sectional area contracts the gas is forced to accelerate until the axial velocity becomes sonic at the nozzle throat, where the cross-sectional area is the smallest. From the throat the cross-sectional area then increases, allowing the gas to expand and the axial velocity to become progressively more supersonic.
The linear velocity of the exiting exhaust gases can be calculated using the following equation:[6][7][8]
${\displaystyle v_{e}={\sqrt {{\frac {TR}{M}}\cdot {\frac {2\gamma }{\gamma -1}}\cdot \left[1-\left({\frac {p_{e}}{p}}\right)^{\frac {\gamma -1}{\gamma }}\right]}},}$
where:
• ve = exhaust velocity at nozzle exit,
• T = absolute temperature of inlet gas,
• R = universal gas law constant,
• M = the gas molecular mass (also known as the molecular weight),
• γ = cp/cv = isentropic expansion factor (cp and cv are specific heats of the gas at constant pressure and constant volume respectively),
• pe = absolute pressure of exhaust gas at nozzle exit,
• p = absolute pressure of inlet gas.
Some typical values of the exhaust gas velocity ve for rocket engines burning various propellants are:
As a note of interest, ve is sometimes referred to as the ideal exhaust gas velocity because it is based on the assumption that the exhaust gas behaves as an ideal gas.
As an example calculation using the above equation, assume that the propellant combustion gases are: at an absolute pressure entering the nozzle p = 7.0 MPa and exit the rocket exhaust at an absolute pressure pe = 0.1 MPa; at an absolute temperature of T = 3500 K; with an isentropic expansion factor γ = 1.22 and a molar mass M = 22 kg/kmol. Using those values in the above equation yields an exhaust velocity ve = 2802 m/s, or 2.80 km/s, which is consistent with above typical values.
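This arithmetic can be checked with a short Python sketch (a standalone illustration using the stated values; the variable names are mine):

import math

T = 3500.0      # inlet temperature, K
R = 8314.5      # universal gas law constant, J/(kmol*K)
M = 22.0        # molar mass, kg/kmol
gamma = 1.22    # isentropic expansion factor
p = 7.0e6       # inlet absolute pressure, Pa
p_e = 0.1e6     # exit absolute pressure, Pa

# Ideal exhaust velocity from the equation above
v_e = math.sqrt((T * R / M) * (2 * gamma / (gamma - 1))
                * (1 - (p_e / p) ** ((gamma - 1) / gamma)))
print(round(v_e), "m/s")   # prints 2802 m/s, matching the quoted value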
The technical literature often interchanges without note the universal gas law constant R, which applies to any ideal gas, with the gas law constant Rs, which only applies to a specific individual gas of molar mass M. The relationship between the two constants is Rs = R/M.
## Mass flow rate
In accordance with conservation of mass the mass flow rate of the gas throughout the nozzle is the same regardless of the cross-sectional area.[9]
${\displaystyle {\dot {m}}={\frac {Ap_{t}}{\sqrt {T_{t}}}}\cdot {\sqrt {\frac {\gamma }{R}}}M\cdot (1+{\frac {\gamma -1}{2}}M^{2})^{-{\frac {\gamma +1}{2(\gamma -1)}}}}$
where:
• ṁ = mass flow rate,
• A = cross-sectional area of the throat,
• pt = total pressure,
• Tt = total temperature,
• γ = cp/cv = isentropic expansion factor,
• R = gas constant,
• M = Mach number.
When the throat is at sonic speed (M = 1), the equation simplifies to:
${\displaystyle {\dot {m}}={\frac {Ap_{t}}{\sqrt {T_{t}}}}\cdot {\sqrt {\frac {\gamma }{R}}}\cdot ({\frac {\gamma +1}{2}})^{-{\frac {\gamma +1}{2(\gamma -1)}}}}$
By Newton's third law of motion, the mass flow rate can be used to determine the force exerted by the expelled gas by:
${\displaystyle F={\dot {m}}\cdot v_{e}}$
where:
• F = force exerted,
• ṁ = mass flow rate,
• ve = exit velocity at nozzle exit.
In aerodynamics, the force exerted by the nozzle is defined as the thrust.
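Combining the choked-flow expression with this thrust relation gives a rough Python sketch (the throat area below is an assumed illustrative value, and R_s is the specific gas constant of the example gas, not the universal constant):

import math

gamma = 1.22               # isentropic expansion factor (example gas)
R_s = 8314.5 / 22.0        # specific gas constant, J/(kg*K)
p_t = 7.0e6                # total (chamber) pressure, Pa
T_t = 3500.0               # total (chamber) temperature, K
A = 0.05                   # throat area, m^2 -- assumed for illustration only

# Choked mass flow rate (Mach number M = 1 at the throat)
m_dot = ((A * p_t / math.sqrt(T_t)) * math.sqrt(gamma / R_s)
         * ((gamma + 1) / 2) ** (-(gamma + 1) / (2 * (gamma - 1))))

v_e = 2802.0               # exhaust velocity from the earlier worked example, m/s
F = m_dot * v_e            # thrust, N
print(f"{m_dot:.0f} kg/s, {F/1e3:.0f} kN")   # ~199 kg/s, ~556 kN for these inputs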
## References
1. ^ C.J. Clarke and B. Carswell (2007). Principles of Astrophysical Fluid Dynamics (1st ed.). Cambridge University Press. pp. 226. ISBN 978-0-521-85331-6.
2. ^ See:
• Belgian patent no. 83,196 (issued: 1888 September 29)
• English patent no. 7143 (issued: 1889 April 29)
• de Laval, Carl Gustaf Patrik, "Steam turbine," U.S. Patent no. 522,066 (filed: 1889 May 1 ; issued: 1894 June 26)
3. ^ Theodore Stevens and Henry M. Hobart (1906). Steam Turbine Engineering. MacMillan Company. pp. 24–27. Available on-line here in Google Books.
4. ^ Robert M. Neilson (1903). The Steam Turbine. Longmans, Green, and Company. pp. 102–103. Available on-line here in Google Books.
5. ^ Garrett Scaife (2000). From Galaxies to Turbines: Science, Technology, and the Parsons Family. Taylor & Francis Group. p. 197. Available on-line here in Google Books.
6. ^ Richard Nakka's Equation 12.
7. ^ Robert Braeuning's Equation 1.22.
8. ^ George P. Sutton (1992). Rocket Propulsion Elements: An Introduction to the Engineering of Rockets (6th ed.). Wiley-Interscience. p. 636. ISBN 0-471-52938-9.
9. ^ Hall, Nancy. "Mass Flow Chocking". NASA. Retrieved 29 May 2020. | 2020-07-15 18:52:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 25, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6614261269569397, "perplexity": 1476.4576905636702}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657170639.97/warc/CC-MAIN-20200715164155-20200715194155-00307.warc.gz"} |
http://mathhelpforum.com/advanced-statistics/139783-covarience.html | # Thread: covarience
1. ## covarience
hello everybody
what is happening when the covariance of two random variables is zero but they are not independent? I mean, what is the relation between these two random variables?
2. look at the definition of covariance...
if the variables are independent you know the joint pdf can be written $f_{X,Y}(x,y) = f_{X}(x) f_{Y}(y)$
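A standard illustration of zero covariance without independence (a sketch with numbers of my own choosing): let $X$ be uniform on $\{-1,0,1\}$ and $Y=X^2$. Then $E[XY]=E[X^3]=0$ and $E[X]E[Y]=0$, so the covariance vanishes even though $Y$ is completely determined by $X$. A quick numerical check in Python:

import numpy as np

rng = np.random.default_rng(0)
x = rng.choice([-1, 0, 1], size=100_000)   # X uniform on {-1, 0, 1}
y = x ** 2                                 # Y is a function of X, so not independent of X
print(np.cov(x, y)[0, 1])                  # sample covariance is close to 0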
so now assuming X & Y are not independent, what properties of the joinit pdf would lead to zero covariance? | 2017-12-15 22:02:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8224274516105652, "perplexity": 769.4541709759092}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948579567.73/warc/CC-MAIN-20171215211734-20171215233734-00356.warc.gz"} |
https://www.studyadda.com/question-bank/11th-cbse-chemistry-redox-reactions_q16/108/15581 | • question_answer What is the oxidation number of hydrogen in $LiAl{{H}_{4}}$? Name a compound in which hydrogen has the same oxidation state.
$\overset{+1}{\mathop{L}}\,i\overset{+3}{\mathop{A}}\,l\overset{x}{\mathop{{{H}_{4}}}}\,\,:\,1+3+4x=0$ or $x=-1$ In sodium hydride (NaH); O.N. of $H=-1$. | 2020-09-26 00:10:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5222481489181519, "perplexity": 7129.758881780074}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400228998.45/warc/CC-MAIN-20200925213517-20200926003517-00257.warc.gz"} |
https://www.clutchprep.com/chemistry/practice-problems/100636/in-2014-a-major-chemical-leak-at-a-facility-in-west-virginia-released-7500-gallo | # Problem: In 2014, a major chemical leak at a facility in West Virginia released 7500 gallons of MCHM (4-methylcyclohexylmethanol, C8H16O) into the Elk River. The density of MCHM is 0.9074 g/mL.Calculate the initial molarity of MCHM in the river, assuming that the first part of the river is 7.10 feet deep, 100.0 yards wide, and 100.0 yards long. 1 gallon = 3.785 L.
###### FREE Expert Solution
Calculate moles MCHM:
molar mass MCHM = 128.24 g/mol
moles MCHM = 200864.1415 mol
Calculate the volume of the first part of the river:
$V = l \times w \times h$
convert units to cm:
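The remaining steps can be sketched in a few lines of Python (the printed value is simply what these inputs give; treat it as an illustration rather than the site's official answer):

# Volume of the river section: 7.10 ft deep x 100.0 yd wide x 100.0 yd long
depth_cm  = 7.10 * 30.48           # 1 ft = 30.48 cm
width_cm  = 100.0 * 91.44          # 1 yd = 91.44 cm
length_cm = 100.0 * 91.44
volume_L = depth_cm * width_cm * length_cm / 1000.0   # 1 L = 1000 cm^3

moles_MCHM = 200864.1415           # from the step above
molarity = moles_MCHM / volume_L
print(f"{molarity:.2e} M")         # about 1.1e-2 M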
###### Problem Details
In 2014, a major chemical leak at a facility in West Virginia released 7500 gallons of MCHM (4-methylcyclohexylmethanol, C8H16O) into the Elk River. The density of MCHM is 0.9074 g/mL.
Calculate the initial molarity of MCHM in the river, assuming that the first part of the river is 7.10 feet deep, 100.0 yards wide, and 100.0 yards long. 1 gallon = 3.785 L. | 2021-04-16 11:27:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5301502346992493, "perplexity": 8816.910610401956}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038056325.1/warc/CC-MAIN-20210416100222-20210416130222-00563.warc.gz"} |
https://www.physicsforums.com/threads/this-is-a-limits-question.872787/ | # This is a limits question.
## Homework Statement
Find the following limit.
## The Attempt at a Solution
I cannot apply L'Hopital's rule because it does not apply to this question. Hence I have no idea how to approach this question. Please give me some guidelines.
## Answers and Replies
member 587159
What is lim(x->0) lnx? What is lim(x->0) 1/x^n?
Nipuna Weerasekara
What is lim(x->0) lnx? What is lim(x->0) 1/x^n?
I already know the answer to this and it is zero but I do not know how it comes.
For your question, lim (x->0) lnx is infinity and lim(x->0) 1/x^n is again infinity.
But I do not find any help from these two.
member 587159
I already know the answer to this and it is zero but I do not know how it comes.
For your question, lim (x->0) lnx is infinity and lim(x->0) 1/x^n is again infinity.
But I do not find any help from these two.
The answer is not zero.
And more specifically, what kind of infinity are the limits above I asked for? I also forgot to mention the following very important thing: lim(x->0) lnx is NOT defined. The right handed limit is defined though.
Nipuna Weerasekara
The answer is not zero.
And more specifically, what kind of infinity are the limits above I asked for? I also forgot to mention the following very important thing: lim(x->0) lnx is NOT defined. The right handed limit is defined though.
I think The question has some printing mistake or so. However thanks for your kind concern.
member 587159
I think The question has some printing mistake or so. However thanks for your kind concern.
The answer is that the limit does not exist since lnx is undefined for negative numbers. The right handed limit can be obtained by splitting the limit in 2 seperate limits by using lim x>a fg = (lim x>a f )*( lim x>a g).
Nipuna Weerasekara | 2022-05-17 04:30:12 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8290020227432251, "perplexity": 855.5823079778563}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662515501.4/warc/CC-MAIN-20220517031843-20220517061843-00105.warc.gz"} |
https://homework.cpm.org/category/CCI_CT/textbook/pc3/chapter/9/lesson/9.2.3/problem/9-134 | ### Home > PC3 > Chapter 9 > Lesson 9.2.3 > Problem9-134
9-134.
If a triangle has one side of $10$ cm, a second side of $20$ cm, and an included angle of $30^\circ$, solve the triangle.
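A numerical sketch of the computation that the hint below sets up (Law of Cosines for the third side, then Law of Sines for a second angle; values are rounded):

import math

a, b, C = 10.0, 20.0, math.radians(30.0)          # two sides and the included angle

c = math.sqrt(a**2 + b**2 - 2*a*b*math.cos(C))    # Law of Cosines
A = math.degrees(math.asin(a*math.sin(C)/c))      # Law of Sines; a is the shortest side, so A is acute
B = 180.0 - 30.0 - A                              # angles of a triangle sum to 180 degrees
print(round(c, 2), round(A, 1), round(B, 1))      # c ~ 12.39 cm, A ~ 23.8 deg, B ~ 126.2 deg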
Sketch the triangle. Since the given information is SAS, use the Law of Cosines to being solving the triangle. | 2020-09-24 02:47:34 | {"extraction_info": {"found_math": true, "script_math_tex": 3, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9107306599617004, "perplexity": 1178.9541825117485}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400213006.47/warc/CC-MAIN-20200924002749-20200924032749-00620.warc.gz"} |
https://www.maplesoft.com/support/help/errors/view.aspx?path=LinearAlgebra/NullSpace&L=E | NullSpace - Maple Help
LinearAlgebra
NullSpace
compute a basis for the nullspace (kernel) of a Matrix
Calling Sequence
NullSpace(A, options)
Parameters
A - Matrix
options - (optional); constructor options for the result object
Description
• The NullSpace(A) function computes a basis for the nullspace (kernel) of the linear transformation defined by Matrix A. The result is a (possibly empty) set of Vectors.
• The constructor options provide additional information (readonly, shape, storage, order, datatype, and attributes) to the Vector constructor that builds the result. These options may also be provided in the form outputoptions=[...], where [...] represents a Maple list. If a constructor option is provided in both the calling sequence directly and in an outputoptions option, the latter takes precedence (regardless of the order). If constructor options are specified in the calling sequence, each resulting Vector has the same specified options.
• This function is part of the LinearAlgebra package, and so it can be used in the form NullSpace(..) only after executing the command with(LinearAlgebra). However, it can always be accessed through the long form of the command by using LinearAlgebra[NullSpace](..).
Examples
> $\mathrm{with}\left(\mathrm{LinearAlgebra}\right):$
> $A≔⟨⟨6,3,0⟩|⟨4,2,0⟩|⟨2,1,0⟩⟩$
${A}{≔}\left[\begin{array}{ccc}{6}& {4}& {2}\\ {3}& {2}& {1}\\ {0}& {0}& {0}\end{array}\right]$ (1)
> $\mathrm{kern}≔\mathrm{NullSpace}\left(A\right)$
${\mathrm{kern}}{≔}\left\{\left[\begin{array}{c}{-}\frac{{1}}{{3}}\\ {0}\\ {1}\end{array}\right]{,}\left[\begin{array}{c}{-}\frac{{2}}{{3}}\\ {1}\\ {0}\end{array}\right]\right\}$ (2)
> $A·\mathrm{kern}\left[1\right]$
$\left[\begin{array}{c}{0}\\ {0}\\ {0}\end{array}\right]$ (3)
> $A·\mathrm{kern}\left[2\right]$
$\left[\begin{array}{c}{0}\\ {0}\\ {0}\end{array}\right]$ (4)
> $\mathrm{NullSpace}\left(\mathrm{IdentityMatrix}\left(3\right)\right)$
${\varnothing }$ (5)
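The same kernel can be cross-checked outside Maple; for instance, a small SymPy sketch for the first example matrix (the order and scaling of the basis vectors may differ from Maple's output):

from sympy import Matrix

A = Matrix([[6, 4, 2],
            [3, 2, 1],
            [0, 0, 0]])
for v in A.nullspace():      # basis for the nullspace (kernel) of A
    print(v.T)               # [-2/3, 1, 0] and [-1/3, 0, 1]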
> $B≔\mathrm{Matrix}\left(\left[\left[\frac{1}{3},\frac{1}{2}\right],\left[\frac{1}{2},\frac{3}{4}\right],\left[1,\frac{3}{2}\right]\right],\mathrm{datatype}=\mathrm{float}\right)$
${B}{≔}\left[\begin{array}{cc}{0.333333333333333}& {0.500000000000000}\\ {0.500000000000000}& {0.750000000000000}\\ {1.}& {1.50000000000000}\end{array}\right]$ (6)
> $\mathrm{NullSpace}\left(B\right)$
$\left\{\left[\begin{array}{c}{0.832050294337844}\\ {-0.554700196225229}\end{array}\right]\right\}$ (7) | 2023-01-31 10:11:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 15, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9184604287147522, "perplexity": 1293.665297794192}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499857.57/warc/CC-MAIN-20230131091122-20230131121122-00751.warc.gz"} |
https://ai.stackexchange.com/tags/bellman-equations/new | # Tag Info
It seems that you are getting confused between the definition of a Q-value and the update rule used to obtain these Q-values. Remember that to simply obtain an optimal Q-value for a given state-action pair we can evaluate $$Q(s, a) = r + \gamma \max_{a'} Q(s', a')\;;$$ where $s'$ is the state we transitioned into (note that this only holds when obtaining the ...
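For contrast, the sample-based tabular update rule that estimates these Q-values from experienced transitions looks roughly like this (a generic sketch, not tied to the excerpt's notation):

import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.9

def q_learning_update(s, a, r, s_next):
    # move Q(s, a) toward the sampled Bellman target r + gamma * max_a' Q(s', a')
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])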
Your equations all look correct to me. It is not possible to solve the linear equation for state values in the vector $V$ without knowing the policy. There are ways of working with MDPs, through sampling of actions, state transitions and rewards, where it is possible to estimate value functions without knowing either $\pi(a|s)$ or $P^{a}_{ss'}$. For instance,... | 2021-03-06 06:37:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9291772246360779, "perplexity": 283.90273882028123}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178374391.90/warc/CC-MAIN-20210306035529-20210306065529-00403.warc.gz"} |
https://mathematica.stackexchange.com/questions/84565/synchrotron-radiation-and-listdensityplot | I'm trying to reproduce the following plots:
As described by this blog entry: Synchrotron radiation
As you can see in the linked article, the task is essentially to make a DensityPlot of a vector field. This vector field depends on the position of the observer $(x,y)$ and on the time $(t_r)$. This time has the peculiarity that it must be computed numerically from the real time using a simple equation:
$$t_r=t_0-\frac{ x_0-x(t_r)}{c}$$
Using this we can evaluate the electric vector field as:
$$\mathbf{E}(\mathbf{r},t)=\frac{q}{4\pi\varepsilon_0}\left[\frac{\hat{\mathbf{n}}-\vec{\beta}}{\gamma^2R^2(1-\vec{\beta}\mathbf{\cdot}\hat{\mathbf{n}})^3}+\frac{\hat{\mathbf{n}}\times[(\hat{\mathbf{n}}-\vec{\beta})\times\dot{\vec{\beta}}\,]}{c\,R\,(1-\vec{\beta}\mathbf{\cdot}\hat{\mathbf{n}})^3}\right]_{\mathrm{retarded}} \qquad \qquad (2)$$
In this expression, $\mathbf{\beta}$ is the velocity of the particle divided by the speed of light, $\gamma$ is the relativistic factor of the particle, R is the distance from the particle to the observer and $\mathbf{n}$ is the unit vector from the particle to the observer. The retarded label means that all the quantities are evaluated as a function of the retarded time. For example:
$$\beta=\beta(t_r(t))$$
where the function $t_r(t)$ comes from resolving the first equation in this text.
Here is the code I'm trying to use to produce the plots.
First, we define a function to calculate the retarded time from the normal time:
tr[t0_, x0_, y0_] :=
FindRoot[
t0 - Norm[{x0, y0} - {x[tr], y[tr]}]/10 - tr, {tr, t0}][[1,
2]]
Next, we define the trajectory of the particle
R=5
x[t_] := R Cos[t]
y[t_] := R Sin[t]
We define auxiliary functions for $\beta$ and $n$:
n[x0_, y0_, t_] := ({x[t], y[t], 0} - {x0, y0, 0})/
   Norm[{x[t], y[t], 0} - {x0, y0, 0}]
β[t0_] := {x'[t0], y'[t0], 0}
And finally, the electric field:
Efield[γ_, x0_, y0_,t0_] := ((n[x0, y0, t0] - β[
tr[t0, x0,
y0]])/(γ^2 Norm[{x0, y0} - {x[tr[t0, x0, y0]],
y[tr[t0, x0, y0]]}]^2 (1 - β[tr[t0, x0, y0]].n[x0, y0,
t0])^3) + Cross[n[x0, y0, t0], (
Cross[ (n[x0, y0, t0] - β[tr[t0, x0, y0]]), β'[
tr[t0, x0, y0]] ] )]/(10 Norm[{x0,
y0} - {x[tr[t0, x0, y0]],
y[tr[t0, x0, y0]]}] (1 - β[tr[t0, x0, y0]].n[x0, y0,
t0])^3))
When I try to use Efield to make a ListDensityPlot, I get the following (wrong) result:
So the question is the following:
Is there some way to generate the original plots using ListDensityPlot?
Maybe the problem is the scaling of the z-axis in the DensityPlot, but what I have tried so far does not work. For scaling I pass the following options to ListDensityPlot:
ColorFunction -> Function[z, ColorData["DeepSeaColors"][z/10]],
ColorFunctionScaling -> False
Edit 1:
1.Corrected a typo in the code for n[x0_, y0_] (changed Abs to Norm).
2.The code of the DensityPlot is:
data = Flatten[Quiet@Table[{x0, y0,Efield[1, x0, y0, 34 \[Pi]/3 (*For example*)][[1]]}, {x0, -40, 40,1}, {y0, -40, 40, 1}],1]
ListDensityPlot[data, InterpolationOrder -> 2, PlotRange -> All,
ColorFunction -> Function[z, ColorData["DeepSeaColors"][z/10]],
ColorFunctionScaling -> False]
Edit 2
It seems that it is a problem with DensityPlot. The blue lines in the plots I wanted have much, much lower values than the rest of the plot. So the problem is how to rescale these lines with the appropriate color function. Using ArcTanh as a scaling function shows the "dipole" lines, but not the pretty blue lines.
• Can you provide your exact ListDensityPlot command? – dantopa May 27 '15 at 18:47
• Yes, i'll edit the question to add the code. – Dargor May 27 '15 at 18:49
• I just checked the example data and found the data are very unevenly distributed. One suggestion might be using the logarithmic values of the example data to finish the plots. – sunt05 May 28 '15 at 18:28
• It seems that it is a problem with DensityPlot. The blue lines in the plots I wanted have much, much lower values than the rest of the plot. So the problem is how to rescale these lines with the appropriate color function. Using ArcTanh as a scaling function shows the "dipole" lines, but not the pretty blue lines. – Dargor May 28 '15 at 18:33
• @sunt05 The problem is that there are negative values and the Log will crash there. I now use ArcTanh, but there is another problem. (In edit 2) – Dargor May 28 '15 at 18:34
Jason Cole's blog is titled "Almost looks like work", and I have to agree - I could even go so far as to title it "You know you want to put off your real work and recreate this MATLAB project"
So your code was good, but I found myself wanting some sort of units system, otherwise you could have $v$ greater than $c$ (you did not, but I just wanted to make it explicit). The only parameter mentioned in the blog post is $\gamma$, which determines the velocity, so that's how I wrote the code. Also, in your code for the field, there are a couple of places where you take $\hat{\mathbf{n}}$ evaluated at $t_0$ instead of the retarded time. But you are correct that the main issue is the plotting scale.
We will set the speed of light and the radius of the circle to one, and measure the field in units of $q/(4 \pi \epsilon_0)$. This means the velocity is determined by the Lorentz factor, $\gamma$.
γ = 1.2;
w = Sqrt[γ^2 - 1]/γ;
tPeriod = 2 π/w;
xt[t_] := {Cos[w t], Sin[w t]};
β[t_] := w {-Sin[w t], Cos[w t]};
βprime[t_] := -w^2 xt[t];
{xmin, xmax} = {ymin, ymax} = {-8.05, 8.05};
I'm going to make the plots out of 2D lists, and I want to try and avoid evaluating the field at the same point in space where the electron is, so I offset the x and y grids by a small amount.
The retarded time must be calculated numerically. I wish I knew a faster or more reliable way to do this - the FindRoot returns an error every once in a while, but not always at the same place. But it seems to give a good answer, and memoization will help later. The goal here is to make animations, so it is good to decide how many frames are needed. I want two full revolutions, and I tried using a timestep of 0.05 tPeriod but found that to be too jerky so I went for 0.01 tPeriod. You can speed things up by using a sparser time or spatial grid.
ClearAll@retime;
retime[t0_, x0_] :=
retime[t0, x0] =
tr /. FindRoot[t0 - Norm[x0 - xt[tr]] - tr, {tr, t0},
MaxIterations -> 1000];
So now we define the electric field and the radial component of the Poynting vector,
eField[t0_, x0_] :=
With[{
n = Chop@Normalize[x0 - xt[retime[t0, x0]]]~PadRight~3,
r = Norm[x0 - xt[retime[t0, x0]]],
βvec = β[retime[t0, x0]],            (* velocity/c at the retarded time *)
βprimevec = βprime[retime[t0, x0]]   (* its time derivative at the retarded time *)
},
Which[retime[t0, x0] < 0,
PadRight[(x0 - xt[0])/Norm[x0 - xt[0]]^3, 3]
, True,
(n - βvec)/(γ^2 r^2 (1 - βvec.n)^3) +
Cross[n, Cross[n - βvec, βprimevec]]/(
r (1 - βvec.n)^3)]];
poynting[t0_, x0_] := With[{
n = Chop@Normalize[x0 - xt[retime[t0, x0]]]~PadRight~3,
r = Norm[x0 - xt[retime[t0, x0]]],
βvec = β[retime[t0, x0]],
βprimevec = βprime[retime[t0, x0]]
},
Which[retime[t0, x0] < 0, 0,
True,
r^-2 Norm[Cross[n, Cross[n - βvec, βprimevec]]/(
r (1 - βvec.n)^3)]^2
]
];
It should be possible to save a bit of time above, by storing the field as two separate parts, the velocity and radiation fields, and then defining the Poynting vector as a function of the latter.
Now we generate lists for the retarded time, the field, and the Poynting vector,
Monitor[
Quiet[trlist1 =
Table[retime[t0 tPeriod, {x0, y0}], {t0, 0, 2, .04}, {y0, ymin,
ymax, .1}, {x0, xmin, xmax, .1}];], {x0, y0, t0}]
Monitor[elist1 =
Table[eField[t0 tPeriod, {x0, y0}], {t0, 0.00, 2, .01}, {y0, ymin,
ymax, .1}, {x0, xmin, xmax, .1}];, {x0, y0, t0}]
Monitor[plist1 =
Table[poynting[t0 tPeriod, {x0, y0}], {t0, 0.00, 2, .01}, {y0, ymin,
ymax, .1}, {x0, xmin, xmax, .1}];, {x0, y0, t0}]
The easiest to plot is the retarded time. No nonlinear scaling is needed here; you just want to replace any negative values with 0, and then plot it using the MATLAB color map Parula, a visually appealing palette that is much better than the old Jet palette. I have this palette defined in a pastebin, which is what the first line below is,
<<"http://pastebin.com/raw.php?i=sqYFdrkY";
trplot[n_] :=
With[{rsdata =
Rescale[trlist1[[n]] /. {x_?Negative -> 0.0}, {0,
Max@trlist1}]},
Show[
ListDensityPlot[rsdata, PlotRange -> All,
ColorFunction -> ParulaCM,
DataRange -> {{xmin, xmax}, {ymin, ymax}}, Frame -> None],
Graphics@{Red, Point[xt[(n - 1) .01 tPeriod]]}
]];
Let's plot the x component of the electric field for one timestep (the 75th for no particular reason). Here it is using linear scaling with an automatic cutoff, the log scaling used in the blog, and an ArcSinh scaling I use sometimes.
GraphicsRow[{ListDensityPlot[#, PlotRange -> Automatic,
ColorFunction -> ParulaCM, Frame -> None],
ListDensityPlot[Log@Abs@#, PlotRange -> All,
ColorFunction -> ParulaCM, Frame -> None],
ListDensityPlot[
Rescale[ArcSinh[10000 # /(Max@Abs@#)]/ArcSinh[10000], {-1,
1}] &@#, PlotRange -> All, ColorFunction -> ParulaCM, Frame -> None]} &@
elist1[[75,All,All,1]], ImageSize -> 700]
The log scale is the best visually, so we'll go with it. Now, when making an animation, it's very important to use the same scale with each frame. For these plots, since we sample the field on a grid, sometimes we have the electron very close to a grid point, and if you just let ListDensityPlot choose the scale, your animation will have flashes of brightness, very offputting.
The rescaling values I have below I arrived at via trial and error - just trying to make the plots look best. If someone can think of a more programmatic way to do it, I'd be happy to hear it.
exloglist = Rescale[Log@Abs@elist1[[All,All,All,1]], {-9, 3}];
eyloglist = Rescale[Log@Abs@elist1[[All,All,All,2]], {-9, 3}];
ploglist = Rescale[Log@Abs@plist1, {-20, 6}];
trplot[n_] :=
With[{rsdata =
Rescale[trlist1[[n]] /. {x_?Negative -> 0.0}, {0, Max@trlist1}]},
Show[
ListDensityPlot[rsdata, PlotRange -> All,
ColorFunction -> ParulaCM,
DataRange -> {{xmin, xmax}, {ymin, ymax}}, Frame -> None],
Graphics@{Red, Point[xt[(n - 1) .01 tPeriod]]}
]]; explot[n_] :=
ListDensityPlot[exloglist[[n]], ColorFunction -> ParulaCM,
DataRange -> {{xmin, xmax}, {ymin, ymax}},
ColorFunctionScaling -> False, Frame -> False];
eyplot[n_] :=
ListDensityPlot[eyloglist[[n]], ColorFunction -> ParulaCM,
DataRange -> {{xmin, xmax}, {ymin, ymax}},
ColorFunctionScaling -> False, Frame -> False];
pplot[n_] :=
Show[ListDensityPlot[ConstantArray[0, {2, 2}],
DataRange -> {{xmin, xmax}, {ymin, ymax}},
ColorFunction -> ParulaCM, ColorFunctionScaling -> False,
Frame -> False],
ListDensityPlot[ploglist[[n]], ColorFunction -> ParulaCM,
DataRange -> {{xmin, xmax}, {ymin, ymax}},
ColorFunctionScaling -> False, Frame -> False]
];
gridplot[n_] := Grid[{{pplot[n], explot[n]}, {trplot[n], eyplot[n]}}];
gridplot[137]
Now to create the animation, you need to export the frames as image files and use another program to do the work. Mathematica is great, but for some things you should use other tools.
Quiet@CreateDirectory["syncimages"];
Quiet@CreateDirectory["syncimages/grd_1.2"];
Do[
img = gridplot[n];
Export[
"syncimages/grd_1.2/frame_" <> IntegerString[n, 10, 3] <> ".png",
img], {n, 76, Length@trlist1}];~Monitor~n
Then navigate to the folder "syncimages" in the command line and either create an mp4 video using ffmpeg,
ffmpeg -framerate 30 -i "grd_1.2/frame_%03d.png" -codec:v libx264 -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" -r 30 -pix_fmt yuv420p grd_1.2.mp4
or create an animated gif using ImageMagick
convert -delay 3 grd_1.2/* g12B.gif
This website won't let me link to a .gifv file, so here is the animation on imgur.
edit: Gonna try for the undulating example overnight :-)
• Wow! Thank you so much. Your answer is GREAT. I will mark it as accepted :D. Using this answer I have optimised some parts of the code. What do you think that is the better way to share this code: as another answer or in the original question? – Dargor Dec 21 '15 at 17:40
• Some tips I think that are interesting: Using "Brent" method for the FindRoot and using a compile function for the E field and S vector! – Dargor Dec 21 '15 at 17:43
• See this Wolfram blog post: blog.wolfram.com/2012/07/20/… – Dargor Dec 21 '15 at 18:03
• @Dargor, not sure what the protocol is here, but I'm sure you can edit your post with modifications to this code. I'm interested to try some trajectory different from those on the blog, so any way to speed it up is great. Seems the biggest time sink is generating the retarded time lists, wish I could speed that up. – Jason B. Dec 21 '15 at 18:53 | 2019-10-17 02:11:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.423490047454834, "perplexity": 3104.4594800015516}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986672431.45/warc/CC-MAIN-20191016235542-20191017023042-00207.warc.gz"} |
http://www.koreascience.or.kr/article/ArticleFullRecord.jsp?cn=GCSHCI_2012_v37An10_821 | Parallel Writing and Detection for Two Dimensional Magnetic Recording Channel
Title & Authors
Parallel Writing and Detection for Two Dimensional Magnetic Recording Channel
Zhang, Yong; Lee, Jaejin;
Abstract
Two-dimensional magnetic recording (TDMR) is treated as the next-generation magnetic recording method, but because of its high channel bit error rate, it is difficult to use in practice. In this paper, we introduce a new writing method that can decrease the nonlinear media error effectively, and it can also achieve 10 Tb/in² of user bit density on a magnetic recording medium with 20 Teragrains/in².
Keywords
Two-dimensional magnetic recording;non-linear media error;LDPC;shingled writing;two-dimensional detection;
Language
English
Cited by
References
1.
Y. Shiroishi, K. Fukuda, I. Tagawa, H. Iwasaki, S. Takenoiri, H. Tanaka, H. Mutoh, and N. Yoshikawa, "Future Options for HDD storage," IEEE Trans. Magn., vol. 45, no. 10, pp. 3816-3822, Oct. 2009.
2.
J. Kim and J. Lee, "Performance of read head offset on patterned media recording channel," J. KICS, vol. 35, no. 11, pp. 896-900, Nov. 2011.
3.
R. Wood, M. Williams, A. Kavcic, and J. Miles, "The feasibility of magnetic recording at 10 terabits per square inch on conventional media," IEEE Trans. Magn., vol. 45, no. 2, pp. 917-923, Feb. 2009.
4.
K. S. Chan, J. J. Miles, E. Hwang, B. V. K. V. Kumar, J. Zhu, W. Lin, and R. Negi, "TDMR platform simulations and experiments," IEEE Trans. Magn., vol. 45, no. 10, pp. 3837-3843, Oct. 2009.
5.
B. Vasic, A. R. Krishnan, R. Radhakrishnan, A. Kavcic, W. Ryan, and F. Erden, "Two-dimensional magnetic recording: read channel modeling and detection," IEEE Trans. Magn., vol. 45, no. 10, pp. 3830-3836, Oct. 2009.
6.
K.S. Chan, R. Radhakrishnan, K. Eason, E. M. Rachid, J. Miles, B. Vasic, and A. R. Krishnan, "Channel models and detectors for two-dimensional magnetic recording." IEEE Trans. Magn., vol. 46, no. 3, pp. 804-811, March 2010.
7.
E. Hwang, R. Negi, B. V. K. V. Kumar, and R. Wood, "Investigation of two-dimensional magnetic recording (TDMR) with position and timing uncertainty at 4 Tb/in²," IEEE Trans. Magn., vol. 47, no. 12, pp. 4775-4780, Dec. 2011.
8.
A. Kavcic, B. Vasic, W. Ryan, and F. M. Erden, "Channel modeling and capacity bounds for two dimensional magnetic recording," IEEE Trans. Magn., vol. 46, no. 3, pp. 812-818, March 2010.
9.
E. Hwang, R. Negi, and B. V. K. V. Kumar. "Signal processing for near 10 Tbit/in2 density in two-dimensional magnetic recording (TDMR)," IEEE Trans. magn., vol. 46, no. 6, pp. 1813-1816, June 2010. | 2018-08-19 05:17:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 2, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21879327297210693, "perplexity": 9229.006794478673}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221214702.96/warc/CC-MAIN-20180819051423-20180819071423-00710.warc.gz"} |
https://www.physicsforums.com/threads/summing-up-an-arithmetic-progression-via-integration.513577/ | # Summing up an Arithmetic Progression via Integration?
1. Jul 12, 2011
Why doesn't the integration of the general term of an A.P. give its sum? Integration sums up functions, so if I integrate the general term function of an A.P., I should get its sum.
Like
2,4,6,8,.....
T=2+(n-1)2=2n
$\int T dn$=n^2 ..(1)
Sum=S=(n/2)(4+(n-1)2)=(n/2)(2+2n)=n+(n^2) ..(2)
Why aren't these two equal?
2. Jul 12, 2011
### micromass
I see no reason at all why the integral should equal the sum. The integral doesn't sum integers, it calculates area.
That said, we do have the following inequality (this does not hold in general!!):
$$\sum_{k=0}^{n-1}{f(k)}\leq \int_0^n{f(x)dx}\leq\sum_{k=1}^n{f(k)}$$
This inequality is the best you can do, I fear...
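A quick numerical look at the gap between the two expressions in the original post (an illustrative check only):

# Compare the series sum of T(k) = 2k with the integral of 2n dn from 0 to n
for n in (5, 10, 100):
    series_sum = sum(2 * k for k in range(1, n + 1))       # equals n^2 + n
    integral = n ** 2                                      # equals the antiderivative n^2
    print(n, series_sum, integral, series_sum - integral)  # the difference is exactly n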
3. Jul 13, 2011
### chiro
If you are dealing with a polynomial expression, you can use what are called Bernoulli polynomials.
If the expression is not a simple one (as in some finite polynomial expression), then the inequality is a good bet, unless there are some tighter constraints for the specific expression.
4. Jul 15, 2011
### nickalh
Clarification:
Haven't you dropped a sign?
On the left hand integral, after integrating, I see
-ln|1 - x|
On the next or final line, the leading negative disappears. | 2018-06-20 15:50:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.903154194355011, "perplexity": 1503.8821406298298}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863650.42/warc/CC-MAIN-20180620143814-20180620163814-00462.warc.gz"} |
http://openstudy.com/updates/55c26669e4b0f6bb86c36539 | ## anonymous one year ago A set of data has mean 62 and standard deviation 4. Find the z-score of the value 78.
1. IrishBoy123
$$\huge z = {x- \mu \over \sigma}$$ yes?
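Plugging in the given numbers: $$z = {78 - 62 \over 4} = 4$$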
2. anonymous
im getting 4 is that right? | 2016-10-23 06:36:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.757908821105957, "perplexity": 1116.274075086434}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719155.26/warc/CC-MAIN-20161020183839-00296-ip-10-171-6-4.ec2.internal.warc.gz"} |
http://kawahara.ca/latex-bold-vector-and-arrow-vectors/ | # LaTeX – bold vectors and arrow vectors
Lately I’m writing a lot of papers in $\LaTeX$ and every once in a while something comes up that drives me crazy trying to figure out.
Here’s how to easily switch between a bold vector $\boldsymbol{x}$ and an arrow vector $\vec{x}$.
% Minimal latex example:
% Shows how to switch between bold and arrow vectors.

% Specifies the type of document you have.
\documentclass{article}

% Used for the boldsymbol.
\usepackage{amsmath}

% Comment this out to represent vectors with an arrow on top.
% Uncomment this to represent vectors as bold symbols.
\renewcommand{\vec}[1]{\boldsymbol{#1}}

% Start of the document.
\begin{document}

% Your content.
My lovely vector: $\vec{x}$

% End of the document.
\end{document}
And that’s it! By commenting and un-commenting the \renewcommand{\vec}[1]{\boldsymbol{#1}} line, you can toggle between representing vectors with an arrow on top or bold.
This solution was modified based on fbianco’s comment in the comment section. Thanks fbianco!
I’ve left my original text below, but it’s no longer recommended.
January 3, 2017. This text below was my earlier approach. I recommend you use the above
I got this idea from D.H here and I thought it was worth expanding a bit more.
First, make sure you include,
\usepackage{amsmath} % used for boldsymbol.
at the top of your .tex file. Then add two lines somewhere underneath: one defining \vect{} with \boldsymbol and one defining it with \vec, keeping whichever you want active and commenting out the other.
Now when you are writing your vectors, instead of using \vec{}, you use \vect{}. This allows you to easily toggle between the two different modes.
With the \boldsymbol definition active, \vect{} will display the boldface vector $\boldsymbol{x}$.
Whereas, with the arrow definition active, it will produce an arrow vector $\vec{x}$.
Now if anyone ever asks you to change how your vectors look in a paper, you can smile at them :D.
## 3 thoughts on “LaTeX – bold vectors and arrow vectors”
1. walala says:
I tried this, but mine appears as only bold, instead of bold and italic
1. Jeremy says:
Hi walala, I just updated this post to include a fully working example in LaTeX. Could you try that?
2. The easiest is to use \renewcommand; this will replace the existing arrow vector with a bold vector:
\renewcommand{\vec}[1]{\boldsymbol{#1}}
By the way, thanks for the boldsymbol tricks. | 2018-02-24 10:00:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 5, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9913948774337769, "perplexity": 5196.996543935982}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891815544.79/warc/CC-MAIN-20180224092906-20180224112906-00242.warc.gz"} |