url: stringlengths 14 to 2.42k
text: stringlengths 100 to 1.02M
date: stringlengths 19 to 19
metadata: stringlengths 1.06k to 1.1k
https://www.ideals.illinois.edu/handle/2142/104267
## Files in this item
- 1397536.pdf (5 MB): Presentation (application/pdf)
- 3808.pdf (18 kB): Abstract (application/pdf)

## Description
Title: LASER ABLATION OF SOLID ORGANIC PRECURSORS AS AN ALTERNATIVE TOOL IN THE GENERATION OF INTERSTELLAR MOLECULES
Author(s): Kolesniková, Lucie
Contributor(s): Alonso, José L.; Mata, Santiago; Alonso, Elena R.; León, Iker
Subject(s): Mini-symposium: Astrochemistry and Astrobiology in the age of ALMA
Abstract: In the course of the investigation of the rotational spectrum of prebiotic hydantoic acid by Fourier transform microwave spectroscopy coupled to a laser ablation source in a supersonic expansion, rotational signatures of two cyclic molecules, hydantoin and 2,5-oxazolidinedione, have been unexpectedly observed along with the four most stable conformers of hydantoic acid.\footnote{Kolesniková, L.; León, I.; Alonso, E. R. et al.: \textit{J. Phys. Chem. Lett.} \textbf{2019}, accepted, DOI: 10.1021/acs.jpclett.9b00208.} Interestingly, two of these conformers presented folded geometric arrangements that might act as precursors in the cyclization reactions assisted by laser ablation. They could play the role of near-attack conformations (NACs) in the framework of the NAC theory for intramolecular reactions. A detailed analysis of the spectrum further revealed the simultaneous formation of other species in the jet, showing that the laser ablation of solid organic precursors constitutes an alternative tool in the generation of new chemical species.$^{b}$ This has recently been confirmed using diaminomaleonitrile as a solid precursor: up to 30 different species (most of them detected in space) have been revealed in the supersonic expansion of our laser-ablation chirped-pulse Fourier transform microwave (LA-CP-FTMW) experiment.
Issue Date: 2019-06-20
Publisher: International Symposium on Molecular Spectroscopy
Genre: Conference Paper / Presentation
Type: Text
Language: English
URI: http://hdl.handle.net/2142/104267
DOI: 10.15278/isms.2019.RG04
Rights Information: Copyright 2019 Lucie Kolesniková
Date Available in IDEALS: 2019-07-15; 2020-01-25
2020-10-01 21:56:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29526183009147644, "perplexity": 10177.719932287526}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402132335.99/warc/CC-MAIN-20201001210429-20201002000429-00764.warc.gz"}
https://www.questionsolutions.com/magnitude-resultant-force-500-n/
# Magnitude of the resultant force is to be 500 N

If the magnitude of the resultant force is to be 500 N, directed along the positive y-axis, determine the magnitude of force F and its direction θ.

#### Solution:

Let us first draw the vector components out. Note that the resultant force is along the positive y-axis as stated in the question. Now, let us draw the vector components tail to tail. Using this diagram, we can figure out F. To do so, we will use the law of cosines.

$F^2=500^2+700^2-2(500)(700)\cos105^\circ$

$F=\sqrt{500^2+700^2-2(500)(700)\cos105^\circ}$

$F=959.78\text{ N}$

Now, we will use the law of sines to figure out θ.

$\dfrac{\sin(90^\circ+ \theta)}{700}=\dfrac{\sin105^\circ}{959.78}$

$\theta=45.2^\circ$
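A quick numerical cross-check of the two steps above (just a sketch; the 700 N force and the 105° angle come from the problem's figure, as used in the solution):

```python
import math

# Law of cosines: F^2 = 500^2 + 700^2 - 2(500)(700)cos(105 deg)
F = math.sqrt(500**2 + 700**2 - 2 * 500 * 700 * math.cos(math.radians(105)))

# Law of sines: sin(90 deg + theta)/700 = sin(105 deg)/F, and sin(90 deg + theta) = cos(theta)
theta = math.degrees(math.acos(700 * math.sin(math.radians(105)) / F))

print(round(F, 2), round(theta, 1))  # expected: about 959.78 (N) and 45.2 (degrees)
```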
2022-11-29 15:04:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 5, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5874733924865723, "perplexity": 227.6047279964273}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710698.62/warc/CC-MAIN-20221129132340-20221129162340-00058.warc.gz"}
http://atozmath.com/Default.aspx?q1=equilateral%20triangle%20A(2%2C5)%2CB(8%2C5)%2CC(5%2C10.196152)%60454&do=1
Find whether the triangle A(2,5), B(8,5), C(5,10.196152) is equilateral.

Solution: Here A(2,5), B(8,5), C(5,10.196152) are the given points (10.196152 is rounded to 10.2 below, and the squared terms are rounded accordingly).

AB^2 = (8-2)^2 + (5-5)^2 = (6)^2 + (0)^2 = 36 + 0 = 36
BC^2 = (5-8)^2 + (10.2-5)^2 = (-3)^2 + (5.2)^2 ≈ 9 + 27 = 36
AC^2 = (5-2)^2 + (10.2-5)^2 = (3)^2 + (5.2)^2 ≈ 9 + 27 = 36

Since AB^2 = BC^2 = AC^2, we have AB = BC = AC, so ABC is an equilateral triangle.
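A short numeric check of the same computation (note that 10.196152 ≈ 5 + 3√3, which is why all three squared side lengths come out to essentially 36):

```python
import math

A, B, C = (2, 5), (8, 5), (5, 10.196152)

def dist2(P, Q):
    """Squared distance between two points."""
    return (P[0] - Q[0]) ** 2 + (P[1] - Q[1]) ** 2

print(dist2(A, B), dist2(B, C), dist2(A, C))                  # each approximately 36
print(math.isclose(dist2(A, B), dist2(B, C), rel_tol=1e-6),
      math.isclose(dist2(B, C), dist2(A, C), rel_tol=1e-6))   # True True -> equilateral
```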
2019-06-26 01:36:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3157764673233032, "perplexity": 7884.458211648341}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000044.37/warc/CC-MAIN-20190626013357-20190626035357-00357.warc.gz"}
https://en.wikibooks.org/wiki/Cg_Programming/Unity/Screen_Overlays
# Cg Programming/Unity/Screen Overlays Title screen of a movie from 1934. This tutorial covers screen overlays. It is the first tutorial of a series of tutorials on non-standard vertex transformations, which deviate from the standard vertex transformations that are described in Section “Vertex Transformations”. This particular tutorial uses texturing as described in Section “Textured Spheres” and blending as described in Section “Transparency”. ## Screen Overlays There are many applications for screen overlays, e.g. titles as in the image to the left, but also other GUI (graphical user interface) elements such as buttons or status information. The common feature of these elements is that they should always appear on top of the scene and never be occluded by any other objects. Neither should these elements be affected by any of the camera movements. Thus, the vertex transformation should go directly from object space to screen space. Unity has various ways to render a texture image at a specified position on the screen. This tutorial tries to achieve this purpose with a simple shader. ## Rendering a Texture to the Screen with a Cg Shader Let's specify the screen position of the texture by an X and a Y coordinate of the lower, left corner of the rendered rectangle in pixels with ${\displaystyle (0,0)}$ at the center of the screen and a Width and Height of the rendered rectangle in pixels. (Specifying the coordinates relative to the center often allows us to support various screen sizes and aspect ratios without further adjustments.) We use these shader properties: Properties { _MainTex ("Texture", Rect) = "white" {} _Color ("Color", Color) = (1.0, 1.0, 1.0, 1.0) _X ("X", Float) = 0.0 _Y ("Y", Float) = 0.0 _Width ("Width", Float) = 128 _Height ("Height", Float) = 128 } and the corresponding uniforms uniform sampler2D _MainTex; uniform float4 _Color; uniform float _X; uniform float _Y; uniform float _Width; uniform float _Height; For the actual object, we could use a mesh that consists of just two triangles to form a rectangle. However, we can also just use the default cube object since back-face culling (and culling of triangles that are degenerated to edges) allows us to make sure that only two triangles of the cube are rasterized. The corners of the default cube object have coordinates ${\displaystyle -0.5}$ and ${\displaystyle +0.5}$ in object space, i.e., the lower, left corner of the rectangle is at ${\displaystyle (-0.5,-0.5)}$ and the upper, right corner is at ${\displaystyle (+0.5,+0.5)}$. To transform these coordinates to the user-specified coordinates in screen space, we first transform them to raster positions in pixels where ${\displaystyle (0,0)}$ is at the lower, left corner of the screen: uniform float4 _ScreenParams; // x = width; y = height; // z = 1 + 1.0/width; w = 1 + 1.0/height ... vertexOutput vert(vertexInput input) { vertexOutput output; float2 rasterPosition = float2( _X + _ScreenParams.x / 2.0 + _Width * (input.vertex.x + 0.5), _Y + _ScreenParams.y / 2.0 + _Height * (input.vertex.y + 0.5)); ... This transformation transforms the lower, left corner of the front face of our cube from ${\displaystyle (-0.5,-0.5)}$ in object space to the raster position float2(_X + _ScreenParams.x / 2.0, _Y + _ScreenParams.y / 2.0), where _ScreenParams.x is the screen width in pixels and _ScreenParams.y is the height in pixels. The upper, right corner is transformed from ${\displaystyle (+0.5,+0.5)}$ to float2(_X + _ScreenParams.x / 2.0 + _Width, _Y + _ScreenParams.y / 2.0 + _Height). 
Raster positions are convenient and, in fact, they are often used in OpenGL; however, they are not quite what we need here. The output parameter of the vertex shader is in the so-called “clip space” as discussed in Section “Vertex Transformations”. The GPU transforms these coordinates to normalized device coordinates between ${\displaystyle -1}$ and ${\displaystyle 1}$ by dividing them by the fourth coordinate in the perspective division. If we set this fourth coordinate to ${\displaystyle 1}$, this division doesn't change anything; thus, we can think of the first three coordinates as coordinates in normalized device coordinates, where ${\displaystyle (-1,-1,-1)}$ specifies the lower, left corner of the screen on the near plane and ${\displaystyle (1,1,-1)}$ specifies the upper, right corner on the near plane. In order to specify any screen position as vertex output parameter, we have to specify it in this coordinate system. Fortunately, transforming the ${\displaystyle x}$ and ${\displaystyle y}$ coordinates of the raster position to normalized device coordinates is not too difficult. For the ${\displaystyle z}$ coordinate we want to use the coordinate of the near clipping plane. In Unity, this depends on the platform; therefore, we use Unity's built-in uniform _ProjectionParams.y which specifies the ${\displaystyle z}$ coordinate of the near clipping plane. output.pos = float4( 2.0 * rasterPosition.x / _ScreenParams.x - 1.0, 2.0 * rasterPosition.y / _ScreenParams.y - 1.0, _ProjectionParams.y, // near plane is at -1.0 or at 0.0 1.0); As you can easily check, this transforms the raster position float2(0,0) to normalized device coordinates ${\displaystyle (-1.0,-1.0)}$ and the raster position float2(_ScreenParams.x, _ScreenParams.y) to ${\displaystyle (1.0,1.0)}$, which is exactly what we need. There is one more complication: Sometimes Unity uses a flipped projection matrix where the ${\displaystyle y}$ axis points in the opposite direction. In this case, we have to multiply the ${\displaystyle y}$ coordinate with -1. We can achieve this by multiplying it with _ProjectionParams.x: output.pos = float4( 2.0 * rasterPosition.x / _ScreenParams.x - 1.0, _ProjectionParams.x * (2.0 * rasterPosition.y / _ScreenParams.y - 1.0), _ProjectionParams.y, // near plane is at -1.0 or at 0.0 1.0); This is all we need for the vertex transformation from object space to screen space. However, we still need to compute appropriate texture coordinates in order to look up the texture image at the correct position. Texture coordinates should be between ${\displaystyle 0.0}$ and ${\displaystyle 1.0}$, which is actually easy to compute from the vertex coordinates in object space between ${\displaystyle -0.5}$ and ${\displaystyle +0.5}$: output.tex = float4(input.vertex.x + 0.5, input.vertex.y + 0.5, 0.0, 0.0); // for a cube, vertex.x and vertex.y // are -0.5 or 0.5 With the vertex output parameter tex, we can then use a simple fragment program to look up the color in the texture image and modulate it with the user-specified color _Color: float4 frag(vertexOutput input) : COLOR { return _Color * tex2D(_MainTex, input.tex.xy); } That's it. If we put all the pieces together, we get the following shader, which uses the Overlay queue to render the object after everything else, and uses alpha blending (see Section “Transparency”) to allow for transparent textures. 
It also deactivates the depth test to make sure that the texture is never occluded: Properties { _MainTex ("Texture", Rect) = "white" {} _Color ("Color", Color) = (1.0, 1.0, 1.0, 1.0) _X ("X", Float) = 0.0 _Y ("Y", Float) = 0.0 _Width ("Width", Float) = 128 _Height ("Height", Float) = 128 } Tags { "Queue" = "Overlay" } // render after everything else Pass { Blend SrcAlpha OneMinusSrcAlpha // use alpha blending ZTest Always // deactivate depth test CGPROGRAM #pragma vertex vert #pragma fragment frag #include "UnityCG.cginc" // defines float4 _ScreenParams with x = width; // y = height; z = 1 + 1.0/width; w = 1 + 1.0/height // and defines float4 _ProjectionParams // with x = 1 or x = -1 for flipped projection matrix; // y = near clipping plane; z = far clipping plane; and // w = 1 / far clipping plane // User-specified uniforms uniform sampler2D _MainTex; uniform float4 _Color; uniform float _X; uniform float _Y; uniform float _Width; uniform float _Height; struct vertexInput { float4 vertex : POSITION; float4 texcoord : TEXCOORD0; }; struct vertexOutput { float4 pos : SV_POSITION; float4 tex : TEXCOORD0; }; vertexOutput vert(vertexInput input) { vertexOutput output; float2 rasterPosition = float2( _X + _ScreenParams.x / 2.0 + _Width * (input.vertex.x + 0.5), _Y + _ScreenParams.y / 2.0 + _Height * (input.vertex.y + 0.5)); output.pos = float4( 2.0 * rasterPosition.x / _ScreenParams.x - 1.0, _ProjectionParams.x * (2.0 * rasterPosition.y / _ScreenParams.y - 1.0), _ProjectionParams.y, // near plane is at -1.0 or at 0.0 1.0); output.tex = float4(input.vertex.x + 0.5, input.vertex.y + 0.5, 0.0, 0.0); // for a cube, vertex.x and vertex.y // are -0.5 or 0.5 return output; } float4 frag(vertexOutput input) : COLOR { return _Color * tex2D(_MainTex, input.tex.xy); } ENDCG } } } When you use this shader for a cube object, the texture image can appear and disappear depending on the orientation of the camera. This is due to clipping by Unity, which doesn't render objects that are completely outside of the region of the scene that is visible in the camera (the view frustum). This clipping is based on the conventional transformation of game objects, which doesn't make sense for our shader. In order to deactivate this clipping, we can simply make the cube object a child of the camera (by dragging it over the camera in the Hierarchy Window). If the cube object is then placed in front of the camera, it will always stay in the same relative position, and thus it won't be clipped by Unity. (At least not in the game view.) ## Changes for Opaque Screen Overlays Many changes to the shader are conceivable, e.g. a different blend mode or a different depth to have a few objects of the 3D scene in front of the overlay. Here we will only look at opaque overlays. An opaque screen overlay will occlude triangles of the scene. If the GPU was aware of this occlusion, it wouldn't have to rasterize these occluded triangles (e.g. by using deferred rendering or early depth tests). In order to make sure that the GPU has any chance to apply these optimizations, we have to render the screen overlay first, by setting Tags { "Queue" = "Background" } Also, we should avoid blending by removing the Blend instruction. With these changes, opaque screen overlays are likely to improve performance instead of costing rasterization performance. ## Summary Congratulations, you have reached the end of another tutorial. We have seen: • How to render screen overlays with a Cg shader. • How to modify the shader for opaque screen overlays.
2020-06-02 12:59:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 25, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26335859298706055, "perplexity": 3155.1444339676536}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347424174.72/warc/CC-MAIN-20200602100039-20200602130039-00062.warc.gz"}
https://forthright48.com/2015/08/prime-factorization-of-factorial.html
# Problem

Given a positive integer $N$, find the prime factorization of $N!$. For example, $5! = 5 \times 4 \times 3 \times 2 \times 1 = 120 = 2^3 \times 3 \times 5$.

# Brute Force Solution

A possible solution is to calculate the value of $x = N!$ and then prime factorize $x$. But calculating $N!$ is tedious. We cannot fit $N!$ for $N > 20$ in a long long variable. We would need to use a Big Integer class, and that would make things slow. I will soon write a blog post on Big Integer; until then, know that using Big Integer it would take more than $N^2$ steps to calculate $N!$. Is there a better way?

# Limits on Prime

Before we move on to the solution, let us first decide the limit on primes. In order to factorize $x = N!$, we have to generate prime numbers. But up to which value? Should we generate all primes less than $\sqrt{x}$? Even for a small value of $N$ like $100$, $x$ can be huge with over $100$ digits in it; thus, $\sqrt{x}$ will also be huge. Generating so many primes is not feasible. Using the Sieve of Eratosthenes we could generate primes up to around $10^8$, which is nowhere near $\sqrt{100!}$.

Note that $N! = N \times (N-1) \times (N-2) \times \dots \times 2 \times 1$. That is, $N!$ is a product of numbers less than or equal to $N$ only. Now, can there be any prime greater than $N$ that divides $N!$? Suppose there is a number $A$ and we factorize it. It is trivial to realize that all its prime factors will be less than or equal to $A$. So in $N!$, which is the product of numbers less than or equal to $N$, if we decompose all those numbers into their prime factors, then they reduce to primes less than or equal to $N$. For example, $6! = 6 \times 5 \times 4 \times 3 \times 2 \times 1 = (2 \times 3) \times 5 \times 2^2 \times 3 \times 2 = 2^4 \times 3^2 \times 5$.

So the prime factors of $N!$ will be less than or equal to $N$. Generating primes up to $\sqrt{N!}$ is not necessary. We just need to generate all primes less than or equal to $N$.

# Prime Factorization

Now that we know the limit for primes, we are ready to begin factorizing the factorial. There is more than one way to achieve this. We will see three of them and discuss which one is best.

## First – Linear Loop From $1$ to $N$

We know that $N! = N \times (N-1) \times (N-2) \times \dots \times 2 \times 1$. So we could simply factorize every number from $1$ to $N$ and add to a global array that tracks the frequency of primes. Using the code for $factorize()$ from here, we could write a solution like the one below.

```cpp
vector<int> prime;
int primeFactor[SIZE]; // Size should be as big as N

void factorize( int n ) {
    int sqrtn = sqrt ( n );
    for ( int i = 0; i < prime.size() && prime[i] <= sqrtn; i++ ) {
        if ( n % prime[i] == 0 ) {
            while ( n % prime[i] == 0 ) {
                n /= prime[i];
                primeFactor[ prime[i] ]++; // Increment global primeFactor array
            }
            sqrtn = sqrt ( n );
        }
    }
    if ( n != 1 ) {
        primeFactor[n]++;
    }
}

void factFactorize ( int n ) {
    for ( int i = 2; i <= n; i++ ) {
        factorize( i );
    }
    // Now we can print the factorization
    for ( int i = 0; i < prime.size(); i++ ) {
        printf ( "%d^%d\n", prime[i], primeFactor[ prime[i] ] );
    }
}
```

We pass the value of $N$ to $factFactorize()$ in line $20$, and it calculates the frequency of each prime in $N!$. It starts a loop from $2$ to $N$ and factorizes each of them. In line $4$ we have the $factorize()$ function, modified a bit in lines $10$ and $16$ to suit our need. When those lines find a prime factor, they increase the frequency of that prime in the $primeFactor$ array.
It is simple and straightforward, but it takes $O(N \times factorize())$ time. We can do better.

## Second - Summation of Each Prime Frequency

Instead of factorizing everything from $1$ to $N$, how about we just find out how many times each prime occurs in $N!$ and list them? If $p_1$ occurs $a_1$ times, $p_2$ occurs $a_2$ times, ..., and $p_x$ occurs $a_x$ times, then $N! = p_1^{a_1} \times p_2^{a_2} \times \dots \times p_x^{a_x}$.

That sounds nice, but how do we find the frequency of prime factors in $N!$? Let us just focus on one prime factor, for example $2$, and find out how many times it occurs in $N!$. We will then extend this idea to other primes.

Let $N = 12$. How many times does $2$ occur in $12!$? We know that $12! = 12 \times 11 \times 10 \times \dots \times 1$. How many numbers from $1$ to $12$ have $2$ as a prime factor? $\frac{12}{2} = 6$ numbers do, and they are ${2, 4, 6, 8, 10, 12}$. So we can say that at least $2^6$ is a factor of $12!$. But is there more? Yes. Notice that $4 = 2^2$, so it has an extra $2$ in it that we did not count. That means for each number that has $2^2$ as a factor, we need to add $1$ to our result. How many numbers are there which have $2^2$ as a factor? $\frac{12}{4} = 3$ numbers, which are ${4, 8, 12}$. So we increase our frequency to $6 + 3 = 9$ and say we have at least $2^9$ in $12!$. But is that it? No. $8 = 2^3$, and for each number with $2^3$ as a factor we add $1$ to the result. So our result is $9 + \frac{12}{8} = 9 + 1 = 10$. Do we try with $16 = 2^4$ now? No. $12!$ cannot have any number with factor $2^4$ since $\frac{12}{16} = 0$. So we conclude that $12!$ has $2^{10}$ as its factor and no more.

Now, we extend this idea to other primes. What is the frequency of the prime factor $3$ in $12!$? $\frac{12}{3} + \frac{12}{9} + \frac{12}{27} = 4 + 1 + 0 = 5$. We repeat this for all primes less than or equal to $12$.

Therefore, we can say that for a given prime $p$, $N!$ will have $p^x$ as its prime factor, where $x = \frac{N}{p} + \frac{N}{p^2} + \frac{N}{p^3} + \dots$ (integer division, until the term becomes $0$).

So, using this idea, our code will look like the following.

```cpp
void factFactorize ( int n ) {
    for ( int i = 0; i < prime.size() && prime[i] <= n; i++ ) {
        int p = prime[i];
        int freq = 0;

        while ( n / p ) {
            freq += n / p;
            p *= prime[i];
        }

        printf ( "%d^%d\n", prime[i], freq ); // Printing prime^freq which is factor of N!
    }
}
```

This code factorizes $N!$ as long as we can generate all primes less than or equal to $N$. The loop in line $6$ runs until $\frac{n}{p}$ becomes $0$. This code has 3 advantages over the "First" code.

• We don't have to write the $factorize()$ code.
• Using this code, we can find how many times a specific prime $p$ occurs in $N!$ in $O(\log_p N)$ time. In the "First" code, we would need to run an $O(N)$ loop and add up the occurrences of $p$ in each number.
• It has a better complexity for factorization. Assuming the loop in line $6$ runs $\log_2 N$ times, this code has a complexity of $O(N \log_2 N)$. The code runs faster than this since we only loop over primes less than $N$, and at each prime the loop runs only $O(\log_p N)$ times. The "First" code ran with $O(N \times factorize())$ complexity, where $factorize()$ has complexity of $O\left(\frac{\sqrt{N}}{\ln \sqrt{N}} + \log_2 N\right)$.

This idea still has a small flaw, so the next one is better than this one.

## Third - Better Code than Two

Suppose we want to find out how many times $1009$ occurs in $(9 \times 10^{18})!$.
Let us modify the "Second" code to write another function that will count the result for us.

```cpp
long long factorialPrimePower ( long long n, long long p ) {
    long long freq = 0;
    long long cur = p;

    while ( n / cur ) {
        freq += n / cur;
        cur *= p;
    }
    return freq;
}
```

If we pass $n = 9 \times 10^{18}$ and $p = 1009$, it will return $8928571428571439$. But this is wrong. Line $7$ in the code overflows, resulting in a wrong answer. Try it yourself: print out the value of $cur$ at each step and see when it overflows.

We could change the condition in line $5$ into something like this to solve the situation:

```cpp
    while ( n / cur > 0 )
```

But this remedy won't work if $cur \times p$ overflows into a positive number. If we wanted, we could use techniques that avoid multiplying two numbers when the product crosses a limit, but there is a simpler way.

Note that $\frac{N}{p^3}$ is the same as $\frac{N/p^2}{p}$. So instead of saying that $res = \frac{N}{p} + \frac{N}{p^2} + \dots$, we could rewrite it as:

$x = N; \quad res = res + \frac{x}{p}$
$x = \frac{N}{p} = \frac{x}{p}; \quad res = res + \frac{x}{p}$
$x = \frac{N}{p^2} = \frac{N/p}{p} = \frac{x}{p}; \quad res = res + \frac{x}{p}$
$\dots$
until $x = 0$.

Instead of raising the power of $p$, we divide the value of $N$ by $p$ at each step. This has the same effect. So our code for finding the frequency of a specific prime should look like the following:

```cpp
long long factorialPrimePower ( long long n, long long p ) {
    long long freq = 0;
    long long x = n;

    while ( x ) {
        freq += x / p;
        x = x / p;
    }
    return freq;
}
```

There might still be inputs for which this code will overflow, but the chances of that are now lower. Now if we send in $n = 9 \times 10^{18}$ and $p = 1009$, this time we get $8928571428571425$ as our result.

If we apply this improvement to our $factFactorize()$ function, then it will become:

```cpp
void factFactorize ( int n ) {
    for ( int i = 0; i < prime.size() && prime[i] <= n; i++ ) {
        int x = n;
        int freq = 0;

        while ( x / prime[i] ) {
            freq += x / prime[i];
            x = x / prime[i];
        }

        printf ( "%d^%d\n", prime[i], freq );
    }
}
```

This code has less chance to overflow, so it is better.
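A quick cross-check of the final formula in Python, where arbitrary-precision integers sidestep the overflow issue discussed above; the expected outputs are the values worked out in the post:

```python
def factorial_prime_power(n, p):
    """Exponent of the prime p in n!, dividing n by p at each step (Legendre's formula)."""
    freq = 0
    while n:
        n //= p
        freq += n
    return freq

print(factorial_prime_power(12, 2))             # 10, as derived above
print(factorial_prime_power(12, 3))             # 5
print(factorial_prime_power(9 * 10**18, 1009))  # 8928571428571425
```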
2018-11-17 13:12:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6866292953491211, "perplexity": 387.86027700278754}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743521.59/warc/CC-MAIN-20181117123417-20181117145417-00337.warc.gz"}
https://spinnaker8manchester.readthedocs.io/en/latest/_modules/spynnaker/pyNN/external_devices_models/push_bot/spinnaker_link/push_bot_led_device/
```python
# Copyright (c) 2017-2019 The University of Manchester
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from spynnaker.pyNN.external_devices_models.push_bot.ethernet import (
    PushBotEthernetLEDDevice)


# NOTE: the class statement and parts of the signature were lost when this
# page was extracted. The class name below is reconstructed from the module
# path; the original module path also suggests a SpiNNaker-link vertex base
# class with its own parameters, which is not present in the extracted text
# and is omitted here, so treat this as approximate rather than verbatim.
class PushBotSpiNNakerLinkLEDDevice(PushBotEthernetLEDDevice):
    """ The LED of a PushBot
    """
    __slots__ = []

    default_parameters = {
        'n_neurons': 1,
        'label': None,
        'board_address': None,
        'start_active_time_front': None,
        'start_active_time_back': None,
        'start_total_period': None,
        'start_frequency': None}

    def __init__(
            self, led, protocol,
            n_neurons=default_parameters['n_neurons'],
            label=default_parameters['label'],
            board_address=default_parameters['board_address'],
            start_active_time_front=default_parameters[
                'start_active_time_front'],
            start_active_time_back=default_parameters[
                'start_active_time_back'],
            start_total_period=default_parameters['start_total_period'],
            start_frequency=default_parameters['start_frequency']):
        """
        :param led: The LED device to control
        :type led: ~spynnaker.pyNN.external_devices_models.push_bot.parameters.PushBotLED
        :param protocol: The protocol instance to get commands from
        :param int n_neurons: The number of neurons in the device
        :param str label: The label of the device
        :param board_address:
            The IP address of the board that the device is connected to
        :param start_active_time_front:
            The "active time" to set for the front LED at the start
        :type start_active_time_front: int or None
        :param start_active_time_back:
            The "active time" to set for the back LED at the start
        :type start_active_time_back: int or None
        :param start_total_period: The "total period" to set at the start
        :type start_total_period: int or None
        :param start_frequency: The "frequency" to set at the start
        :type start_frequency: int or None
        """
        # pylint: disable=too-many-arguments
        # n_neurons, label and board_address would be passed to the
        # SpiNNaker-link vertex base class in the full implementation,
        # which is not shown in the extracted source.
        super().__init__(
            led, protocol, start_active_time_front,
            start_active_time_back, start_total_period, start_frequency)
```
2022-01-23 17:50:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1891290545463562, "perplexity": 10060.303909087235}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304309.5/warc/CC-MAIN-20220123172206-20220123202206-00468.warc.gz"}
https://www.physicsforums.com/threads/explanation-to-this-velocity-equation.334543/
# Explanation to this velocity equation

1. Sep 3, 2009

### Satyr

I have already done this problem enough times forwards and backwards to get the answer, so this is more of a theory question rather than help with homework.

The equation for average velocity is Vavg = displacement/time, or deltaX/deltaT. This equates to V=(X2-X1)/(T2-T1), correct? In working through a problem, the correct version of this equation ended up being: V=(X1-X2)/(T1+T2).

Here's the question: In reaching her destination, a backpacker walks with an average velocity of 1.34 m/s, due west. This average velocity results because she hikes for 6.44 km with an average velocity of 2.68 m/s, due west, turns around, and hikes with an average velocity of 0.447 m/s, due east. How far east did she walk?

In fact, here's a link to the solution to the problem that I found:

My question is this: How was that form of the equation derived? I solved this many times using the standard equation and was doing the entire procedure correctly...but was getting the wrong answer as I was using my initial equation rather than the version I have mentioned here. Can someone explain to me what the reasoning is behind this?

Last edited by a moderator: Apr 24, 2017

2. Sep 3, 2009

### Jebus_Chris

$$v_{avg}=\frac{x_2-x_1}{t_2-t_1}$$

$$x_2$$ is your final position and $$x_1$$ is your initial position. $$x_1=0$$, $$x_2=6.44-d$$

For time, $$t_1$$ is your initial time and $$t_2$$ is your final time. $$t_1=0$$, $$t_2=d_1/v_1+d_2/v_2$$

3. Sep 3, 2009

### Satyr

Are you interchanging x and d for displacement? I'm still not following how Vavg=(deltaX)/(deltaT) or V=(X2-X1)/(T2-T1) turned into: V=(X1-X2)/(T1+T2)

4. Sep 3, 2009

### Jebus_Chris

|----------x-----|

You walk 10m right then 3m back. Now we find displacement: (10-3)-0 = 7. Another thing to point out: the guy who solved the formula has X1 = distance traveled in the first walk and X2 = distance traveled in the second walk. So if you walk X1 units right, and X2 units left, the displacement would be X1-X2. In the formula you're so interested in, you have an X2 and X1, but they do not equal the X1 and X2 he has defined. The same with t: t1 = time of first walk and t2 = time of second walk.

5. Sep 3, 2009

### Satyr

Okay, your diagram somewhat clears the water for me. So the displacement is 7 because your initial location was 0 and your final location was 7...which would work for Xf - Xo also. However, when writing the motion your final position is the 10-3 because you come back over the path you've already traveled, nulling its quantity? That clears up the displacement aspect of it for me.

Now, for the average time. Why is it added together in this example rather than being Tf-To like normal? Is it because your final travel time is actually the result of the time it takes you to make the westward hike + the eastward walk? Similarly to the displacement equation above, Tf=(T1+T2)-To?

6. Sep 3, 2009

### planck42

You first have to decide on a sign convention; which direction is positive and which is negative? Since the question looks only for a magnitude, it does not matter which convention you choose, just so long as you stick to it.

7. Sep 3, 2009

### Jebus_Chris

Yeah. $$\Delta t = t_f - t_o$$ Your total time, tf, is the time of walk 1 + walk 2. And the time that you start this at is 0. $$\Delta t = t_1+t_2$$
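For reference, a small numeric solution of the backpacker problem under the thread's setup (west taken as positive, 6.44 km = 6440 m; the variable names are just for illustration):

```python
# Average-velocity condition: (6440 - d) / (6440/2.68 + d/0.447) = 1.34, solved for d (metres east).
v_avg, d_west, v_west, v_east = 1.34, 6440.0, 2.68, 0.447

d_east = (d_west - v_avg * d_west / v_west) / (1 + v_avg / v_east)
print(round(d_east, 1), "m east")  # roughly 805 m, i.e. about 0.805 km
```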
2017-08-20 03:33:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7280058264732361, "perplexity": 1217.1931947102516}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105961.34/warc/CC-MAIN-20170820015021-20170820035021-00556.warc.gz"}
https://web2.0calc.com/questions/geometry-question_91
# Geometry Question

If AC = 36 inches, what is the measure of AB? (Please See Attached)

Jul 8, 2020

#1

If $$AC = 36$$ inches, what is the measure of AB?

$$\begin{array}{|rcll|} \hline \mathbf{(2x-6) + (x^2-13x)} &=& \mathbf{36} \\ x^2-11x-6 &=& 36 \\ \mathbf{x^2-11x-42} &=& \mathbf{0} \\\\ x &=& \dfrac{11\pm \sqrt{11^2-4*(-42)} } {2} \\\\ x &=& \dfrac{11\pm \sqrt{121+168} } {2} \\\\ x &=& \dfrac{11\pm \sqrt{289} } {2} \\\\ x &=& \dfrac{11\pm 17 } {2} \\\\ x &=& \dfrac{11 \mathbf{+} 17 } {2} \quad | \quad x > 0! \\\\ \mathbf{x} &=& \mathbf{14} \\\\ \mathbf{\text{AB}} &=& \mathbf{2x-6} \\ \text{AB} &=& 2*14-6 \\ \mathbf{\text{AB}} &=& \mathbf{22\ \text{inches}} \\ \hline \end{array}$$

Jul 9, 2020
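A quick check of the root selection and the resulting lengths (AB = 2x - 6 is stated in the answer; the other segment, presumably BC = x^2 - 13x, is inferred from the first line of the equation):

```python
import math

# x^2 - 11x - 42 = 0; keep the positive root
x = (11 + math.sqrt(11**2 + 4 * 42)) / 2
print(x, 2 * x - 6, x**2 - 13 * x)  # 14.0, AB = 22.0, remaining segment = 14.0 (22 + 14 = 36)
```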
2020-08-12 20:29:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9787992238998413, "perplexity": 3420.730803444798}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738944.95/warc/CC-MAIN-20200812200445-20200812230445-00503.warc.gz"}
http://cpr-astrophep.blogspot.com/2013/04/13046638-p-pinilla-et-al.html
## Explaining millimeter-sized particles in brown dwarf disks    [PDF]

P. Pinilla, T. Birnstiel, M. Benisty, L. Ricci, A. Natta, C. P. Dullemond, C. Dominik, L. Testi

Planets have been detected around a variety of stars, including low-mass objects such as brown dwarfs. However, such extreme cases are challenging for planet formation models. Recent sub-millimeter observations of disks around brown dwarfs measured low spectral indices of the continuum emission, which suggest that dust grains grow to mm sizes even in these very low mass environments. To understand the first steps of planet formation in scaled-down versions of T-Tauri disks, we investigate the physical conditions that can theoretically explain the growth from interstellar dust to millimeter-sized grains in disks around brown dwarfs. We modeled the evolution of dust particles under conditions of low-mass disks around brown dwarfs. We used coagulation, fragmentation and disk-structure models to simulate the evolution of dust, with zero and non-zero radial drift. For the non-zero radial drift, we considered strong inhomogeneities in the gas surface density profile that mimic long-lived pressure bumps in the disk. We studied different scenarios that could lead to an agreement between theoretical models and the spectral slope found by millimeter observations. We find that fragmentation is less likely and rapid inward drift is more significant for particles in brown dwarf disks than in T-Tauri disks. We present different scenarios that can nevertheless explain millimeter-sized grains. As an example, a model that combines the following parameters can fit the millimeter fluxes measured for brown dwarf disks: strong pressure inhomogeneities of $\sim$40% amplitude, a small radial extent $\sim$15 AU, a moderate turbulence strength $\alpha_{\mathrm{turb}}= 10^{-3}$, and average fragmentation velocities for ices $v_f = 10\ \mathrm{m\,s^{-1}}$.

View original: http://arxiv.org/abs/1304.6638
2018-04-20 22:22:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7248357534408569, "perplexity": 3591.571022726998}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125944742.25/warc/CC-MAIN-20180420213743-20180420233743-00480.warc.gz"}
https://mathoverflow.net/questions/251276/rationality-of-the-sum-of-the-reciprocals-of-the-values-of-a-polynomial-function
# Rationality of the sum of the reciprocals of the values of a polynomial function at the positive integers

Let $f$ be a polynomial function of degree at least $2$ with integer coefficients, and assume that $f(n)$ is nonzero for any positive integer $n$.

Question: Is it algorithmically decidable whether $$S(f) \ := \ \sum_{n = 1}^\infty \frac{1}{f(n)}$$ is rational or not? -- Which are the known necessary or sufficient criteria for the rationality or the irrationality of the value of this expression?

Examples: $S(n^2) = \zeta(2) = \frac{\pi^2}{6}$ and $S(n^3) = \zeta(3)$ are irrational, while $S(n^2+n) = 1$ is rational.

• Isn't it already open for arbitrary $n^{2k+1}$? – SashaP Oct 2 '16 at 20:47
• I'd be very surprised if there is any such. – T. Amdeberhan Oct 2 '16 at 21:40
• @SashaP: As far as I know, yes. -- But still it is conceivable that someone knows how to prove algorithmic undecidability, or can give criteria which apply to classes of polynomials which do not include $n^{2k+1}$. – Stefan Kohl Oct 2 '16 at 22:17
• Asking about algorithmic decidability is probably the wrong question. It could be that $S(f)$ is irrational except in certain cases where rationality is obvious. Then it would be decidable, but we wouldn't be able to prove that it is decidable. I think you're really just interested in which cases the irrationality is known. – Timothy Chow Oct 3 '16 at 18:04
• @TimothyChow: This is quite possible. Do you know of any heuristics which suggest that $S(f)$ is irrational except in cases where it is obviously rational (I mean one which is better than "the rationals are countable, but the irrationals are not, so assuming some 'well-behaved' kind of random distribution, almost all $S(f)$ should be irrational")? – Stefan Kohl Oct 3 '16 at 21:09

A perhaps not so interesting class of examples where the sums in question are known to be either transcendental or explicitly computable algebraics is referenced here: https://mathoverflow.net/a/33586 Perhaps that can direct you to more papers on the subject.

Edit: I should perhaps also mention that there are lattice-based algorithms that can reconstruct minimal polynomials of algebraic numbers from good enough approximations. Thus, given that the degree and logarithmic height of these values are bounded by computable constants, you may take a large partial sum, use said lattice techniques, and if you don't get a match conclude that it is in fact a transcendental.

• Thanks for a start! -- Though I don't see that the criterion in the post you refer to would decide any nontrivial case ... . – Stefan Kohl Oct 3 '16 at 10:56
• In the last paragraph: why are "the degree and logarithmic height of these values bounded by computable constants"? – Stefan Kohl Oct 3 '16 at 10:59

If $f(n)$ has only simple rational zeros, the paper Transcendental infinite sums is related. p.3:

Corollary 2.1. Let $f : \mathbb{Z} \to \overline{\mathbb{Q}}$ be periodic $\mod q$. Let $Q(X) \in \mathbb{Q}[X]$ have simple rational zeros. If $$S=\sum_{n=0}^\infty \frac{f(n)}{Q(n)}$$ converges, then $S$ equals a computable algebraic number or $S \not \in \overline{\mathbb{Q}}$. In the latter case we have ...

• What do we have in the latter case? – Stefan Kohl Oct 3 '16 at 11:01
• They give bounds for the latter case; check the public paper. – joro Oct 3 '16 at 11:37
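To illustrate the lattice-based reconstruction mentioned in the first answer, here is a rough numerical sketch using mpmath's PSLQ-based `findpoly`; the degree and coefficient bounds (4 and $10^6$) are arbitrary illustrative choices, not the computable constants referred to there:

```python
from mpmath import mp, findpoly, nsum, inf

mp.dps = 60  # working precision in decimal digits

# S(f) = sum_{n >= 1} 1/f(n), evaluated numerically
S = lambda f: nsum(lambda n: 1 / f(n), [1, inf])

val_rational = S(lambda n: n**2 + n)  # telescopes to 1
val_zeta2 = S(lambda n: n**2)         # zeta(2) = pi^2/6, irrational

print(findpoly(val_rational, 4, maxcoeff=10**6))  # a degree-1 relation such as x - 1
print(findpoly(val_zeta2, 4, maxcoeff=10**6))     # typically None: no small integer polynomial
```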
2020-02-20 07:28:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8736701011657715, "perplexity": 338.431402731814}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144708.87/warc/CC-MAIN-20200220070221-20200220100221-00155.warc.gz"}
https://calendar.math.illinois.edu?year=2004&month=11&day=29&interval=year&regexp=Graph+Theory+and+Combinatorics&use=Find
Department of Mathematics

# Seminar Calendar for Graph Theory and Combinatorics events the year of Monday, November 29, 2004.

Questions regarding events or the calendar should be directed to Tori Corkery.

Tuesday, January 27, 2004

3:00 pm in 241 Altgeld Hall, Tuesday, January 27, 2004

#### On strong chromatic index of bipartite graphs

###### Kittikorn Nakprasit (UIUC Math)

Abstract: A strong edge-coloring of a graph G is an edge-coloring in which every color class is an induced matching; that is, if uv and wz have the same color, then the subgraph induced by those four vertices has only those two edges. The strong chromatic index s'(G) is the minimum number of colors in a strong edge-coloring of G. Brualdi and Quinn conjectured that for every bipartite graph G, s'(G) is bounded by D1 D2, where D1 and D2 are the maximum degrees among vertices in the two partite sets. We give an affirmative answer for D1=2.

Tuesday, February 3, 2004

3:00 pm in 241 Altgeld Hall, Tuesday, February 3, 2004

#### Bounds on set systems with restricted k-wise intersections

###### Weiting Cao (UIUC Math)

Abstract: Maximizing the size of a family of subsets of an n-element set under some conditions on intersections of its members is a classical theme in extremal set theory. For example, Frankl and Wilson proved that $\sum_{i=0}^{s}\binom{n}{i}$ is an upper bound when the sizes of pairwise intersections are restricted to a set L of s nonnegative integers. Snevily proved their conjecture that n can be replaced with n-1 in the upper bound when L is a set of s positive integers. These bounds hold also when L is viewed as a set of s congruence classes modulo a prime p. We generalize these bounds by placing the restriction on the intersections of k distinct members of the family. Again let L be a set of s congruence classes modulo a prime p. Let H be a family of subsets of [n] such that the size modulo p of each member of H is not in L, but the size modulo p of every intersection of k distinct members of H is in L. We prove that $\sum_{i=0}^{s}\binom{n-1}{i}$ is an upper bound on |H|. This improves an earlier bound by Grolmusz having n in place of n-1. We use the linear algebra method, obtaining the bound from the dimension of a linear space of polynomials. This work is joint with Kyung-Won Huang and Douglas West.

Tuesday, February 10, 2004

3:00 pm in 241 Altgeld Hall, Tuesday, February 10, 2004

#### The best choice problem for partially ordered sets

###### Michal Morayne (Wroclaw University of Technology, Wroclaw, Poland)

Abstract: The classical secretary problem concerns the best strategy in an on-line decision process. The administrator interviews n candidates for a job. The candidates are linearly ordered in qualifications, but the administrator sees them in a random permutation and can only compare the relative ranks of the ones seen so far. The aim is to choose the absolute best candidate, but the only one that can be chosen is the currently interviewed one. The optimal strategy is well known and gives the administrator a probability better than 1/e of choosing the best candidate.
The problem has been generalized in various ways. Here we consider the problem with a partial order instead of a linear one. This seems to better approximate real life situations where options need not be linearly ordered. The aim of this talk is to discuss optimal and effective strategies for the partial order generalization of the secretary problem. Such strategies in some specific situations (such as a binary tree order and an unknown order) will be considered. Interesting combinatorial counting problems that arise when looking for such strategies will be discussed. Tuesday, February 17, 2004 3:00 pm in 241 Altgeld Hall,Tuesday, February 17, 2004 ###### Gexin Yu (UIUC Math) Abstract: We introduce the notion of H-linked graphs, where H is a fixed graph with vertices u1,...,um. A graph G is H-linked if for every choice of vertices v1,...,vm in G, there exists a subdivision of H in G such that vi is the branch vertex representing ui (for all i). This generalizes the notions of k-linked and k-ordered graphs. Given k and n, we determine the least integer d such that, for every graph H with k edges and minimum degree at least two, every n-vertex graph with minimum degree at least d is H-linked. This value D(k,n) appears to equal the least integer d' such that every n-vertex graph with minimum degree at least d' is k-connected. On the way to the proof, we extend a theorem by Kierstead et al on the least integer d" such that every n-vertex graph with minimum degree at least d" is k-ordered. This is joint work with Alexandr Kostochka. Tuesday, February 24, 2004 3:00 pm in 241 Altgeld Hall,Tuesday, February 24, 2004 #### On a common feature in certain sequences ###### Sergey Kitaev (University of Kentucky) Abstract: The Arshon sequence was given in 1937 in connection with the problem of constructing a square-free sequence on a given alphabet, that is a sequence that does not contain any subword of the type XX, where X is any nonempty word over the alphabet. The existence of such sequences, as well as the existence of sequences avoiding other kinds of repetitions, were studied in algebra, discrete analysis, and dynamical systems. The Dragon curve (the paperfolding sequence) was discovered by physicist John Heighway and described by Martin Gardner in 1978. It is defined as follows: we fold a sheet of paper in half, then fold in half again, and again, etc. and then unfold in such way that each crease created by the folding process is opened out into a 90-degree angle. The "curve" refers to the shape of the partially unfolded paper as seen edge on. The Dragon curve is related to the sigma-sequence used by Evdokimov in 1968 in order to construct chains of maximal length in the n-dimensional unit cube. The Peano curve was studied by the Italian mathematician Giuseppe Peano in 1890 as an example of a continuous space filling curve. The Peano infinite word is a discrete analog of the Peano curve. Are there any similarities between the Arshon sequence, the Dragon curve, and the Peano infinite word in terms of combinatorics on words? In this talk, I will answer this question using some recent results. Tuesday, March 2, 2004 3:00 pm in 241 Altgeld Hall,Tuesday, March 2, 2004 #### An unexpected drop in the circular chromatic number ###### Seog-Jin Kim (UIUC Math) Abstract: A (k,d)-coloring of a graph G is map from V(G) to the set of congruence classes modulo k so that adjacent vertices are assigned classes differing by at least d. 
The circular chromatic number of G is the minimum ratio k/d such that G has a (k,d)-coloring. This parameter has become an important subject of study in recent years as a refinement of the ordinary chromatic number, since the chromatic number is always the ceiling of the circular chromatic number. The chromatic number never declines by more than one when a vertex is deleted, and it was widely thought that the same would hold for the circular chromatic number. In this talk, we present the surprising construction by Xuding Zhu of an infinite family of graphs with circular chromatic number 4 such that deletion of any vertex reduces the circular chromatic number to 8/3. Tuesday, March 9, 2004 3:00 pm in 241 Altgeld Hall,Tuesday, March 9, 2004 #### Dominating sets in k-majority tournaments ###### Alexandr V. Kostochka (UIUC Math) Abstract: The k-majority tournament generated by 2k-1 linear orders on a finite set V is the tournament with vertex set V having an edge from v to w if v precedes w in at least k of the orders. Erdös and Moser proved that for some k(n) in O(n/log n), every n-vertex tournament is a k(n)-majority tournament. Kierstead and Trotter conjectured that for each k, there is a constant F(k) such that every k-majority tournament (with no restriction on the number of vertices) has a dominating set of size at most F(k). They also conjectured that F(2)=3; that is, every tournament generated by majority rule from a set of three linear orders has a dominating set of size 3. In this talk we prove these conjectures and describe some open problems. The result is joint work with G. Brightwell, H. Kierstead, and P. Winkler. Friday, March 12, 2004 1:00 pm in Altgeld Hall,Friday, March 12, 2004 #### Global Optima Results for the Kauffman \$NK Model 1:00 pm in Altgeld Hall,Friday, March 12, 2004 #### To Be Announced Tuesday, March 16, 2004 3:00 pm in 241 Altgeld Hall,Tuesday, March 16, 2004 #### Global optima results for the Kauffman NK model ###### Hemanshu Kaul (UIUC Math) Abstract: Many scenarios in theoretical biology, physics, and business organizations can be modeled as systems with several interacting components that can be in various states. The aim is to maximize a performance measure involving contributions from each component. This measure may depend on both the state of each component and on interactions between components. In 1987, Kauffman and Levin introduced a combinatorial optimization model for such systems, called the Kauffman NK model, where N is the number of components of the system and K measures the interaction between the components. This was proposed to model the evolution of genomes in theoretical biology but has since been applied in other areas as listed above. Previous research on the NK model has emphasized simulations and analysis of local optima. Here we focus on rigorous results for global optima. We describe a computational setup using a stochastic network model, which leads to applicable strategies for computing bounds on global optima when K is small or is close to N. Recent papers used tools from analysis and probability to obtain bounds on the expected value of the global optima for fixed K and large N. We present bounds when K grows with N, using elementary probabilistic combinatorics and order statistics. 
We use a `dependency' graph to convert the problem of bounding order statistics of dependent random variables into that of independent random variables while incorporating quantitative information about mutual dependencies among the underlying random variables. If time permits, an alternate upper bound and the analysis for the cases of underlying uniform and normal distributions will be presented. This is joint work with Prof. S.H. Jacobson. Tuesday, March 30, 2004 3:00 pm in 241 Altgeld Hall,Tuesday, March 30, 2004 #### A hierarchy of randomness for graphs ###### Vera Sos (Hungarian Academy of Science) Abstract: We formulate four families of problems with the aim of distinguishing different levels of randomness. The first is completely non-random, being the ordinary Ramsey-Turan problem. In the three subsequent problems we formulate randomized variations of it. We show that these four levels form a hierarchy. The problems and results are strongly related to three basic topics in graph theory: extremal graph theory, Ramsey theory, and the theory of random graphs. (Joint work with M. Simonovits.) Tuesday, April 6, 2004 3:00 pm in 241 Altgeld Hall,Tuesday, April 6, 2004 #### When the greedy algorithm fails ###### Gregory Gutin (Royal Holloway, University of London) Abstract: We provide a characterization of the cases when the greedy algorithm may produce the unique worst possible solution for the problem of finding a minimum weight base in a uniform independence system when the weights are taken from a finite range. We apply this theorem to TSP and the minimum bisection problem. The practical message of this talk is that the greedy algorithm should be used with great care, since for many optimization problems its usage seems impractical even for generating a starting solution (that will be improved by a local search or another heuristic). (Joint work with J. Bang-Jensen and A. Yeo.) Tuesday, April 13, 2004 3:00 pm in 241 Altgeld Hall,Tuesday, April 13, 2004 #### Cones over graphs ###### Michael Stiebitz (Technical University of Ilmenau, Germany) Abstract: The cone Mr(G) over a given graph G is obtained by taking the categorial product of G and a path P of length r+1 with a loop at one end, and then identifying all vertices whose second coordinate is the non-loop end of P. Some instances of this construction are well known. The cone M1(G) is the graph obtained from G by adding a vertex joined to all of V(G). The cone M2(G) is Mycielski's construction over G. This was invented in 1955 by Mycielski to generate triangle-free k-chromatic graphs for all k >= 2. It is easy to show that chi(M1(G))=chi(G)+1 and chi(M2(G))=chi(G)+1. Do all cones over a graph G have chromatic number larger than G? For example, is Mr(C2k+1) always 4-chromatic? We show that this question has a topological flavor. Tuesday, April 20, 2004 3:00 pm in 241 Altgeld Hall,Tuesday, April 20, 2004 #### Extremal functions for graph linkages ###### Paul Wollan (Georgia Institute of Technology) Abstract: A graph G is k-linked if for any 2k distinct vertices s1,...,sk and t1,...,tk there exist disjoint paths P1,...,Pk such that for each i the endpoints of Pi are si and ti. Bollobás and Thomason showed that connectivity at least 22k suffices to make a graph k-linked. Using a more direct induction argument, we have shown that 18k-connected graphs are k-linked. I will outline the argument and discuss recent improvements leading to a result of Kawarabayashi, Kostochka, and Yu that 12k-connected graphs are k-linked. 
Along these same lines, we further improved the analysis to show that connectivity at least 10k is sufficient. I will also discuss how the proof method can be used to show an optimal bound in the case k=3 and how the proof method can be used to find extremal functions for more general structures. This is joint work with Robin Thomas. Tuesday, April 27, 2004 3:00 pm in 241 Altgeld Hall,Tuesday, April 27, 2004 ###### Norihide Tokushige (Ryukyu University and Emory University) Abstract: We present some Erdos-Ko-Rado type problems and partial results concerning the "r-wise t-intersecting" version, the "r-wise intersecting and r-wise union" version, and the "r-wise cross-intersecting" version. One of our main tools is random walks; we show how to use random walks to get upper bounds for the size of multiply intersecting families. This is joint work with Peter Frankl. Tuesday, May 4, 2004 3:00 pm in 241 Altgeld Hall,Tuesday, May 4, 2004 #### Path-perfection of complete bipartite graphs ###### Weiting Cao (UIUC Math) Abstract: A graph G is path-perfect if there is a positive integer n such that the edge set E(G) of the graph G can be partitioned into n paths of lengths 1,2,...,n. We prove the conjecture of Fink and Strait: For t <= s, the complete bipartite graph Ks,t is path-perfect if and only if there is a positive integer n such that 2t >= n and n(n+1)=2ts. (This is joint work with Prof. Peter Hamburger.) Tuesday, August 31, 2004 3:00 pm in 241 Altgeld Hall,Tuesday, August 31, 2004 #### Two topics revisited ###### Alexandr Kostochka (UIUC Math) Abstract: First topic: Small triangle-free planar graphs that are not 3-list colorable. An example on 119 vertices was reported a year ago. In this talk, two examples on 98 and 97 vertices will be presented. This is joint work with A. Glebov and V. Tashkinov. Second topic: An extension of Brooks' Theorem. Kittikorn Nakprasit presented last winter our extension of Brooks' Theorem: If G is a KD+1-free graph with maximum degree at most D (where D >= 3), and f is a proper D-coloring of G-v for some v\in V(G), then G has a proper D-coloring f' such that the sizes of all color classes except one are the same. I will discuss some corollaries of this result. Tuesday, September 7, 2004 3:00 pm in 241 Altgeld Hall,Tuesday, September 7, 2004 #### Results on the pagenumber of k-trees ###### Jennifer Vandenbussche (UIUC Math) Abstract: A k-page embedding of a graph G is an ordering of V(G) (along the "spine" of a book) together with an assignment of E(G) to k pages such that no two edges on the same page are crossing. The pagenumber of a graph G is the minimum k such that G has a k-page embedding. The class of k-trees is defined inductively: A graph G is a k-tree if it is Kk or is obtained from a k-tree by adding a new vertex adjacent to a k-clique. It has been proved that the pagenumber of a k-tree is at most k+1, and conjectured that k is in fact a valid upper bound. We present an example of a 3-tree that cannot be embedded in three pages, providing a counterexample to this conjecture. We also present a new class of k-trees for which the conjecture holds and give an algorithm to produce a k-page embedding. (This is joint work with Gexin Yu and Douglas West as part of the summer REG.) 
Tuesday, September 14, 2004 3:00 pm in 241 Altgeld Hall, Tuesday, September 14, 2004 #### The number of edge-colorings with no monochromatic cliques ###### Jozsef Balogh (Ohio State University) Abstract: Let F(n,r,k) denote the maximum number of distinct r-edge-colorings of an n-vertex graph that avoid monochromatic copies of K_k. Erdös and Rothschild conjectured more than 20 years ago that when n is sufficiently large, F(n,2,k) = 2^{t_{k-1}(n)}, where t_k(n) is the maximum number of edges of a K_{k+1}-free graph with n vertices (determined by Turán's Theorem). This was proved for k=2 by Yuster in 1996. We prove this conjecture for up to 3 colors and disprove it for larger r. That is, for every fixed k and for n sufficiently large, F(n,2,k) = 2^{t_{k-1}(n)} and F(n,3,k) = 3^{t_{k-1}(n)}. On the other hand, for fixed r > 3 and k > 2, the function F(n,r,k) is exponentially bigger than r^{t_{k-1}(n)}. The proofs use Szemerédi's Regularity Lemma plus additional tools in extremal graph theory and provide a rare example of a precise result proved by the Regularity Lemma. We shall review several other problems that were solved using our methods. (This is joint work with N. Alon, P. Keevash, and B. Sudakov.) Tuesday, September 21, 2004 3:00 pm in 241 Altgeld Hall, Tuesday, September 21, 2004 #### How to Exchange Items ###### Sujay Sanghavi (UIUC Coordinated Science Lab) Abstract: This work looks at the exchange of goods in the absence of a divisible commodity of common value, like money. Each agent comes to the market with one item and a strictly ordered preference list of (a subset of) other items it would be willing to exchange its own item for. All lists are revealed to a central authority, which then makes a recommendation on how the agents should trade so that each agent gets one item. It is shown that a simple, fast "greedy" algorithm for setting up the exchanges has surprising properties: no group of agents can deviate from the recommendation, or jointly reveal false lists to the centre, to the advantage of all members of the group - EVEN IF other agents in the system are deviating / lying. Tuesday, September 28, 2004 3:00 pm in 241 Altgeld Hall, Tuesday, September 28, 2004 #### Fire containment on trees and grids ###### Stephen Hartke (UIUC Math -- NOTE change of speaker) Abstract: We consider a deterministic discrete-time model of fire spread introduced by Hartnell, where the fire spreads to adjacent vertices at each time step. A limited number of firefighters can be deployed per time step to protect some vertices from catching fire. How should the firefighters be deployed to minimize the total number of burnt vertices? We consider this question for finite trees and for infinite d-dimensional square grids. Tuesday, October 5, 2004 3:00 pm in 241 Altgeld Hall, Tuesday, October 5, 2004 #### Tree-thickness and caterpillar-thickness of connected graphs ###### Qi Liu (UIUC Math) Abstract: In 1978, Chung proved that every connected graph G with n vertices decomposes into at most ⌈n/2⌉ trees. We prove that a connected graph with n vertices and girth g decomposes into at most ⌊n/g⌋+1 trees, if g >= 5, and this is sharp. We prove weaker results when g=4. A caterpillar is a tree having a single path incident to all edges. We prove that a connected outerplanar graph G with girth 4 decomposes into at most ⌈3n/8⌉ caterpillars, and this is sharp. This is joint work with Douglas West and Derrick Cheng.
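As a small computational companion to the Balogh abstract above (an illustrative helper of my own, not part of any abstract): the Erdős–Rothschild formula is stated in terms of the Turán number t_k(n), the number of edges of the balanced complete k-partite Turán graph on n vertices, and for small n it can be evaluated directly.

# Edge count of the Turan graph T(n, k): split the n vertices into k parts as
# evenly as possible; edges join vertices lying in different parts.
turan_edges <- function(n, k) {
  base <- n %/% k; extra <- n %% k
  parts <- c(rep(base + 1, extra), rep(base, k - extra))
  (n^2 - sum(parts^2)) / 2
}
turan_edges(10, 2)     # 25, i.e. t_2(10), the edge count of K_{5,5}
2^turan_edges(10, 2)   # 2^{t_2(10)}; the theorem gives F(n,2,3) = 2^{t_2(n)} only for large n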
Tuesday, October 12, 2004 3:00 pm in Altgeld Hall,Tuesday, October 12, 2004 #### The structure of pebbling moves and optimal graph pebbling ###### Kevin Milans (UIUC Computer Science) Abstract: Consider a graph G and a distribution of pebbles to the vertices of G. A pebbling move consists of removing two pebbles from some vertex v and placing one pebble on a neighbor of v. We present a characterization of when a given set of (unordered) pebbling moves may be placed in some order \sigma so that \sigma is a valid sequence of pebbling moves. As a consequence, when designing sequences of pebbling moves, one may focus on choosing which pebbling moves to make as opposed to the order in which to make them. We also present results on optimal graph pebbling. The optimal pebbling number of a graph G is the minimum k such that there exists a distribution of k pebbles to the vertices of G such that for any vertex v, there exists a sequence of pebbling moves which results in a pebble on v. We show that for any connected n-vertex graph G, the optimal pebbling number of G is at most the ceiling of 2n/3. If time permits, we may give a simplified proof of the well-known results equality holds when G is a path or a cycle. (Joint work with David Bunde, Bryan Clark, Dan Cranston, Douglas West, and Erin Wolf.) Tuesday, October 19, 2004 3:00 pm in 241 Altgeld Hall,Tuesday, October 19, 2004 #### Large graphs with no long induced path ###### Douglas B. West (UIUC Math) Abstract: A graph is H-free if it has no induced subgraph isomorphic to H. By analogy with the classical extremal Turan problem for forbidden subgraphs, let ex*(D;H) be the maximum number of edges in an H-free connected graph with maximum degree D. This value is finite if and only if H is a disjoint union of paths. Earlier results include ex*(D;P4)=D2 and the exact computation of ex*(D;2P3). For m >= 6, we improve the known bounds by showing that ex*(D;Pm)\in \Theta(D\ceil(m/2)), with leading coefficient between 1/8 and 1/2 when m is odd and between 1/2 and 2 when m is even. For m=5, we determine the exact value. (Joint work with Myung Chung and Tao Jiang.) Tuesday, October 26, 2004 3:00 pm in 241 Altgeld Hall,Tuesday, October 26, 2004 #### The Rényi-Ulam pathological liar game with a fixed number of lies ###### Robert Ellis (Texas A&M University) Abstract: The q-round Rényi-Ulam pathological liar game with k lies on the set [n]:={1,...,n} is a 2-player perfect information zero sum game. In each round Paul chooses a subset A of [n] and Carole either assigns 1 lie to each element of A or to each element of [n]-A. Paul wins if after q rounds there is at least one element with k or fewer lies. The game is equivalent to a covering problem in the discrete hypercube, and it is dual to the original Rényi-Ulam liar game, for which the winning condition is that at most one element has k or fewer lies. We give the exact smallest n for which Paul can win the pathological liar game with 1 or 2 lies, and we show that n is within an absolute constant of the coding theoretic sphere bound when k is fixed. This is already known to hold for the original Renyi-Ulam liar game due to a result of J. Spencer. Tuesday, November 2, 2004 3:00 pm in 241 Altgeld Hall,Tuesday, November 2, 2004 #### On Graph Coloring: list-coloring, coloring extensions, and graphs on surfaces ###### Joan Hutchinson (Macalester College and UColorado-Denver) Abstract: We consider solved and unsolved problems on list-coloring, coloring extensions, and their connections. 
In the latter, parts of a graph are precolored, and one asks when and how the precoloring extends to the whole graph. A precoloring constrains the colors on neighboring vertices and so leads to a list-coloring problem. We consider these problems for all graphs, for planar graphs, and for graphs on surfaces. This talk includes joint work with Mike Albertson of Smith College, Emily Moore of Grinnell College, and Radhika Ramamurthi (UIUC graduate) of CSU San Marcos. Tuesday, November 9, 2004 3:00 pm in 241 Altgeld Hall,Tuesday, November 9, 2004 #### The Mathematics of Postage: New Fast Algorithms for the Frobenius Problem ###### Stan Wagon (Macalester College, St. Paul, Minnesota) Abstract: Let A be a finite set of positive integers, viewed as postage stamp denominations. It turns out that when the elements of A have no common factor, there is a largest amount of postage that cannot be expressed using stamps with these denominations; larger values are nonnegative integer combinations of the elements of A. For example, if A = {6, 9, 20} (the Chicken McNugget numbers), then every number beyond 43 can be represented. The greatest nonrepresentable integer is called the "Frobenius number" of A. The classic Frobenius problem for {a, b, c, ...} has two parts: (1) determine the Frobenius number, and (2) given a target M, find nonnegative coefficients x, y, z, ... such that x a + y b + z c + ... = M. I will show how a shortest-path algorithm for directed weighted graphs leads to a reasonably efficient solution for input sizes less than one million. More advanced techniques lead to a fast solution even when the integers are very large, provided that the size of A is at most 6. (Joint work with Dale Beihoffer, David Einstein, and Albert Nijenhuis.) Tuesday, November 16, 2004 3:00 pm in 241 Altgeld Hall,Tuesday, November 16, 2004 #### Ramsey results for 3-coloring and odd cycles ###### Miklos Simonovits (Renyi Institute, Hungary, and University of Memphis) Abstract: For graphs G1,..., Gk, the Ramsey number R(G1,...,Gk) is the minimum integer N such that for any k-coloring of edges of the complete graph KN there exists a color i for which the corresponding color class contains Gi as a subgraph. Bondy and Erdös conjectured that if n is odd, then R(Cn,Cn,Cn)=4n-3. We prove this conjecture and some related stability theorems. Tuesday, November 30, 2004 3:00 pm in 241 Altgeld Hall,Tuesday, November 30, 2004 #### Finding a monochromatic subgraph or a rainbow path ###### Jeno Lehel (University of Memphis) Abstract: Let f(G,H) denote the least integer n such that in every coloring of the edges of a clique Kn there is either a monochromatic copy of the graph G or a multicolored (rainbow) copy of the graph H. For particular cases of G or H the mono/rainbow function f relates to the usual Ramsey and local Ramsey numbers. We show that for the paths Pk with k=4 or k=5, f(G,Pk) equals the (k-2)--color diagonal Ramsey number of G. A similar mono/rainbow function defined for complete bipartite graphs will be also mentioned. (Joint work with A. Gyárfás, R.H. Schelp, and P. Balister.) Tuesday, December 7, 2004 3:00 pm in 241 Altgeld Hall,Tuesday, December 7, 2004 #### Decomposition of products of regular graphs into isomorphic trees ###### Douglas B. West (UIUC Math) Abstract: Let T be a tree with m edges. Ringel conjectured that the complete graph K2m+1 decomposes into copies of T; such a partition is a T-decomposition. Häggkvist posed the more general conjecture that every 2m-regular graph has a T-decomposition. 
Graham and Häggkvist conjectured that also every m-regular bipartite graph has a T-decomposition. Later work by Snevily and by Avgustinovitch obtained T-decompositions for various classes of 2m-regular graphs and m-regular bipartite graphs. We extend their ideas to enlarge the families of 2m-regular graphs and m-regular bipartite graphs that are known to have T-decompositions. The new families consist of various cartesian products of regular graphs. (This is joint work with Alexandr Kostochka.) Tuesday, December 14, 2004 12:00 pm in 241 Altgeld Hall,Tuesday, December 14, 2004 #### How Random is the Human Genome? ###### Peter Winkler (Dartmouth) Abstract: Now that the human genome is (mostly) sequenced, how do we know when some statistical fact about that random-looking string of 3 billion A's, C's, G's and T's is significant? For example, there are strings of length 11 which appear nowhere in the sequence; does this mean anything? The speaker will describe an efficient combinatorial approach to problems of this sort, implemented with a group of scientists at Rockefeller University (Andy DeWan, Chad Hayes, Josephine Hoh, Jurg Ott, Tony Parrado, and Richard Sackler).
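A rough back-of-the-envelope companion to the last abstract (my own arithmetic, assuming an i.i.d. uniform model over the four letters, which a real genome certainly is not): under that naive model a fixed 11-letter word is still expected to occur hundreds of times in 3 billion letters, which is why completely absent 11-mers call for a careful significance analysis.

# Expected occurrences of one fixed 11-mer in a uniform random sequence of
# length 3e9 (ignores overlaps and the biased base composition of real DNA).
L <- 3e9; k <- 11
(L - k + 1) * (1/4)^k   # about 715 expected hits for each 11-mer
4^k                     # about 4.2 million distinct 11-mers exist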
2018-05-27 01:36:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.666435718536377, "perplexity": 982.0972006852967}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867977.85/warc/CC-MAIN-20180527004958-20180527024958-00433.warc.gz"}
http://mathhelpforum.com/geometry/212453-determine-if-ab-parallel-cd.html
# Math Help - Determine if AB is parallel to CD? 1. ## Determine if AB is parallel to CD? I had this question on a test, but my teacher is saying that it's not parallel; however, I am fairly certain that it is parallel. Angles ABD and BDC are equal, making them alternate interior angles with BD as the transversal. However, she is basing it on angles ADB and CBD, claiming they aren't equal. Well, of course they aren't equal, because they do not lie between the two lines that we are trying to prove parallel. 2. ## Re: Determine if AB is parallel to CD? Originally Posted by pacoMac I had this question on a test, but my teacher is saying that it's not parallel; however, I am fairly certain that it is parallel. Angles ABD and BDC are equal, making them alternate interior angles with BD as the transversal. However, she is basing it on angles ADB and CBD, claiming they aren't equal. Well, of course they aren't equal, because they do not lie between the two lines that we are trying to prove parallel. The sum of the interior angles of a triangle is 180 degrees. Use triangle BCD and that will give you x. Then use the sum of interior angles for triangle BAD. That should give you a good start. -Dan 3. ## Re: Determine if AB is parallel to CD? Yes, I already got that (which I didn't put into the previous post), and x was 25, making CDB = 44 degrees and angle ABD = 44, since 180 - (93 + 43) = 44. That is where I assumed that AB and DC were parallel, but my teacher is convinced that it isn't. 4. ## Re: Determine if AB is parallel to CD? Originally Posted by pacoMac Yes, I already got that (which I didn't put into the previous post), and x was 25, making CDB = 44 degrees and angle ABD = 44, since 180 - (93 + 43) = 44. That is where I assumed that AB and DC were parallel, but my teacher is convinced that it isn't. The two are parallel if and only if $m\left( {\angle A} \right) + m\left( {\angle D} \right) = 180^\circ$. I agree that they are parallel.
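Since the figure itself is not reproduced here, the numbers below are only the ones quoted in the thread (angle A = 93°, angle ADB = 43°, and the derived 44° angles); this just re-checks the arithmetic behind the two tests for parallelism.

# Re-checking the arithmetic quoted in the thread (figure not shown here).
A   <- 93             # angle DAB, as quoted
ADB <- 43             # angle ADB, as quoted
ABD <- 180 - A - ADB  # angle ABD from the angle sum in triangle ABD
BDC <- 44             # angle BDC obtained from triangle BCD with x = 25
ABD == BDC                 # TRUE: the alternate interior angles match
A + (ADB + BDC) == 180     # TRUE: co-interior angles A and D sum to 180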
2014-08-28 16:44:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5985330939292908, "perplexity": 398.3022333791427}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500830903.34/warc/CC-MAIN-20140820021350-00226-ip-10-180-136-8.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/problem-with-feynmans-ie-prescription.511569/
# Homework Help: Problem with Feynman's iε prescription 1. Jul 3, 2011 ### alfredoalfred 1. The problem statement, all variables and given/known data My problem is with Feynman's iε prescription. Trying to solve an integral, it happens that there is a singularity s on the way. s depends on the energy E of a particle. This integral is not convergent and doesn't make any sense. To solve this problem, one displaces the energy from E to E - iε, with ε being very small. (In the case of an antiparticle it is E + iε.) 2. Relevant equations So the denominator of the dr integral looks like this: [ sqrt(r) - sqrt(2*(M-E')) ]. Now r gets integrated in such a way that it would always hit the singularity s = sqrt(2*(M-E')). Changing E to E-iε, however, gets rid of the singularity. My problem is now not how to solve the integral with the iε (which I'm glad I found out already by myself), but to argue why we need such an iε in there in the first place. 3. The attempt at a solution I tried to look up several books on quantum field theory, but as I haven't taken such a course yet I don't understand them very well. There is so much stuff in there that I'm not really sure whether I missed an explanation. I'm very confused :( . Can you help me and give me a tip on where I can find anything that explains it in a maybe not-so-rigorous way? Trying to google "Feynman iε prescription" has not really helped me :(
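Not an answer to the "why" question, but a small numerical sketch (my own, with made-up numbers) of what the iε does to an integral whose integrand blows up inside the integration range: shifting the pole off the real axis makes the integral finite, and as ε shrinks the imaginary part approaches π (the Sokhotski–Plemelj behaviour), while the real part approaches the principal value.

# Integrate 1/(x - x0 - i*eps) over [0, 1] with the pole at x0 inside the range.
# Real and imaginary parts are integrated separately; splitting at x0 helps the
# adaptive quadrature resolve the narrow peak.
x0  <- 0.3
eps <- 1e-3
re_f <- function(x) (x - x0) / ((x - x0)^2 + eps^2)
im_f <- function(x) eps / ((x - x0)^2 + eps^2)
re_val <- integrate(re_f, 0, x0)$value + integrate(re_f, x0, 1)$value
im_val <- integrate(im_f, 0, x0)$value + integrate(im_f, x0, 1)$value
re_val   # ~ log((1 - x0)/x0), the principal value of the integral
im_val   # ~ pi, essentially independent of eps as eps -> 0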
2018-05-23 09:31:25
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8383548855781555, "perplexity": 615.673040656822}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865468.19/warc/CC-MAIN-20180523082914-20180523102914-00274.warc.gz"}
http://nrich.maths.org/7169/solution
# Colourful Tiles ##### Stage: 3 Short Challenge Level: Consider the colour at the top. There are $4$ different choices for this: red, yellow, green and blue. For the right hand colour, this can be any of the other $3$ colours that have not yet been used. The bottom colour can then be either of the other $2$ colours. The left hand part then has to have the remaining colour. This means there are $4 \times 3 \times 2 \times 1 = 24$ different ways to paint all four sections. This means there are $23$ other ways to paint the tile. This problem is taken from the UKMT Mathematical Challenges.
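A throwaway brute-force confirmation of the $4 \times 3 \times 2 \times 1$ count (my own check, not part of the original solution):

# Enumerate all 4^4 assignments of four colours to the four sections, then keep
# only those that use each colour exactly once.
grid <- expand.grid(top = 1:4, right = 1:4, bottom = 1:4, left = 1:4)
sum(apply(grid, 1, function(tile) length(unique(tile)) == 4))   # 24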
2017-02-27 02:21:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5435231328010559, "perplexity": 701.6952264289081}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172404.62/warc/CC-MAIN-20170219104612-00140-ip-10-171-10-108.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/special-relativitys-effect-on-density.902100/
# I Special Relativity's effect on density Tags: 1. Jan 30, 2017 ### MiLara Special relativity states that according to an observer at rest, a measuring stick on a moving platform will appear shorter. Would this observer still see the measuring stick as comprising the same number of atoms as the observer who is at rest with respect to the measuring stick? If this is the case, would the first observer actually measure the measuring stick as being more dense, since its length is contracted? I have a sense that this would violate conservation of mass and energy, as this should still hold true regardless of the reference frame. I am no expert on relativity, so any insight as to where my logic is flawed would be greatly appreciated. 2. Jan 30, 2017 ### phinds Nothing is violated, since the observer KNOWS that his observation is only valid in his reference frame and not in the object's rest frame, where it counts. 3. Jan 30, 2017 ### Staff: Mentor Yes. I am not sure what you mean by "first" observer, since you only described one observer. However, in any case the direct answer is that the density of an object is higher in a frame where it is moving than in its rest frame. No conservation law is violated by the above. However, it is very important to understand the difference between "conserved" and "invariant". A conserved quantity does not change over time in a given reference frame. An invariant quantity is the same in all reference frames. Energy is conserved, but not invariant. Different frames will have different values for the energy, but in each frame that value will not change over time. Mass (the usual invariant mass used in modern relativity) is both conserved and invariant. Different frames will agree on the mass and also find that it will not change over time. 4. Jan 30, 2017 ### MiLara If energy and mass are physically the same thing, how can one be invariant and conserved and the other just be conserved? Also, is the increased relative density due to the kinetic energy of the measuring stick? 5. Jan 30, 2017 ### weirdoguy They are not. You are probably thinking about rest energy, but that's not total energy. 6. Jan 30, 2017 ### m4r35n357 That is a simplification, and not a great one IMO. The observer will "measure" (in some sense) the stick to be shorter, but the ends of the stick as measured are not the same age in the stick's frame. Length contraction is not what it might seem at first! In any case, none of this represents what you would really see, which is a bit more complicated . . . this video will give you a better idea. 7. Jan 30, 2017 ### Bartolomeo That means, if I know the readings of clocks on the ends of the rod, I can determine its velocity and the direction of its motion. Am I right? 8. Jan 30, 2017 ### m4r35n357 I think so (together with the length you "measure"). If that is too vague, sorry, but I don't really go in for this type of calculation, as I hinted above. 9. Jan 30, 2017 ### Comeback City Einstein related energy and mass through $E^2 = p^2c^2 + m^2c^4$. That doesn't mean they are physically the same thing, though. 10. Jan 30, 2017 ### pervect Staff Emeritus The usual formulation focuses on energy and not mass.
The topic of mass in special relativity would probably require a separate post, I'll just briefely mention that it's worth learning about how "relativistic mass" is different from "invariant mass", and the importance of being clear about which concept of mass one is a) personally using and b) which concept the author of an article or paper or post on PF that one is reading is using. Confusion arises when the reader's concpets differ from the authors concept. To avoid a lengthly digression (such as which one is better), I'll focus on energy and it's conservation. In special relativity, energy and momentum are both regarded as part of something larger, called the energy-momentum four vector. This can be regarded as being a consequence of the fundamental inter-relation between space and time. Note that length contraction can also be regarded as a consequence of this same relationship, so the two are closely related - and not particularly intuitive until one learns SR. See the wiki article on the energy-momentum 4-vector <<link>>. The density of energy/momentum is modeled by another mathematical object, called the stress-energy tensor. The wiki article is here <<link>>, but it might not make a lot of sense without the right backround. The stress-energy tensor can be regarded as describing the flow of energy-momentum. If you read the details of the wiki article (or better yet a textbook reference), you'll see that there are applicable conservation laws, but the mathematical form of these laws and the mathematical entities (such as four-vectors and the stress-energy tensor, which is a rank 2 tensor) that are used to describe the applicable conservation laws may not be familiar. They're still there though. The number of atoms in the bar does not changed, of course. The mathematical object that describes the density of atoms per unit volume is known as the number-flux four vector. There used to be a brief (and not very clear) description of it in the Wiki, but I don't see it anymore. This article <<link>> describes the number-flux four-vector and how it can be used to motivate the stress-energy tensor of a swarm of particles (for instance, a gas made up of moving atoms). But it's rather advanced. The mathematical laws that describe the conservation of particles, and also the conservation of charge, are called the "continuity equation". An example for how the applicable laws look for the conservation of charge is given in this wiki article <<link>>. The same principles apply to the conservation of atoms. For the stress-energy tensor, the applicable conservation law says that the divergence of the stress-energy tensor is zero. I'll give an honorable mention to the book "Div, Grad, Curl and all that" https://www.amazon.com/Div-Grad-Cur...814920&sr=1-1&keywords=div+curl+grad+all+that, though this book gives only the vector calculus version of the relationship between conservation laws and divergence free flows. This is probably a good thing though, the use of vector calculus rather than tensors make the concepts more accessible. Last edited by a moderator: May 8, 2017 11. Jan 31, 2017 ### Staff: Mentor There is a concept called relativistic mass, which is physically the same thing as energy. It has fallen out of use, and modern physicists use the concept of invariant mass now. The invariant mass is physically different from energy. The increased mass density is purely due to length contraction. The increased energy density would be due to both length contraction and also the increased KE 12. 
Jan 31, 2017 ### Jeronimus You could determine the rod's speed, yes. If you placed clocks along the rod which are at sync in the rod's rest frame, you could then in theory determine its exact speed by checking the difference in clock counts between clocks on the rod from any given inertial frame of reference. From the diagrams below, you can see that the difference in time between two clocks at the endpoints of a rod, having the size of 5 lightseconds, would be 2.5 seconds if those clocks were to be in sync in the rod's rest frame. Which you could then use to calculate the relative speed between you and the rod. 0.5c in this case. The direction, i don't think so. This is how it would look like for a rod with a length of 5 lightseconds in the left x-t diagram, when observed by someone who is moving at 0.5c relative to the rod. In the left diagram, the red line on the x-axis represents a rod with 6 clocks on top of the rod, all synced with a clock count of 0 seconds. Those clocks are all on top of the x-axis (simultaneous) Those 6 clocks with a clock count of 0 seconds are not synced anymore when observed by an observer who is moving relative to the rod ( v=0.5c in the case of the right diagram). The diagonal red line in the right diagram is where those 6 clocks with a clock count of 0 are on. Their t-position is not equal anymore. Let's call the 6 clocks with a clock count of 0, instances of those 6 clocks, which lie on the worldlines of those 6 clocks. So if we were to define a rod by being composed of the same _instances_ of atoms in both frames, we would be looking at the red lines in both cases. However, that is not how we define the length of an object, or the object itself for that matter. To measure the length of an object, we measure two endpoints of the object having the same t- position (are simultaneous within any given inertial frame of reference). In the case of the right diagram, this would be the orange line. This orange line IS the rod by definition, and is composed of the "same" atoms by definition. Except, those atoms are different instances of the atoms which are either older or younger(compared to the "rest frame rod"), depending on the velocity vector. The orange line in the right diagram, representing the moving rod is only about 4.3 lightseconds in size, compared to the "same" rod in the left diagram, which is measured to be 5 lightseconds. Yet, both have the same amount of atoms between the endpoints, just as they have the same amount of clocks fitting between them. 6 in this case. They are different instances(older or younger) of the "same" clocks, with their worldlines (red and pink in the right diagram) all crossing through the orange line representing the rod. edit: Maybe someone can formulate it better. It's not really easy to pack this into words. - I tried :D Last edited: Jan 31, 2017 13. Feb 2, 2017 I would just say, according to that particular observer it is space itself that is contracted. Therefore everything is contracted, also the atoms in the stick. In both situations the stick contains the same amount of atoms with the same direction relative to each other and therefore there is no change in density. 14. Feb 2, 2017 ### Ibix Not really. Rather, different observers use different definitions of space, which intersect the worldtubes of objects in different ways. So the atom count doesn't change, as you say, but it is reasonable to say that density is a frame-dependent quantity. 15. Feb 2, 2017 ### Battlemage! Does this analogy apply to time? 
That is, all types of clocks run at different rates for the two inertial observers, so does it make sense to say time "dialates?" Time dilates, but only lengths contract? (rather than space contracts) 16. Feb 2, 2017 ### Ibix @Battlemage! - I'm fine with "length contraction". A moving ruler is shorter than a stationary one. Two things that are 1m apart in their shared rest frame are $1/\gamma$ apart in another. But "space contraction" kind of implies you're doing something to spacetime, rather than just changing coordinates. I think the reason that "time dilation" is ok is that the word "time" is doing double duty (maybe it should get overtime ) as both a component of spacetime and as measurements made in that direction. But we have two separate words for space and measurements of separation in space, and we shouldn't confuse them. 17. Feb 2, 2017 if the observer moving with the stick measures the mass density as (rest mass)/(rest Volume) = m0/V0 and the stationary observer measures the mass density as (relativistic mass)/(contracted Volume) = m/V = γm0/V0/γ = γ^2 m0/V0 then mass density is (indeed) frame dependent. In my mind the two γ canceled They do not, which makes my earlier statement false. 18. Feb 2, 2017 ### weirdoguy 19. Feb 2, 2017 i know, but i could not come up with a better name for m.
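As a small numerical footnote to the thread (my own arithmetic, reusing v = 0.5c from the rod example in post #12, not a quote from anyone above): the atom count is frame-independent, the measured length shrinks by a factor of γ, so the number density and the invariant-mass density rise by γ, while the energy density rises by γ², as in post #17.

# v = 0.5c, matching the rod example above (the numbers are mine, not from the thread)
beta  <- 0.5
gamma <- 1 / sqrt(1 - beta^2)   # ~1.155
L0 <- 5; N_atoms <- 6           # rest length (light-seconds) and atom/clock count
L  <- L0 / gamma                # contracted length, ~4.33 light-seconds
N_atoms / L0                    # linear density in the rod's rest frame
N_atoms / L                     # gamma times larger in the frame where the rod moves
gamma^2                         # extra factor picked up by the energy density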
2017-12-15 13:13:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6656515598297119, "perplexity": 532.8853377495644}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948569405.78/warc/CC-MAIN-20171215114446-20171215140446-00782.warc.gz"}
https://web2.0calc.com/questions/coins_5
+0 # Coins 0 660 11 A country has three denominations of coins, worth 7, 10, and 53 units of value. What is the maximum number of units of currency which one cannot have if they are only carrying these three kinds of coins? May 2, 2018 #9 +2 I agree with Guest #6, we are looking for 7x + 10y + 53z = d where d is the sum required and x, y and z are all positive. That means that most of the smaller sums are not possible, (we are not allowed to pay 8 units for example, by tendering 24 7 unit coins and receiving 16 10 ' s in change). The answer can be arrived at by what might be regarded as trial and error. Forget the 53 for the moment and just look at the 7 and 10 unit coins, and prior to that the various multiples of 7, 7, 14, 21, 28, 35, 42 ,49, 56, 63, and notice that that the units digits cover each of the numbers from 1 to 9. Some examples. Suppose the sum required were 89, to make this up, look at the final digit 9 and that will lead to 89 = 49 + 40 = 7*7 + 4*10. Similarly 74 = 14 + 60 = 2*7 + 6*10, 68 = 28 + 40 = 4*7 + 4*10, etc. If the sum is large we may or maynot choose to make use of the 53 unit coins. For example if the sum required was say 732, we could still use just 7's and 10's, 732 = 42 + 690 = 6*7 + 69*10, (there's nothing in the question to say that we can't do that), or we could say that 732 = 636 + 96 = 12*53 + 8*7 + 4*10. Going back to smaller numbers, we couldn't make up a sum of 39, for example, because, from the 7's multiples we would have to use the 49 and that is too big, it's greater than the required 39. Looking at it this way the largest number that can't be made up, (with one exception), is 46, as it requires the use of 56. All numbers above this, with one exception, can be catered for. 47 = 7 + 40, 48 = 28 + 20, 49 = 49, 50 = 50, 51 = 21 + 30, 52 = 42 + 10, 53 = ????. 53 doesn't work, the 3 means that we have to use 63 which is too big,  and that's why 53 is included in the question as a third denomination coin. There is some background analysis using the Euclidean algorithm, it leads to $$\displaystyle \frac{2S}{7}\leq k \leq\frac{3S}{10}\quad ,$$ where S is the sum required and k is a parameter. For a given value of S, it's necessary to be able to find an integer value for k in order that the sum can be made up with 7's and 10's. If, for example S = 46 we have 92/7 (=13.14..)<= k <= 138/10 (=13.8), so no integer k exists, so 46 can't be catered for, while, for example, if S = 47, 94/7 (=13.43..) <= k <= 141/10 (=14.1), so k = 14 and a solution can be found. Tiggsy May 4, 2018 #1 0 May 2, 2018 #2 0 Help! May 3, 2018 #3 0 If I understand your question, you want to know if there is a maximum amount that you CANNOT make by adding or subtracting these 3 coins. If that is what is meant, then I think there is NO MAXIMUM that you cannot make. My reasoning is that you can make ALL the numbers from 1, 2, 3, 4, 5, 6, 7, 8, 9 and 10 and up just from the 7 and 10 by adding and subtracting them. Example: If you want to make 1,000,000 from these 3 coins, you could easily do it this way: [18,866 x 53] + [6 x 7] + [6 x 10] = 1,000,000. Let us know if that is what is meant by the question. May 3, 2018 #4 0 This is how i understood his question (correct me if i'm wrong): Find largest natural number that we can't express as 7*a+10*b+53*c where a, b, c are non-negative integers. Guest May 3, 2018 #5 0 Well, if we are allowed to add AND subtract the 3 coins, then I cannot think of not being able to make up ANY number by finding the proper values of a, b, c. 
Now, for example, you would think that it would be difficult or not possible to make up a number 1 greater than the LCM of 7, 10, 53. But that is not the case, because LCM[7, 10, 53] = 3,710, and 3,710 + 1 = 3,711. I can easily make up 3,711 as follows: [69 x 53] + [7 x 2] + [10 x 4] = 3,711........and so on, and in many different combinations of the 3 coins. Guest May 3, 2018 #6 +1 No. This is a variant of the Chicken McNugget theorem. It's 7x+10y+53z=d. We want to find the biggest value of d where it cannot be expressed as the left hand side. Also, x, y, z are all positive. I personally dunno how to do this type though. Somebody else maybe post solution? C Phill? Melody? May 3, 2018 #7 0 May 3, 2018 #8 0 \bump May 4, 2018 #9 +2 (Tiggsy's accepted answer, shown in full at the top of the thread.) Guest May 4, 2018 #10 +1 As Guest #6 says, there is actually a theorem called the "Chicken McNugget Theorem", and it is discussed in some significant detail here: https://artofproblemsolving.com/wiki/index.php?title=Chicken_McNugget_Theorem May 4, 2018 #11 +1698 +3 There is a “Chicken McNugget’s Theorem,” but there should not be one. The real reason is the “cute” name “Chicken McNugget’s Theorem.”  If it’s “cute” then more will remember it and use it.
Cuteness in general, and anthropomorphic cuteness in particular, is the byword for educating children and adults too.  We might not understand it unless a blòódy dancing McNugget explains it to us. According to the Wikipedia article the “Chicken McNugget’s Theorem” is a “special case” of the Frobenius coin problem. The supposed reason it is special is that boxes of chicken nuggets were sold in units of 6, 9, and 20.” Now, let’s put on our “thinking caps,” and consider these cute numbers: 6, 9, and 20. The “thinking caps” work well: There is nothing special about these numbers—it’s just one set of an infinite number of sets that have a Frobenius number. GA GingerAle  May 6, 2018
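For what it's worth, here is a direct brute-force check of Tiggsy's conclusion (my own code; the cap of 200 is safe because 7 and 10 alone already cover every amount from 54 upwards):

coins <- c(7, 10, 53)
N <- 200                       # search limit; 7 and 10 alone cover 54 and up
ok <- c(TRUE, rep(FALSE, N))   # ok[v + 1]: can the amount v be paid exactly?
for (v in 1:N) {
  for (cn in coins) {
    if (v >= cn && ok[v - cn + 1]) ok[v + 1] <- TRUE
  }
}
max(which(!ok) - 1)            # 46, the largest amount that cannot be made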
2019-08-21 10:49:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6355700492858887, "perplexity": 454.4539717471501}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315865.44/warc/CC-MAIN-20190821085942-20190821111942-00303.warc.gz"}
https://www.studyadda.com/question-bank/bonding-and-hybridisation-inorganic-compounds_q11/1406/104353
The bond between carbon atom (1) and carbon atom (2) in the compound $N\equiv C-CH=CH_2$ involves the hybridised carbon as [IIT-JEE 1987; DCE 2000] A) $sp^2$ and $sp^2$ B) $sp^3$ and $sp$ C) $sp$ and $sp^2$ D) $sp$ and $sp$ Answer: C) $sp$ and $sp^2$, since $N\equiv \overset{sp}{\underset{1}{C}}-\overset{sp^2}{\underset{2}{CH}}=\underset{3}{CH_2}$
2020-09-19 03:54:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6010881066322327, "perplexity": 2338.0773229295833}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400189928.2/warc/CC-MAIN-20200919013135-20200919043135-00574.warc.gz"}
http://mathhelpforum.com/calculus/100641-apparently-i-have-wrong-upper-bound-limit.html
# Math Help - Apparently I have the wrong upper bound limit 1. ## Apparently I have the wrong upper bound limit Question #21 and work attached. Any help would be greatly appreciated!
2015-03-05 01:21:34
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9507659673690796, "perplexity": 1749.653159441014}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463676.59/warc/CC-MAIN-20150226074103-00082-ip-10-28-5-156.ec2.internal.warc.gz"}
https://www.aimsciences.org/article/doi/10.3934/krm.2019036
American Institute of Mathematical Sciences October 2019, 12(5): 945-967. doi: 10.3934/krm.2019036 Macroscopic regularity for the relativistic Boltzmann equation with initial singularities 1 College of Science, University of Shanghai for Science and Technology, Shanghai 200093, China 2 College of Mathematics and Physics, Beijing University of Chemical Technology, Beijing 100029, China * Corresponding author: Weiyuan Zou Received January 2018 Revised November 2018 Published July 2019 Fund Project: The second author is supported by the Fundamental Research Funds for the Central Universities ZY1937. In this paper, it is proved that the macroscopic parts of the relativistic Boltzmann equation become continuous, even though the macroscopic components are discontinuous initially. The Lorentz transformation plays an important role in proving the continuity of the nonlinear term. Citation: Yan Yong, Weiyuan Zou. Macroscopic regularity for the relativistic Boltzmann equation with initial singularities. Kinetic & Related Models, 2019, 12 (5) : 945-967. doi: 10.3934/krm.2019036
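For reference, the Lorentz transformation invoked in the abstract is, in its standard textbook form (units with c = 1, boost with velocity v along the first axis; this is general background, not a formula quoted from the paper):
$$\gamma = \frac{1}{\sqrt{1-v^{2}}}, \qquad p^{0\prime} = \gamma\,(p^{0} - v\,p^{1}), \qquad p^{1\prime} = \gamma\,(p^{1} - v\,p^{0}), \qquad p^{2\prime} = p^{2}, \qquad p^{3\prime} = p^{3},$$
where $p^{0} = \sqrt{1+|p|^{2}}$ is the energy of a particle of unit mass.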
2021-01-20 11:26:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5263327956199646, "perplexity": 5816.13905928072}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703519984.9/warc/CC-MAIN-20210120085204-20210120115204-00662.warc.gz"}
https://osca.bioconductor.org/integrating-datasets.html
# Chapter 13 Integrating Datasets ## 13.1 Motivation Large single-cell RNA sequencing (scRNA-seq) projects usually need to generate data across multiple batches due to logistical constraints. However, the processing of different batches is often subject to uncontrollable differences, e.g., changes in operator, differences in reagent quality. This results in systematic differences in the observed expression in cells from different batches, which we refer to as “batch effects”. Batch effects are problematic as they can be major drivers of heterogeneity in the data, masking the relevant biological differences and complicating interpretation of the results. Computational correction of these effects is critical for eliminating batch-to-batch variation, allowing data across multiple batches to be combined for common downstream analysis. However, existing methods based on linear models (Ritchie et al. 2015; Leek et al. 2012) assume that the composition of cell populations are either known or the same across batches. To overcome these limitations, bespoke methods have been developed for batch correction of single-cell data (Haghverdi et al. 2018; Butler et al. 2018; Lin et al. 2019) that do not require a priori knowledge about the composition of the population. This allows them to be used in workflows for exploratory analyses of scRNA-seq data where such knowledge is usually unavailable. ## 13.2 Setting up the data To demonstrate, we will use two separate 10X Genomics PBMC datasets generated in two different batches. Each dataset was obtained from the TENxPBMCData package and separately subjected to basic processing steps. Separate processing prior to the batch correction step is more convenient, scalable and (on occasion) more reliable. For example, outlier-based QC on the cells is more effective when performed within a batch (Section 6.3.2.3). The same can also be said for trend fitting when modelling the mean-variance relationship (Section 8.2.4.1). ### loading ### library(TENxPBMCData) pbmc3k <- TENxPBMCData('pbmc3k') ### quality-control ### is.mito <- grep("MT", rowData(pbmc3k)$Symbol_TENx) library(scater) stats <- perCellQCMetrics(pbmc3k, subsets=list(Mito=is.mito)) high.mito <- isOutlier(stats$subsets_Mito_percent, nmads=3, type="higher") pbmc3k <- pbmc3k[,!high.mito] ### normalization ### pbmc3k <- logNormCounts(pbmc3k) ### variance-modelling ### library(scran) dec3k <- modelGeneVar(pbmc3k) ### feature-selection ### chosen.hvgs <- which(dec3k$bio > 0) ### dimensionality-reduction ### # Using randomized SVD, which is more efficient for file-backed matrices. set.seed(10000) pbmc3k <- runPCA(pbmc3k, subset_row=chosen.hvgs, ncomponents=25, BSPARAM=BiocSingular::RandomParam()) set.seed(100000) pbmc3k <- runTSNE(pbmc3k, dimred="PCA") set.seed(1000000) pbmc3k <- runUMAP(pbmc3k, dimred="PCA") ### clustering ### g <- buildSNNGraph(pbmc3k, k=10, use.dimred = 'PCA') clust <- igraph::cluster_walktrap(g)$membership pbmc3k$cluster <- factor(clust) pbmc3k ## class: SingleCellExperiment ## dim: 32738 2609 ## metadata(0): ## assays(2): counts logcounts ## rownames(32738): ENSG00000243485 ENSG00000237613 ... ## ENSG00000215616 ENSG00000215611 ## rowData names(3): ENSEMBL_ID Symbol_TENx Symbol ## colnames: NULL ## colData names(12): Sample Barcode ... 
Date_published cluster ## reducedDimNames(3): PCA TSNE UMAP ## spikeNames(0): ## altExpNames(0): ### loading ### library(TENxPBMCData) pbmc4k <- TENxPBMCData('pbmc4k') ### quality-control ### is.mito <- grep("MT", rowData(pbmc4k)$Symbol_TENx) library(scater) stats <- perCellQCMetrics(pbmc4k, subsets=list(Mito=is.mito)) high.mito <- isOutlier(stats$subsets_Mito_percent, nmads=3, type="higher") pbmc4k <- pbmc4k[,!high.mito] ### normalization ### pbmc4k <- logNormCounts(pbmc4k) ### variance-modelling ### library(scran) dec4k <- modelGeneVar(pbmc4k) ### feature-selection ### chosen.hvgs <- which(dec4k$bio > 0) ### dimensionality-reduction ### # Using randomized SVD, which is more efficient for file-backed matrices. set.seed(10000) pbmc4k <- runPCA(pbmc4k, subset_row=chosen.hvgs, ncomponents=25, BSPARAM=BiocSingular::RandomParam()) set.seed(100000) pbmc4k <- runTSNE(pbmc4k, dimred="PCA") set.seed(1000000) pbmc4k <- runUMAP(pbmc4k, dimred="PCA") ### clustering ### g <- buildSNNGraph(pbmc4k, k=10, use.dimred = 'PCA') clust <- igraph::cluster_walktrap(g)$membership pbmc4k$cluster <- factor(clust) pbmc4k ## class: SingleCellExperiment ## dim: 33694 4182 ## metadata(0): ## assays(2): counts logcounts ## rownames(33694): ENSG00000243485 ENSG00000237613 ... ## ENSG00000277475 ENSG00000268674 ## rowData names(3): ENSEMBL_ID Symbol_TENx Symbol ## colnames: NULL ## colData names(12): Sample Barcode ... Date_published cluster ## reducedDimNames(3): PCA TSNE UMAP ## spikeNames(0): ## altExpNames(0): To prepare for the batch correction: 1. We subset all batches to the common “universe” of features. In this case, it is straightforward as both batches use Ensembl gene annotation16. universe <- intersect(rownames(pbmc3k), rownames(pbmc4k)) length(universe) ## [1] 31232 2. We rescale each batch to adjust for differences in sequencing depth between batches. The multiBatchNorm() function recomputes log-normalized expression values after adjusting the size factors for systematic differences in coverage between SingleCellExperiment objects. (Size factors only remove biases between cells within a single batch.) This improves the quality of the correction by removing one aspect of the technical differences between batches. library(batchelor) rescaled <- multiBatchNorm(pbmc3k[universe,], pbmc4k[universe,]) pbmc3k <- rescaled[[1]] pbmc4k <- rescaled[[2]] 3. We perform feature selection by averaging the variance components across all batches with the combineVar() function. We use the average as it is responsive to batch-specific HVGs while still preserving the within-batch ranking of genes. This allows us to use the same strategies described in Section 8.3 to select genes of interest. For example, if we were to retain all genes with positive biological components, convergence of the average towards zero for non-HVGs ensures that the expected number of retained genes is stable with more batches. In contrast, approaches based on taking the intersection or union of HVGs across batches become increasingly conservative or liberal, respectively, with an increasing number of batches. library(scran) combined.dec <- combineVar(dec3k[universe,], dec4k[universe,]) chosen.hvgs <- combined.dec$bio > 0 combined.dec[,1:6] ## DataFrame with 31232 rows and 6 columns ## mean total ## <numeric> <numeric> ## ENSG00000243485 0 0 ## ENSG00000237613 0 0 ## ENSG00000186092 0 0 ## ENSG00000238009 0.00104386992482179 0.00106138029098745 ## ENSG00000239945 0.000258798374840956 0.000281948894897105 ## ... ... ... 
## ENSG00000212907 0.535233858583492 0.437804768744041 ## ENSG00000198886 3.01692549039158 0.617031726421764 ## ENSG00000198786 1.54198723959989 0.617834240701208 ## ENSG00000198695 0.127831201202011 0.129776187258376 ## ENSG00000198727 2.56240219438806 0.717304616025944 ## tech bio ## <numeric> <numeric> ## ENSG00000243485 0 0 ## ENSG00000237613 0 0 ## ENSG00000186092 0 0 ## ENSG00000238009 0.00106590149926527 -4.52120827782246e-06 ## ENSG00000239945 0.000264260488048504 1.76884068486008e-05 ## ... ... ... ## ENSG00000212907 0.408040681424972 0.0297640873190686 ## ENSG00000198886 0.596769333970983 0.0202623924507815 ## ENSG00000198786 0.628483569144462 -0.010649328443254 ## ENSG00000198695 0.129278188931492 0.000497998326884352 ## ENSG00000198727 0.691490878206059 0.0258137378198841 ## p.value FDR ## <numeric> <numeric> ## ENSG00000243485 NA NA ## ENSG00000237613 NA NA ## ENSG00000186092 NA NA ## ENSG00000238009 0.512246221415639 0.834032705754633 ## ENSG00000239945 0.314021259910558 0.834032705754633 ## ... ... ... ## ENSG00000212907 0.404607233389663 0.834032705754633 ## ENSG00000198886 0.215888743417311 0.834032705754633 ## ENSG00000198786 0.642327056813382 0.834072156711655 ## ENSG00000198695 0.585074623751064 0.834032705754633 ## ENSG00000198727 0.419767773645846 0.834032705754633 ## 13.3 Diagnosing batch effects Before we actually perform any correction, it is worth examining whether there is any batch effect in this dataset. We combine the two SingleCellExperiments and perform a PCA on the log-expression values for all genes with positive (average) biological components. # Synchronizing the metadata for cbind()ing. rowData(pbmc3k) <- rowData(pbmc4k) pbmc3k$batch <- "3k" pbmc4k$batch <- "4k" combined <- cbind(pbmc3k, pbmc4k) # Using RandomParam() as it is more efficient for file-backed matrices. library(scater) set.seed(0010101010) combined <- runPCA(combined, subset_row=chosen.hvgs, BSPARAM=BiocSingular::RandomParam()) We use graph-based clustering on the components to obtain a summary of the population structure. As our two PBMC populations should be replicates, each cluster should ideally consist of cells from both batches. However, we instead see clusters that are comprised of cells from a single batch. This indicates that cells of the same type are artificially separated due to technical differences between batches.
library(scran) snn.gr <- buildSNNGraph(combined, use.dimred="PCA") clusters <- igraph::cluster_walktrap(snn.gr)$membership tab <- table(Cluster=clusters, Batch=combined$batch) tab ## Batch ## Cluster 3k 4k ## 1 0 126 ## 2 12 459 ## 3 1 776 ## 4 0 1310 ## 5 500 0 ## 6 0 536 ## 7 0 606 ## 8 1296 0 ## 9 0 176 ## 10 0 54 ## 11 149 0 ## 12 30 1 ## 13 0 89 ## 14 131 0 ## 15 342 0 ## 16 1 10 ## 17 134 0 ## 18 11 3 ## 19 2 36 We can also visualize the corrected coordinates using a $$t$$-SNE plot (Figure 13.1). The strong separation between cells from different batches is consistent with the clustering results. combined <- runTSNE(combined, dimred="PCA") plotTSNE(combined, colour_by="batch") Of course, the other explanation for batch-specific clusters is that there are cell types that are unique to each batch. The degree of intermingling of cells from different batches is not an effective diagnostic when the batches involved might actually contain unique cell subpopulations (which is not a consideration in the PBMC dataset, but the same cannot be said in general). If a cluster only contains cells from a single batch, one can always debate whether that is caused by a failure of the correction method or if there is truly a batch-specific subpopulation. For example, do batch-specific metabolic or differentiation states represent distinct subpopulations? Or should they be merged together? We will not attempt to answer this here, only noting that each batch correction algorithm will make different (and possibly inappropriate) decisions on what constitutes “shared” and “unique” populations. ## 13.4 Linear regression Batch effects in bulk RNA sequencing studies are commonly removed with linear regression. This involves fitting a linear model to each gene’s expression profile, setting the undesirable batch term to zero and recomputing the observations sans the batch effect, yielding a set of corrected expression values for downstream analyses. Linear modelling is the basis of the removeBatchEffect() function from the limma package (Ritchie et al. 2015) as well the comBat() function from the sva package (Leek et al. 2012). To use this approach in a scRNA-seq context, we assume that the composition of cell subpopulations is the same across batches. We also assume that the batch effect is additive, i.e., any batch-induced fold-change in expression is the same across different cell subpopulations for any given gene. These are strong assumptions as batches derived from different individuals will naturally exhibit variation in cell type abundances and expression. Nonetheless, they may be acceptable when dealing with batches that are technical replicates generated from the same population of cells. Linear modelling can also accommodate situations where the composition is known a priori by including the cell type as a factor in the linear model, but this situation is even less common17. We use the rescaleBatches() function from the batchelor package to remove the batch effect. This is roughly equivalent to applying a linear regression to the log-expression values per gene, with some adjustments to improve performance and efficiency. For each gene, the mean expression in each batch is scaled down until it is equal to the lowest mean across all batches. We deliberately choose to scale all expression values down as this mitigates differences in variance when batches lie at different positions on the mean-variance trend. 
(Specifically, the shrinkage effect of the pseudo-count is greater for smaller counts, suppressing any differences in variance across batches.) An additional feature of rescaleBatches() is that it will preserve sparsity in the input matrix for greater efficiency, whereas other methods like removeBatchEffect() will always return a dense matrix. library(batchelor) rescaled <- rescaleBatches(pbmc3k, pbmc4k) rescaled ## class: SingleCellExperiment ## dim: 31232 6791 ## metadata(0): ## assays(1): corrected ## rownames(31232): ENSG00000243485 ENSG00000237613 ... ## ENSG00000198695 ENSG00000198727 ## rowData names(0): ## colnames: NULL ## colData names(1): batch ## reducedDimNames(0): ## spikeNames(0): ## altExpNames(0): After clustering, we observe that most clusters consist of mixtures of cells from the two replicate batches, consistent with the removal of the batch effect. This conclusion is supported by the apparent mixing of cells from different batches in Figure 13.2. However, at least one batch-specific cluster is still present, indicating that the correction is not entirely complete. This is attributable to violation of one of the aforementioned assumptions, even in this simple case involving replicated batches. set.seed(1010101010) rescaled <- runPCA(rescaled, subset_row=chosen.hvgs, BSPARAM=BiocSingular::IrlbaParam(), exprs_values="corrected") snn.gr <- buildSNNGraph(rescaled, use.dimred="PCA") clusters.resc <- igraph::cluster_walktrap(snn.gr)$membership tab.resc <- table(Cluster=clusters.resc, Batch=rescaled$batch) tab.resc ## Batch ## Cluster 1 2 ## 1 272 523 ## 2 336 606 ## 3 126 266 ## 4 643 560 ## 5 19 47 ## 6 12 3 ## 7 313 0 ## 8 8 50 ## 9 19 58 ## 10 15 70 ## 11 131 154 ## 12 37 511 ## 13 10 83 ## 14 100 207 ## 15 137 8 ## 16 16 25 ## 17 397 964 ## 18 3 36 ## 19 4 8 ## 20 11 3 rescaled <- runTSNE(rescaled, dimred="PCA") rescaled$batch <- factor(rescaled$batch) plotTSNE(rescaled, colour_by="batch") ## 13.5 Performing MNN correction ### 13.5.1 Application to the PBMC data Consider a cell $$a$$ in batch $$A$$, and identify the cells in batch $$B$$ that are nearest neighbours to $$a$$ in the expression space defined by the selected features. Repeat this for a cell $$b$$ in batch $$B$$, identifying its nearest neighbours in $$A$$. Mutual nearest neighbours are pairs of cells from different batches that belong in each other’s set of nearest neighbours. The reasoning is that MNN pairs represent cells from the same biological state prior to the application of a batch effect - see Haghverdi et al. (2018) for full theoretical details. Thus, the difference between cells in MNN pairs can be used as an estimate of the batch effect, the subtraction of which yields batch-corrected values. The batchelor package provides an implementation of the MNN approach via the fastMNN() function. (Unlike the MNN method described by Haghverdi et al. (2018), the fastMNN() function performs PCA to reduce the dimensions beforehand and speed up the downstream neighbor detection steps.) We apply it to our two PBMC batches to remove the batch effect across the highly variable genes in chosen. To reduce computational work and technical noise, all cells in all batches are projected into the low-dimensional space defined by the top d principal components. Identification of MNNs and calculation of correction vectors are then performed in this low-dimensional space. # Using randomized SVD here, as this is faster than # irlba for file-backed matrices. 
set.seed(1000101001) mnn.out <- fastMNN(pbmc3k, pbmc4k, d=50, k=20, BSPARAM=BiocSingular::RandomParam(deferred=TRUE)) mnn.out ## class: SingleCellExperiment ## dim: 31232 6791 ## metadata(1): merge.info ## assays(1): reconstructed ## rownames(31232): ENSG00000243485 ENSG00000237613 ... ## ENSG00000198695 ENSG00000198727 ## rowData names(1): rotation ## colnames: NULL ## colData names(1): batch ## reducedDimNames(1): corrected ## spikeNames(0): ## altExpNames(0): The function returns a SingleCellExperiment object containing corrected values for downstream analyses like clustering or visualization. Each column of mnn.out corresponds to a cell in one of the batches, while each row corresponds to an input gene in chosen. The batch field in the column metadata contains a vector specifying the batch of origin of each cell. head(mnn.out$batch) ## [1] 1 1 1 1 1 1 The corrected matrix in the reducedDims() contains the low-dimensional corrected coordinates for all cells, which we will use in place of the PCs in our downstream analyses. dim(reducedDim(mnn.out, "corrected")) ## [1] 6791 50 The most relevant parameter for tuning fastMNN() is k, which specifies the number of nearest neighbours to consider when defining MNN pairs. This can be interpreted as the minimum anticipated frequency of any shared cell type or state in each batch. Increasing k will generally result in more aggressive merging as the algorithm is more generous in matching subpopulations across batches. It can occasionally be desirable to increase k if one clearly sees that the same cell types are not being adequately merged across batches. ### 13.5.2 Correction diagnostics We cluster on the low-dimensional corrected coordinates to obtain a partitioning of the cells that serves as a proxy for the population structure. If the batch effect is successfully corrected, clusters corresponding to shared cell types or states should contain cells from multiple batches. We see that all clusters contain contributions from each batch after correction, consistent with our expectation that the two batches are replicates of each other. library(scran) snn.gr <- buildSNNGraph(mnn.out, use.dimred="corrected") clusters.mnn <- igraph::cluster_walktrap(snn.gr)$membership tab.mnn <- table(Cluster=clusters.mnn, Batch=mnn.out$batch) tab.mnn ## Batch ## Cluster 1 2 ## 1 282 507 ## 2 331 588 ## 3 301 633 ## 4 210 163 ## 5 675 622 ## 6 12 17 ## 7 19 74 ## 8 9 52 ## 9 7 18 ## 10 28 50 ## 11 14 47 ## 12 59 174 ## 13 410 982 ## 14 122 132 ## 15 4 36 ## 16 111 76 ## 17 4 8 ## 18 11 3 We can also visualize the corrected coordinates using a $$t$$-SNE plot (Figure 13.3). The presence of visual clusters containing cells from both batches provides a comforting illusion that the correction was successful. library(scater) set.seed(0010101010) mnn.out <- runTSNE(mnn.out, use_dimred="corrected") mnn.out$batch <- factor(mnn.out$batch) plotTSNE(mnn.out, colour_by="batch") For fastMNN(), one useful diagnostic is the proportion of variance within each batch that is lost during MNN correction. Specifically, this refers to the within-batch variance that is removed during orthogonalization with respect to the average correction vector at each merge step. This is returned via the lost.var field in the metadata of mnn.out, which contains a matrix of the variance lost in each batch (column) at each merge step (row). 
metadata(mnn.out)$merge.info$lost.var ## [,1] [,2] ## [1,] 0.004513 0.00327 Large proportions of lost variance suggest that correction is removing genuine biological heterogeneity. This would occur due to violations of the assumption of orthogonality between the batch effect and the biological subspace (Haghverdi et al. 2018). In this case, the proportion of lost variance is small, indicating that non-orthogonality is not a major concern. ## 13.6 Preserving biological heterogeneity Another useful diagnostic check is to compare the clustering within each batch to the clustering of the merged data. Accurate data integration should preserve variance within each batch as there should be nothing to remove between cells in the same batch. This check complements the previously mentioned diagnostics that only focus on the removal of differences between batches. Specifically, it protects us against cases where the correction method simply aggregates all cells together, which would achieve perfect mixing but also discard the biological heterogeneity of interest. Ideally, we should see a many-to-1 mapping where the across-batch clustering is nested inside the within-batch clusterings. This indicates that any within-batch structure was preserved after correction while acknowledging that greater resolution is possible with more cells. In practice, more discrepancies can be expected even when the correction is perfect, due to the existence of closely related clusters that were arbitrarily separated in the within-batch clustering. As a general rule, we can be satisfied with the correction if the vast majority of entries of the table()s below are zero, though this may depend on whether specific clusters of interest are gained or lost. # For the first batch. table(New=clusters.mnn[rescaled$batch==1], Old=pbmc3k$cluster) ## Old ## New 1 2 3 4 5 6 7 8 9 ## 1 0 275 0 3 0 4 0 0 0 ## 2 0 0 3 0 0 0 0 328 0 ## 3 300 0 0 0 0 0 1 0 0 ## 4 162 0 0 0 0 0 48 0 0 ## 5 0 47 487 141 0 0 0 0 0 ## 6 0 0 12 0 0 0 0 0 0 ## 7 0 0 0 0 19 0 0 0 0 ## 8 9 0 0 0 0 0 0 0 0 ## 9 0 1 1 1 0 0 0 4 0 ## 10 0 2 0 0 0 26 0 0 0 ## 11 0 0 0 0 14 0 0 0 0 ## 12 0 0 0 59 0 0 0 0 0 ## 13 0 4 4 402 0 0 0 0 0 ## 14 0 0 0 0 0 122 0 0 0 ## 15 0 0 3 0 1 0 0 0 0 ## 16 0 0 0 0 0 0 111 0 0 ## 17 0 0 0 4 0 0 0 0 0 ## 18 0 0 0 0 0 0 0 0 11 # For the second batch. table(New=clusters.mnn[rescaled$batch==2], Old=pbmc4k$cluster) ## Old ## New 1 2 3 4 5 6 7 8 9 10 11 12 ## 1 1 1 0 501 0 4 0 0 0 0 0 0 ## 2 0 0 0 0 221 0 367 0 0 0 0 0 ## 3 0 632 0 0 0 0 0 1 0 0 0 0 ## 4 0 146 8 0 0 0 0 2 0 0 7 0 ## 5 464 0 0 60 0 0 0 0 77 21 0 0 ## 6 14 0 0 1 1 0 1 0 0 0 0 0 ## 7 0 0 74 0 0 0 0 0 0 0 0 0 ## 8 0 6 0 0 0 0 0 46 0 0 0 0 ## 9 0 0 0 1 4 1 12 0 0 0 0 0 ## 10 0 0 0 0 0 50 0 0 0 0 0 0 ## 11 0 2 45 0 0 0 0 0 0 0 0 0 ## 12 3 0 0 0 0 0 0 0 13 158 0 0 ## 13 10 0 0 4 0 0 0 0 946 22 0 0 ## 14 0 0 0 0 0 132 0 0 0 0 0 0 ## 15 0 0 0 0 0 0 0 0 0 0 0 36 ## 16 0 6 0 0 0 0 0 0 0 0 70 0 ## [ reached getOption("max.print") -- omitted 2 rows ] We can summarize the agreement between clusterings by computing the Rand index. This provides a simple metric that we can use to assess the preservation of variation by different correction methods. Larger Rand indices are more desirable, though this must be balanced against the ability of each method to actually remove the batch effect. 
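(For reference, and assuming the usual unadjusted definition rather than anything specific to the fossil implementation: for two partitions of the same $$n$$ cells, the Rand index is $$R = \frac{a + b}{\binom{n}{2}},$$ where $$a$$ is the number of cell pairs assigned to the same cluster in both partitions and $$b$$ is the number of pairs assigned to different clusters in both; $$R = 1$$ means the partitions are identical.)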
library(fossil) rand.index(as.integer(clusters.mnn[rescaled$batch==1]), as.integer(pbmc3k$cluster)) ## [1] 0.9129 rand.index(as.integer(clusters.resc[rescaled$batch==1]), as.integer(pbmc3k$cluster)) ## [1] 0.9102 ## 13.7 Application to a pancreas dataset We perform another demonstration with two human pancreas CEL-seq(2) datasets (Muraro et al. 2016; Grun et al. 2016). This is a more challenging application than the PBMC dataset as it involves different patients and protocols. ### loading ### library(scRNAseq) sce.grun <- GrunPancreasData() ### gene-annotation ### library(org.Hs.eg.db) gene.ids <- mapIds(org.Hs.eg.db, keys=rowData(sce.grun)$symbol, keytype="SYMBOL", column="ENSEMBL") keep <- !is.na(gene.ids) & !duplicated(gene.ids) sce.grun <- sce.grun[keep,] rownames(sce.grun) <- gene.ids[keep] ### quality-control ### library(scater) stats <- perCellQCMetrics(sce.grun) qc <- quickCellQC(stats, percent_subsets="altexps_ERCC_percent", nmads=3) sce.grun <- sce.grun[,!qc$discard] ### normalization ### library(scran) set.seed(1000) # for irlba. clusters <- quickCluster(sce.grun) sce.grun <- computeSumFactors(sce.grun, min.mean=0.1, clusters=clusters) sce.grun <- logNormCounts(sce.grun) ### variance-modelling ### block <- paste0(sce.grun$sample, "_", sce.grun$donor) dec.grun <- modelGeneVarWithSpikes(sce.grun, spikes="ERCC", block=block) sce.grun ## class: SingleCellExperiment ## dim: 17692 1290 ## metadata(0): ## assays(2): counts logcounts ## rownames(17692): ENSG00000268895 ENSG00000121410 ... ## ENSG00000074755 ENSG00000036549 ## rowData names(2): symbol chr ## colnames(1290): D2ex_1 D2ex_2 ... D17TGFB_94 D17TGFB_95 ## colData names(2): donor sample ## reducedDimNames(0): ## spikeNames(0): ## altExpNames(1): ERCC ### loading ### library(scRNAseq) sce.muraro <- MuraroPancreasData() ### gene-annotation ### library(AnnotationHub) edb <- AnnotationHub()[["AH73881"]] gene.symb <- sub("__chr.*$", "", rownames(sce.muraro)) gene.ids <- mapIds(edb, keys=gene.symb, keytype="SYMBOL", column="GENEID") # Removing duplicated genes or genes without Ensembl IDs. keep <- !is.na(gene.ids) & !duplicated(gene.ids) sce.muraro <- sce.muraro[keep,] rownames(sce.muraro) <- gene.ids[keep] ### quality-control ### library(scater) stats <- perCellQCMetrics(sce.muraro) qc <- quickCellQC(stats, nmads=3, percent_subsets="altexps_ERCC_percent") sce.muraro <- sce.muraro[,!qc$discard] ### normalization ### library(scran) set.seed(1000) clusters <- quickCluster(sce.muraro) sce.muraro <- computeSumFactors(sce.muraro, min.mean=0.1, clusters=clusters) sce.muraro <- logNormCounts(sce.muraro) ### variance-modelling ### block <- paste0(sce.muraro$plate, "_", sce.muraro$donor) dec.muraro <- modelGeneVarWithSpikes(sce.muraro, "ERCC", block=block) sce.muraro ## class: SingleCellExperiment ## dim: 16940 2346 ## metadata(0): ## assays(2): counts logcounts ## rownames(16940): ENSG00000268895 ENSG00000121410 ... ## ENSG00000159840 ENSG00000074755 ## rowData names(2): symbol chr ## colnames(2346): D28-1_1 D28-1_2 ... D30-8_93 D30-8_94 ## colData names(3): label donor plate ## reducedDimNames(0): ## spikeNames(0): ## altExpNames(1): ERCC We subset both batches to their common universe of genes; adjust their scaling to equalize sequencing coverage (not really necessary in this case, as the coverage is already similar, but we will do so anyway for consistency); and select those genes with positive average biological components for further use. 
universe <- intersect(rownames(sce.grun), rownames(sce.muraro)) universe <- universe[!grepl("^ERCC", universe)] normed.pancreas <- multiBatchNorm(sce.grun[universe,], sce.muraro[universe,]) sce.grun <- normed.pancreas[[1]] sce.muraro <- normed.pancreas[[2]] combined.pan <- combineVar(dec.grun[universe,], dec.muraro[universe,]) chosen.genes <- universe[combined.pan$bio > 0] We observe that rescaleBatches() is unable to align cells from different batches in Figure 13.4. This is attributable to differences in population composition between batches, with additional complications from non-linearities in the batch effect, e.g., when the magnitude or direction of the batch effect differs between cell types. rescaled.pancreas <- rescaleBatches(sce.grun, sce.muraro) rescaled.pancreas <- runPCA(rescaled.pancreas, subset_row=chosen.genes, BSPARAM=BiocSingular::IrlbaParam(), exprs_values="corrected") rescaled.pancreas <- runTSNE(rescaled.pancreas, dimred="PCA") plotTSNE(rescaled.pancreas, colour_by="batch") Here, we use fastMNN() to merge together the two human pancreas datasets described earlier. Clustering on the merged datasets yields fewer batch-specific clusters, which is recapitulated as greater intermingling between batches in Figure 13.5. This improvement over Figure 13.4 represents the ability of fastMNN() to adapt to more complex situations involving differences in population composition between batches. mnn.pancreas <- fastMNN(sce.grun, sce.muraro, subset.row=chosen.genes) snn.gr <- buildSNNGraph(mnn.pancreas, use.dimred="corrected") clusters <- igraph::cluster_walktrap(snn.gr)$membership tab <- table(Cluster=clusters, Batch=mnn.pancreas$batch) tab ## Batch ## Cluster 1 2 ## 1 320 279 ## 2 343 265 ## 3 209 847 ## 4 61 197 ## 5 161 399 ## 6 42 1 ## 7 25 108 ## 8 22 127 ## 9 59 80 ## 10 33 0 ## 11 0 18 ## 12 8 4 ## 13 7 21 mnn.pancreas <- runTSNE(mnn.pancreas, dimred="corrected") plotTSNE(mnn.pancreas, colour_by="batch") ## 13.8 Using the corrected values The greatest value of batch correction lies in facilitating cell-based analysis of population heterogeneity in a consistent manner across batches. Cluster 1 in batch A is the same as cluster 1 in batch B when the clustering is performed on the merged data. There is no need to identify mappings between separate clusterings, which might not even be possible when the clusters are not well-separated. The burden of interpretation is consolidated by generating a single set of clusters for all batches, rather than requiring separate examination of each batch’s clusters. Another benefit is that the available number of cells is increased when all batches are combined, which allows for greater resolution of population structure in downstream analyses18. We previously demonstrated the application of clustering methods to the batch-corrected data, but the same principles apply for other analyses like trajectory reconstruction. At this point, it is also tempting to use the batch-corrected values for gene-based analyses like DE-based marker gene detection. This is not generally recommended as an arbitrary correction algorithm is not obliged to preserve the magnitude (or even direction) of differences in per-gene expression when attempting to align multiple batches. For example, cosine normalization in fastMNN() shrinks the magnitude of the expression values so that the computed log-fold changes have no obvious interpretation. Of greater concern is the possibility that the correction introduces artificial agreement across batches. To illustrate: 1.
Consider a dataset (first batch) with two cell types, $$A$$ and $$B$$. Consider a second batch with the same cell types, denoted as $$A'$$ and $$B'$$. Assume that, for some reason, gene $$X$$ is expressed in $$A$$ but not in $$A'$$, $$B$$ or $$B'$$ - possibly due to some difference in how the cells were treated, or maybe due to a donor effect. 2. We then merge the batches together based on the shared cell types. This yields a result where $$A$$ and $$A'$$ cells are intermingled and the difference due to $$X$$ is eliminated. One can debate whether this should be the case, but in general, it is necessary for batch correction methods to smooth over small biological differences (as discussed in Section 13.3). 3. Now, if we corrected the second batch to the first, we must have coerced the expression values of $$X$$ in $$A'$$ to non-zero values to align with those of $$A$$, while leaving the expression of $$X$$ in $$B'$$ and $$B$$ at zero. Thus, we have artificially introduced DE between $$A'$$ and $$B'$$ for $$X$$ in the second batch to align with the DE between $$A$$ and $$B$$ in the first batch. (The converse is also possible where DE in the first batch is artificially removed to align with the second batch, depending on the order of merges.) 4. The artificial DE has implications for the identification of the cell types and interpretation of the results. We would be misled into believing that both $$A$$ and $$A'$$ are $$X$$-positive, when in fact this is only true for $$A$$. At best, this is only a minor error - after all, we do actually have $$X$$-positive cells of that overall type, we simply do not see that $$A'$$ is $$X$$-negative. At worst, this can compromise the conclusions, e.g., if the first batch was drug treated and the second batch was a control, we might mistakenly think that a $$X$$-positive population exists in the latter and conclude that our drug has no effect. Rather, it is preferable to perform DE analyses using the uncorrected expression values with blocking on the batch, as discussed in Section 11.4. This strategy is based on the expectation that any genuine DE between clusters should still be present in a within-batch comparison where batch effects are absent. It penalizes genes that exhibit inconsistent DE across batches, thus protecting against misleading conclusions when a population in one batch is aligned to a similar-but-not-identical population in another batch. 
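To make the recommended alternative concrete, here is a minimal sketch (an illustration under the assumptions stated in the comments, not the book's own workflow code) of marker detection that uses the uncorrected log-expression values and blocks on the batch of origin. It assumes the uncorrected combined object and the clusters.mnn labels generated above, and relies on scran's findMarkers() with its block= argument, which performs the comparisons within each batch before combining the results.

```r
# Minimal sketch: DE-based marker detection on the *uncorrected* values,
# blocking on batch. Assumes 'combined' (cbind of the rescaled batches,
# with logcounts) and 'clusters.mnn' from the sections above.
library(scran)
m.out <- findMarkers(combined, clusters.mnn,
    block=combined$batch, direction="up")
# Candidate upregulated markers for the first cluster.
head(m.out[[1]][,1:4])
```

Genes that are only DE in one batch are penalized by the blocked comparison, which is exactly the protection against batch-specific artifacts described above.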
## Session Info R version 3.6.1 (2019-07-05) Platform: x86_64-pc-linux-gnu (64-bit) Running under: Ubuntu 14.04.5 LTS Matrix products: default BLAS: /home/ramezqui/Rbuild/danbuild/R-3.6.1/lib/libRblas.so LAPACK: /home/ramezqui/Rbuild/danbuild/R-3.6.1/lib/libRlapack.so locale: [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C [3] LC_TIME=en_US.UTF-8 LC_COLLATE=C [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 [7] LC_PAPER=en_US.UTF-8 LC_NAME=C [9] LC_ADDRESS=C LC_TELEPHONE=C [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C attached base packages: [1] parallel stats4 stats graphics grDevices utils datasets [8] methods base other attached packages: [1] fossil_0.3.7 shapefiles_0.7 [3] foreign_0.8-72 maps_3.3.0 [5] sp_1.3-1 scater_1.13.18 [7] ggplot2_3.2.1 scran_1.13.18 [9] batchelor_1.1.11 SingleCellExperiment_1.7.8 [11] SummarizedExperiment_1.15.9 Biobase_2.45.1 [13] GenomicRanges_1.37.15 GenomeInfoDb_1.21.1 [15] HDF5Array_1.13.8 rhdf5_2.29.3 [17] DelayedArray_0.11.4 BiocParallel_1.19.2 [19] IRanges_2.19.14 S4Vectors_0.23.21 [21] BiocGenerics_0.31.5 matrixStats_0.55.0 [23] Cairo_1.5-10 BiocStyle_2.13.2 [25] OSCAUtils_0.0.1 loaded via a namespace (and not attached): [1] bitops_1.0-6 tools_3.6.1 [3] R6_2.4.0 irlba_2.3.3 [5] vipor_0.4.5 lazyeval_0.2.2 [7] colorspace_1.4-1 withr_2.1.2 [9] tidyselect_0.2.5 gridExtra_2.3 [11] compiler_3.6.1 BiocNeighbors_1.3.3 [13] labeling_0.3 bookdown_0.13 [15] scales_1.0.0 stringr_1.4.0 [17] digest_0.6.20 rmarkdown_1.15 [19] XVector_0.25.0 pkgconfig_2.0.2 [21] htmltools_0.3.6 limma_3.41.16 [23] highr_0.8 rlang_0.4.0 [25] DelayedMatrixStats_1.7.2 dplyr_0.8.3 [27] RCurl_1.95-4.12 magrittr_1.5 [29] BiocSingular_1.1.5 GenomeInfoDbData_1.2.1 [31] Matrix_1.2-17 Rcpp_1.0.2 [33] ggbeeswarm_0.6.0 munsell_0.5.0 [35] Rhdf5lib_1.7.5 viridis_0.5.1 [37] stringi_1.4.3 yaml_2.2.0 [39] edgeR_3.27.13 zlibbioc_1.31.0 [41] Rtsne_0.15 grid_3.6.1 [43] dqrng_0.2.1 crayon_1.3.4 [45] lattice_0.20-38 cowplot_1.0.0 [47] beachmat_2.1.2 locfit_1.5-9.1 [49] knitr_1.24 pillar_1.4.2 [51] igraph_1.2.4.1 glue_1.3.1 [53] evaluate_0.14 BiocManager_1.30.4 [55] gtable_0.3.0 purrr_0.3.2 [57] assertthat_0.2.1 xfun_0.9 [59] rsvd_1.0.2 viridisLite_0.3.0 [61] tibble_2.1.3 beeswarm_0.2.3 [63] statmod_1.4.32 ### Bibliography Butler, A., P. Hoffman, P. Smibert, E. Papalexi, and R. Satija. 2018. “Integrating single-cell transcriptomic data across different conditions, technologies, and species.” Nat. Biotechnol. 36 (5):411–20. Grun, D., M. J. Muraro, J. C. Boisset, K. Wiebrands, A. Lyubimova, G. Dharmadhikari, M. van den Born, et al. 2016. “De Novo Prediction of Stem Cell Identity using Single-Cell Transcriptome Data.” Cell Stem Cell 19 (2):266–77. Haghverdi, L., A. T. L. Lun, M. D. Morgan, and J. C. Marioni. 2018. “Batch effects in single-cell RNA-sequencing data are corrected by matching mutual nearest neighbors.” Nat. Biotechnol. 36 (5):421–27. Leek, J. T., W. E. Johnson, H. S. Parker, A. E. Jaffe, and J. D. Storey. 2012. “The sva package for removing batch effects and other unwanted variation in high-throughput experiments.” Bioinformatics 28 (6):882–83. Lin, Y., S. Ghazanfar, K. Y. X. Wang, J. A. Gagnon-Bartsch, K. K. Lo, X. Su, Z. G. Han, et al. 2019. “scMerge leverages factor analysis, stable expression, and pseudoreplication to merge multiple single-cell RNA-seq datasets.” Proc. Natl. Acad. Sci. U.S.A. 116 (20):9775–84. Muraro, M. J., G. Dharmadhikari, D. Grun, N. Groen, T. Dielen, E. Jansen, L. van Gurp, et al. 2016. “A Single-Cell Transcriptome Atlas of the Human Pancreas.” Cell Syst 3 (4):385–94. 
Ritchie, M. E., B. Phipson, D. Wu, Y. Hu, C. W. Law, W. Shi, and G. K. Smyth. 2015. “limma powers differential expression analyses for RNA-sequencing and microarray studies.” Nucleic Acids Res. 43 (7):e47. 1. This step can be much, much, much more painful. As is often said, biologists would rather share a toothbrush than nomenclature. 2. If I already knew the type and state of each cell, why would I waste money sequencing them? 3. And a nice $$t$$-SNE plot for Figure 1. Hey, those atlas papers almost write themselves!
2019-09-18 23:03:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5003225803375244, "perplexity": 3710.2666564499254}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573368.43/warc/CC-MAIN-20190918213931-20190918235931-00017.warc.gz"}
https://writeups.amosng.com/2017/pactf_2017/bartik/bitesized_80/index.html
# PACTF_2017: Bitesized Category: Points: 80 Description: There’s an image of some trees here. I bet the image contains more than trees, though. trees.png Hint: Try mod 8? ## Write-up Stegsolve is very useful for this, as we take a look at the bit-plane 2 views. If you squint hard enough, you can barely make out the flag. Therefore, the flag is dont_miss_the_flag_for_the_trees.
2023-03-28 15:16:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3892325758934021, "perplexity": 2827.4437594466713}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948867.32/warc/CC-MAIN-20230328135732-20230328165732-00433.warc.gz"}
https://math.stackexchange.com/questions/1560912/what-are-int-sqrta2-x2-textrmdx-int-sqrtx2a2-textrmdx-int
# What are $\int\sqrt{a^2-x^2}\,\textrm{d}x, \int\sqrt{x^2+a^2}\,\textrm{d}x,\int\sqrt{x^2-a^2}\,\textrm{d}x$? Can someone confirm the equations below? I got them from my college textbook; unfortunately there are no proofs and, more importantly, I cannot seem to find any other sources that have these equations. $\displaystyle\int\sqrt{a^2-x^2}\,\textrm{d}x=\frac{x\sqrt{a^2-x^2}}{2}+\frac{a^2}{2}\sin^{-1}\left(\frac{x}{a}\right) + C$ $\displaystyle\int\sqrt{x^2+a^2}\,\textrm{d}x=\frac{x\sqrt{x^2+a^2}}{2}+\frac{a^2}{2}\ln\left(x+\sqrt{x^2+a^2}\right)+C$ $\displaystyle\int\sqrt{x^2-a^2}\,\textrm{d}x=\frac{x\sqrt{x^2-a^2}}{2}-\frac{a^2}{2}\ln\left(x+\sqrt{x^2-a^2}\right)+C$ • Try differentiating the right hand sides. – PM 2Ring Dec 5 '15 at 12:47 • For verification that the formulas are correct, try what PM 2Ring suggested, i.e., differentiate the right side. You can also just use integration by parts to get the closed forms of those integrals. Take the integrand as the first function and $x\mapsto 1$ as the second function when performing IBP. – learner Dec 5 '15 at 12:54 One may obtain these results by performing respectively the change of variable $x:=a\sin t$, $x:=a\sinh t$ and $x:=a\cosh t$.
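For instance, for the first formula (a quick sketch; the other two follow in the same way from the hyperbolic substitutions): putting $x=a\sin t$ gives $\textrm{d}x=a\cos t\,\textrm{d}t$ and $\sqrt{a^2-x^2}=a\cos t$, so $$\int\sqrt{a^2-x^2}\,\textrm{d}x=a^2\int\cos^2 t\,\textrm{d}t=\frac{a^2}{2}\left(t+\sin t\cos t\right)+C=\frac{a^2}{2}\sin^{-1}\left(\frac{x}{a}\right)+\frac{x\sqrt{a^2-x^2}}{2}+C,$$ which is the first identity. Differentiating the right-hand sides, as suggested in the comments, gives an even quicker check.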
2019-09-22 08:57:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8790817260742188, "perplexity": 342.4101649515748}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514575402.81/warc/CC-MAIN-20190922073800-20190922095800-00540.warc.gz"}
http://openstudy.com/updates/4dd65b76d95c8b0bb0f35dc4
• anonymous An article is sold at a profit of 20%. If both the cost price and the selling price were \$100 less, the profit would be 4% more. Find the cost price. Mathematics
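A worked sketch (assuming the intended reading that the new profit percentage is 20% + 4% = 24%): let the cost price be $x$, so the selling price is $1.2x$ and the absolute profit is $0.2x$. After reducing both prices by \$100 the profit is still $0.2x$ on a cost of $x-100$, so $$\frac{0.2x}{x-100}=0.24\quad\Longrightarrow\quad 0.2x=0.24x-24\quad\Longrightarrow\quad x=600.$$ Check: cost \$600 and selling price \$720 give a 20% profit; the reduced prices \$500 and \$620 give a profit of \$120, i.e. 24%.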
2017-03-26 05:37:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2593429386615753, "perplexity": 3073.99929471875}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189127.26/warc/CC-MAIN-20170322212949-00184-ip-10-233-31-227.ec2.internal.warc.gz"}
http://specedco.ga/binary-option-delta-formula-574739.html
July 14, 2020 ### Options Greeks calculation with Python | Quant Academy 2018/06/22 · this method just tools for help predict at match trade binary.com. using strategy 9 method. [email protected] ### Binary Option Delta Formula - jomdrop.co In fact, the Black–Scholes formula for the price of a vanilla call option (or put option) can be interpreted by decomposing a call option into an asset-or-nothing call option minus a cash-or-nothing call option, and similarly for a put – the binary options are easier to analyze, and correspond to the two terms in the Black–Scholes formula. ### Delta of binary option - Quantitative Finance Stack Exchange Pricing of Binary Options Derived from Delta The price of Binary Options indirectly imply the probability of those binary options ending up in the money. For instance, a binary option priced at \$0.70 is implying a profit probability of 70%. ### THE GREEKS BLACK AND SCHOLES (BS) FORMULA Relationship to vanilla options' Greeks. Since a binary call is a mathematical derivative of a vanilla call with respect to strike, the price of a binary call has the same shape as the delta of a vanilla call, and the delta of a binary call has the same shape as the gamma of a vanilla call. Black–Scholes in practice ### Binary Options: Pricing and Greeks European options, this method still requires a closed-form formula for the option price to derive option Greeks. Muroi and Suda [8] [9] took derivatives of the pricing formula for European options, however, in this article we take derivative at each node on the binomial tree to derive Greeks for American options. ### Option Price Calculator Keeping an Eye on Position Delta. In Meet the Greeks we discussed how delta affects the value of individual options. Now let’s have a look at how you can take delta to the next level. “Position delta” enables you to keep track of the net delta effect on an entire gaggle of options that are based on the same underlying stock. ### Option Delta Calculation Explained (Simple Guide FREE Binary options trading strategy with over 90% success rate: Binary Call Option Delta Formula. Binary Options Live, Best methods for binary options and forex. ### DeltaForce Indicator – very good no repainting binary Digital Option Analytical Formula! Work From Home Making Big Money. May 1, 2013.Binary call option delta digital option analytical formula formula options - more than It's einführung börse wertpapierhandel für dummies much simpler.. ### Call Option Delta Formula - Mello TV Using the Black and Scholes option pricing model, this calculator generates theoretical values and option greeks for European call and put options. ### Binary option - Wikipedia Binary options are a type of exotic option for which the payoff is determined by whether the final stock price is greater or less than the strike price . A binary call option pays out if , while a binary put option pays out for . In this Demonstration we set the payoff amount to be the strike price . ### Position Delta | Calculating Position Delta For each Excel Function that calculates an Option Greek or other Options statistic, there are certain parameters required as shown in the formula(s) above. Not all functions use all parameters. Here is a description of each parameter: UnadjustedPrice: Current price of the underlying Stock. Strike Price: Strike Price (aka Excercise Price). 
$$\textup{Binary Call Option Delta}=\frac{e^{-rt}\,N'\left(d_2\right)}{\sigma S\sqrt{t}}$$ ### Black–Scholes model - Wikipedia With the abundance of binary options trading software available for traders online, it is important to take some time to research a system before making an investment decision. Some systems are extremely highly rated, while others are iffy. Hedge Formula trading system was created by George Dalio, a self-proclaimed financial genius who made his original fortune ### Computation of Greeks Using Binomial Tree Details about Greeks for Binary Options: Delta, Gamma, Rho, Vega, Theta Continuing further from Binary Options Payoff Functions, here are the graphs and images for Greeks for Binary Options – please note that we have taken the case of Binary Call Option Greeks. Binary Put Option Greeks and Binary Tunnel Option Greeks will be different: ### Options: Valuation and (No) Arbitrage European Call European Put Forward Binary Call Binary Put; Price: Delta: Gamma: Vega: Rho: Theta
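For reference, a short sketch of the standard Black–Scholes cash-or-nothing results that pages like these rely on (textbook expressions, not specific to any of the systems or calculators named above): a binary (cash-or-nothing) call paying one unit at expiry $T$ is priced as $$C_{\text{binary}}=e^{-rT}N(d_2),\qquad d_2=\frac{\ln(S/K)+\left(r-\tfrac{1}{2}\sigma^2\right)T}{\sigma\sqrt{T}},$$ and differentiating with respect to the spot $S$ gives the delta quoted above, $$\Delta_{\text{binary}}=\frac{\partial C_{\text{binary}}}{\partial S}=\frac{e^{-rT}N'(d_2)}{\sigma S\sqrt{T}},$$ which has the same bell shape as the gamma of a vanilla call, consistent with the relationship described in the text.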
2021-06-19 21:51:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5702267289161682, "perplexity": 2845.0436946873447}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487649731.59/warc/CC-MAIN-20210619203250-20210619233250-00520.warc.gz"}
https://tug.org/pipermail/luatex/2016-May/005936.html
# [luatex] Confirm change in TL 2016-pretest vs. TL 2015 behavior: dvilualatex fails because of pdfpageheight Scott Kostyshak skostysh at lyx.org Tue May 17 19:25:51 CEST 2016 ```On Tue, May 17, 2016 at 10:32:33AM +0200, Ulrike Fischer wrote: > Am Mon, 16 May 2016 20:07:49 -0400 schrieb Scott Kostyshak: > > > If we use the new syntax of > > > > \pageheight\paperheight > > \pagewidth\paperwidth > > Why don't you do as David suggest and load the luatex85 package? Because the package is quite new and many LyX users do not have it. One possibility is to use the package if it is available and not use it if it is not (without changing any of the other code exported by LyX). Do you think this would lead to good results for users? Specifically, do you think a significant amount of users (imagine a diverse mix of Mac/Win/Linux) would have LaTeX installations with "new luatex" but without the luatex85 package available? My guess is that it is not that common. Let's add on the further condition that a user of LyX actually *uses* LuaTeX. In the long-run, it would be nice for LyX to eventually export correct LuaTeX code and not rely on a transition package. > luatex85 can also be loaded if you compile with pdflatex and > xelatex, it will then quit silently and so you can use > \pdfpageheight\paperheight in all formats (\pageheight would give an > error with pdflatex and xelatex). > > > > Using the dvilualatex command > > Why are you doing this? I don't and I don't even know why anyone would want to. But LyX supports exporting to DVI via dvilualatex and as long as we support that I will try my best to make sure it works properly. We have made a lot of tests for code that LyX exports. When one of those tests goes from passing to failing, I try to investigate whether it is a regression in an underlying package or if (as in this case) we need to change something in LyX's export. The challenge when we change something is often ensuring that the exported code still works for old LaTeX installations.
2021-10-17 01:07:12
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8609715104103088, "perplexity": 4379.5341625788205}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585045.2/warc/CC-MAIN-20211016231019-20211017021019-00167.warc.gz"}
https://encyclopediaofmath.org/index.php?title=Generalized_function,_derivative_of_a&oldid=47071
Generalized function, derivative of a A weak extension of the operation of ordinary differentiation. Let $f$ be a generalized function, $f \in D ^ \prime ( O)$. The generalized (weak) derivative $$D ^ \alpha f = \ \frac{\partial ^ {| \alpha | } f }{\partial x _ {1} ^ {\alpha _ {1} } \dots \partial x _ {n} ^ {\alpha _ {n} } } ,\ \ | \alpha | = \alpha _ {1} + \dots + \alpha _ {n} ,$$ of order $\alpha = ( \alpha _ {1} \dots \alpha _ {n} )$ is defined by the equation $$\tag{* } ( D ^ \alpha f , \phi ) = \ ( - 1 ) ^ {| \alpha | } ( f , D ^ \alpha \phi ) ,\ \ \phi \in D ( O) .$$ Since the operation $\phi \mapsto (- 1) ^ {| \alpha | } D ^ \alpha \phi$ is linear and continuous from $D ( O)$ into $D ( O)$, the functional $D ^ \alpha f$ defined by the right-hand side of (*) is a generalized function in $D ^ \prime ( O)$. If $f \in C ^ {p} ( O)$, then $D ^ \alpha f \in C ^ {p - | \alpha | } ( O)$ for all $\alpha$ with $| \alpha | \leq p$. The following properties hold for the derivatives of a generalized function: the operation $f \mapsto D ^ \alpha f$ is linear and continuous from $D ^ \prime ( O)$ into $D ^ \prime ( O)$; any generalized function in $D ^ \prime ( O)$ is infinitely differentiable (in the generalized sense); the result of differentiation does not depend on the order; the Leibniz formula is valid for the differentiation of a product $af$, when $a \in C ^ \infty ( O)$; and $\supp D ^ \alpha f \subset \supp f$. Let $f \in L _ { \mathop{\rm loc} } ^ {1} ( O)$. It may happen that a certain generalized derivative can be identified with some $L _ { \mathop{\rm loc} } ^ {1} ( O)$- function. In this case $D ^ \alpha f ( x)$ is a generalized derivative of function type. Contents Examples. 1) $\theta ^ \prime = \delta$, where $\theta$ is the Heaviside function and $\delta$ is the Dirac function (cf. Delta-function for both). 2) The general solution of the equation $u ^ \prime = 0$ in the class $D ^ \prime$ is an arbitrary constant. 3) The trigonometric series $$\sum _ {k = - \infty } ^ \infty a _ {k} e ^ {ikx} ,\ \ | a _ {k} | \leq A ( 1 + | k | ) ^ {m} ,$$ converges in $D ^ \prime$ and it can be differentiated term-by-term in $D ^ \prime$ infinitely many times. References [1] L. Schwartz, "Théorie des distributions" , 1 , Hermann (1950) MR0035918 Zbl 0037.07301 [2] S.L. Sobolev, "Applications of functional analysis in mathematical physics" , Amer. Math. Soc. (1963) (Translated from Russian) MR0165337 Zbl 0123.09003
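Remark: Example 1 is a one-line verification of the defining equation (*): for any test function $\phi \in D$, $$( \theta ^ \prime , \phi ) = - ( \theta , \phi ^ \prime ) = - \int\limits _ { 0 } ^ \infty \phi ^ \prime ( x) dx = \phi ( 0) = ( \delta , \phi ) ,$$ since $\phi$ has compact support.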
2023-03-29 04:44:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9703119993209839, "perplexity": 194.6659926659335}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948932.75/warc/CC-MAIN-20230329023546-20230329053546-00019.warc.gz"}
https://encyclopediaofmath.org/index.php?title=Principal_fibre_bundle&printable=yes
# Principal fibre bundle A $G$- fibration $\pi _ {G} : X \rightarrow B$ such that the group $G$ acts freely and perfectly on the space $X$. The significance of principal fibre bundles lies in the fact that they make it possible to construct associated fibre bundles with fibre $F$ if a representation of $G$ in the group of homeomorphisms $F$ is given. Differentiable principal fibre bundles with Lie groups play an important role in the theory of connections and holonomy groups. For instance, let $H$ be a topological group with $G$ as a closed subgroup and let $H/G$ be the homogeneous space of left cosets of $H$ with respect to $G$; the fibre bundle $\pi _ {G} : H \rightarrow H/G$ will then be principal. Further, let $X _ {G}$ be a Milnor construction, i.e. the join of an infinite number of copies of $G$, each point of which has the form $$\langle g, t \rangle = \langle g _ {0} t _ {0} , g _ {1} t _ {1} ,\dots \rangle ,$$ where $g _ {i} \in G$, $t _ {i} \in [ 0, 1]$, and where only finitely many $t _ {i}$ are non-zero. The action of $G$ on $X _ {G}$ defined by the formula $h \langle g, t\rangle = \langle hg, t\rangle$ is free, and the fibre bundle $\omega _ {G} : X _ {G} \rightarrow X _ {G}$ $\mathop{\rm mod} G$ is a numerable principal fibre bundle. Each fibre of a principal fibre bundle is homeomorphic to $G$. A morphism of principal fibre bundles is a morphism of the fibre bundles $f: \pi _ {G} \rightarrow \pi _ {G ^ \prime }$ for which the mapping of the fibres $f {\pi _ {G} } ^ {-} 1 ( b)$ induces a homomorphism of groups: $$\theta _ {b} = \ \xi _ {b} ^ {\prime - 1 } f \pi _ {G} ^ {-} 1 ( b) \xi _ {b} : \ G \rightarrow G ^ \prime ,$$ where $\xi _ {b} ( g) = gx$, $\pi _ {G} ( x) = b$. In particular, a morphism is called equivariant if $\theta _ {b} = \theta$ is independent of $b$, so that $gf ( x) = \theta ( g) f ( x)$ for any $x \in X$, $g \in G$. If $G = G ^ \prime$ and $\theta = \mathop{\rm id}$, an equivariant morphism is called a $G$- morphism. Any $( G, B)$- morphism (i.e. a $G$- morphism over $B$) is called a $G$- isomorphism. For any mapping $u: B ^ \prime \rightarrow B$ and principal fibre bundle $\pi _ {G} : X \rightarrow B$ the induced fibre bundle $u ^ {*} ( \pi _ {G} ) \rightarrow \pi _ {G}$ is principal with the same group $G$; moreover, the mapping $U: u ^ {*} ( \pi _ {G} ) \rightarrow \pi _ {G}$ is a $G$- morphism which unambiguously determines the action of $G$ on the space $u ^ {*} ( x)$. For instance, if the principal fibre bundle $\pi _ {G}$ is trivial, it is isomorphic to the principal fibre bundle $\phi ^ {*} ( \eta )$, where $\eta$ is the $G$- bundle over a single point and $\phi$ is the constant mapping. The converse is also true, and for this reason principal fibre bundles with a section are trivial. For each numerable principal fibre bundle $\pi _ {G} : X \rightarrow B$ there exists a mapping $f: B \rightarrow X _ {G}$ $\mathop{\rm mod} G$ such that $f ^ { * } ( \omega _ {G} )$ is $G$- isomorphic to $\pi _ {G}$, and for the principal fibre bundles $f _ {0} ^ { * } ( \omega _ {G} )$ and $f _ {1} ^ { * } ( \omega _ {G} )$ to be isomorphic, it is necessary and sufficient that $f _ {0}$ and $f _ {1}$ be homotopic (cf. Homotopy). This is the principal theorem on the homotopy classification of principal fibre bundles, which expresses the universality of the principal fibre bundle $\omega _ {G}$( obtained by Milnor's construction), with respect to the classifying mapping $f$. #### References [1] R.L. Bishop, R.J. Crittenden, "Geometry of manifolds" , Acad. 
Press (1964) [2] K. Nomizu, "Lie groups and differential geometry" , Math. Soc. Japan (1956) [3] S. Sternberg, "Lectures on differential geometry" , Prentice-Hall (1964) [4] , Fibre spaces and their applications , Moscow (1958) (In Russian; translated from English) [5] N.E. Steenrod, "The topology of fibre bundles" , Princeton Univ. Press (1951) [6] D. Husemoller, "Fibre bundles" , McGraw-Hill (1966) Let $\pi _ {G} : X \rightarrow B$ be a principal fibre bundle. It is called numerable if there is a sequence $( u _ {n} ) _ {n \geq 0 }$ of continuous mappings $B \rightarrow [ 0, 1]$ such that the open sets $U _ {n} = u _ {n} ^ {-} 1 (( 0, 1 ] )$ form an open covering (cf. Covering (of a set)) of $B$ and $X$ is trivializable over each $U _ {n}$( i.e. the restricted bundles $\pi _ {G} : X \rightarrow U _ {n}$ are trivial, cf. Fibre space).
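A standard concrete illustration of the quotient construction $\pi _ {G} : H \rightarrow H/G$ described above: take $H = SU( 2)$ and let $G = U( 1)$ be the closed subgroup of diagonal matrices. Then $H/G$ is the $2$-sphere $S ^ {2}$, and the resulting principal $U( 1)$-bundle

$$S ^ {3} \cong SU( 2) \rightarrow SU( 2) / U( 1) \cong S ^ {2}$$

is the Hopf fibration; each fibre is a circle, in accordance with the statement that every fibre of a principal fibre bundle is homeomorphic to $G$.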
2021-10-19 21:57:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9871452450752258, "perplexity": 175.39910316033985}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585281.35/warc/CC-MAIN-20211019202148-20211019232148-00125.warc.gz"}
http://gestiondsspinc.com/economy/misconceptions-of-trigonometric-identities-pdf.php
What are the most common misconceptions about trigonometry? Trigonometric Identities

What a trigonometric identity is. An identity is an equation containing one or more variables that is true for all values of the variables for which both sides of the equation are defined (S. F. Ellermeyer). An example of a definitional identity is tan θ = sin θ / cos θ; an example of an identity that can logically be proven to hold for all values of its variable is the Pythagorean identity expressed in trigonometric form, sin²θ + cos²θ = 1. A trigonometric identity is therefore an equation involving trigonometric functions that is true for all permissible values of the variable. You can check a proposed identity numerically, by substituting specific values for the variable, or graphically using technology, but verifying that two sides of an equation are equal for a few given values is not a proof: proving a trigonometric identity refers to showing that the identity is always true, no matter what value of x or θ is used, so we cannot simply substitute in a few values of x to "show" that the two sides are equal.

Part of the confusion is what the equal sign means (Steven Butler, Iowa State University): in mathematics the "=" sign is used with two different meanings in mind, namely to denote identities and to denote conditional relationships. An identity represents a relationship that is always true. For example, the additive identity property can be expressed as an algebraic identity: a + 0 = a is true for all real numbers. When the six trigonometric functions are defined, relationships are proved that are true for all values of u for which the functions are defined; there are eight basic trigonometric identities.

The basic identities. From Pythagoras's theorem:

sin²θ + cos²θ = 1 (1)
1 + cot²θ = cosec²θ (2)
tan²θ + 1 = sec²θ (3)

Note that (2) = (1)/sin²θ and (3) = (1)/cos²θ. The compound-angle formulae are:

cos(A + B) = cos A cos B - sin A sin B (4)
cos(A - B) = cos A cos B + sin A sin B (5)
sin(A + B) = sin A cos B + cos A sin B (6)
sin(A - B) = sin A cos B - cos A sin B (7)
tan(A + B) = (tan A + tan B) / (1 - tan A tan B) (8)
tan(A - B) = (tan A - tan B) / (1 + tan A tan B) (9)

Standard reference tables (Table of Trigonometric Identities prepared by Yun Yoo; Math 111: Summary of Trigonometric Identities) group these as Pythagorean identities (sin²x + cos²x = 1, 1 + tan²x = sec²x, 1 + cot²x = csc²x), reciprocal identities (sin θ = 1/csc θ, cos θ = 1/sec θ, tan θ = 1/cot θ, csc θ = 1/sin θ, sec θ = 1/cos θ, cot θ = 1/tan θ) and quotient identities. The primary trigonometric functions are the sine and cosine of an angle; these are sometimes abbreviated sin(θ) and cos(θ), respectively, where θ is the angle, but the parentheses around the angle are often omitted, e.g. sin θ and cos θ (List of trigonometric identities). Although a formula sheet does not contain a comprehensive set of trigonometric identities, it is easy to generate other useful identities from the formulae given. Such identities can be used to simplify complicated trigonometric expressions and can also be used to solve trigonometric equations (equations of this type are introduced in this lesson and examined in more detail in Lesson 7). A typical worked example from lecture notes: to prove tan x sin x + cos x = sec x, we only use the fact that sin²x + cos²x = 1 for all values of x. Practice sheets ("Trig: prove each identity") pose similar problems, such as sec x - tan x sin x = 1/sec x.

The trigonometric functions are geometric in nature, so geometric arguments are used to develop the fundamental identities and to prove that the limit of sin(θ)/θ as θ → 0 is 1. This limit plus a few trigonometric identities are required to prove that d(sin θ)/dθ = cos θ; given this anchor, the derivatives of the remaining trigonometric functions can be derived. Identities were known at a fairly advanced level in antiquity: the calculations in Ptolemy's famous book the Almagest ("the greatest"), in approximately 150 AD, were so accurate that it was in use by the civilized world for over 1000 years; in it he used the theorem named after him, Ptolemy's theorem, to calculate trigonometric tables accurate to about 5 places. One research paper obtains new trigonometric identities of the form [formula omitted] which are derived as a result of relations in a cyclotomic field R(ρ), where R is the field of rational numbers.

Common student misconceptions. One commonly cited misconception is the idea that there are 6 trig functions; there aren't, in the sense that everything else can readily be derived from these, including all trig identities, the so-called law of cosines, and (of course) all of right-angle trigonometry. Students may also mix up the definitions of secant and cosecant; it helps to emphasize that secant is the reciprocal of cosine and cosecant is the reciprocal of sine, so there is exactly one "co-" function per pair of reciprocals (and since tangent and cotangent are reciprocals, they too fit this pattern) (Defining Trigonometric Functions, In-Text Examples). Collections such as mathmistakes.org (a site about compiling, analyzing and discussing the mathematical errors that students make, edited by Michael Pershan, a middle school and high school math teacher from NYC) file many examples under Feedback, Trig Identities and Trigonometric Functions.

Research on errors and misconceptions. Student's Mistakes and Misconceptions on Teaching of Trigonometry (Nevin Orhun): trigonometry is an inseparable part of mathematics in high school; it takes some subjects of arithmetic and geometry as a source; in other words, it is a product of algebraic techniques, geometrical realities and trigonometric relationships. Trigonometric expressions and metaphors have been studied by Delice (2002), Weber (2005) and Presmeg (2006, 2007), and Brown (2006) studied students' understanding of sine and cosine. Kendal and Stacey (University of Melbourne) compared the ratio and unit-circle methods: before the 1960s, introductory trigonometry was taught in Victorian schools using the ratio method, where trigonometric functions are defined as ratios of sides of right-angled triangles, an approach that changed with the advent of "new maths". Another strand treats trigonometric functions as procepts, understood simultaneously as processes and concepts, and argues that thinking about trigonometric functions in this way is essential for understanding them; consider, for instance, what a student does when asked to provide an estimate for the value of the sine of 20°.

On differentiation, one article reports an analysis of errors displayed by students who studied mathematics in Chemical Engineering in derivatives of mostly trigonometric functions; these students struggled owing to errors and misconceptions they displayed in their solutions (Siyepu, 2013a, p. 184), and a follow-up doctoral study (15.10.2015) analyses the errors displayed and explores their causes and origins in order to develop a means of eliminating them. Work on learners' errors when solving trigonometric equations offers valuable suggestions for their possible treatment: the particular types of errors that learners made, the possible causes, and ways in which this information might be used to structure instructional interventions. Errors and Common Misconceptions in the Classroom KS2 to KS5 (Dr Audrey Curnock, Director of Education Unlimited, 31/03/2015) is about trying to categorise errors and to give supportive feedback that encourages a deeper understanding of mathematics: when we look at learners' work we try to see if the solutions were efficient, methodical, clear and accurate.

Teaching resources. A video lesson (07.11.2011) introduces and proves the fundamental trigonometric identities: the quotient identities, reciprocal identities and Pythagorean identities. In Discovering Trig Identities (Day 1 of 4), students use TI-Nspire calculators to develop their own understandings of what trigonometric identities are and why they work (a 60-minute lesson in Precalculus and Calculus with tips from Tiffany Dawdy, 20.06.2013). A card activity (four sets of four cards) supports students in developing confidence in solving trigonometric equations which require the use of trigonometric identities: students work in pairs to create worked solutions, leading to opportunities to discuss links between algebraic and graphical approaches to solving trigonometric equations. The MEI scheme-of-work unit Trigonometric Identities lists its pre-requisites (the earlier Trigonometry (AS & A level) and Trigonometric Functions units) and links with other topics such as transformation of graphs (y = sin x cos x is a transformation of y = sin x, since it is the same as y = (1/2) sin 2x), together with questions and prompts for mathematical thinking.
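To make the distinction between a numerical check and a proof concrete, here is a small illustrative Python sketch; the helper names and sample points are invented for this example and are not taken from any of the sources quoted above. It spot-checks two of the identities listed above and one false "identity". A spot-check like this can expose a wrong formula quickly, but, as stressed above, agreement at a few sample points does not prove an identity.

import math

def close(a, b, tol=1e-12):
    """Return True if a and b agree to within tol."""
    return abs(a - b) <= tol

def check_identity(lhs, rhs, samples):
    """Evaluate both sides of a claimed identity at several sample points."""
    return all(close(lhs(*p), rhs(*p)) for p in samples)

# A few arbitrary sample angles in radians; avoid points where tan/sec blow up.
points_1var = [(0.1,), (0.7,), (1.2,), (2.5,)]
points_2var = [(0.3, 0.4), (1.0, 0.2), (2.1, 0.9)]

# Pythagorean identity: sin^2 x + cos^2 x = 1
print(check_identity(lambda x: math.sin(x)**2 + math.cos(x)**2,
                     lambda x: 1.0, points_1var))                    # True

# tan^2 x + 1 = sec^2 x
print(check_identity(lambda x: math.tan(x)**2 + 1.0,
                     lambda x: 1.0 / math.cos(x)**2, points_1var))   # True

# Compound angle: sin(A + B) = sin A cos B + cos A sin B
print(check_identity(lambda a, b: math.sin(a + b),
                     lambda a, b: math.sin(a)*math.cos(b) + math.cos(a)*math.sin(b),
                     points_2var))                                   # True

# A false "identity" is usually exposed immediately: sin(A + B) = sin A + sin B
print(check_identity(lambda a, b: math.sin(a + b),
                     lambda a, b: math.sin(a) + math.sin(b),
                     points_2var))                                   # False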
2020-10-21 01:12:37
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8003300428390503, "perplexity": 1426.4194478134452}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107874637.23/warc/CC-MAIN-20201021010156-20201021040156-00220.warc.gz"}
http://rosettacode.org/wiki/Van_der_Corput_sequence
# Van der Corput sequence

You are encouraged to solve this task according to the task description, using any language you may know.

When counting integers in binary, if you put a (binary) point to the right of the count then the column immediately to the left of the point denotes a digit with a multiplier of $2^0$; the digit in the next column to the left has a multiplier of $2^1$; and so on. So in the following table:

 0.
 1.
 10.
 11.
 ...

the binary number "10" is $1 \times 2^1 + 0 \times 2^0$.

You can have binary digits to the right of the "point" just as in the decimal number system too. In this case, the digit in the place immediately to the right of the point has a weight of $2^{-1}$, or $1/2$. The weight for the second column to the right of the point is $2^{-2}$, or $1/4$. And so on.

If you take the integer binary count of the first table, and reflect the digits about the binary point, you end up with the van der Corput sequence of numbers in base 2:

 .0
 .1
 .01
 .11
 ...

The third member of the sequence, binary 0.01, is therefore $0 \times 2^{-1} + 1 \times 2^{-2}$, or $1/4$.

Distribution of 2500 points each: Van der Corput (top) vs pseudorandom

Members of the sequence lie within the interval $0 \leq x < 1$. Points within the sequence tend to be evenly distributed, which is a useful trait to have for Monte Carlo simulations. This sequence is also a superset of the numbers representable by the "fraction" field of an old IEEE floating point standard. In that standard, the "fraction" field represented the fractional part of a binary number beginning with "1.", e.g. 1.101001101.

Hint

A hint at a way to generate members of the sequence is to modify a routine used to change the base of an integer:

>>> def base10change(n, base):
    digits = []
    while n:
        n, remainder = divmod(n, base)
        digits.insert(0, remainder)
    return digits

>>> base10change(11, 2)
[1, 0, 1, 1]

the above showing that 11 in decimal is $1\times 2^3 + 0\times 2^2 + 1\times 2^1 + 1\times 2^0$. Reflected this would become .1101 or $1\times 2^{-1} + 1\times 2^{-2} + 0\times 2^{-3} + 1\times 2^{-4}$.

• Create a function/method/routine that, given n, generates the n'th term of the van der Corput sequence in base 2.
• Use the function to compute and display the first ten members of the sequence. (The first member of the sequence is for n=0.)
• As a stretch goal/extra credit, compute and show members of the sequence for bases other than 2.

## ActionScript

This implementation uses logarithms to compute the nth term of the sequence in any base. Numbers in the output are rounded to 6 decimal places to hide any floating point inaccuracies. 
package {  import flash.display.Sprite; import flash.events.Event;  public class VanDerCorput extends Sprite {  public function VanDerCorput():void { if (stage) init(); else addEventListener(Event.ADDED_TO_STAGE, init); }  private function init(e:Event = null):void {  removeEventListener(Event.ADDED_TO_STAGE, init);  var base2:Vector.<Number> = new Vector.<Number>(10, true); var base3:Vector.<Number> = new Vector.<Number>(10, true); var base4:Vector.<Number> = new Vector.<Number>(10, true); var base5:Vector.<Number> = new Vector.<Number>(10, true); var base6:Vector.<Number> = new Vector.<Number>(10, true); var base7:Vector.<Number> = new Vector.<Number>(10, true); var base8:Vector.<Number> = new Vector.<Number>(10, true);  var i:uint;  for ( i = 0; i < 10; i++ ) { base2[i] = Math.round( _getTerm(i, 2) * 1000000 ) / 1000000; base3[i] = Math.round( _getTerm(i, 3) * 1000000 ) / 1000000; base4[i] = Math.round( _getTerm(i, 4) * 1000000 ) / 1000000; base5[i] = Math.round( _getTerm(i, 5) * 1000000 ) / 1000000; base6[i] = Math.round( _getTerm(i, 6) * 1000000 ) / 1000000; base7[i] = Math.round( _getTerm(i, 7) * 1000000 ) / 1000000; base8[i] = Math.round( _getTerm(i, 8) * 1000000 ) / 1000000; }  trace("Base 2: " + base2.join(', ')); trace("Base 3: " + base3.join(', ')); trace("Base 4: " + base4.join(', ')); trace("Base 5: " + base5.join(', ')); trace("Base 6: " + base6.join(', ')); trace("Base 7: " + base7.join(', ')); trace("Base 8: " + base8.join(', '));  }  private function _getTerm(n:uint, base:uint = 2):Number {  var r:Number = 0, p:uint, digit:uint; var baseLog:Number = Math.log(base);  while ( n > 0 ) { p = Math.pow( base, uint(Math.log(n) / baseLog) );  digit = n / p; n %= p; r += digit / (p * base); }  return r;  }  } } Output: Base 2: 0, 0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875, 0.0625, 0.5625 Base 3: 0, 0.333333, 0.666667, 0.111111, 0.444444, 0.777778, 0.222222, 0.555556, 0.888889, 0.037037 Base 4: 0, 0.25, 0.5, 0.75, 0.0625, 0.3125, 0.5625, 0.8125, 0.125, 0.375 Base 5: 0, 0.2, 0.4, 0.6, 0.8, 0.04, 0.24, 0.44, 0.64, 0.84 Base 6: 0, 0.166667, 0.333333, 0.5, 0.666667, 0.833333, 0.027778, 0.194444, 0.361111, 0.527778 Base 7: 0, 0.142857, 0.285714, 0.428571, 0.571429, 0.714286, 0.857143, 0.020408, 0.163265, 0.306122 Base 8: 0, 0.125, 0.25, 0.375, 0.5, 0.625, 0.75, 0.875, 0.015625, 0.140625 with Ada.Text_IO; procedure Main is package Float_IO is new Ada.Text_IO.Float_IO (Float); function Van_Der_Corput (N : Natural; Base : Positive := 2) return Float is Value  : Natural  := N; Result  : Float  := 0.0; Exponent : Positive := 1; begin while Value > 0 loop Result  := Result + Float (Value mod Base) / Float (Base ** Exponent); Value  := Value / Base; Exponent := Exponent + 1; end loop; return Result; end Van_Der_Corput;begin for Base in 2 .. 5 loop Ada.Text_IO.Put ("Base" & Integer'Image (Base) & ":"); for N in 1 .. 10 loop Ada.Text_IO.Put (' '); Float_IO.Put (Item => Van_Der_Corput (N, Base), Exp => 0); end loop; Ada.Text_IO.New_Line; end loop;end Main; Output: Base 2: 0.50000 0.25000 0.75000 0.12500 0.62500 0.37500 0.87500 0.06250 0.56250 0.31250 Base 3: 0.33333 0.66667 0.11111 0.44444 0.77778 0.22222 0.55556 0.88889 0.03704 0.37037 Base 4: 0.25000 0.50000 0.75000 0.06250 0.31250 0.56250 0.81250 0.12500 0.37500 0.62500 Base 5: 0.20000 0.40000 0.60000 0.80000 0.04000 0.24000 0.44000 0.64000 0.84000 0.08000 ## AutoHotkey Works with: AutoHotkey_L SetFormat, FloatFast, 0.5for i, v in [2, 3, 4, 5, 6] { seq .= "Base " v ": " Loop, 10 seq .= VanDerCorput(A_Index - 1, v) (A_Index = 10 ? 
"n" : ", ")}MsgBox, % seq VanDerCorput(n, b, r=0) { while n r += Mod(n, b) * b ** -A_Index, n := n // b return, r} Output: Base 2: 0, 0.50000, 0.25000, 0.75000, 0.12500, 0.62500, 0.37500, 0.87500, 0.06250, 0.56250 Base 3: 0, 0.33333, 0.66667, 0.11111, 0.44444, 0.77778, 0.22222, 0.55555, 0.88889, 0.03704 Base 4: 0, 0.25000, 0.50000, 0.75000, 0.06250, 0.31250, 0.56250, 0.81250, 0.12500, 0.37500 Base 5: 0, 0.20000, 0.40000, 0.60000, 0.80000, 0.04000, 0.24000, 0.44000, 0.64000, 0.84000 Base 6: 0, 0.16667, 0.33333, 0.50000, 0.66667, 0.83333, 0.02778, 0.19445, 0.36111, 0.52778 @% = &20509 FOR base% = 2 TO 5 PRINT "Base " ; STR$(base%) ":" FOR number% = 0 TO 9 PRINT FNvdc(number%, base%); NEXT PRINT NEXT END DEF FNvdc(n%, b%) LOCAL v, s% s% = 1 WHILE n% s% *= b% v += (n% MOD b%) / s% n% DIV= b% ENDWHILE = v Output: Base 2: 0.00000 0.50000 0.25000 0.75000 0.12500 0.62500 0.37500 0.87500 0.06250 0.56250 Base 3: 0.00000 0.33333 0.66667 0.11111 0.44444 0.77778 0.22222 0.55556 0.88889 0.03704 Base 4: 0.00000 0.25000 0.50000 0.75000 0.06250 0.31250 0.56250 0.81250 0.12500 0.37500 Base 5: 0.00000 0.20000 0.40000 0.60000 0.80000 0.04000 0.24000 0.44000 0.64000 0.84000 ## bc This solution hardcodes the literal 10 because numeric literals in bc can use any base from 2 to 16. This solution only works with integer bases from 2 to 16. /* * Return the _n_th term of the van der Corput sequence. * Uses the current _ibase_. */define v(n) { auto c, r, s s = scale scale = 0 /* to use integer division */ /* * c = count digits of n * r = reverse the digits of n */ for (0; n != 0; n /= 10) { c += 1 r = (10 * r) + (n % 10) } /* move radix point to left of digits */ scale = length(r) + 6 r /= 10 ^ c scale = s return r} t = 10for (b = 2; b <= 4; b++) { "base "; b obase = b for (i = 0; i < 10; i++) { ibase = b " "; v(i) ibase = t } obase = t}quit Some of the calculations are not exact, because bc performs calculations using base 10. So the program prints a result like .202222221 (base 3) when the exact result would be .21 (base 3). 
Output: base 2 0.00000000000000 .10000000000000 .01000000000000 .11000000000000 .00100000000000 .10100000000000 .01100000000000 .11100000000000 .00010000000000 .10010000000000 base 3 0.000000000 .022222222 .122222221 .002222222 .102222222 .202222221 .012222222 .112222221 .212222221 .000222222 base 4 0.0000000 .1000000 .2000000 .3000000 .0100000 .1100000 .2100000 .310000000 .0200000 .1200000 ## C #include <stdio.h> void vc(int n, int base, int *num, int *denom){ int p = 0, q = 1; while (n) { p = p * base + (n % base); q *= base; n /= base; } *num = p; *denom = q; while (p) { n = p; p = q % p; q = n; } *num /= q; *denom /= q;} int main(){ int d, n, i, b; for (b = 2; b < 6; b++) { printf("base %d:", b); for (i = 0; i < 10; i++) { vc(i, b, &n, &d); if (n) printf(" %d/%d", n, d); else printf(" 0"); } printf("\n"); } return 0;} Output: base 2: 0 1/2 1/4 3/4 1/8 5/8 3/8 7/8 1/16 9/16 base 3: 0 1/3 2/3 1/9 4/9 7/9 2/9 5/9 8/9 1/27 base 4: 0 1/4 1/2 3/4 1/16 5/16 9/16 13/16 1/8 3/8 base 5: 0 1/5 2/5 3/5 4/5 1/25 6/25 11/25 16/25 21/25 ## C++ Translation of: Perl 6 #include <cmath>#include <iostream> double vdc(int n, double base = 2){ double vdc = 0, denom = 1; while (n) { vdc += fmod(n, base) / (denom *= base); n /= base; // note: conversion from 'double' to 'int' } return vdc;} int main() { for (double base = 2; base < 6; ++base) { std::cout << "Base " << base << "\n"; for (int n = 0; n < 10; ++n) { std::cout << vdc(n, base) << " "; } std::cout << "\n\n"; }} Output: Base 2 0 0.5 0.25 0.75 0.125 0.625 0.375 0.875 0.0625 0.5625 Base 3 0 0.333333 0.666667 0.111111 0.444444 0.777778 0.222222 0.555556 0.888889 0.037037 Base 4 0 0.25 0.5 0.75 0.0625 0.3125 0.5625 0.8125 0.125 0.375 Base 5 0 0.2 0.4 0.6 0.8 0.04 0.24 0.44 0.64 0.84 ## C# This is based on the C version. It uses LINQ and enumeration over a collection to package the sequence and make it easy to use. Note that the iterator returns a generic Tuple whose items are the numerator and denominator for the item. using System;using System.Collections.Generic;using System.Linq;using System.Text; namespace VanDerCorput{ /// <summary> /// Computes the Van der Corput sequence for any number base. /// The numbers in the sequence vary from zero to one, including zero but excluding one. /// The sequence possesses low discrepancy. /// Here are the first ten terms for bases 2 to 5: /// /// base 2: 0 1/2 1/4 3/4 1/8 5/8 3/8 7/8 1/16 9/16 /// base 3: 0 1/3 2/3 1/9 4/9 7/9 2/9 5/9 8/9 1/27 /// base 4: 0 1/4 1/2 3/4 1/16 5/16 9/16 13/16 1/8 3/8 /// base 5: 0 1/5 2/5 3/5 4/5 1/25 6/25 11/25 16/25 21/25 /// </summary> /// <see cref="http://rosettacode.org/wiki/Van_der_Corput_sequence"/> public class VanDerCorputSequence: IEnumerable<Tuple<long,long>> { /// <summary> /// Number base for the sequence, which must bwe two or more. /// </summary> public int Base { get; private set; } /// <summary> /// Maximum number of terms to be returned by iterator. /// </summary> public long Count { get; private set; } /// <summary> /// Construct a sequence for the given base. /// </summary> /// <param name="iBase">Number base for the sequence.</param> /// <param name="count">Maximum number of items to be returned by the iterator.</param> public VanDerCorputSequence(int iBase, long count = long.MaxValue) { if (iBase < 2) throw new ArgumentOutOfRangeException("iBase", "must be two or greater, not the given value of " + iBase); Base = iBase; Count = count; } /// <summary> /// Compute nth term in the Van der Corput sequence for the base specified in the constructor. 
/// </summary> /// <param name="n">The position in the sequence, which may be zero or any positive number.</param> /// This number is always an integral power of the base.</param> /// <returns>The Van der Corput sequence value expressed as a Tuple containing a numerator and a denominator.</returns> public Tuple<long,long> Compute(long n) { long p = 0, q = 1; long numerator, denominator; while (n != 0) { p = p * Base + (n % Base); q *= Base; n /= Base; } numerator = p; denominator = q; while (p != 0) { n = p; p = q % p; q = n; } numerator /= q; denominator /= q; return new Tuple<long,long>(numerator, denominator); } /// <summary> /// Compute nth term in the Van der Corput sequence for the given base. /// </summary> /// <param name="iBase">Base to use for the sequence.</param> /// <param name="n">The position in the sequence, which may be zero or any positive number.</param> /// <returns>The Van der Corput sequence value expressed as a Tuple containing a numerator and a denominator.</returns> public static Tuple<long, long> Compute(int iBase, long n) { var seq = new VanDerCorputSequence(iBase); return seq.Compute(n); } /// <summary> /// Iterate over the Van Der Corput sequence. /// The first value in the sequence is always zero, regardless of the base. /// </summary> /// <returns>A tuple whose items are the Van der Corput value given as a numerator and denominator.</returns> public IEnumerator<Tuple<long, long>> GetEnumerator() { long iSequenceIndex = 0L; while (iSequenceIndex < Count) { yield return Compute(iSequenceIndex); iSequenceIndex++; } } System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator() { return GetEnumerator(); } } class Program { static void Main(string[] args) { TestBasesTwoThroughFive(); Console.WriteLine("Type return to continue..."); Console.ReadLine(); } static void TestBasesTwoThroughFive() { foreach (var seq in Enumerable.Range(2, 5).Select(x => new VanDerCorputSequence(x, 10))) // Just the first 10 elements of the each sequence { Console.Write("base " + seq.Base + ":"); foreach(var vc in seq) Console.Write(" " + vc.Item1 + "/" + vc.Item2); Console.WriteLine(); } } }} Output: base 2: 0/1 1/2 1/4 3/4 1/8 5/8 3/8 7/8 1/16 9/16 base 3: 0/1 1/3 2/3 1/9 4/9 7/9 2/9 5/9 8/9 1/27 base 4: 0/1 1/4 1/2 3/4 1/16 5/16 9/16 13/16 1/8 3/8 base 5: 0/1 1/5 2/5 3/5 4/5 1/25 6/25 11/25 16/25 21/25 base 6: 0/1 1/6 1/3 1/2 2/3 5/6 1/36 7/36 13/36 19/36 Type return to continue... ## Clojure (defn van-der-corput "Get the nth element of the van der Corput sequence." ([n] ;; Default base = 2 (van-der-corput n 2)) ([n base] (let [s (/ 1 base)] ;; A multiplicand to shift to the right of the decimal. ;; We essentially want to reverse the digits of n and put them after the ;; decimal point. So, we repeatedly pull off the lowest digit of n, scale ;; it to the right of the decimal point, and accumulate that. (loop [sum 0 n n scale s] (if (zero? n) sum ;; Base case: no digits left, so we're done. 
(recur (+ sum (* (rem n base) scale)) ;; Accumulate the least digit (quot n base) ;; Drop a digit of n (* scale s))))))) ;; Move farther past the decimal (clojure.pprint/print-table (cons :base (range 10)) ;; column headings (for [base (range 2 6)] ;; rows (into {:base base} (for [n (range 10)] ;; table entries [n (van-der-corput n base)])))) Output: | :base | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | |-------+---+-----+-----+-----+------+------+------+-------+-------+-------| | 2 | 0 | 1/2 | 1/4 | 3/4 | 1/8 | 5/8 | 3/8 | 7/8 | 1/16 | 9/16 | | 3 | 0 | 1/3 | 2/3 | 1/9 | 4/9 | 7/9 | 2/9 | 5/9 | 8/9 | 1/27 | | 4 | 0 | 1/4 | 1/2 | 3/4 | 1/16 | 5/16 | 9/16 | 13/16 | 1/8 | 3/8 | | 5 | 0 | 1/5 | 2/5 | 3/5 | 4/5 | 1/25 | 6/25 | 11/25 | 16/25 | 21/25 | ## Common Lisp (defun van-der-Corput (n base) (loop for d = 1 then (* d base) while (<= d n) finally (return (/ (parse-integer (reverse (write-to-string n :base base)) :radix base) d)))) (loop for base from 2 to 5 do (format t "Base ~a: ~{~6a~^~}~%" base (loop for i to 10 collect (van-der-Corput i base)))) Output: Base 2: 0 1/2 1/4 3/4 1/8 5/8 3/8 7/8 1/16 9/16 5/16 Base 3: 0 1/3 2/3 1/9 4/9 7/9 2/9 5/9 8/9 1/27 10/27 Base 4: 0 1/4 1/2 3/4 1/16 5/16 9/16 13/16 1/8 3/8 5/8 Base 5: 0 1/5 2/5 3/5 4/5 1/25 6/25 11/25 16/25 21/25 2/25 ## D double vdc(int n, in double base=2.0) pure nothrow @safe @nogc { double vdc = 0.0, denom = 1.0; while (n) { denom *= base; vdc += (n % base) / denom; n /= base; } return vdc;} void main() { import std.stdio, std.algorithm, std.range; foreach (immutable b; 2 .. 6) writeln("\nBase ", b, ": ", 10.iota.map!(n => vdc(n, b)));} Output: Base 2: [0, 0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875, 0.0625, 0.5625] Base 3: [0, 0.333333, 0.666667, 0.111111, 0.444444, 0.777778, 0.222222, 0.555556, 0.888889, 0.037037] Base 4: [0, 0.25, 0.5, 0.75, 0.0625, 0.3125, 0.5625, 0.8125, 0.125, 0.375] Base 5: [0, 0.2, 0.4, 0.6, 0.8, 0.04, 0.24, 0.44, 0.64, 0.84] ## Ela open random number list vdc bs n = vdc' 0.0 1.0 n where vdc' v d n | n > 0 = vdc' v' d' n' | else = v where d' = d * bs rem = n % bs n' = truncate (n / bs) v' = v + rem / d' Test (with base 2.0, using non-strict map function on infinite list): take 10 <| map' (vdc 2.0) [1..] Output: [0.5,0.25,0.75,0.125,0.625,0.375,0.875,0.0625,0.5625,0.3125] ## Erlang I liked the bc output-in-same-base, but think this is the way it should look. -module( van_der_corput ). -export( [sequence/1, sequence/2, task/0] ). sequence( N ) -> sequence( N, 2 ). sequence( 0, _Base ) -> 0.0;sequence( N, Base ) -> erlang:list_to_float( "0." ++ lists:flatten([erlang:integer_to_list(X) || X <- sequence_loop(N, Base)]) ). task() -> [task(X) || X <- lists:seq(2, 5)]. sequence_loop( 0, _Base ) -> [];sequence_loop( N, Base ) -> New_n = N div Base, Digit = N rem Base, [Digit | sequence_loop( New_n, Base )]. task( Base ) -> io:fwrite( "Base ~p:", [Base] ), [io:fwrite( " ~p", [sequence(X, Base)] ) || X <- lists:seq(0, 9)], io:fwrite( "~n" ). Output: 34> van_der_corput:task(). 
Base 2: 0.0 0.1 0.01 0.11 0.001 0.101 0.011 0.111 0.0001 0.1001 Base 3: 0.0 0.1 0.2 0.01 0.11 0.21 0.02 0.12 0.22 0.001 Base 4: 0.0 0.1 0.2 0.3 0.01 0.11 0.21 0.31 0.02 0.12 Base 5: 0.0 0.1 0.2 0.3 0.4 0.01 0.11 0.21 0.31 0.41 ## Euphoria Translation of: D function vdc(integer n, atom base) atom vdc, denom, rem vdc = 0 denom = 1 while n do denom *= base rem = remainder(n,base) n = floor(n/base) vdc += rem / denom end while return vdcend function for i = 2 to 5 do printf(1,"Base %d\n",i) for j = 0 to 9 do printf(1,"%g ",vdc(j,i)) end for puts(1,"\n\n")end for Output: Base 2 0 0.5 0.25 0.75 0.125 0.625 0.375 0.875 0.0625 0.5625 Base 3 0 0.333333 0.666667 0.111111 0.444444 0.777778 0.222222 0.555556 0.888889 0.037037 Base 4 0 0.25 0.5 0.75 0.0625 0.3125 0.5625 0.8125 0.125 0.375 Base 5 0 0.2 0.4 0.6 0.8 0.04 0.24 0.44 0.64 0.84 ## F# open System let vdc n b = let rec loop n denom acc = if n > 0l then let m, remainder = Math.DivRem(n, b) loop m (denom * b) (acc + (float remainder) / (float (denom * b))) else acc loop n 1 0.0 [<EntryPoint>]let main argv = printfn "%A" [ for n in 0 .. 9 -> (vdc n 2) ] printfn "%A" [ for n in 0 .. 9 -> (vdc n 5) ] 0 Output: [0.0; 0.5; 0.25; 0.75; 0.125; 0.625; 0.375; 0.875; 0.0625; 0.5625] [0.0; 0.2; 0.4; 0.6; 0.8; 0.04; 0.24; 0.44; 0.64; 0.84] ## Forth : fvdc ( base n -- f ) 0e 1e ( F: vdc denominator ) begin dup while over s>d d>f f* over /mod ( base rem n ) swap s>d d>f fover f/ frot f+ fswap repeat 2drop fdrop ; : test 10 0 do 2 i fvdc cr f. loop ; Output: test 0. 0.5 0.25 0.75 0.125 0.625 0.375 0.875 0.0625 0.5625 ok ## Go package main import "fmt" func v2(n uint) (r float64) { p := .5 for n > 0 { if n&1 == 1 { r += p } p *= .5 n >>= 1 } return} func newV(base uint) func(uint) float64 { invb := 1 / float64(base) return func(n uint) (r float64) { p := invb for n > 0 { r += p * float64(n%base) p *= invb n /= base } return }} func main() { fmt.Println("Base 2:") for i := uint(0); i < 10; i++ { fmt.Println(i, v2(i)) } fmt.Println("Base 3:") v3 := newV(3) for i := uint(0); i < 10; i++ { fmt.Println(i, v3(i)) }} Output: Base 2: 0 0 1 0.5 2 0.25 3 0.75 4 0.125 5 0.625 6 0.375 7 0.875 8 0.0625 9 0.5625 Base 3: 0 0 1 0.3333333333333333 2 0.6666666666666666 3 0.1111111111111111 4 0.4444444444444444 5 0.7777777777777777 6 0.2222222222222222 7 0.5555555555555556 8 0.8888888888888888 9 0.037037037037037035 ## Haskell The function vdc returns the nth exact, arbitrary precision van der Corput number for any base ≥ 2 and any n. (A reasonable value is returned for negative values of n.) import Data.Listimport Data.Ratioimport System.Environmentimport Text.Printf -- A wrapper type for Rationals to make them look nicer when we print them.newtype Rat = Rat Rationalinstance Show Rat where show (Rat n) = show (numerator n) ++ "/" ++ show (denominator n) -- Convert a list of base b digits to its corresponding number. We assume the-- digits are valid base b numbers and that their order is from least to most-- significant. digitsToNum :: Integer -> [Integer] -> IntegerdigitsToNum b = foldr1 (\d acc -> b * acc + d) -- Convert a number to the list of its base b digits. The order will be from-- least to most significant.numToDigits :: Integer -> Integer -> [Integer]numToDigits _ 0 = [0]numToDigits b n = unfoldr step n where step 0 = Nothing step m = let (q,r) = m quotRem b in Just (r,q) -- Return the n'th element in the base b van der Corput sequence. 
The base-- must be ≥ 2.vdc :: Integer -> Integer -> Ratvdc b n | b < 2 = error "vdc: base must be ≥ 2" | otherwise = let ds = reverse$ numToDigits b n in Rat (digitsToNum b ds % b ^ length ds) -- Print the base followed by a sequence of van der Corput numbers.printVdc :: (Integer,[Rat]) -> IO ()printVdc (b,ns) = putStrLn $printf "Base %d:" b ++ concatMap (printf " %5s" . show) ns -- To print the n'th van der Corput numbers for n in [2,3,4,5] call the program -- with no arguments. Otherwise, passing the base b, first n, next n and-- maximum n will print the base b numbers for n in [firstN, nextN, ..., maxN].main :: IO ()main = do args <- getArgs let (bases, nums) = case args of [b, f, s, m] -> ([read b], [read f, read s..read m]) _ -> ([2,3,4,5], [0..9]) mapM_ printVdc [(b,rs) | b <- bases, let rs = map (vdc b) nums] Output: for small bases: $ ./vandercorput Base 2: 0/1 1/2 1/4 3/4 1/8 5/8 3/8 7/8 1/16 9/16 Base 3: 0/1 1/3 2/3 1/9 4/9 7/9 2/9 5/9 8/9 1/27 Base 4: 0/1 1/4 1/2 3/4 1/16 5/16 9/16 13/16 1/8 3/8 Base 5: 0/1 1/5 2/5 3/5 4/5 1/25 6/25 11/25 16/25 21/25 Output: for a larger base. (Base 123 for n ∈ [50, 100, …, 300].) $./vandercorput 123 50 100 300 Base 123: 50/123 100/123 3322/15129 9472/15129 494/15129 6644/15129 ## Icon and Unicon The following solution works in both Icon and Unicon: procedure main(A) base := integer(get(A)) | 2 every writes(round(vdc(0 to 9,base),10)," ") write()end procedure vdc(n, base) e := 1.0 x := 0.0 while x +:= 1(((0 < n) % base) / (e *:= base), n /:= base) return xend procedure round(n,d) places := 10 ^ d return real(integer(n*places + 0.5)) / placesend and a sample run is: ->vdc 0.0 0.5 0.25 0.75 0.125 0.625 0.375 0.875 0.0625 0.5625 ->vdc 3 0.0 0.3333333333 0.6666666667 0.1111111111 0.4444444444 0.7777777778 0.2222222222 0.5555555556 0.8888888889 0.037037037 ->vdc 5 0.0 0.2 0.4 0.6 0.8 0.04 0.24 0.44 0.64 0.84 ->vdc 123 0.0 0.0081300813 0.0162601626 0.0243902439 0.0325203252 0.0406504065 0.0487804878 0.0569105691 0.0650406504 0.07317073170000001 -> An alternate, Unicon-specific implementation of vdc patterned after the functional Perl 6 solution is: procedure vdc(n, base) s1 := create |((0 < 1(.n, n /:= base)) % base) s2 := create 2(e := 1.0, |(e *:= base)) every (result := 0) +:= |s1() / s2() return resultend It produces the same output as shown above. ## J Solution: vdc=: ([ %~ %@[ #. #.inv)"0 _ Examples: 2 vdc i.10 NB. 1st 10 nums of Van der Corput sequence in base 20 0.5 0.25 0.75 0.125 0.625 0.375 0.875 0.0625 0.5625 2x vdc i.10 NB. as above but using rational nums0 1r2 1r4 3r4 1r8 5r8 3r8 7r8 1r16 9r16 2 3 4 5x vdc i.10 NB. 1st 10 nums of Van der Corput sequence in bases 2 3 4 50 1r2 1r4 3r4 1r8 5r8 3r8 7r8 1r16 9r160 1r3 2r3 1r9 4r9 7r9 2r9 5r9 8r9 1r270 1r4 1r2 3r4 1r16 5r16 9r16 13r16 1r8 3r80 1r5 2r5 3r5 4r5 1r25 6r25 11r25 16r25 21r25 In other words: use the left argument as the "base" to structure the sequence numbers into digits. Then use the reciprocal of the left argument as the "base" to re-represent this sequence and divide that result by the left argument to get the Van der Corput sequence number. ## Java Translation of: Perl 6 Using (denom *= 2) as the denominator is not a recommended way of doing things since it is not clear when the multiplication and assignment happen. Comparing this to the "++" operator, it looks like it should do the doubling and assignment second. Comparing it to the "++" operator used as a preincrement operator, it looks like it should do the doubling and assignment first. 
Comparing it to the behavior of parentheses, it looks like it should do the doubling and assignment first. Luckily for us, it works the same in Java as in Perl 6 (doubling and assignment first). It was kept the Perl 6 way to help with the comparison. Normally, we would initialize denom to 2 (since that is the denominator of the leftmost digit), use it alone in the vdc sum, and then double it after. public class VanDerCorput{ public static double vdc(int n){ double vdc = 0; int denom = 1; while(n != 0){ vdc += n % 2.0 / (denom *= 2); n /= 2; } return vdc; } public static void main(String[] args){ for(int i = 0; i <= 10; i++){ System.out.println(vdc(i)); } }} Output: 0.0 0.5 0.25 0.75 0.125 0.625 0.375 0.875 0.0625 0.5625 0.3125 ## jq Works with: jq version 1.4 The neat thing about the following implementation of vdc(base) is that it shows how the task can be accomplished in two separate steps without the need to construct an intermediate array. # vdc(base) converts an input decimal integer to a decimal number based on the van der# Corput sequence using base 'base', e.g. (4 | vdc(2)) is 0.125.#def vdc(base): # The helper function converts a stream of residuals to a decimal, # e.g. if base is 2, then decimalize( (0,0,1) ) yields 0.125 def decimalize(stream): reduce stream as$d # state: [accumulator, power] ( [0, 1/base]; .[1] as $power | [ .[0] + ($d * $power),$power / base] ) | .[0];  if . == 0 then 0 else decimalize(recurse( if . == 0 then empty else ./base | floor end ) % base) end ; Example: def round(n): (if . < 0 then -1 else 1 end) as $s |$s*10*.*n | if (floor%10)>4 then (.+5) else . end | ./10 | floor/n | .*$s; range(2;6) | . as$base | "Base \(.): \( [ range(0;11) | vdc($base)|round(1000) ] )" Output: $ jq -n -f -c -r van_der_corput_sequence.jqBase 2: [0,0.5,0.25,0.75,0.125,0.625,0.375,0.875,0.063,0.563,0.313]Base 3: [0,0.333,0.667,0.111,0.444,0.778,0.222,0.556,0.889,0.037,0.37]Base 4: [0,0.25,0.5,0.75,0.063,0.313,0.563,0.813,0.125,0.375,0.625]Base 5: [0,0.2,0.4,0.6,0.8,0.04,0.24,0.44,0.64,0.84,0.08] ## Lua function vdc(n, base) local digits = {} while n ~= 0 do local m = math.floor(n / base) table.insert(digits, n - m * base) n = m end m = 0 for p, d in pairs(digits) do m = m + math.pow(base, -p) * d end return mend ## Mathematica VanDerCorput[n_,base_:2]:=Table[ FromDigits[{Reverse[IntegerDigits[k,base]],0},base],{k,n}] VanDerCorput[10,2] ->{1/2,1/4,3/4,1/8,5/8,3/8,7/8,1/16,9/16,5/16} VanDerCorput[10,3] ->{1/3, 2/3, 1/9, 4/9, 7/9, 2/9, 5/9, 8/9, 1/27, 10/27} VanDerCorput[10,4] ->{1/4, 1/2, 3/4, 1/16, 5/16, 9/16, 13/16, 1/8, 3/8, 5/8} VanDerCorput[10,5] ->{1/5, 2/5, 3/5, 4/5, 1/25, 6/25, 11/25, 16/25, 21/25, 2/25} ## MATLAB / Octave function x = corput (n) b = dec2bin(1:n)-'0'; % generate sequence of binary numbers from 1 to n l = size(b,2); % get number of binary digits w = (1:l)-l-1; % 2.^w are the weights x = b * ( 2.^w'); % matrix times vector multiplication for end; Output: corput(10) ans = 0.500000 0.250000 0.750000 0.125000 0.625000 0.375000 0.875000 0.062500 0.562500 0.312500 ## Maxima Define two helper functions /* convert a decimal integer to a list of digits in base base' */dec2digits(d, base):= block([digits: []], while (d>0) do block([newdi: mod(d, base)], digits: cons(newdi, digits), d: round( (d - newdi) / base)), digits)$dec2digits(123, 10);/* [1, 2, 3] */dec2digits( 8, 2);/* [1, 0, 0, 0] */ /* convert a list of digits in base base' to a decimal integer */digits2dec(l, base):= block([s: 0, po: 1], for di in reverse(l) do (s: di*po + s, po: po*base), s)$ digits2dec([1, 
2, 3], 10);/* 123 */digits2dec([1, 0, 0, 0], 2);/* 8 */ The main function vdc(n, base):= makelist( digits2dec( dec2digits(k, base), 1/base) / base, k, n); vdc(10, 2);/* 1 1 3 1 5 3 7 1 9 5(%o123) [-, -, -, -, -, -, -, --, --, --] 2 4 4 8 8 8 8 16 16 16*/ vdc(10, 5);/* 1 2 3 4 1 6 11 16 21 2(%o124) [-, -, -, -, --, --, --, --, --, --] 5 5 5 5 25 25 25 25 25 25*/ digits2dec can by used with symbols to produce the same example as in the task description  /* 11 in decimal is */digits: digits2dec([box(1), box(0), box(1), box(1)], box(2));aux: expand(digits2dec(digits, 1/base) / base)$simp: false$/* reflected this would become ... */subst(box(2), base, aux);simp: true$/* 3 2 """ """ """ """ """ """ """(%o126) "2" "1" + "2" "0" + "2" "1" + "1" """ """ """ """ """ """ """ - 4 - 3 - 2 - 1 """ """ """ """ """ """ """ """(%o129) "1" "2" + "0" "2" + "1" "2" + "1" "2" """ """ """ """ """ """ """ """ */ ## PARI/GP VdC(n)=n=binary(n);sum(i=1,#n,if(n[i],1.>>(#n+1-i)));VdC(n)=sum(i=1,#binary(n),if(bittest(n,i-1),1.>>i)); \\ Alternate approachvector(10,n,VdC(n)) Output: [0.500000000, 0.250000000, 0.750000000, 0.125000000, 0.625000000, 0.375000000, 0.875000000, 0.0625000000, 0.562500000, 0.312500000] ## Perl Translation of: Perl6 sub vdc { my @value = shift; my$base = shift // 2; use integer; push @value, $value[-1] /$base while $value[-1] > 0; my ($x, $sum) = (1, 0); no integer;$sum += ($_ %$base) / ($x *=$base) for @value; return $sum;} for my$base ( 2 .. 5 ) { print "base $base: ", join ' ', map { vdc($_, $base) } 0 .. 10; print "\n";} ## Perl 6 First we present a fairly standard imperative version in which we mutate three variables in parallel: sub vdc($num, $base = 2) { my$n = $num; my$vdc = 0; my $denom = 1; while$n { $vdc +=$n mod $base / ($denom *= $base);$n div= $base; }$vdc;} for 2..5 -> $b { say "Base$b"; say (vdc($_,$b) for ^10).perl; say '';} Output: Base 2 (0, 1/2, 1/4, 3/4, 1/8, 5/8, 3/8, 7/8, 1/16, 9/16) Base 3 (0, 1/3, 2/3, 1/9, 4/9, 7/9, 2/9, 5/9, 8/9, 1/27) Base 4 (0, 1/4, 1/2, 3/4, 1/16, 5/16, 9/16, 13/16, 1/8, 3/8) Base 5 (0, 1/5, 2/5, 3/5, 4/5, 1/25, 6/25, 11/25, 16/25, 21/25) Here is a functional version that produces the same output: sub vdc($value,$base = 2) { my @values := $value, {$_ div $base } ... 0; my @denoms :=$base, { $_ *$base } ... *; [+] do for @values Z @denoms -> $v,$d { $v mod$base / $d; }} We first define two sequences, one finite, one infinite. When we zip those sequences together, the finite sequence terminates the loop (which, since a Perl 6 loop returns all its values, is merely another way of writing a map). We then sum with [+], a reduction of the + operator. (We could have in-lined the sequences or used a traditional map operator, but this way seems more readable than the typical FP solution.) The do is necessary to introduce a statement where a term is expected, since Perl 6 distinguishes "sentences" from "noun phrases" as a natural language might. 
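For example, with $value = 6 and the default base 2, @values is 6, 3, 1, 0 and @denoms begins 2, 4, 8, 16; the zipped terms are 0/2, 1/4, 1/8 and 0/16, which sum to 3/8, the same value that appears as the seventh entry (n = 6) of the base-2 list above.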
## PicoLisp (scl 6) (de vdc (N B) (default B 2) (let (R 0 A 1.0) (until (=0 N) (inc 'R (* (setq A (/ A B)) (% N B))) (setq N (/ N B)) ) R ) ) (for B (2 3 4) (prinl "Base: " B) (for N (range 0 9) (prinl N ": " (round (vdc N B) 4)) ) ) Output: Base: 2 0: 0.0000 1: 0.5000 2: 0.2500 3: 0.7500 4: 0.1250 5: 0.6250 6: 0.3750 7: 0.8750 8: 0.0625 9: 0.5625 Base: 3 0: 0.0000 1: 0.3333 2: 0.6667 3: 0.1111 4: 0.4444 5: 0.7778 6: 0.2222 7: 0.5556 8: 0.8889 9: 0.0370 Base: 4 0: 0.0000 1: 0.2500 2: 0.5000 3: 0.7500 4: 0.0625 5: 0.3125 6: 0.5625 7: 0.8125 8: 0.1250 9: 0.3750 ## PL/I vdcb: procedure (an) returns (bit (31)); /* 6 July 2012 */ declare an fixed binary (31); declare (n, i) fixed binary (31); declare v bit (31) varying; n = an; v = ''b; do i = 1 by 1 while (n > 0); if iand(n, 1) = 1 then v = v || '1'b; else v = v || '0'b; n = isrl(n, 1); end; return (v);end vdcb; declare i fixed binary (31); do i = 0 to 10; put skip list ('0.' || vdcb(i)); end; Output: 0.0000000000000000000000000000000 0.1000000000000000000000000000000 0.0100000000000000000000000000000 0.1100000000000000000000000000000 0.0010000000000000000000000000000 0.1010000000000000000000000000000 0.0110000000000000000000000000000 0.1110000000000000000000000000000 0.0001000000000000000000000000000 0.1001000000000000000000000000000 0.0101000000000000000000000000000 ## Prolog % vdc( N, Base, Out )% Out = the Van der Corput representation of N in given Basevdc( 0, _, [] ).vdc( N, Base, Out ) :- Nr is mod(N, Base), Nq is N // Base, vdc( Nq, Base, Tmp ), Out = [Nr|Tmp]. % Writes every element of a list to stdout; no newlineswrite_list( [] ).write_list( [H|T] ) :- write( H ), write_list( T ). % Writes the Nth Van der Corput item.print_vdc( N, Base ) :- vdc( N, Base, Lst ), write('0.'), write_list( Lst ).print_vdc( N ) :- print_vdc( N, 2 ). % Prints the first N+1 elements of the Van der Corput% sequence, each to its own lineprint_some( 0, _ ) :- write( '0.0' ).print_some( N, Base ) :- M is N - 1, print_some( M, Base ), nl, print_vdc( N, Base ).print_some( N ) :- print_some( N, 2 ). test :- writeln('First 10 members in base 2:'), print_some( 9 ), nl, write('7th member in base 4 (stretch goal) => '), print_vdc( 7, 4 ). Output: (result of test): First 10 members in base 2: 0.0 0.1 0.01 0.11 0.001 0.101 0.011 0.111 0.0001 0.1001 7th member in base 4 (stretch goal) => 0.31 true . 
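For comparison with the fraction-based entries elsewhere on this page: the digit string 0.31 in base 4 denotes 3/4 + 1/16 = 13/16, which is the value those solutions report for n = 7.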
## Python (Python3.x) The multi-base sequence generator def vdc(n, base=2): vdc, denom = 0,1 while n: denom *= base n, remainder = divmod(n, base) vdc += remainder / denom return vdc Sample output Base 2 and then 3: >>> [vdc(i) for i in range(10)][0, 0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875, 0.0625, 0.5625]>>> [vdc(i, 3) for i in range(10)][0, 0.3333333333333333, 0.6666666666666666, 0.1111111111111111, 0.4444444444444444, 0.7777777777777777, 0.2222222222222222, 0.5555555555555556, 0.8888888888888888, 0.037037037037037035]>>> ### As fractions We can get the output as rational numbers if we use the fraction module (and change its string representation to look like a fraction): >>> from fractions import Fraction>>> Fraction.__repr__ = lambda x: '%i/%i' % (x.numerator, x.denominator)>>> [vdc(i, base=Fraction(2)) for i in range(10)][0, 1/2, 1/4, 3/4, 1/8, 5/8, 3/8, 7/8, 1/16, 9/16] ### Stretch goal Sequences for different bases: >>> for b in range(3,6): print('\nBase', b) print([vdc(i, base=Fraction(b)) for i in range(10)]) Base 3[0, 1/3, 2/3, 1/9, 4/9, 7/9, 2/9, 5/9, 8/9, 1/27] Base 4[0, 1/4, 1/2, 3/4, 1/16, 5/16, 9/16, 13/16, 1/8, 3/8] Base 5[0, 1/5, 2/5, 3/5, 4/5, 1/25, 6/25, 11/25, 16/25, 21/25] ## Racket Following the suggestion. #lang racket(define (van-der-Corput n base) (if (zero? n) 0 (let-values ([(q r) (quotient/remainder n base)]) (/ (+ r (van-der-Corput q base)) base)))) By digits, extracted arithmetically. #lang racket(define (digit-length n base) (if (< n base) 1 (add1 (digit-length (quotient n base) base))))(define (digit n i base) (remainder (quotient n (expt base i)) base))(define (van-der-Corput n base) (for/sum ([i (digit-length n base)]) (/ (digit n i base) (expt base (+ i 1))))) Output. (for ([base (in-range 2 (add1 5))]) (printf "Base ~a: " base) (for ([n (in-range 0 10)]) (printf "~a " (van-der-Corput n base))) (newline)) #| Base 2: 0 1/2 1/4 3/4 1/8 5/8 3/8 7/8 1/16 9/16 Base 3: 0 1/3 2/3 1/9 4/9 7/9 2/9 5/9 8/9 1/27 Base 4: 0 1/4 1/2 3/4 1/16 5/16 9/16 13/16 1/8 3/8 Base 5: 0 1/5 2/5 3/5 4/5 1/25 6/25 11/25 16/25 21/25 |# ## REXX ### binary version This version only handles binary (base 2). Virtually any integer (including negative) is allowed and is accurate (no rounding). A range of integers is also supported. /*REXX pgm converts an integer (or a range)──►van der Corput # in base 2*/numeric digits 1000 /*handle anything the user wants.*/parse arg a b . /*obtain the number(s) [maybe]. */if a=='' then do; a=0; b=10; end /*if none specified, use defaults*/if b=='' then b=a /*assume a "range" of a single #.*/ do j=a to b /*traipse through the range. */ _=vdC(abs(j)) /*convert ABS value of integer.*/ leading=substr('-',2+sign(j)) /*if needed, elide leading sign.*/ say leading || _ /*show number (with leading - ?)*/ end /*j*/exit /*stick a fork in it, we're done.*//*──────────────────────────────────VDC [van der Corput] subroutine─────*/vdC: procedure; y=x2b(d2x(arg(1)))+0 /*convert to hex, then binary. */if y==0 then return 0 /*handle special case of zero. */ else return '.'reverse(y) /*heavy lifting by REXX*/ output when using the default input of: 0 10 0 .1 .01 .11 .001 .101 .011 .111 .0001 .1001 .0101 ### any radix up to 90 This version handles what the first version does, plus any radix up to (and including) base 90. It can also support a list (enabled when the base is negative). 
/*REXX pgm converts an integer (or a range) ──► van der Corput number *//*in base 2, or optionally, any other base up to and including base 90.*/numeric digits 1000 /*handle anything the user wants.*/parse arg a b r . /*obtain the number(s) [maybe]. */if a=='' then do; a=0; b=10; end /*if none specified, use defaults*/if b=='' then b=a /*assume a "range" of a single #.*/if r=='' then r=2 /*assume a radix (base) of 2. */z= /*placeholder for a list of nums.*/ do j=a to b /*traipse through the range. */ _=vdC(abs(j), abs(r)) /*convert ABS value of integer.*/ _=substr('-', 2+sign(j))_ /*if needed, keep leading - sign.*/ if r>0 then say _ /*if positive base, just show it.*/ else z=z _ /* ··· else build a list· */ end /*j*/ if z\=='' then say strip(z) /*if list wanted, then show it. */exit /*stick a fork in it, we're done.*//*──────────────────────────────────BASE subroutine (up to base 90)─────*/base: procedure; parse arg x,toB,inB /*get a number, toBase, inBase *//*┌────────────────────────────────────────────────────────────────────┐┌─┘ Input to this subroutine (all must be positive whole numbers): └─┐│ ││ x (is required). ││ toBase the base to convert X to. ││ inBase the base X is expressed in. ││ ││ toBase or inBase can be omitted which causes the default of │└─┐ 10 to be used. The limits of both are: 2 ──► 90. ┌─┘ └────────────────────────────────────────────────────────────────────┘*/@abc='abcdefghijklmnopqrstuvwxyz' /*Latin lowercase alphabet chars.*/@abcU=@abc; upper @abcU /*go whole hog and extend chars. */@@@=0123456789 || @abc || @abcU /*prefix 'em with numeric digits.*/@@@=@@@'<>[]{}()?~!@#$%^&*_+-=|\/;:~' /*add some special chars as well,*/ /*spec. chars should be viewable.*/numeric digits 1000 /*what the hey, support biggies. */maxB=length(@@@) /*max base (radix) supported here*/parse arg x,toB,inB /*get a number, toBase, inBase */if toB=='' then toB=10 /*if skipped, assume default (10)*/if inB=='' then inB=10 /* " " " " " *//*══════════════════════════════════convert X from base inB ──► base 10.*/#=0; do j=1 for length(x) _=substr(x,j,1) /*pick off a "digit" from X. */ v=pos(_,@@@) /*get the value of this "digits".*/ if v==0 | v>inB then call erd x,j,inB /*illegal "digit" ? */ #=#*inB + v - 1 /*construct new num, dig by dig. */ end /*j*//*══════════════════════════════════convert # from base 10 ──► base toB.*/y=; do while #>=toB /*deconstruct the new number (#).*/ y=substr(@@@,(#//toB)+1,1)y /* construct the output number. */ #=# % toB /*··· and whittle # down also. */ end /*while*/ return substr(@@@,#+1,1)y/*──────────────────────────────────VDC [van der Corput] subroutine─────*/vdC: return '.'reverse(base(arg(1),arg(2))) /*convert, reverse, append.*/ output when using the multiple inputs of (where a negative base indicates to show numbers as a list): 0 30 -2 1 30 -3 1 30 -4 1 30 -5 55582777 55582804 -80 (All outputs are a single line list.) 
.0 .1 .01 .11 .001 .101 .011 .111 .0001 .1001 .0101 .1101 .0011 .1011 .0111 .1111 .00001 .10001 .01001 .11001 .00101 .10101 .01101 .11101 .00011 .10011 .01011 .11011 .00111 .10111 .01111 .1 .2 .01 .11 .21 .02 .12 .22 .001 .101 .201 .011 .111 .211 .021 .121 .221 .002 .102 .202 .012 .112 .212 .022 .122 .222 .0001 .1001 .2001 .0101 .1 .2 .3 .01 .11 .21 .31 .02 .12 .22 .32 .03 .13 .23 .33 .001 .101 .201 .301 .011 .111 .211 .311 .021 .121 .221 .321 .031 .131 .231 .1 .2 .3 .4 .01 .11 .21 .31 .41 .02 .12 .22 .32 .42 .03 .13 .23 .33 .43 .04 .14 .24 .34 .44 .001 .101 .201 .301 .401 .011 .V[Is1 .W[Is1 .X[Is1 .Y[Is1 .Z[Is1 .<[Is1 .>[Is1 .[[Is1 .][Is1 .{[Is1 .}[Is1 .([Is1 .)[Is1 .?[Is1 .~[Is1 .![Is1 .@[Is1 .#[Is1 .$[Is1 .%[Is1 .^[Is1 .&[Is1 .*[Is1 .0]Is1 .1]Is1 .2]Is1 .3]Is1 .4]Is1 ## Ruby The multi-base sequence generator def vdc(n, base=2) str = n.to_s(base).reverse str.to_i(base).quo(base ** str.length)end (2..5).each do |base| puts "Base #{base}: " + Array.new(10){|i| vdc(i,base)}.join(", ")end Sample output Base 2: 0/1, 1/2, 1/4, 3/4, 1/8, 5/8, 3/8, 7/8, 1/16, 9/16 Base 3: 0/1, 1/3, 2/3, 1/9, 4/9, 7/9, 2/9, 5/9, 8/9, 1/27 Base 4: 0/1, 1/4, 1/2, 3/4, 1/16, 5/16, 9/16, 13/16, 1/8, 3/8 Base 5: 0/1, 1/5, 2/5, 3/5, 4/5, 1/25, 6/25, 11/25, 16/25, 21/25 ## Seed7 Translation of: D $ include "seed7_05.s7i"; include "float.s7i"; const func float: vdc (in var integer: number, in integer: base) is func result var float: vdc is 0.0; local var integer: denom is 1; var integer: remainder is 0; begin while number <> 0 do denom *:= base; remainder := number rem base; number := number div base; vdc +:= flt(remainder) / flt(denom); end while; end func; const proc: main is func local var integer: base is 0; var integer: number is 0; begin for base range 2 to 5 do writeln; writeln("Base " <& base); for number range 0 to 9 do write(vdc(number, base) digits 6 <& " "); end for; writeln; end for; end func; Output: Base 2 0.000000 0.500000 0.250000 0.750000 0.125000 0.625000 0.375000 0.875000 0.062500 0.562500 Base 3 0.000000 0.333333 0.666667 0.111111 0.444444 0.777778 0.222222 0.555556 0.888889 0.037037 Base 4 0.000000 0.250000 0.500000 0.750000 0.062500 0.312500 0.562500 0.812500 0.125000 0.375000 Base 5 0.000000 0.200000 0.400000 0.600000 0.800000 0.040000 0.240000 0.440000 0.640000 0.840000 ## Sidef Translation of: Perl func vdc(value, base=2) { while (value[-1] > 0) { value.append(value[-1] / base int); }; var (x, sum) = (1, 0); value.each { |i| sum += ((i % base) / (x *= base)); }; return sum;} 2..5 each { |base| var seq = (0..9 map {|i| vdc([i], base) }); "base %d: %s\n".printf(base, seq.map{|n| "%.4f" % n}.join(', '));} Output: base 2: 0.0000, 0.5000, 0.2500, 0.7500, 0.1250, 0.6250, 0.3750, 0.8750, 0.0625, 0.5625 base 3: 0.0000, 0.3333, 0.6667, 0.1111, 0.4444, 0.7778, 0.2222, 0.5556, 0.8889, 0.0370 base 4: 0.0000, 0.2500, 0.5000, 0.7500, 0.0625, 0.3125, 0.5625, 0.8125, 0.1250, 0.3750 base 5: 0.0000, 0.2000, 0.4000, 0.6000, 0.8000, 0.0400, 0.2400, 0.4400, 0.6400, 0.8400 ## Tcl The core of this is code to handle digit reversing. Note that this also tackles negative numbers (by preserving the sign independently). proc digitReverse {n {base 2}} { set n [expr {[set neg [expr {$n < 0}]] ? -$n : $n}] set result 0.0 set bit [expr {1.0 /$base}] for {} {$n > 0} {set n [expr {$n / $base}]} { set result [expr {$result + $bit * ($n % $base)}] set bit [expr {$bit / $base}] } return [expr {$neg ? -$result :$result}]} Note that the above procedure will produce terms of the Van der Corput sequence by default. 
# Print the first 10 terms of the Van der Corput sequencefor {set i 1} {$i <= 10} {incr i} { puts "vanDerCorput($i) = [digitReverse $i]"} # In other basesforeach base {3 4 5} { set seq {} for {set i 1} {$i <= 10} {incr i} { lappend seq [format %.5f [digitReverse $i$base]] } puts "${base}: [join$seq {, }]"} Output: vanDerCorput(1) = 0.5 vanDerCorput(2) = 0.25 vanDerCorput(3) = 0.75 vanDerCorput(4) = 0.125 vanDerCorput(5) = 0.625 vanDerCorput(6) = 0.375 vanDerCorput(7) = 0.875 vanDerCorput(8) = 0.0625 vanDerCorput(9) = 0.5625 vanDerCorput(10) = 0.3125 3: 0.33333, 0.66667, 0.11111, 0.44444, 0.77778, 0.22222, 0.55556, 0.88889, 0.03704, 0.37037 4: 0.25000, 0.50000, 0.75000, 0.06250, 0.31250, 0.56250, 0.81250, 0.12500, 0.37500, 0.62500 5: 0.20000, 0.40000, 0.60000, 0.80000, 0.04000, 0.24000, 0.44000, 0.64000, 0.84000, 0.08000 ## XPL0 include c:\cxpl\codes; \intrinsic 'code' declarations func real VdC(N); \Return Nth term of van der Corput sequence in base 2int N;real V, U;[V:= 0.0; U:= 0.5;repeat N:= N/2; if rem(0) then V:= V+U; U:= U/2.0;until N=0;return V;]; int N;for N:= 0 to 10-1 do [IntOut(0, N); RlOut(0, VdC(N)); CrLf(0)] Output: 0 0.00000 1 0.50000 2 0.25000 3 0.75000 4 0.12500 5 0.62500 6 0.37500 7 0.87500 8 0.06250 9 0.56250 ## zkl Translation of: Python fcn vdc(n,base=2){ vdc:=0.0; denom:=1; while(n){ reg remainder; denom *= base; n, remainder = n.divr(base); vdc += (remainder.toFloat() / denom); } vdc} Translation of: Ruby fcn vdc(n,base=2){ str:=n.toString(base).reverse(); str.toInt(base).toFloat()/(base.toFloat().pow(str.len()))} Output: [0..10].apply(vdcR).println("base 2"); L(0,0.5,0.25,0.75,0.125,0.625,0.375,0.875,0.0625,0.5625,0.3125)base 2 [0..10].apply(vdc.fp1(3)).println("base 3"); L(0,0.333333,0.666667,0.111111,0.444444,0.777778,0.222222,0.555556,0.888889,0.037037,0.37037)base 3 `
2015-01-26 08:22:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 5, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3829793930053711, "perplexity": 6102.265430761232}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115860608.29/warc/CC-MAIN-20150124161100-00202-ip-10-180-212-252.ec2.internal.warc.gz"}
http://mathoverflow.net/questions/154157/intersection-of-free-objects
# Intersection of free objects I am aware that the following question is a very basic one and therefore I would not be at all offended if it were to be closed. Moreover, I am not familiar at all with category theory. Let $\mathcal{C}$ be a concrete category and $X$ be a free object of $\mathcal{C}$. If $Y_1$ and $Y_2$ are both free subobjects of $X$, then is the intersection, $Y_1 \cap Y_2,$ free? - It's not in general true for the category of modules for a ring. For example, let $R=\mathbb{C}[x]/(x^2)$, let $X=R\oplus R$ be the free module on two generators, and let $Y_1$ and $Y_2$ be the submodules of $X$ generated by $(1,0)$ and $(1,x)$ respectively. Then $Y_1$ and $Y_2$ are both free modules on one generator, but $Y_1\cap Y_2$ is one-dimensional, spanned by $(x,0)$, and is not free. - Great example. Thanks, Jeremy. –  Samuele Giraudo Jan 10 '14 at 22:05 This is true for submonoids of a free monoid, but not for free submonoids of non-free monoid. See Tilson, B., The Intersection of Free Submonoids of a Free Monoid is Free. Semigroup Forum 4, (1972), 345-350. Addendum: Recently I found an article: Shubh Narayan Singh, K.V. Krishna. A sufficient condition for the Hanna Neumann property of submonoids of a free monoid. Semigroup Forum, 86(2013), pp.537–554. It contains many useful references. - Indeed, I know for the while only the proof given in Lothaire's $\textit{Combinatorics on words}$. The reason why I ask this is because I consider intersection of many other free (combinatorial) algebraic structures than monoids and I would hope that a categorical argument imply their freeness. Since this is not the case (see Jeremy's answer) the only way seems to proceed case by case. –  Samuele Giraudo Jan 10 '14 at 22:10 Yes, I think you are right. –  Boris Novikov Jan 10 '14 at 22:23 And by the way, thanks for the reference! –  Samuele Giraudo Jan 10 '14 at 22:26 You are welcome. –  Boris Novikov Jan 10 '14 at 22:26
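To spell out the last step of Jeremy's example: an element of $Y_1 \cap Y_2$ can be written both as $a(1,0)=(a,0)$ and as $b(1,x)=(b,bx)$ with $a,b \in R$, so $a=b$ and $bx=0$; in $R=\mathbb{C}[x]/(x^2)$ this forces $b \in (x)$, hence $Y_1 \cap Y_2 = \mathbb{C}\,(x,0)$. Since a free $R$-module of rank $k$ has dimension $2k$ over $\mathbb{C}$, a one-dimensional submodule cannot be free.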
2015-05-27 10:14:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9226412773132324, "perplexity": 264.33348704087587}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928923.85/warc/CC-MAIN-20150521113208-00266-ip-10-180-206-219.ec2.internal.warc.gz"}
http://www.gardenersworld.com/forum/problem-solving/frosted-lilies/78262.html
1 message 07/04/2013 at 17:13 Back in March, when we had a few nice days, quite a few of my potted lilies decided to grow.  They got to between 4 - 7 inches in height, nice bushy leaves on top.   Then came the snow and very, very hard frosts with wind chill.  Most of the tops are now wilted, mushy and not at all well.  does anyone think they will produce a second shoot?  If so, shall I take off the dead shoot?  Or have we had it for this year, or indeed for ever?  Any ideas very welcome please. 1 message
2014-10-25 07:50:28
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8444984555244446, "perplexity": 5581.353775774623}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119647884.33/warc/CC-MAIN-20141024030047-00198-ip-10-16-133-185.ec2.internal.warc.gz"}
https://www.cs.grinnell.edu/~curtsinger/teaching/2018F/CSC151/readings/vectors.html
Monday, Nov 5, 2018 Summary Vectors are data structures that are very similar to lists in that they arrange data in linear fashion. Vectors differ from lists in two significant ways: Unlike lists, vectors are indexed and vectors are mutable. ## Introduction: Deficiencies of Lists As you’ve seen in many of the procedures and programs we’ve written so far, there are many problems in which we have to deal with collections of information. We have several techniques for representing collections of data: • We can represent the collection as a list. • We can represent the collection as a nested list. • We will soon learn how we can represent the collection persistently as a file. Representing a collection as a list has some problems. In particular, it is relatively expensive to get a particular element and it is equally expensive to change a particular element. Why is it expensive to get an element (say, the tenth element)? In the case of a list, we need to follow the cdr of each pair through the list until we reach the element. In the case of a tree, we need to figure out how many values are in the left subtree to decide where to look. Changing an element may be even worse, because once we’ve reached the position, we need to build the structure back to a new form. Does this mean that lists and other similar structures are inappropriate ways to represent collections? Certainly not. Rather, they work very well for some purposes (e.g., it is easy to extend a list) and less well for other purposes (e.g., extracting and changing). To resolve these deficiencies, Scheme provides an alternate mechanism for representing collections, the vector. ## Indexing, a key feature of vectors You may have noted that when we use lists to group data (e.g., the tallies for the words in a book), we need to use list-ref or repeated calls to cdr to get later elements of the list. Unfortunately, list-ref works by cdr’ing down the list. Hence, it takes about five steps to get to the fifth element of the list and about one hundred steps to get to the one hundredth element of a list. Similarly, to get to the fifth element of a file, we’ll need to read the preceding elements and to get to the hundredth element, we’ll also need to read through the preceding elements. It would be nicer if we could access any element of the group of data in the same amount of time (preferably a small amount of time). Vectors address this problem. Vectors contain a fixed number of elements and provide indexed access (also called random access) to those elements, in the sense that each element, regardless of its position in the vector, can be recovered in the same amount of time. In this respect, a vector differs from a list or a file: The initial element of a list is immediately accessible, but subsequent elements are increasingly difficult and time-consuming to access. ## Mutation, another key feature of vectors You may have also noted that we occasionally want to change an element of a group of data (e.g., to change a student’s grade in the structure we use to represent that student; to update a tally). When we use lists, we essentially need to build a new list to change one element. When we use files, we often have to build a new file, copying both preceding and subsequent values. Vectors are mutable data structures: It is possible to replace an element of a vector with a different value, just as one can take out the contents of a container and put in something else instead. 
It’s still the same vector after the replacement, just as the container retains its identity no matter how often its contents are changed. The particular values that a vector contains at some particular moment constitute its state. One could summarize the preceding paragraph by saying that the state of a vector can change and that state changes do not affect the underlying identity of the vector. ## A practical detail: How DrRacket displays vectors When showing a vector, DrRacket follows a format much like the list, but with a preceding pound sign, #. That is, the elements of the vector are separated by spaces, enclosed in parentheses, and with an extra # in the front. For instance, here’s how Scheme shows a vector containing the strings "alpha", "beta", and "gamma", in that order: '#("alpha" "beta" "gamma") The mesh (also called pound, sharp, hash, or octothorp) character distinguishes the vector from the list containing the same elements. Some implementations of Scheme permit us to use vector literals, in which a programmer can use a similar syntax to specify a vector when writing a Scheme program or typing commands and definitions into the Scheme interactive interface. In some such implementations, the literal begins with the hash mark. In others, the programmer must place a single quotation mark before the mesh so that Scheme will not try to evaluate the vector as if it were some exotic kind of procedure call. We traditionally recommend that you avoid using this notation just as we recommend that you avoid the corresponding list literal notation for lists. ## Vector procedures Standard Scheme provides the following fundamental procedures for creating vectors and selecting and replacing their elements. You’ll find that many of them correspond to similar list procedures. ### vector The constructor vector takes any number of arguments and assembles them into a vector, which it returns. > (vector "alpha" "beta" "gamma") '#("alpha" "beta" "gamma") '> (vector) ; the empty vector -- no elements! #() > (define beta 2) > (vector "alpha" beta (list "gamma" 3) (vector "delta" 4) (vector "epsilon")) '#("alpha" 2 ("gamma" 3) #("delta" 4) #("epsilon")) As the last example shows, Scheme vectors, like Scheme lists, can be heterogeneous, containing elements of various types. ### make-vector The make-vector procedure takes two arguments, a natural number k and a Scheme value obj, and returns a k-element vector in which each position is occupied by obj. > (make-vector 12 "foo") '#("foo" "foo" "foo" "foo" "foo" "foo" "foo" "foo" "foo" "foo" "foo" "foo") > (make-vector 4 0) '#(0 0 0 0) > (make-vector 0 4) ; the empty vector, again '#() The second argument is optional; if you omit it, the value that initially occupies each of the positions in the array is left unspecified. Various implementations of Scheme have different ways of filling them up, so you should omit the second argument of make-vector only when you intend to replace the contents of the vector right away. ### vector? The type predicate vector? takes any Scheme value as argument and determines whether it is a vector. > (vector? (vector "alpha" "beta" "gamma")) #t > (vector? (list "alpha" "beta" "gamma")) ; a list, not a vector #f > (vector? "alpha beta gamma") ; a string, not a vector #f ### vector-length The vector-length procedure takes one argument, which must be a vector, and returns the number of elements in the vector. 
Unlike the length procedure for lists, which must look through the whole list to find the length, vector-length can immediately determine the length of a vector. > (vector-length (vector 3 1 4 1 5 9)) 6 > (vector-length (vector "alpha" "beta" "gamma")) 3 > (vector-length (vector)) 0 ### vector-ref The selector vector-ref takes two arguments – a vector vec and a natural number k (which must be less than the length of vec). It returns the element of vec that is preceded by exactly k other elements. In other words, if k is 0, you get the element that begins the vector; if k is 1, you get the element after that; and so on. > (vector-ref (vector 3 1 4 1 5 9) 4) 5 > (vector-ref (vector "alpha" "beta" "gamma") 0) alpha > (vector-ref (vector "alpha" "beta" "gamma") 3) vector-ref: out of bounds: 3 ### vector-set! All of the previous procedures look a lot like list procedures, except that many are more efficient (e.g., vector? and vector-length take a constant number of steps; list? takes a number of steps proportional to the the length of the list and list-ref takes a number of steps proportional to the index). Now let’s see a procedure that’s much different. We can use procedures to change vectors. The mutator vector-set! takes three arguments – a vector vec, a natural number k (which must be less than the length of vec), and a Scheme value obj – and replaces the element of vec that is currently in the position indicated by k with obj. This changes the state of the vector irreversibly; there is no way to find out what used to be in that position after it has been replaced. It is a Scheme convention to place an exclamation point meaning “Proceed with caution!” at the end of the name of any procedure that makes such an irreversible change in the state of an object. The value returned by vector-set! is unspecified; one calls vector-set! only for its side effect on the state of its first argument. > (define sample-vector (vector "alpha" "beta" "gamma" "delta" "epsilon")) > sample-vector '#("alpha" "beta" "gamma" "delta" "epsilon") > (vector-set! sample-vector 2 "zeta") > sample-vector ; same vector, now with changed contents '#("alpha" "beta" "zeta" "delta" "epsilon") > (vector-set! sample-vector 0 "foo") > sample-vector ; changed contents again '#("foo" "beta" zeta "delta" "epsilon") > (vector-set! sample-vector 2 -38.72) > sample-vector ; and again '#("foo" "beta" -38.72 "delta" "epsilon") Vectors introduced into a Scheme program by means of the mesh-and-parentheses notation are supposed to be “immutable”: applying vector-set! to such a vector is an error, and the contents of such vectors are therefore constant. (Warning! Some implementations of Scheme, including the ones we use, don’t enforce this rule.) ### vector->list and list->vector The vector->list procedure takes any vector as argument and returns a list containing the same elements in the same order; the list->vector procedure performs the converse operation. > (vector->list (vector 31 27 16)) '(31 27 16) > (vector->list (vector)) '() > (list->vector (list #\a #\b #\c)) '#(#\a #\b #\c) > (list->vector (list 31 27 16)) '#(31 27 16) ### vector-fill! The vector-fill! procedure takes two arguments, the first of which must be a vector. It changes the state of that vector, replacing each of the elements it formerly contained with the second argument. > (define sample-vector (vector "rho" "sigma" "tau" "upsilon")) > sample-vector ; original vector '#("rho" "sigma" "tau" "upsilon") > (vector-fill! 
sample-vector "kappa") > sample-vector ; same vector, now with changed contents '#("kappa" "kappa" "kappa" "kappa") The vector-fill! procedure is invoked only for its side effect and returns an unspecified value. While some older implementations of Scheme may lack the list->vector, vector->list, and vector-fill! procedures, it is straightforward to define them in terms of the others. ## Selecting random elements from vectors You may recall that we recently defined a procedure, random-elt, that randomly selects an element from a list. ;;; Procedure: ;;; random-elt ;;; Parameters: ;;; lst, a non-empty list ;;; Purpose: ;;; Unpredictably pick an element of lst. ;;; Produces: ;;; val, a value ;;; Preconditions: ;;; Postconditions: ;;; * val is an element of lst. ;;; * If lst contains more than one element, it is difficult to predict ;;; which element val is. (define random-elt (lambda (lst) (list-ref lst (random (length lst))))) The procedure is simple and straightforward. But it’s slow. Since we have to find the length of the list each time we look for a random element, we’ll spend time and effort stepping through the elements of the list with cdr. Fortunately, it’s straightforward to write a similar procedure using vectors. We just change the list procedures to their corresponding vector versions. ;;; Procedure: ;;; random-vector-elt ;;; Parameters: ;;; vec, a non-empty vector ;;; Purpose: ;;; Unpredictably pick an element of vec. ;;; Produces: ;;; val, a value ;;; Preconditions: ;;; Postconditions: ;;; * val is an element of vec. ;;; * If vec contains more than one element, it is difficult to predict ;;; which element val is. (define random-vector-elt (lambda (vec) (vector-ref vec (random (vector-length vec))))) Let’s check it. > (define words (vector "alpha" "beta" "gamma" "delta" "epsilon")) > (random-vector-elt words) "beta" > (random-vector-elt words) "epsilon" > (random-vector-elt words) "alpha" > (random-vector-elt words) "alpha" > (random-vector-elt words) "beta" We’ll see in the lab just how much difference this makes. ## Implementing number vectors We frequently store only one type in a collection. For example, just as we might restrict a list or pair structure to contain only numbers, we might restrict the numbers in a vector to store only integers. Those integers might, for example, represent tallies of letters or words. Say we had such a vector of numbers–how could we increment the tally in one of the positions? After some reflection, it seems We need three steps, one to get the current value in the vector, another to increment that value, and the last step to make that value the new entry in the given position. We put these steps together as follows. ;;; Procedure: ;;; number-vector-increment-at! ;;; Parameters: ;;; vec, a vector ;;; index, an integer ;;; Purpose: ;;; Increment the value at a vector position ;;; Produces: ;;; [Nothing; called for side effect.] ;;; Preconditions: ;;; (vector-ref vec index) is a number ;;; Postconditions: ;;; Let val be (vector-ref vec index) before the procedure call. After the ;; call (vector-ref vec index) produces val+1. (define number-vector-increment-at! (lambda (vec index) (vector-set! vec index (increment (vector-ref vec index))))) The fact that vectors allow mutation makes the increment straightforward. What if we wanted to increment every value in a vector? With lists we might think about using basic list recursion to pass over the list and add one to each item. (And then we might think better of it and use map). 
There is no analog for car/cdr that we might use to process lists, neither is there an analog for map. Fortunately, indexing is fast in vectors, so we can use numeric recursion to iterate over all the positions in a vector, applying our increment function along the way. Thus, we might track pos, the current position to modify in the vector, starting at zero and ending when we reach the length of the vector. We can encapsulate this repeated mutation with a named let ;;; Procedure: ;;; number-vector-increment! ;;; Parameters: ;;; vec, a vector ;;; Purpose: ;;; Increment the value at all vector positions ;;; Produces: ;;; [Nothing; called for side effect.] ;;; Preconditions: ;;; (vector-ref vec index) for 0 <= index < (vector-length vec) is a number. ;;; number-vect-increment-at! is defined ;;; Postconditions: ;;; Let val be (vector-ref vec index) before the procedure call. After the ;; call (vector-ref vec index) produces val+1. (define number-vector-increment! (lambda (vec) (let ([len (vector-length vec)]) ; unchanging value, tells recursion to stop (let kernel! ([pos 0]) ; Start the recursion at the first position (when (< pos len) ; When the position is valid, (number-vector-increment-at! vec pos) ; increment the number at pos (kernel! (+ 1 pos))))))) ; and process the rest of the vector There are various ways to mutate all vector elements; the lab will suggest some alternatives. Unfortunately, we do not always have a special helper function like number-vector-increment-at! that allows us to write such streamlined code. Instead we must combine the indexing and mutation steps directly in the recursion. As an example, suppose we wished to convert the tallies to percentages by dividing each number by the sum of all the numbers in the vector. Assuming you have a means of totalling these numbers (a procedure you will write in the lab), we still need to iterate over all vector positions, just as we did in number-vector-increment! only this time we use the position variable pos directly to index the vector with vector-ref, rather than with a helper. Putting this together, we might write the following procedure. ;;; Procedure: ;;; number-vector-scale! ;;; Parameters: ;;; vec, a vector ;;; divisor, a number ;;; Purpose: ;;; Scale all the elements in the vector by dividing by the given ;;; divisor. ;;; Produces: ;;; [Nothing; called for side effect.] ;;; Preconditions: ;;; (vector-ref vec index) for 0 <= index < (vector-length vec) is a number. ;;; Postconditions: ;;; Let val be (vector-ref vec index) before the procedure call. After the ;; call (vector-ref vec index) produces val/divisor. (define number-vector-scale! (lambda (vec divisor) (let ([len (vector-length vec)]) ; unchanging and tells recursion to stop (let kernel! ([pos 0]) ; Start the recursion at the first position (when (< pos len) ; When the position is valid, (vector-set! vec ; Set the new value in the vector pos ; at the current position (/ (vector-ref vec pos) divisor)) ; to the quotient (kernel! (+ 1 pos))))))) ; and process the rest of the vector Of course, we need not change the vector as we iterate over it. Perhaps we just want to find the largest value in a vector. We still need to iterate over all the positions, except we might now use the standard recursive pattern that requires us to use a combination step to get a complete answer from the partial (recursive) answer. 
;;; Procedure: ;;; number-vector-largest ;;; Parameters: ;;; vec, a vector ;;; Purpose: ;;; Find the largest number in a vector ;;; Produces: ;;; largest, a number ;;; Preconditions: ;;; (vector-ref vec index) for 0 <= index < (vector-length vec) is a number. ;;; Postconditions: ;;; (vector-ref vec index) <= largest for 0 <= index < (vector-length vec) ;;; largest is a value in vec, i.e., there exists an integer index such that ;;; 0 <= index < (vector-length vec) and ;;; (vector-ref vec index) = largest (define number-vector-largest (lambda (vec) (let ([last (- (vector-length vec) 1)]) ; last position to test (let kernel ([pos 0]) ; Start the recursion at the first position (if (= pos last) ; We are at the last position, so return the number (vector-ref vec pos) ; Otherwise return the maximum of the current position and ; the largest number in the rest of the vector (max (vector-ref vec pos) (kernel (+ 1 pos)))))))) ## Patterns of recursion over vectors Each time we learn a new structure, we learn techniques for recursion over that structure. As the previous examples suggested, recursion over vectors is relatively straightforward, but usually requires that we have a helper procedure that includes additional parameters - the current position in the vector. (We also typically precompute a stopping point so that we don’t have to recompute it for each pass through the kernel.) The test for the base case is then to check whether the current position has reached the stopping point and the “simplify” step is to add 1 to the position. As usual, the “combine” step is problem dependent. (define vector-proc (lambda (vec other) (let ([len (vector-length vec)]) (let kernel ([pos 0]) (if (= pos len) (base-case vec other) (combine (vector-ref vec pos) (kernel (+ pos 1)))))))) At times, it’s better to start at the end of the vector and work backwards. In this strategy, we get the base case when the position reaches 0 and we simplify by subtracting 1. (define vector-proc (lambda (vec other) (let kernel ([pos (- (vector-length vec) 1)]) (if (< pos 0) (base-case vec other) (combine (vector-ref vec pos) (kernel (- pos 1))))))) Because vectors are mutable, we often use an imperative pattern with (perhaps multiple) operations that have side-effects. (define vector-proc (lambda (vec other) (let ([len (vector-length vec)]) (let kernel! ([pos 0]) (cond [(= pos len) (base-case! vec other) ...] [else (operation! vec pos) ... (kernel! (+ pos 1))]))))) ## Summary of important vector procedures (list->vector lst) Convert a list to a vector. (vector val_1 ... val_n) Create a vector with the given elements. (vector? val) Determine if val is a vector. (vector-fill! vec val) Fill vec with multiple copies of val. (vector-ref vec pos) Extract an element from vec. (vector-set! vec pos val) Set element pos of vec to val. (vector->list vec) Convert a vector to a list. ## Self checks ### Check 1: Creating simple vectors a. In DrRacket’s Interactions pane, type in a vector literal that denotes a vector containing just the two elements 3.14159 and 2.71828. How does DrRacket display the value of this vector? b. Create a vector that contains the same two values by using the vector procedure. c. Create a vector that contains the same two values by using the make-vector and vector-set! procedures. ### Check 2: Processing numeric vectors a. Make a copy of number-vector-increment-at! from above. b. Try using number-vector-increment-at! on a vector from the previous check. 
> (define v1 (vector 3.14159 2.71828)) > (number-vector-increment-at! v1 1) > v1 c. Use number-vector-increment! on the vector to verify it behaves as intended. > (define v2 (make-vector 2 3.14159)) > (vector-set! v2 1 2.71828) > (number-vector-increment! v2) > v2 d. Checks 2.b and 2.c relied on the vectors you created in 1.b and 1.c, respectively. What do you suppose would happen if we tried these operations on the vector you created in check 1.a? > (number-vector-increment-at! '#(3.14159 2.71828) 1) e. Verify your prediction. Why do you think this happens?
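As one more worked instance of the first recursion pattern shown under "Patterns of recursion over vectors", here is a minimal sketch that totals the numbers in a vector; the name vector-sum is only a suggestion, and any totalling procedure with this behavior would serve equally well (for example, to compute the divisor passed to number-vector-scale!).
(define vector-sum
  (lambda (vec)
    (let ([len (vector-length vec)])        ; precompute the stopping point
      (let kernel ([pos 0])                 ; start the recursion at position 0
        (if (= pos len)
            0                               ; base case: past the last element, the total is 0
            (+ (vector-ref vec pos)         ; combine: add the current element
               (kernel (+ pos 1))))))))     ; to the total of the remaining positions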
2019-01-23 05:25:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6444261074066162, "perplexity": 3345.3297403878137}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583897417.81/warc/CC-MAIN-20190123044447-20190123070447-00130.warc.gz"}
https://xanderx.com/page/2/
I’ve had some trouble with Google Play Music’s scan and match service. It sometimes matches tracks to the wrong versions of them. Sometimes the volume of a track is wildly different from what I have locally (with or without ReplayGain). Unfortunately, there is no way to disable matching in Google’s official Music Manager application. gmusicapi supports uploading to Google Play Music without their “scan and match” service. gmusicapi-scripts utilises gmusicapi. Let’s use them! 1. Install gmusicapi-scripts: # Windows py -3 -m pip install gmusicapi-scripts # Others pip install gmusicapi-scripts 2. Convert tracks (if needed) to 320kbps MP3 and place them in <directory>. • If you place non-MP3 files in this directory, gmsync will try to convrt them with FFMPEG or AVConv if they are on the PATH. Look at the docs to see what formats it supports. 3. Run gmsync. Follow any instructions it provides: gmsync up <directory> gmsync uploads without matching by default. Perfect. ## Working around Intel CPU Running at a Maximum of 49%/50% My laptop had been feeling slower than it had used to, and I was getting more and more certain that it wasn’t just me. One time I opened Task Manager and noticed that whenever I put load on the machine, it maxed out at 49%. Every time. Cooling wasn’t the issue as the laptop fan (which is a dumb fan controlled by hardware, not software) wasn’t spinning up much. After searching online, I stumbled across a thread that would provide the (really rather insane) workaround: Move the Intel processor driver so Windows does not use it. Thanks to everyone there for the discussion. 1. Restart in the advanced command prompt mode. (Hold Shift when clicking Restart) 3. Run the following: cd drivers move intelppm.sys intelppm.sys.old exit 4. Select the option to exit and continue to Windows 10. One comment on that thread suggests that this also works for AMD CPUs by moving amdppm.sys instead of intelppm.sys. Update 2017-06-23: I previously suggested to add the .bak extension to the driver file. However, Windows recognises this extension, and so it will restore the driver back to its original location at the next restart, meaning the problem will return! I’ve changed the instructions to suggest using .old instead, which I don’t believe Windows recognises, but is still fairly obvious to users. ## Tips for Ripping Blu-Ray Audio I recently ripped my first Blu-Ray audio discs. Here are a few tips I’ve discovered along the way: ## tl;dr Backup the disc using MakeMKV and extract the audio using the updated version of the command under Putting it all together. ## Ripping the discs MakeMKV can be used to rip and, if necessary, decrypt Blu-Ray audio discs. While it is in beta, it can be evaluated by using the current beta key. I use the complete Blu-Ray backup feature so I have the raw stream files to work with. ## Extracting the audio from the video files ffmpeg can be used to extract the audio track(s) from the video streams on the Blu-Ray. First interrogate one of your streams using ffprobe to see how the different tracks are laid out: ffprobe -hide_banner BDMV/STREAM/00002.m2ts I use -hide_banner to reduce the amount of text printed to the terminal. If you have troubles and decide to ask others for help, be sure to omit -hide_banner as the information it emits can be useful when debugging! 
Here is the output I got: Input #0, mpegts, from 'BDMV/STREAM/00002.m2ts': Duration: 00:01:47.07, start: 4200.000000, bitrate: 7133 kb/s Program 1 Stream #0:0[0x1011]: Video: h264 (High) (HDMV / 0x564D4448), yuv420p(progressive), 1920x1080 [SAR 1:1 DAR 16:9], 23.98 fps, 23.98 tbr, 90k tbn, 47.95 tbc Stream #0:1[0x1100]: Audio: pcm_bluray (HDMV / 0x564D4448), 48000 Hz, stereo, s16, 1536 kb/s In my case I only have one audio stream to worry about at address 0:1. Note it is labelled s16 which means that it is a signed 16-bit depth audio stream. These details will be important in a moment. Next comes the extraction: Update 2017-09-10: I now think it is much more straightforward to trust FFMPEG to do the right thing: for i in BDMV/STREAM/*.m2ts; do ffmpeg -hide_banner -i "$i" -map 0:1 "$(basename "${i%.*}").flac"; done Note that I do not set the audio codec, and merely set the file extension to .flac as that is the format that I want. For reference, this is the original command I used: for i in BDMV/STREAM/*.m2ts; do ffmpeg -hide_banner -i "$i" -map 0:1 -acodec pcm_s16le "$(basename "${i%.*}").wav"; done The -map 0:1 refers to our address from earlier. The s16 part of -acodec pcm_s16le matches the description of the stream ffprobe gave us. We cannot use -acodec copy as ffmpeg cannot encode pcm_bluray. Make sure to use .wav at the end of the filename, as ffmpeg considers it important. I was originally trying to use .pcm, but all I got was this error message: [NULL @ 000000000283dc20] Unable to find a suitable output format for '00002.pcm' If you have multiple streams, re-run this command, changing -map, -acodec and the output filename (to not overwrite your previous extractions!) as appropriate. ## Removing initial silence After I had extracted the audio, I noticed that every track had about 1.0-1.6 seconds of silence at the beginning. I had no interest in keeping this, so I used ffmpeg with the silenceremove filter: mkdir no-silence for i in *.wav; do ffmpeg -i "$i" -af silenceremove=start_periods=1:detection=peak "no-silence/$(basename "$i")"; done start_periods=1 means remove one block of silence from the beginning of the track, a.k.a. the silence before any audio begins. I use detection=peak because I know I’m working with audio with digital silence; if you’re working with audio that was originally recorded from analogue, omit this option. ## Putting it all together Incidentally, this filter could have been added to the original extraction command, if I knew that I wanted it at the time. :) Update 2017-09-10: As above, I now think it is much more straightforward to trust FFMPEG to do the right thing: for i in BDMV/STREAM/*.m2ts; do ffmpeg -hide_banner -i "$i" -map 0:1 -af silenceremove=start_periods=1:detection=peak "$(basename "${i%.*}").flac"; done For reference, this is the original command I used: for i in BDMV/STREAM/*.m2ts; do ffmpeg -hide_banner -i "$i" -map 0:1 -acodec pcm_s16le -af silenceremove=start_periods=1:detection=peak "$(basename "${i%.*}").wav"; done Update 2017-05-14: Changed ${i/.*} to ${i%.*} to remove file extensions, as the new way will only remove everything following the last . in the filename, not the first. Thanks to my good friend Mark Holland-Avery for the suggestion! ## Fix Yarn global installs on macOS Sierra I was having some trouble getting yarn to globally install some commands, such as jest-cli.
yarn config set prefix /usr/local/ yarn global remove <package> yarn global add <package> ## Using the option key in macOS Terminal I like using the Meta+left/right to navigate back and forth between words. On macOS by default, no key is mapped to Meta! Fortunately for me, it is possible to configure the Option key to act as a Meta key. With the Terminal app selected, this can be done by going into Terminal > Preferences (Cmd+,) > Profiles > Select your profile > Keyboard > Use Option as Meta. ## React: Unexpected token < I’ve been fiddling around with React recently and made some pages with some normal-looking URLs, despite being a single-page application. This was done using React Router and configuring it to use the History API with browserHistory. Here is one of those URLs: http://localhost:3000/course/react-flux-building-applications When the page was refreshed (either manually or by the hot loader), or when I visited that page manually, I got this odd error in the console: Unexpected token < When I inspected the contents of bundle.js, which is what I had configured webpack to bundle all of my JS into, I noticed it wasn’t JS at all; it was the HTML for my index page! <!DOCTYPE html> <html lang="en"> <body> <div id="app"></div> <script src="bundle.js"></script> </body> </html> After a few moments of wondering, I realised what had happened. It was pretty dumb. Can you see it? What happened is that bundle.js was being loaded from: http://localhost:3000/course/bundle.js And not: http://localhost:3000/bundle.js Since the URL was not recognised, webpack or React Router (I’m unsure exactly which) merely served index.html instead of a 404 error. This meant that HTML was served for a JS request! The simple fix is to change: <script src="bundle.js"></script> To: <script src="/bundle.js"></script> That way the requests always go to http://localhost:3000/bundle.js. This may need to be solved slightly better if I don’t want to host the application at the root of a domain, but it’ll work for now while I experiment! ## Working around Chroma Subsampling over HDMI on Nvidia cards If you have chroma subsampling issues when connecting a display via HDMI on an Nvidia card, despite all the settings appearing as if there shouldn’t be any, try creating a new custom profile at a fractionally higher refresh rate and using that new profile. 1. Go into the Nvidia Control Panel. 2. Go to Display > Change resolution 3. Select Customise… 4. Check "Enable resolutions not exposed by the display" 5. Select Create Custom Resolution… 6. Select the correct horizontal pixels, vertical pixels, colour depth and scan type. Then enter the desired Refresh rate, and add 1 to it. • The Nvidia Control Panel does not allow you to create duplicates of profiles that exist by default. So changing the refresh rate ensures that the new profile is not a duplicate, and means you do not need to change resolution. 7. Test the profile, save it, exit the customise window. 8. Select your new resolution off the list, and click Apply. With any luck, your new profile will not have any chroma subsampling, leaving your colours (particularly reds) looking good, especially on menus. ## Printing Brother labels through GIMP I’ve been fiddling with a Brother P-Touch D600, which is able to connect to a computer through USB. Since I’ve realised that it appears to Windows as a standard printer, I’ve been trying to see if any applications can print to the device. They can! Notepad works (albeit not in the most useful fashion).
But I had a bit of trouble with GIMP. I found that, for some reason, even after setting the correct image/DPI settings and running through page setup, GIMP still seems to think it is printing via A4. I’m unclear where the problem comes from, but it can be worked around when you go to print. After selecting Print, select the Image Settings tab, and ensure the image is aligned to the top-left corner. You’ll need to factor in a margin. On my 24mm labels, I set a top and left margin of 3mm or so. ## How to update Qualcomm Atheros QCA61x4 drivers Qualcomm seem to make up-to-date wireless and wireless+bluetooth drivers available to Killer Networking first. Fortunately you are able to download and install them yourself. 1. Go to the Killer Networking driver page. • In my case, the k11acw10 directory worked for the wireless adapter.
2018-09-18 19:22:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20554669201374054, "perplexity": 5716.509474112229}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267155676.21/warc/CC-MAIN-20180918185612-20180918205612-00509.warc.gz"}
https://docs.snowflake.com/en/user-guide/hostname-whitelist.html
Allowing Hostnames
All Snowflake clients (SnowSQL, JDBC driver, ODBC driver, etc.) require permanent access to cloud storage (Amazon S3, Google Cloud Storage, or Microsoft Azure), as well as other web-based hosts, to perform various runtime operations. To ensure access, particularly in a secure/private network (e.g. AWS PrivateLink-enabled network), you must allow the hostnames for the required hosts. The hostnames that need to be allowed depend on your cloud platform (AWS, Google Cloud Platform, or Microsoft Azure) and the region where your Snowflake account is located. Use the SYSTEM$WHITELIST function for general accounts or SYSTEM$WHITELIST_PRIVATELINK function for accounts using Private Connectivity to the Snowflake Service to obtain the hostnames for your Snowflake account. Use SnowCD to ensure the provided endpoints are allowed.
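As a sketch of how these two pieces fit together (the file name below is an illustrative assumption, and the exact SnowCD invocation may differ by version and platform):
# 1) In SnowSQL or any other SQL client, retrieve the allowlist for your account:
#      SELECT SYSTEM$WHITELIST();              -- general accounts
#      SELECT SYSTEM$WHITELIST_PRIVATELINK();  -- private-connectivity accounts
# 2) Save the returned JSON to a file (allowlist.json is an arbitrary name)
#    and let SnowCD test connectivity to every endpoint it lists:
./snowcd allowlist.json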
2022-10-04 13:59:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6186875104904175, "perplexity": 14710.65318949647}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00560.warc.gz"}
https://socratic.org/questions/are-all-electric-motors-magnetic
# Are all electric motors magnetic?

All electric motors are magnetic. This is a result of the relationship between electricity and magnetism which is one of the fundamental forces in our universe - the Electromagnetic Force.
2019-03-22 09:58:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4822385907173157, "perplexity": 207.85408307308438}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202642.32/warc/CC-MAIN-20190322094932-20190322120932-00331.warc.gz"}
https://blog.sandchaschte.ch/en/posts/certificates-with-acmesh/
Certificates with acme.sh The new ACME v2 protocol for Let’s Encrypt certificates is live! Among other things, this now allows wildcard certificates to be obtained. This allows many individual certificates (such as subdomains) to be reduced to one, and no additional certificates are required for multiple subdomains. Of course I would like to use such universal certificates right away, but the acme-client I had been using hasn’t implemented the new ACME 2.0 protocol and will not be able to do so in the near future. So far I have used the ACME client from hlandau, written in Go, which has served me well until today. Only a single Go binary is required for certificate management, and it supports a wide range of configurations. Unlike other clients, it does not need a huge bunch of modules like the certbot written in Python. After searching for a v2-compatible, simple ACME client, I chose acme.sh as my new companion. It supports the new ACME 2.0 protocol and its * certificates. This client can also store the challenge in the DNS server via nsupdate. This is very convenient for me, as I already use DNS challenges for obtaining the certificates for my domains. The migration is very easy: I simply issue the new certificates with acme.sh and use them in my services. Install acme.sh The installation is very simple and well described in the README of acme.sh. # As root curl https://get.acme.sh | sh I create a cronjob for the regular and automatic renewal of certificates: 0 6 * * * root /root/.acme.sh/acme.sh --cron --home "/root/.acme.sh" > /dev/null To issue a wildcard certificate, an API to the DNS server is needed so that the challenge can be deposited there and checked by Let’s Encrypt. This verification is the only way to obtain the special certificate for several subdomains. I use nsupdate, a utility for dynamic DNS updates. export NSUPDATE_SERVER="mockingjay.sandchaschte.ch" export NSUPDATE_KEY="/root/.acme.sh/update.key" # /root/.acme.sh/update.key key "_acme-challenge" { algorithm hmac-sha512; secret "notmyrealsecret"; }; Create and install certificates Now I can use acme.sh to issue the certificates. I choose the following options: --issue: Create certificate --dns dns_nsupdate: Using the DNS challenge with nsupdate -d sandchaschte.ch -d *.sandchaschte.ch: my domain for which I want to issue the wildcard certificate --dnssleep 10: Wait 10 seconds after the DNS entry with nsupdate before checking the challenge acme.sh --issue --dns dns_nsupdate -d sandchaschte.ch -d *.sandchaschte.ch --dnssleep 10 Result [Don Mar 15 20:17:42 CET 2018] Creating domain key [Don Mar 15 20:17:42 CET 2018] The domain key is here: /root/.acme.sh/sandchaschte.ch/sandchaschte.ch.ch [Don Mar 15 20:17:42 CET 2018] Multi domain='DNS:sandchaschte.ch,DNS:*.sandchaschte.ch' [Don Mar 15 20:17:42 CET 2018] Getting domain auth token for each domain [Don Mar 15 20:17:44 CET 2018] Getting webroot for domain='sandchaschte.ch' [Don Mar 15 20:17:44 CET 2018] Getting webroot for domain='*.sandchaschte.ch' [Don Mar 15 20:17:44 CET 2018] Found domain api file: /root/.acme.sh/dnsapi/dns_nsupdate.sh [Don Mar 15 20:17:44 CET 2018] adding _acme-challenge.sandchaschte.ch. 60 in txt "gvJXepU-oQTVS8Fcgiqy7SVEDckFxcu4IUkP3c2i1-w" [Don Mar 15 20:17:44 CET 2018] Found domain api file: /root/.acme.sh/dnsapi/dns_nsupdate.sh [Don Mar 15 20:17:44 CET 2018] adding _acme-challenge.sandchaschte.ch.
60 in txt "ze-pMMuwmnrW55K4pqyTjzpyfLqSDRpGm4smJSC98tg" [Don Mar 15 20:17:44 CET 2018] Sleep 10 seconds for the txt records to take effect [Don Mar 15 20:17:55 CET 2018] Verifying:sandchaschte.ch [Don Mar 15 20:17:58 CET 2018] Success [Don Mar 15 20:17:58 CET 2018] Verifying:*.sandchaschte.ch [Don Mar 15 20:18:01 CET 2018] Success [Don Mar 15 20:18:01 CET 2018] Removing DNS records. [Don Mar 15 20:18:01 CET 2018] removing _acme-challenge.sandchaschte.ch. txt [Don Mar 15 20:18:01 CET 2018] removing _acme-challenge.sandchaschte.ch. txt [Don Mar 15 20:18:01 CET 2018] Verify finished, start to sign. [Don Mar 15 20:18:03 CET 2018] Cert success. I now can activate the created certificates for my web server: acme.sh --install-cert -d sandchaschte.ch --key-file /etc/ssl/keys/sandchaschte.ch.key --fullchain-file /etc/ssl/certs/sandchaschte.ch.pem --reloadcmd "systemctl restart nginx" Done. We are now enjoying the wildcard certificate: openssl s_client -servername www.sandchaschte.ch -connect www.sandchaschte.ch:443 | openssl x509 -noout -text | grep DNS: depth=2 O = Digital Signature Trust Co., CN = DST Root CA X3 verify return:1 depth=1 C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3 verify return:1 depth=0 CN = sandchaschte.ch verify return:1 DNS:*.sandchaschte.ch, DNS:sandchaschte.ch
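As a quick sanity check before relying on the cron job (a sketch using acme.sh's standard --renew and --force options; the domain is the one issued above), a renewal can be forced immediately so that the nsupdate challenge and the nginx reload hook are exercised once more:
# Force an immediate renewal; the DNS challenge, the certificate installation
# and the --reloadcmd configured above should all run again without errors.
acme.sh --renew -d sandchaschte.ch --force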
2021-10-19 01:38:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6206716299057007, "perplexity": 14901.686974372768}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585231.62/warc/CC-MAIN-20211019012407-20211019042407-00625.warc.gz"}
http://sioc-journal.cn/Jwk_hxxb/CN/10.6023/A21100467
### Synthesis of Cu Single Atom with Adjustable Coordination Environment and Its Catalytic Hydrogenation Performance※ Lingling Li a,b, Yu Liu a,b, Shuyan Song a,b, Hongjie Zhang a,b,c 1. a State Key Laboratory of Rare Earth Resources Utilization, Changchun Institute of Applied Chemistry, Chinese Academy of Sciences, Changchun 130022, China b School of Applied Chemistry and Engineering, University of Science and Technology of China, Hefei 230026, China c Department of Chemistry, Tsinghua University, Beijing 100084, China • Received: 2021-10-20 Published: 2021-12-06 • Contact: Shuyan Song, Hongjie Zhang • About author: Dedicated to the 10th anniversary of the Youth Innovation Promotion Association, CAS. • Supported by: National Science and Technology Major Project (2020YFE0204500); National Natural Science Foundation of China (21771173); National Natural Science Foundation of China (22020102003); National Natural Science Foundation of China (22025506) The synthesis of stable single-metal site catalysts with high catalytic activity and selectivity and a controllable coordination environment is still challenging. Due to the different electronegativity of different coordination atoms (N, P, S, etc.), adjusting the coordination atom type of the active metal center is an effective and wise strategy to break the symmetry of the electron density. We adopted a cation exchange strategy to synthesize two Cu single-atom catalytic materials with different coordination structures. This strategy can change the coordination environment of the Cu single atom by changing the different organics wrapped around Cu-CdS. This strategy mainly relies on the anion skeleton of sulfide and the N-rich polymer shell to produce a large number of S and N defects during the high-temperature annealing process, enabling the precise synthesis of a single-metal Cu site catalyst material with rich edge S and N double modification. In these two materials, one single Cu atom has double coordination of sulfur (S) and nitrogen (N), and the other single Cu atom has only a single S coordination. The first shell coordination number of the Cu central atom is 4; the structure of Cu-S/N-C is Cu-S1N3, and the structure of Cu-S-C is Cu-S4. The results show that the catalytic performance of Cu-S/N-C in the hydrogenation of nitrobenzene compounds is much better than that of Cu-S-C, that is, the Cu monoatomic material with S and N double-modified metal sites has better hydrogenation activity than that with single S-modified metal sites. After 20 min of reaction, under the catalysis of Cu-S/N-C, the conversion rate of nitrobenzene reached 100%, and the activity did not decrease significantly after being recycled 5 times. This shows that the Cu-S/N-C catalytic material with a single-atom structure we synthesized has good stability. This discovery not only provides a feasible method for adjusting the coordination environment of the central metal to improve the performance of single-atom catalytic materials, but also provides an understanding of the catalytic performance of heteroatom modification.
2022-05-20 20:42:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2743830978870392, "perplexity": 5965.233630618991}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662534669.47/warc/CC-MAIN-20220520191810-20220520221810-00532.warc.gz"}
https://rstudytutorial.com/maths-chapter-14-statistics-case-study-question-08/
# Case Study Question 08
## Chapter 14: Statistics
### Class 10
A survey was conducted by an NGO to know the monthly expenditure of families living in slums in Delhi. A total of 200 families were interviewed and it was found that their minimum monthly expenditure was Rs. 1000. The result is tabulated as given below:
Question.1. Find the number of families whose monthly expenditure is more than or equal to Rs. 8000.
We can see from the given frequency distribution that the number of families having monthly expenditure less than Rs. 8000 is 163 out of a total of 200 families. Therefore, the number of families whose monthly expenditure is more than or equal to Rs. 8000 = 200 – 163 = 37
Question.2. Find the number of families whose monthly expenditure is in the range Rs. (6000 – 7000).
Let us prepare a frequency distribution table from the given cumulative frequency table as below:
Question.3. Find the lower limit of the median class.
The median class is the class whose cumulative frequency is just greater than half of the sum of all frequencies. Here $\frac{N}{2}=100$. As the cumulative frequency of the class 5000 – 6000 is 115, which is just greater than 100, the median class is 5000 – 6000 and thus the lower limit of the median class is 5000.
Question.4. Find the median monthly expenditure of the families as per the frequency distribution table.
The median class is 5000 – 6000. The formula for calculating the median is: $\text{Median} = l+\left(\frac{\frac{n}{2}-cf}{f}\right)\times h$ where l = lower limit of the median class, h = size of the class interval, n = number of observations, cf = cumulative frequency of the class preceding the median class, f = frequency of the median class.
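The frequency table itself is not reproduced above, so the substitution cannot be completed from the text alone. Purely as an illustration, suppose the median class 5000 – 6000 had frequency f = 50 (an assumed value, not taken from the original table); then cf = 115 - 50 = 65, and with l = 5000, h = 1000 and n = 200, $\text{Median} = 5000+\left(\frac{100-65}{50}\right)\times 1000 = 5000 + 700 = 5700$, i.e. the median monthly expenditure would come out to about Rs. 5700 under that assumed frequency.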
2023-03-29 07:40:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6553860902786255, "perplexity": 1379.6079594657303}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948951.4/warc/CC-MAIN-20230329054547-20230329084547-00264.warc.gz"}
https://electronics.stackexchange.com/questions/59081/antenna-input-power
# Antenna input power
I have an internet dongle, a Huawei E3131. It has an external antenna connector. I made some sort of parabolic antenna at home. Picture for illustrative purposes (without foil) :) Basically I put foil inside my fruit dish and put my E3131 in the center with a USB extension cable. When the dongle was directly attached to the laptop, it showed a signal strength of 1 bar out of 4. When I put the dongle in my 'super antenna', it showed 4 bars out of 4. I then looked at the network statistics. It was about -90 dBm with the dish. Today I searched eBay for an antenna for my dongle. For almost all of the antennas, the description listed Max Input power (W): 60. (Example here on ebay) I know that USB cannot supply that much power. What does that Max Input power mean? Do these antennas really work?
2021-02-27 22:35:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39739978313446045, "perplexity": 8155.211615558989}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178359497.20/warc/CC-MAIN-20210227204637-20210227234637-00212.warc.gz"}
https://yoshiwarabooks.org/mfg/appendix-Graphing-an-Equation.html
## SectionB.4Graphing an Equation We can graph equations written in the form $y =$ (expression in x). The graphing keys are located on the top row of the keypad. There are two steps to graphing an equation: 1. Entering the equation 2. Setting the graphing window ### SubsectionStandard Window The standard window displays values from $-10$ to $10$ on both axes. ###### ExampleB.31 1. Press Y= and enter $2X-5$ after $Y_1=$ by keying in \begin{align*} 2~ \boxed{X, T, \theta, n} ~ \boxed{{}-{}} ~ 5 \amp\amp\amp\text{Use the } \boxed{X, T, \theta, n} ~ \text{ to enter }X. \end{align*} 2. Press ZOOM $6$ to set the standard window, and the graph will appear (see Figure B.32). You can press 2nd WINDOW to see the settings for the standard window. $Xscl = 1$ means that the tick marks on the $x$-axis are spaced $1$ unit apart. Press 2nd MODE to Quit the graph and return to the Home screen, where we enter computations. From the Home screen, press GRAPH to return to the graph. ### SubsectionTracing The calculator can display the coordinates of selected points on the graph. Press the TRACE key to see a "bug" blinking on the graph. The coordinates of the bug are displayed at the bottom of the screen. Use the left and right arrow keys to move the bug along the graph, as shown in Figure B.33. Note that the Trace feature does not show every point on the graph! ###### ExampleB.34 Use the Trace to find the point on the graph with $x=-3\text{.}$ Press \begin{equation*} \boxed{\text{TRACE}} \, \boxed{(-)} \, 3 \, \boxed{\text{ENTER}} \end{equation*} The bug is off the bottom of the screen, but the coordinates are still shown. ### SubsectionMultiple Graphs You can enter more than one graph at a time. Press ↓ to enter a second equation at $Y_2 =\text{,}$ at $Y_3 =\text{,}$ and so on. When Tracing, press the ↓ and $\boxed{\uparrow}$ keys to move from one graph to another. To turn off a graph without deleting its equation, press Y= and move the cursor over the $=$ sign in the equation. Press ENTER to deactivate that equation. (When you move the cursor away, the $=$ sign is no longer highlighted.) To reactivate the equation, move the cursor back over the $=$ sign and press ENTER again. ### SubsectionSetting the Window Of course, the standard window is not suitable for every graph. ###### ExampleB.35 Graph $y= 0.01x^2- 50$ in the window \begin{align*} \text{Xmin} \amp = -100 \amp\amp \text{Xmax} = 100\\ \text{Ymin} \amp = -60 \amp\amp \text{Ymax} = 50 \end{align*} 1. Press Y= and enter $0.01X^2-50$ by keying in \begin{equation*} 0.01 ~ \boxed{X,T,\theta,n}~\boxed{x^2}~\boxed{{}-{}} 50 \hphantom{blank} \text{Use the } \boxed{X,T,\theta,n}\text{ key to enter} X. \end{equation*} 2. Press WINDOW and enter the settings as shown in Figure B.36. Use the up and down arrow keys to move from line to line. Then press GRAPH. ### SubsectionIntersect Feature We can use the calculator to find the intersection point of two graphs: 1. Enter the equations for the two graphs in the Y= menu. 2. Choose window settings so that the intersection point is visible in the window. 3. Press 2nd TRACE $5$ to activate the intersect feature. 4. Use the left and right arrow keys to position the bug near the intersection point. 5. Respond to each of the calculator’s questions, First curve?, Second curve?, and Guess? by pressing ENTER. The coordinates of the intersection point are then displayed at the bottom of the screen. Figure B.37 shows one of the intersection points of $y = 0.01x^2 - 50$ and $y = -0.5x\text{.}$ ### SubsectionOther Windows 1.
The ZDecimal (Zoom Decimal) window, accessed by pressing ZOOM $4\text{,}$ shows $x$-values from $-4.7$ to $4.7$ only, but the Trace feature shows "nice" $x$-values in increments of $0.1\text{.}$ 2. The ZInteger (Zoom Integer) window shows nice $x$-values in increments of $1$ unit. Access the ZInteger window as follows: Press ZOOM $8\text{,}$ move the bug with the arrow keys to the center of your new window, and press ENTER. 3. The ZSquare window, accessed by pressing ZOOM $5\text{,}$ makes the tick marks on both axes have the same size. In this window, squares look like squares, circles look like circles, and all angles appear true. 4. "Friendly" Windows: If the difference between Xmin and Xmax is a multiple of $94\text{,}$ the Trace feature gives nice values for $x\text{.}$ A useful example of a friendly window is $Xmin=-9.4\text{,}$ $Xmax= 9.4\text{.}$ ##### Troubleshooting 1. If the graph is not visible, you may need to adjust your window. Or, the equation may not be activated. Press Y= and check to see if the $=$ sign is highlighted. 2. If you get a range error, ERR: WINDOW RANGE, quit the message and press WINDOW. Alter the window settings so that Xmin is smaller than Xmax and so that Ymin is smaller than Ymax. 3. If you press ZOOM $6$ and get an unfamiliar window, or if the axes are not visible in the ZStandard window, you may need to return the Mode or Format menus to their default settings. See Troubleshooting in Section B.1. 4. If you get a dimension error, ERR: INVALID DIM, you may have a StatPlot turned on. Press 2nd Y= $4$ ENTER to turn off the StatPlots. 5. If the bug does not move along the curve, TRACE may not be activated. Press TRACE and then the left or right arrow key. 6. If you get the error, ERR: INVALID, you have probably entered a value of $x$ that is outside the window. Adjust the window settings accordingly. 7. If the $x$-axis or $y$-axis is too thick, the tick marks are too close together. Press WINDOW and make Xscl or Yscl larger. Set $Xscl=0$ or $Yscl =0$ to remove the tick marks. 8. If you get ERR: NO SIGN CHNG when using the intersect feature, the calculator did not find any intersection point within the current window. Alter the window settings so that the two curves meet within the window. If the two curves are tangent, the calculator may simply fail to find the point of intersection.
2020-02-17 00:51:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3929629623889923, "perplexity": 1849.5698011687405}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875141460.64/warc/CC-MAIN-20200217000519-20200217030519-00041.warc.gz"}
https://www.jirka.org/diffyqs/html/sol_section.html
## Section3.6Second order systems and applications Note: more than 2 lectures, §5.4 in [EP], not in [BD] ### Subsection3.6.1Undamped mass-spring systems While we did say that we will usually only look at first order systems, it is sometimes more convenient to study the system in the way it arises naturally. For example, suppose we have 3 masses connected by springs between two walls. We could pick any higher number, and the math would be essentially the same, but for simplicity we pick 3 right now. Let us also assume no friction, that is, the system is undamped. The masses are $m_1\text{,}$ $m_2\text{,}$ and $m_3$ and the spring constants are $k_1\text{,}$ $k_2\text{,}$ $k_3\text{,}$ and $k_4\text{.}$ Let $x_1$ be the displacement from rest position of the first mass, and $x_2$ and $x_3$ the displacement of the second and third mass. We make, as usual, positive values go right (as $x_1$ grows, the first mass is moving right). See Figure 3.12. This simple system turns up in unexpected places. For example, our world really consists of many small particles of matter interacting together. When we try the system above with many more masses, we obtain a good approximation to how an elastic material behaves. By somehow taking a limit of the number of masses going to infinity, we obtain the continuous one-dimensional wave equation (that we study in Section 4.7). But we digress. Let us set up the equations for the three mass system. By Hooke's law, the force acting on the mass equals the spring compression times the spring constant. By Newton's second law, force is mass times acceleration. So if we sum the forces acting on each mass, put the right sign in front of each term, depending on the direction in which it is acting, and set this equal to mass times the acceleration, we end up with the desired system of equations. \begin{equation*} \begin{aligned} m_1 x_1'' &= -k_1 x_1 + k_2 (x_2-x_1) & & = -(k_1+k_2) x_1 + k_2 x_2 , \\ m_2 x_2'' &= -k_2 (x_2-x_1) + k_3 (x_3-x_2) & & = k_2 x_1 -(k_2+k_3) x_2 + k_3 x_3 , \\ m_3 x_3'' &= -k_3 (x_3-x_2) - k_4 x_3 & & = k_3 x_2 - (k_3+k_4) x_3 . \end{aligned} \end{equation*} We define the matrices \begin{equation*} M = \begin{bmatrix} m_1 & 0 & 0 \\ 0 & m_2 & 0 \\ 0 & 0 & m_3 \end{bmatrix} \qquad \text{and} \qquad K = \begin{bmatrix} -(k_1+k_2) & k_2 & 0 \\ k_2 & -(k_2+k_3) & k_3 \\ 0 & k_3 & -(k_3+k_4) \end{bmatrix} . \end{equation*} We write the equation simply as \begin{equation*} M {\vec{x}}'' = K \vec{x} . \end{equation*} At this point we could introduce 3 new variables and write out a system of 6 first order equations. We claim this simple setup is easier to handle as a second order system. We call $\vec{x}$ the displacement vector, $M$ the mass matrix, and $K$ the stiffness matrix. ###### Exercise3.6.1. Repeat this setup for 4 masses (find the matrices $M$ and $K$). Do it for 5 masses. Can you find a prescription to do it for $n$ masses? As with a single equation we want to “divide by $M\text{.}$” This means computing the inverse of $M\text{.}$ The masses are all nonzero and $M$ is a diagonal matrix, so computing the inverse is easy: \begin{equation*} M^{-1} = \begin{bmatrix} \frac{1}{m_1} & 0 & 0 \\ 0 & \frac{1}{m_2} & 0 \\ 0 & 0 & \frac{1}{m_3} \end{bmatrix} . \end{equation*} This fact follows readily by how we multiply diagonal matrices. As an exercise, you should verify that $M M^{-1} = M^{-1} M = I\text{.}$ Let $A = M^{-1}K\text{.}$ We look at the system ${\vec{x}}'' = M^{-1}K \vec{x}\text{,}$ or \begin{equation*} {\vec{x}}'' = A \vec{x} . 
\end{equation*} Many real world systems can be modeled by this equation. For simplicity, we will only talk about the given masses-and-springs problem. We try a solution of the form \begin{equation*} \vec{x} = \vec{v} e^{\alpha t} . \end{equation*} We compute that for this guess, ${\vec{x}}'' = \alpha^2 \vec{v} e^{\alpha t}\text{.}$ We plug our guess into the equation and get \begin{equation*} \alpha^2 \vec{v} e^{\alpha t} = A\vec{v} e^{\alpha t} . \end{equation*} We divide by $e^{\alpha t}$ to arrive at $\alpha^2 \vec{v} = A\vec{v}\text{.}$ Hence if $\alpha^2$ is an eigenvalue of $A$ and $\vec{v}$ is a corresponding eigenvector, we have found a solution. In our example, and in other common applications, $A$ has only real negative eigenvalues (and possibly a zero eigenvalue). So we study only this case. When an eigenvalue $\lambda$ is negative, it means that $\alpha^2 = \lambda$ is negative. Hence there is some real number $\omega$ such that $-\omega^2 = \lambda\text{.}$ Then $\alpha = \pm i \omega\text{.}$ The solution we guessed was \begin{equation*} \vec{x} = \vec{v} \, \bigl(\cos (\omega t) + i \sin (\omega t) \bigr) . \end{equation*} By taking the real and imaginary parts (note that $\vec{v}$ is real), we find that $\vec{v} \cos (\omega t)$ and $\vec{v} \sin (\omega t)$ are linearly independent solutions. If an eigenvalue is zero, it turns out that both $\vec{v}$ and $\vec{v} t$ are solutions, where $\vec{v}$ is an eigenvector corresponding to the eigenvalue 0. ###### Exercise3.6.2. Show that if $A$ has a zero eigenvalue and $\vec{v}$ is a corresponding eigenvector, then $\vec{x} = \vec{v} (a + bt)$ is a solution of ${\vec{x}}'' = A \vec{x}$ for arbitrary constants $a$ and $b\text{.}$ We use this solution and the setup from the introduction of this section even when some of the masses and springs are missing. For example, when there are only 2 masses and only 2 springs, simply take only the equations for the two masses and set all the spring constants for the springs that are missing to zero. ### Subsection3.6.2Examples ###### Example3.6.1. Consider the setup in Figure 3.13, with $m_1 = \unit[2]{kg}\text{,}$ $m_2 = \unit[1]{kg}\text{,}$ $k_1 = \unitfrac[4]{N}{m}\text{,}$ and $k_2 = \unitfrac[2]{N}{m}\text{.}$ The equations we write down are \begin{equation*} \begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix} {\vec{x}}'' = \begin{bmatrix} -(4+2) & 2 \\ 2 & -2 \end{bmatrix} \vec{x} , \end{equation*} or \begin{equation*} {\vec{x}}'' = \begin{bmatrix} -3 & 1 \\ 2 & -2 \end{bmatrix} \vec{x} . \end{equation*} We find the eigenvalues of $A$ to be $\lambda = -1, -4$ (exercise). We find corresponding eigenvectors to be $\left[ \begin{smallmatrix} 1 \\ 2 \end{smallmatrix} \right]$ and $\left[ \begin{smallmatrix} 1 \\ -1 \end{smallmatrix} \right]$ respectively (exercise). We check the theorem and note that $\omega_1 = 1$ and $\omega_2 = 2\text{.}$ Hence the general solution is \begin{equation*} \vec{x} = \begin{bmatrix} 1 \\ 2 \end{bmatrix} \bigl( a_1 \cos (t) + b_1 \sin (t) \bigr) + \begin{bmatrix} 1 \\ -1 \end{bmatrix} \bigl( a_2 \cos (2t) + b_2 \sin (2t) \bigr) . \end{equation*} The two terms in the solution represent the two so-called natural or normal modes of oscillation. And the two (angular) frequencies are the natural frequencies. The first natural frequency is 1, and second natural frequency is 2. The two modes are plotted in Figure 3.14. 
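To fill in the computation that the example leaves as an exercise: the characteristic polynomial of $A$ factors as \begin{equation*} \det (A - \lambda I) = \det \begin{bmatrix} -3-\lambda & 1 \\ 2 & -2-\lambda \end{bmatrix} = (-3-\lambda)(-2-\lambda) - 2 = \lambda^2 + 5\lambda + 4 = (\lambda+1)(\lambda+4) , \end{equation*} so the eigenvalues are $\lambda = -1$ and $\lambda = -4\text{.}$ Solving $(A+I)\vec{v} = \vec{0}$ and $(A+4I)\vec{v} = \vec{0}$ then gives the eigenvectors $\left[ \begin{smallmatrix} 1 \\ 2 \end{smallmatrix} \right]$ and $\left[ \begin{smallmatrix} 1 \\ -1 \end{smallmatrix} \right]$ quoted above.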
Let us write the solution as \begin{equation*} \vec{x} = \begin{bmatrix} 1 \\ 2 \end{bmatrix} c_1 \cos (t - \alpha_1 ) + \begin{bmatrix} 1 \\ -1 \end{bmatrix} c_2 \cos (2t - \alpha_2 ) . \end{equation*} The first term, \begin{equation*} \begin{bmatrix} 1 \\ 2 \end{bmatrix} c_1 \cos (t - \alpha_1 ) = \begin{bmatrix} c_1 \cos (t - \alpha_1 ) \\ 2c_1 \cos (t - \alpha_1 ) \end{bmatrix} , \end{equation*} corresponds to the mode where the masses move synchronously in the same direction. The second term, \begin{equation*} \begin{bmatrix} 1 \\ -1 \end{bmatrix} c_2 \cos (2t - \alpha_2 ) = \begin{bmatrix} c_2 \cos (2t - \alpha_2 ) \\ - c_2 \cos (2t - \alpha_2 ) \end{bmatrix} , \end{equation*} corresponds to the mode where the masses move synchronously but in opposite directions. The general solution is a combination of the two modes. That is, the initial conditions determine the amplitude and phase shift of each mode. As an example, suppose we have initial conditions \begin{equation*} \vec{x}(0) = \begin{bmatrix} 1 \\ -1 \end{bmatrix} , \qquad \vec{x}'(0) = \begin{bmatrix} 0 \\ 6 \end{bmatrix} . \end{equation*} We use the $a_j, b_j$ constants to solve for initial conditions. First \begin{equation*} \begin{bmatrix} 1 \\ -1 \end{bmatrix} = \vec{x}(0) = \begin{bmatrix} 1 \\ 2 \end{bmatrix} a_1 + \begin{bmatrix} 1 \\ -1 \end{bmatrix} a_2 = \begin{bmatrix} a_1+a_2 \\2a_1 - a_2 \end{bmatrix} . \end{equation*} We solve (exercise) to find $a_1 = 0\text{,}$ $a_2 = 1\text{.}$ To find the $b_1$ and $b_2\text{,}$ we differentiate first: \begin{equation*} {\vec{x}}' = \begin{bmatrix} 1 \\ 2 \end{bmatrix} \bigl( - a_1 \sin (t) + b_1 \cos (t) \bigr) + \begin{bmatrix} 1 \\ -1 \end{bmatrix} \bigl( - 2a_2 \sin (2t) + 2 b_2 \cos (2t) \bigr) . \end{equation*} Now we solve: \begin{equation*} \begin{bmatrix} 0 \\ 6 \end{bmatrix} = {\vec{x}}'(0) = \begin{bmatrix} 1 \\ 2 \end{bmatrix} b_1 + \begin{bmatrix} 1 \\ -1 \end{bmatrix} 2 b_2 = \begin{bmatrix} b_1+2b_2 \\ 2b_1-2b_2 \end{bmatrix} . \end{equation*} Again solve (exercise) to find $b_1 = 2\text{,}$ $b_2 = -1\text{.}$ So our solution is \begin{equation*} \vec{x} = \begin{bmatrix} 1 \\ 2 \end{bmatrix} 2 \sin (t) + \begin{bmatrix} 1 \\ -1 \end{bmatrix} \bigl( \cos (2t) - \sin (2t) \bigr) = \begin{bmatrix} 2 \sin (t) + \cos(2t)- \sin(2t) \\ 4 \sin (t) - \cos(2t) + \sin(2t) \end{bmatrix} . \end{equation*} The graphs of the two displacements, $x_1$ and $x_2$ of the two carts is in Figure 3.15. ###### Example3.6.2. We have two toy rail cars. Car 1 of mass 2 kg is traveling at 3 $\nicefrac{\text{m}}{\text{s}}$ towards the second rail car of mass 1 kg. There is a bumper on the second rail car that engages at the moment the cars hit (it connects to two cars) and does not let go. The bumper acts like a spring of spring constant $k=\unitfrac[2]{N}{m}\text{.}$ The second car is 10 meters from a wall. See Figure 3.16. We want to ask several questions. At what time after the cars link does impact with the wall happen? What is the speed of car 2 when it hits the wall? OK, let us first set the system up. Let $t=0$ be the time when the two cars link up. Let $x_1$ be the displacement of the first car from the position at $t=0\text{,}$ and let $x_2$ be the displacement of the second car from its original location. Then the time when $x_2(t) = 10$ is exactly the time when impact with wall occurs. For this $t\text{,}$ $x_2'(t)$ is the speed at impact. 
This system acts just like the system of the previous example but without $k_1\text{.}$ Hence the equation is \begin{equation*} \begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix} {\vec{x}}'' = \begin{bmatrix} -2 & 2 \\ 2 & -2 \end{bmatrix} \vec{x} , \end{equation*} or \begin{equation*} {\vec{x}}'' = \begin{bmatrix} -1 & 1 \\ 2 & -2 \end{bmatrix} \vec{x} . \end{equation*} We compute the eigenvalues of $A\text{.}$ It is not hard to see that the eigenvalues are 0 and $-3$ (exercise). Furthermore, eigenvectors are $\left[ \begin{smallmatrix} 1 \\ 1 \end{smallmatrix} \right]$ and $\left[ \begin{smallmatrix} 1 \\ -2 \end{smallmatrix} \right]$ respectively (exercise). Then $\omega_1 = 0\text{,}$ $\omega_2 = \sqrt{3}\text{,}$ and by the second part of the theorem the general solution is \begin{equation*} \begin{split} \vec{x} & = \begin{bmatrix} 1 \\ 1 \end{bmatrix} \left( a_1 + b_1 t \right) + \begin{bmatrix} 1 \\ -2 \end{bmatrix} \left( a_2 \cos ( \sqrt{3} \, t) + b_2 \sin ( \sqrt{3} \, t ) \right) \\ & = \begin{bmatrix} a_1 + b_1 t + a_2 \cos ( \sqrt{3} \, t ) + b_2 \sin ( \sqrt{3} \, t ) \\ a_1 + b_1 t - 2 a_2 \cos ( \sqrt{3} \, t ) - 2 b_2 \sin ( \sqrt{3} \, t ) \end{bmatrix} . \end{split} \end{equation*} We now apply the initial conditions. First the cars start at position 0 so $x_1 (0) = 0$ and $x_2(0) = 0\text{.}$ The first car is traveling at 3 $\nicefrac{\text{m}}{\text{s}}\text{,}$ so $x_1'(0) = 3$ and the second car starts at rest, so $x_2'(0) = 0\text{.}$ The first conditions says \begin{equation*} \vec{0} = \vec{x}(0) = \begin{bmatrix} a_1 + a_2 \\ a_1 - 2 a_2 \end{bmatrix} . \end{equation*} It is not hard to see that $a_1 = a_2 = 0\text{.}$ We set $a_1=0$ and $a_2=0$ in $\vec{x}(t)$ and differentiate to get \begin{equation*} {\vec{x}}'(t) = \begin{bmatrix} b_1 + \sqrt{3} \, b_2 \cos ( \sqrt{3} \, t ) \\ b_1 - 2 \sqrt{3} \, b_2 \cos ( \sqrt{3} \, t ) \end{bmatrix} . \end{equation*} So \begin{equation*} \begin{bmatrix} 3 \\ 0 \end{bmatrix} = {\vec{x}}'(0) = \begin{bmatrix} b_1 + \sqrt{3} \, b_2 \\ b_1 - 2 \sqrt{3} \, b_2 \end{bmatrix} . \end{equation*} Solving these two equations we find $b_1 = 2$ and $b_2 = \frac{1}{\sqrt{3}}\text{.}$ Hence the position of our cars is (until the impact with the wall) \begin{equation*} \vec{x} = \begin{bmatrix} 2 t + \frac{1}{\sqrt{3}} \sin ( \sqrt{3} \, t ) \\ 2 t - \frac{2}{\sqrt{3}} \sin ( \sqrt{3} \, t ) \end{bmatrix} . \end{equation*} Note how the presence of the zero eigenvalue resulted in a term containing $t\text{.}$ This means that the cars will be traveling in the positive direction as time grows, which is what we expect. What we are really interested in is the second expression, the one for $x_2\text{.}$ We have $x_2(t) = 2 t - \frac{2}{\sqrt{3}} \sin ( \sqrt{3} \, t)\text{.}$ See Figure 3.17 for the plot of $x_2$ versus time. Just from the graph we can see that time of impact will be a little more than 5 seconds from time zero. For this we have to solve the equation $10 = x_2(t) = 2 t - \frac{2}{\sqrt{3}} \sin ( \sqrt{3} \, t)\text{.}$ Using a computer (or even a graphing calculator) we find that $t_{\text{impact}} \approx 5.22$ seconds. The speed of the second car is $x_2' = 2 - 2 \cos ( \sqrt{3} \, t)\text{.}$ At the time of impact (5.22 seconds from $t=0$) we get $x_2'(t_{\text{impact}}) \approx 3.85\text{.}$ The maximum speed is the maximum of $2 - 2 \cos ( \sqrt{3} \, t )\text{,}$ which is 4. We are traveling at almost the maximum speed when we hit the wall. Suppose that Bob is a tiny person sitting on car 2. 
Bob has a Martini in his hand and would like not to spill it. Let us suppose Bob would not spill his Martini when the first car links up with car 2, but if car 2 hits the wall at any speed greater than zero, Bob will spill his drink. Suppose Bob can move car 2 a few meters towards or away from the wall (he cannot go all the way to the wall, nor can he get out of the way of the first car). Is there a “safe” distance for him to be at? A distance such that the impact with the wall is at zero speed? The answer is yes. Looking at Figure 3.17, we note the “plateau” between $t=3$ and $t=4\text{.}$ There is a point where the speed is zero. To find it we solve $x_2'(t) = 0\text{.}$ This is when $\cos ( \sqrt{3} \, t) = 1$ or in other words when $t = \frac{2 \pi}{\sqrt{3}}, \frac{4 \pi}{\sqrt{3}},\ldots$ and so on. We plug in the first value to obtain $x_2\left(\frac{2 \pi}{\sqrt{3}}\right) = \frac{4 \pi}{\sqrt{3}} \approx 7.26\text{.}$ So a “safe” distance is about 7 and a quarter meters from the wall. Alternatively Bob could move away from the wall towards the incoming car 2, where another safe distance is $x_2 \left( \frac{4 \pi}{\sqrt{3}} \right) = \frac{8 \pi}{\sqrt{3}} \approx 14.51$ and so on. We can use all the different $t$ such that $x_2'(t) = 0\text{.}$ Of course $t=0$ is also a solution, corresponding to $x_2 = 0\text{,}$ but that means standing right at the wall. ### Subsection3.6.3Forced oscillations Finally we move to forced oscillations. Suppose that now our system is $${\vec{x}}'' = A \vec{x} + \vec{F} \cos ( \omega t) .\label{sosa_forcedeq}\tag{3.4}$$ That is, we are adding periodic forcing to the system in the direction of the vector $\vec{F}\text{.}$ As before, this system just requires us to find one particular solution $\vec{x}_p\text{,}$ add it to the general solution of the associated homogeneous system $\vec{x}_c\text{,}$ and we will have the general solution to (3.4). Let us suppose that $\omega$ is not one of the natural frequencies of ${\vec{x}}'' = A \vec{x}\text{,}$ then we can guess \begin{equation*} \vec{x}_p = \vec{c} \cos (\omega t) , \end{equation*} where $\vec{c}$ is an unknown constant vector. Note that we do not need to use sine since there are only second derivatives. We solve for $\vec{c}$ to find $\vec{x}_p\text{.}$ This is really just the method of undetermined coefficients for systems. Let us differentiate $\vec{x}_p$ twice to get \begin{equation*} {\vec{x}_p}'' = -\omega^2 \vec{c} \cos (\omega t) . \end{equation*} Plug $\vec{x}_p$ and ${\vec{x}_p}''$ into equation (3.4): \begin{equation*} \overbrace{ -\omega^2 \vec{c} \cos (\omega t) }^{{\vec{x}_p}''} = \overbrace{ A \vec{c} \cos (\omega t) }^{A \vec{x}_p} + \vec{F} \cos (\omega t) . \end{equation*} We cancel out the cosine and rearrange the equation to obtain \begin{equation*} (A +\omega^2 I) \vec{c} = - \vec{F} . \end{equation*} So \begin{equation*} \vec{c} = {(A +\omega^2 I)}^{-1} (-\vec{F} ). \end{equation*} Of course this is possible only if $(A+ \omega^2 I) = \bigl(A- (-\omega^2) I\bigr)$ is invertible. That matrix is invertible if and only if $-\omega^2$ is not an eigenvalue of $A\text{.}$ That is true if and only if $\omega$ is not a natural frequency of the system. We simplified things a little bit. If we wish to have the forcing term to be in the units of force, say Newtons, then we must write \begin{equation*} M \vec{x}'' = K \vec{x} + \vec{G} \cos(\omega t) . 
\end{equation*} If we then write things in terms of $A = M^{-1} K\text{,}$ we have \begin{equation*} \vec{x}'' = M^{-1}K \vec{x} + M^{-1} \vec{G} \cos(\omega t) \qquad \text{or} \qquad \vec{x}'' = A \vec{x} + \vec{F} \cos(\omega t) , \end{equation*} where $\vec{F} = M^{-1} \vec{G}\text{.}$ ###### Example3.6.3. Let us take the example in Figure 3.13 with the same parameters as before: $m_1 = 2\text{,}$ $m_2 = 1\text{,}$ $k_1 = 4\text{,}$ and $k_2 = 2\text{.}$ Now suppose that there is a force $2 \cos (3t)$ acting on the second cart. The equation is \begin{equation*} \begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix} {\vec{x}}'' = \begin{bmatrix} -4 & 2 \\ 2 & -2 \end{bmatrix} \vec{x} + \begin{bmatrix} 0 \\ 2 \end{bmatrix} \cos (3 t) \qquad \text{or} \qquad {\vec{x}}'' = \begin{bmatrix} -3 & 1 \\ 2 & -2 \end{bmatrix} \vec{x} + \begin{bmatrix} 0 \\ 2 \end{bmatrix} \cos (3 t) . \end{equation*} We solved the associated homogeneous equation before and found the complementary solution to be \begin{equation*} \vec{x}_c = \begin{bmatrix} 1 \\ 2 \end{bmatrix} \bigl( a_1 \cos (t) + b_1 \sin (t) \bigr) + \begin{bmatrix} 1 \\ -1 \end{bmatrix} \bigl( a_2 \cos (2t) + b_2 \sin (2t) \bigr) . \end{equation*} The natural frequencies are 1 and 2. As 3 is not a natural frequency, we try $\vec{c} \cos (3t)\text{.}$ We invert $(A+3^2 I)\text{:}$ \begin{equation*} {\left( \begin{bmatrix} -3 & 1 \\ \noalign{\smallskip} 2 & -2 \end{bmatrix} +3^2 I\right)}^{-1} = {\begin{bmatrix} 6 & 1 \\ \noalign{\smallskip} 2 & 7 \end{bmatrix}}^{-1} = \begin{bmatrix} \frac{7}{40} & \frac{-1}{40} \\ \noalign{\smallskip} \frac{-1}{20} & \frac{3}{20} \end{bmatrix} . \end{equation*} Hence, \begin{equation*} \vec{c} = {(A +\omega^2 I)}^{-1} (-\vec{F} ) = \begin{bmatrix} \frac{7}{40} & \frac{-1}{40} \\ \noalign{\smallskip} \frac{-1}{20} & \frac{3}{20} \end{bmatrix} \begin{bmatrix} 0 \\ \noalign{\smallskip} -2 \end{bmatrix} = \begin{bmatrix} \frac{1}{20} \\ \noalign{\smallskip} \frac{-3}{10} \end{bmatrix} . \end{equation*} Combining with the general solution of the associated homogeneous problem, we get that the general solution to ${\vec{x}}'' = A \vec{x} + \vec{F} \cos (\omega t)$ is \begin{equation*} \vec{x} = \vec{x}_c + \vec{x}_p = \begin{bmatrix} 1 \\ \noalign{\smallskip} 2 \end{bmatrix} \bigl( a_1 \cos (t) + b_1 \sin (t) \bigr) + \begin{bmatrix} 1 \\ \noalign{\smallskip} -1 \end{bmatrix} \bigl( a_2 \cos (2t) + b_2 \sin (2t) \bigr) + \begin{bmatrix} \frac{1}{20} \\ \noalign{\smallskip} \frac{-3}{10} \end{bmatrix} \cos (3t) . \end{equation*} We then solve for the constants $a_1\text{,}$ $a_2\text{,}$ $b_1\text{,}$ and $b_2$ using any initial conditions we are given. Note that given force $\vec{f}\text{,}$ we write the equation as $M {\vec{x}}'' = K \vec{x} + \vec{f}$ to get the units right. Then we write ${\vec{x}}'' = M^{-1}K \vec{x} + M^{-1}\vec{f}\text{.}$ The term $\vec{g} = M^{-1} \vec{f}$ in ${\vec{x}}'' = A \vec{x} + \vec{g}$ is in units of force per unit mass. If $\omega$ is a natural frequency of the system, resonance may occur, because we will have to try a particular solution of the form \begin{equation*} \vec{x}_p = \vec{c} \, t \sin (\omega t) + \vec{d} \, \cos (\omega t) . \end{equation*} That is assuming that the eigenvalues of the coefficient matrix are distinct. Next, note that the amplitude of this solution grows without bound as $t$ grows. ### Subsection3.6.4Exercises ###### Exercise3.6.3. 
Find a particular solution to \begin{equation*} {\vec{x}}'' = \begin{bmatrix} -3 & 1 \\ 2 & -2 \end{bmatrix} \vec{x} + \begin{bmatrix} 0 \\ 2 \end{bmatrix} \cos (2 t) . \end{equation*} ###### Exercise3.6.4. (challenging)   Let us take the example in Figure 3.13 with the same parameters as before: $m_1 = 2\text{,}$ $k_1 = 4\text{,}$ and $k_2 = 2\text{,}$ except for $m_2\text{,}$ which is unknown. Suppose that there is a force $\cos (5 t)$ acting on the first mass. Find an $m_2$ such that there exists a particular solution where the first mass does not move. Note: This idea is called dynamic damping. In practice there will be a small amount of damping and so any transient solution will disappear and after long enough time, the first mass will always come to a stop. ###### Exercise3.6.5. Let us take the Example 3.6.2, but that at time of impact, car 2 is moving to the left at the speed of 3 $\nicefrac{\text{m}}{\text{s}}\text{.}$ 1. Find the behavior of the system after linkup. 2. Will the second car hit the wall, or will it be moving away from the wall as time goes on? 3. At what speed would the first car have to be traveling for the system to essentially stay in place after linkup? ###### Exercise3.6.6. Let us take the example in Figure 3.13 with parameters $m_1 = m_2 = 1\text{,}$ $k_1 = k_2 = 1\text{.}$ Does there exist a set of initial conditions for which the first cart moves but the second cart does not? If so, find those conditions. If not, argue why not. ###### Exercise3.6.101. Find the general solution to $\left[ \begin{smallmatrix} 1 & 0 & 0\\ 0 & 2 & 0\\ 0 & 0 & 3 \end{smallmatrix}\right] \vec{x}\,'' = \left[ \begin{smallmatrix} -3 & 0 & 0 \\ 2 & -4 & 0 \\ 0 & 6 & -3 \end{smallmatrix}\right] \vec{x} + \left[ \begin{smallmatrix} \cos(2t) \\ 0 \\ 0 \end{smallmatrix}\right]\text{.}$ $\vec{x} = \left[ \begin{smallmatrix} 1 \\ -1 \\ 1 \end{smallmatrix}\right] \bigl( a_1 \cos (\sqrt{3}\, t) + b_1 \sin (\sqrt{3}\, t) \bigr) + \left[ \begin{smallmatrix} 0 \\ 1 \\ -2 \end{smallmatrix}\right] \bigl( a_2 \cos (\sqrt{2}\, t) + b_2 \sin (\sqrt{2}\, t) \bigr) +$ $\left[ \begin{smallmatrix} 0 \\ 0 \\ 1 \end{smallmatrix}\right] \bigl( a_3 \cos (t) + b_3 \sin (t) \bigr) + \left[ \begin{smallmatrix} -1 \\ \nicefrac{1}{2} \\ \nicefrac{2}{3} \end{smallmatrix}\right] \cos (2t)$ ###### Exercise3.6.102. Suppose there are three carts of equal mass $m$ and connected by two springs of constant $k$ (and no connections to walls). Set up the system and find its general solution. $\left[ \begin{smallmatrix} m & 0 & 0\\ 0 & m & 0\\ 0 & 0 & m \end{smallmatrix}\right] \vec{x}\,'' = \left[ \begin{smallmatrix} -k & k & 0 \\ k & -2k & k \\ 0 & k & -k \end{smallmatrix}\right] \vec{x}\text{.}$ Solution: $\vec{x} = \left[ \begin{smallmatrix} 1 \\ -2 \\ 1 \end{smallmatrix}\right] \bigl( a_1 \cos (\sqrt{\nicefrac{3k}{m}}\, t) + b_1 \sin (\sqrt{\nicefrac{3k}{m}}\, t) \bigr) \allowbreak + \left[ \begin{smallmatrix} 1 \\ 0 \\ -1 \end{smallmatrix}\right] \bigl( a_2 \cos (\sqrt{\nicefrac{k}{m}}\, t) + b_2 \sin (\sqrt{\nicefrac{k}{m}}\, t) \bigr) + \left[ \begin{smallmatrix} 1 \\ 1 \\ 1 \end{smallmatrix}\right] \bigl( a_3 t + b_3 \bigr).$ ###### Exercise3.6.103. Suppose a cart of mass 2 kg is attached by a spring of constant $k=1$ to a cart of mass 3 kg, which is attached to the wall by a spring also of constant $k=1\text{.}$ Suppose that the initial position of the first cart is 1 meter in the positive direction from the rest position, and the second mass starts at the rest position. The masses are not moving and are let go. 
Find the position of the second mass as a function of time. $x_2 = ( \nicefrac{2}{5} ) \cos (\sqrt{\nicefrac{1}{6}}\, t) - ( \nicefrac{2}{5} ) \cos (t)$
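The particular-solution computations above are easy to check numerically. Below is a small NumPy sketch (a quick check added here, not part of the original text) that solves $(A+\omega^2 I)\,\vec{c} = -\vec{F}$ for Example 3.6.3 and for Exercise 3.6.101; it reproduces $(\nicefrac{1}{20}, \nicefrac{-3}{10})$ and $(-1, \nicefrac{1}{2}, \nicefrac{-1}{3})$.

```python
import numpy as np

# Example 3.6.3: x'' = A x + F cos(3t); try x_p = c cos(3t), so (A + 9 I) c = -F
A = np.array([[-3.0, 1.0],
              [2.0, -2.0]])
F = np.array([0.0, 2.0])
c = np.linalg.solve(A + 3.0**2 * np.eye(2), -F)
print(c)   # approximately [ 0.05  -0.3 ], i.e. (1/20, -3/10)

# Exercise 3.6.101: M x'' = K x + (cos 2t, 0, 0)^T, so A = M^{-1} K, F = M^{-1} (1, 0, 0)^T
M = np.diag([1.0, 2.0, 3.0])
K = np.array([[-3.0, 0.0, 0.0],
              [2.0, -4.0, 0.0],
              [0.0, 6.0, -3.0]])
A3 = np.linalg.solve(M, K)                           # M^{-1} K
F3 = np.linalg.solve(M, np.array([1.0, 0.0, 0.0]))   # M^{-1} (1, 0, 0)^T
c3 = np.linalg.solve(A3 + 2.0**2 * np.eye(3), -F3)
print(c3)  # approximately [-1.  0.5  -0.333], i.e. (-1, 1/2, -1/3)
```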
2020-02-25 23:54:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9997441172599792, "perplexity": 318.0520923311131}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146176.73/warc/CC-MAIN-20200225233214-20200226023214-00151.warc.gz"}
https://itprospt.com/num/12272413/3-pedro-is-constructing-a-triangular-banner-with-a-blue
# 3) Pedro is constructing a triangular banner with a blue border on one side, a yellow border on one side, and a red border on the remaining side. The angle between t...

## Question

###### 3) Pedro is constructing a triangular banner with a blue border on one side, a yellow border on one side, and a red border on the remaining side. The angle between the yellow and red borders measures 65°, and the angle between the blue and yellow borders measures 60°. Which list shows the borders in order from least to greatest length?

- blue, red, yellow
- red, yellow, blue
- yellow, blue, red
- yellow, red, blue

12) Jose has six pieces of cardboard to make a triangle frame for a diorama. The lengths are 2 inches, 3 inches, 4 inches, 5 inches, 6 inches, and inches. Which combination below will form a triangle?

- 2, 3, 5
- 3, 4, 7
- 4, 5, 6
- 1, 4, 7
2022-06-25 19:44:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6984055638313293, "perplexity": 8841.487772495962}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103036099.6/warc/CC-MAIN-20220625190306-20220625220306-00087.warc.gz"}
https://www.univie.ac.at/projektservice-mathematik/e/?event=strobl18&page=talk-details&id=11
# Strobl18 Harmonic Analysis and Applications June 4-8, 2018 Strobl, AUSTRIA

### "Controlled fusion frames in Hilbert C*-modules"

#### Rashidi-Kouchi, Mehdi

Weighted and controlled frames in Hilbert spaces were introduced in [1] to improve the numerical efficiency of iterative algorithms for inverting the frame operator on abstract Hilbert spaces; however, they were used earlier in [2] for spherical wavelets. The concept of controlled frames has been extended and generalized to g-frames in [3] and to fusion frames in [4]. Hilbert $C^*$-modules form a wide category between Hilbert spaces and Banach spaces. Frames and their generalizations are defined in Hilbert $C^*$-modules, and some of their properties have been studied; see, for example, [5]. Here we investigate basic properties of controlled fusion frames in Hilbert $C^*$-modules. We also present a characterization of controlled fusion frames for Hilbert $C^*$-modules and show that any controlled fusion frame in a Hilbert $C^*$-module is a frame in the Hilbert $C^*$-module.

{\bf References:} \begin{itemize} \item[{[1]}] P. Balazs, J-P. Antoine and A. Grybos: Weighted and Controlled Frames, \emph{Int. J. Wavelets, Multiresolut. Inf. Process.}, 8(1) (2010), 109--132. \item[{[2]}] I. Bogdanova, P. Vandergheynst, J.P. Antoine, L. Jacques, M. Morvidone: Stereographic wavelet frames on the sphere, \emph{Applied Comput. Harmon. Anal.}, 19 (2005), 223--252. \item[{[3]}] A. Rahimi and A. Fereydooni: Controlled G-Frames and Their G-Multipliers in Hilbert spaces, \emph{An. St. Univ. Ovidius Constanta}, 21(2), (2013), 223--236. \item[{[4]}] A. Khosravi and K. Musazadeh: Controlled fusion frames, \emph{Methods of Functional Analysis and Topology}, 18(3), (2012), 256--265. \item[{[5]}] M. Rashidi-Kouchi, A. Nazai, M. Amini: On stability of g-frames and g-Riesz bases in Hilbert $C^*$-modules, \emph{Int. J. Wavelets Multiresolut. Inf. Process}, 12(6), (2014), 1--16. \end{itemize}

http://univie.ac.at/projektservice-mathematik/e/talks/Rashidi-Kouchi_2018-01_Rashidi-Kouchi_Mehdi.pdf
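For readers who have not met the underlying notion, one commonly used formulation of a controlled frame on an ordinary Hilbert space is sketched below. This paraphrase is added here as background, is not part of the abstract, and the exact class of admissible operators $C, C'$ varies between papers.

```latex
% Background sketch (not from the abstract): a common form of the
% (C,C')-controlled frame condition on a Hilbert space H.
Let $C, C'$ be bounded, positive, invertible operators on $H$.
A family $\{\psi_i\}_{i\in I}\subset H$ is a \emph{$(C,C')$-controlled frame}
if there are constants $0 < A \le B < \infty$ with
\[
  A\,\|f\|^2 \;\le\; \sum_{i\in I} \langle f, C\psi_i\rangle\,\langle C'\psi_i, f\rangle
  \;\le\; B\,\|f\|^2 \qquad \text{for all } f\in H .
\]
Controlled fusion frames replace the vectors $\psi_i$ by weighted closed subspaces,
and the talk above works in the setting where $H$ is a Hilbert $C^*$-module.
```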
2019-04-24 22:57:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7360924482345581, "perplexity": 4570.805991914123}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578663470.91/warc/CC-MAIN-20190424214335-20190425000335-00118.warc.gz"}
http://mathhelpforum.com/trigonometry/863-trig-help.html
# Math Help - Trig Help

1. ## Trig Help

Hi, I need help finding the sin of the angle alpha in standard position whose terminal side contains the given point (-4,-6). Here's what I have so far. r = sqrt{(-4)^2 + (-6)^2} = sqrt{52} x = -4, y = -6, r = sqrt{52} sin(alpha) = -6/sqrt(52) = (-6/sqrt(52))*(sqrt(52)/sqrt(52)) = -6(sqrt(13*2*2)) / 52 The answer ends up being sin(alpha) = 3sqrt(13)/13 And that is where I am getting lost... It's simpler algebra that I should already know, but I just don't remember. Here's an image that might be easier on the eyes... The box is around the answer I should get to. Thanks for any help.

2. You're good. Let us continue, sin(alpha) = -6/sqrt(52) Rationalize the denominator, = (-6/sqrt(52))*(sqrt(52)/sqrt(52)) = -6(sqrt(13*2*2)) / 52 Get the 2*2 out of the sqrt sign, = -6[2sqrt(13)] / 52 = -12[sqrt(13)] / 52 Reduce it to its lowest term, divide both numerator and denominator by 4, = -3[sqrt(13)] / 13
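A quick numerical sanity check of the simplification (a small sketch added here, not part of the original thread):

```python
import math

x, y = -4.0, -6.0
r = math.hypot(x, y)            # sqrt(52)
print(y / r)                    # approximately -0.83205
print(-3 * math.sqrt(13) / 13)  # approximately -0.83205, the same value
```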
2014-10-22 07:35:03
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8760115504264832, "perplexity": 3588.4735343589014}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507446231.28/warc/CC-MAIN-20141017005726-00211-ip-10-16-133-185.ec2.internal.warc.gz"}
https://brilliant.org/problems/more-fun-in-2016-part-17/
# More fun in 2016, Part 17

Algebra Level 5

How many real $$2016\times 2016$$ matrices $$A$$ are there such that $$A^{2016}\neq 0$$ while $$A^{2017}= 0$$, where $$0$$ represents the zero matrix? Enter 666 if you come to the conclusion that infinitely many such matrices $$A$$ exist.

Hint: Think about the minimal polynomial of $$A$$.
2016-10-22 18:07:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.911445677280426, "perplexity": 568.6818725181867}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719033.33/warc/CC-MAIN-20161020183839-00430-ip-10-171-6-4.ec2.internal.warc.gz"}
https://brilliant.org/problems/3-hard/
# #3 - Hard.

Geometry Level 4

A right-angled triangle has sides of 4 and 3 units. If the right angle is bisected, then find the distance between the orthocentres of the smaller triangles. Try to give a solution using Coordinate Geometry.
2017-03-25 15:29:17
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8012153506278992, "perplexity": 3121.5898049159705}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218188962.14/warc/CC-MAIN-20170322212948-00139-ip-10-233-31-227.ec2.internal.warc.gz"}
http://wastlund.blogspot.com/2018/03/perpetuum-mobile-drinking-game.html
## Wednesday, March 21, 2018 ### Perpetuum mobile drinking game Let's imagine we're in a pub in some sort of Pippi Långstrump universe where (1) money comes in the form of physical coins that you can move around on a table, (2) you never really run out of them, and (3) you can buy a round of beer for a single coin. Three people called A, B and C play a game: There is a six-sided die where three of the sides are labeled A, B, and C, and the other three are labeled Double. At the start of the game, each of the three players puts one coin on the table, and the die is thrown. If the side that comes up says Double, then each player doubles their amount of money on the table, and the die is thrown again. The doubling repeats any number of times, until one of the sides labeled with a player comes up. Whenever the die shows A, B, or C, that player loses and the other two split the pot evenly between them, concluding the round. That's really all, but there is a final little twist: If the round ends after only one roll, without any doubling, the pot will consist of an odd number (three) of coins. The two winners get their coins back, but instead of using smaller change to split the third coin, the tradition is that the loser buys the next round of beer using that coin. The game is zero-sum, meaning no money enters or leaves (unless we regard the twist as a proper part of the game). The game is also symmetric in the sense that all three players have the same status with respect to the rules. These are what we might call properties of fairness: What somebody wins, somebody else must have lost, and whatever somebody else wins or loses, you could have won or lost with exactly the same probability. Let's calculate the probabilities of winning or losing a certain amount, and see if those probabilities somehow reflect this "fairness". First let's find the probability that a player, say A, loses exactly one coin. This happens if A loses without any doubling, and the probability is $1/6$ since it happens precisely when the first roll of the die is an A. In order for A to instead win one coin, there has to be one double and the second roll must be a B or a C, so that somebody else loses 2 coins and A gets half of that. The probability of starting with a double is $1/2$, and the probability of the second roll being a B or a C is $1/3$, so the probability of winning one coin is $1/6$ too. It turns out that we also win or lose 2 coins with the same probability, and we start to see a pattern: Player A loses 2 coins of there is a Double and then an A, and wins 2 coins if there are two Doubles and then a B or a C. Both these scenarios have probability $1/12$. And to win or lose 4 coins, we have to win after 3 doubles or lose after 2, both of which have probability $1/24$. Winning or losing 8 coins requires winning after 4 doubles or losing after 3, probability $1/48$ each, and so on. Each of the scenarios has probability exactly half of the previous one. So the game has the property that winning $X$ coins is exactly as likely as losing $X$ coins. Perhaps we shouldn't be that surprised in view of that "fairness" we just discussed. But wait, we forgot about winning half a coin! That scenario didn't match up with any scenario where we lose half a coin, because we never do! So it seems that a given player, say A, will win half a coin with probability $1/3$ (when the first roll is a B or a C), and break even in the remaining cases. 
So perhaps after all it's when we include that final twist as a proper part of the game that it becomes "fair", and our wins and losses exactly balance out. But then the question is: Where does all that beer come from, if in the long run nobody is paying? --- I discussed this sort of thing and related stuff in 2007 with the German probabilist Nina Gantert and my colleagues Jeff Steif and Olle Häggström. Thanks to Jeff who asked me about it a few days ago, because I had almost forgotten. I can't recall how it started, but I remember that when we met, Nina said she was from Münster, and I told her that I had been there and visited the Panzermuseum. Strangely she had never heard of it, and it turns out there isn't one. The Panzermuseum is in the small town Munster just south of Hamburg, some 200 km or so from the city of Münster. That's how well you know where you are when you're on tour with LiTHe Blås :)
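As a quick sanity check of the probabilities worked out above, here is a small Monte Carlo sketch (added here, not part of the original post). With enough rounds, the empirical frequencies of player A winning half a coin, losing 1, winning 1, losing 2, and winning 2 come out close to 1/3, 1/6, 1/6, 1/12, and 1/12.

```python
import random
from collections import Counter

def net_for_A():
    """Player A's net outcome (in coins) for one round; the odd coin that
    buys the beer is counted as +0.5 for each winner, as in the post."""
    stake = 1.0
    while True:
        roll = random.choice(["A", "B", "C", "Double", "Double", "Double"])
        if roll == "Double":
            stake *= 2.0        # every player doubles their money on the table
        elif roll == "A":
            return -stake       # A loses everything A has put in
        else:
            return stake / 2.0  # B or C loses; A takes half of the lost stake

random.seed(2012)
rounds = 1_000_000
counts = Counter(net_for_A() for _ in range(rounds))
for outcome in (0.5, -1.0, 1.0, -2.0, 2.0):
    print(outcome, counts[outcome] / rounds)  # ~1/3, 1/6, 1/6, 1/12, 1/12
```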
2018-09-24 07:31:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6266990303993225, "perplexity": 453.41562539700607}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267160233.82/warc/CC-MAIN-20180924070508-20180924090908-00070.warc.gz"}
https://indico.cern.ch/event/294993/contributions/1655475/
# Phenomenology 2014 Symposium 5-7 May 2014 University of Pittsburgh US/Eastern timezone ## Distinguishing Flavor Non-universal Color-singlet and Color-octet Vector Resonances at the LHC 6 May 2014, 16:45 15m Benedum Hall G28 (University of Pittsburgh) ### Speaker Pawin Ittisamai (Michigan State University) ### Description Electrically-neutral massive color-singlet and color-octet vector bosons, common predictions of beyond the standard model physics, have the potential to be discovered as a resonance in a dijet channel at the LHC. A color-singlet resonance that has leptophobic couplings needs further investigation to be distinguished from the color-octet one. In previous work, we introduced a method for discriminating between the two kinds of resonance in the situation where their couplings are flavor-universal, using measurements of the dijet resonance mass, total decay width and production cross-section. Here, we describe an extension of that method to cover a more general and realistic scenario, in which the vector resonances could have flavor non-universal couplings, by incorporating measurements of the heavy-flavor decays of the resonance into the analysis. We present our analysis in a model-independent manner for a dijet resonance with mass $2.5-6.0 \,\mathrm{TeV}$ at the LHC with $\sqrt{s}=14\,\mathrm{TeV}$ and integrated luminosities $30,\,100,\,300$ and $1000\,\mathrm{fb}^{-1}$, where we found that our method is applicable in most scenarios. ### Primary authors Prof. Elizabeth H. Simmons (Michigan State University) Pawin Ittisamai (Michigan State University) R. Sekhar Chivukula (Michigan State University) Slides
2019-10-20 00:06:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7920570373535156, "perplexity": 4144.920192015571}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986700435.69/warc/CC-MAIN-20191019214624-20191020002124-00304.warc.gz"}
https://stats.stackexchange.com/questions/588608/binomial-distribution-and-common-sense-arent-adding-up/588609
# Binomial distribution and common sense aren't adding up

I've been trying to work through this in my head and can't figure out what I'm missing. Here's the problem: A 20-sided die is rolled 100 times and we count the number of 20's we roll. Well, it turns out we rolled zero 20's. That's odd! But how odd is it? My first instinct is to say the probability of not getting a single 20 in 100 rolls is $$\left(\frac{19}{20}\right)^{100} = 0.0059$$ That makes sense to me, but I wanted to reframe it in the context of the binomial distribution with n=100 and p=1/20. I set up my z test as: $$Z = \frac{X-np}{\sqrt{np(1-p)}} = \frac{0-100(\frac{1}{20})}{\sqrt{100(\frac{1}{20})(\frac{19}{20})}} = -2.294$$ which equates to a p-value of ~0.011 or 1.1%. That's a big difference between the two methods and I can't spot a mistake with either. Where'd I go wrong?

• The $p$-value is a tail probability, is based on an asymptotic approximation to the distribution of $X$, and is particularly approximate in the tail $X=0$. Sep 13 at 6:21
• It's not so much the binomial distribution and common sense not matching up, but the binomial distribution and the normal approximation not doing so. Sep 13 at 7:23
• An alternative approximation would be to use a Poisson distribution to suggest $e^{-100/20}\approx 0.0067$, which is closer even if not good. Sep 13 at 8:28

$${n\choose k} p^k (1-p)^{n-k} = 1 \times 1 \times \left(\frac{19}{20}\right)^{100}$$
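To put numbers on the comments above, here is a short sketch (not part of the original thread) comparing the exact binomial probability with the plain normal approximation from the question, a continuity-corrected version, and the Poisson approximation suggested in the last comment.

```python
from math import erf, exp, sqrt

n, p = 100, 1 / 20
mu, sigma = n * p, sqrt(n * p * (1 - p))

def normal_cdf(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

exact = (1 - p) ** n                        # P(X = 0), which is also P(X <= 0)
plain = normal_cdf((0 - mu) / sigma)        # z = -2.294, as in the question
corrected = normal_cdf((0.5 - mu) / sigma)  # with continuity correction
poisson = exp(-mu)                          # P(X = 0) under Poisson(5)

print(exact)      # ~0.0059
print(plain)      # ~0.0109
print(corrected)  # ~0.0195 (even further off; the normal tail is simply poor here)
print(poisson)    # ~0.0067
```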
2022-12-09 17:45:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 3, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7208926677703857, "perplexity": 290.940700495538}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711417.46/warc/CC-MAIN-20221209144722-20221209174722-00241.warc.gz"}
https://wiki.seg.org/index.php?title=Dictionary:Eikonal_equation&diff=prev&oldid=26222
# Difference between revisions of "Dictionary:Eikonal equation"

(ī kōn’ ∂l) A form of the wave equation for harmonic waves in which the local velocity ${\displaystyle V}$ is compared to a reference velocity ${\displaystyle V_{R}}$ (analogous to comparing a velocity to the speed of light in vacuum): ${\displaystyle \left(\nabla \phi \right)^{2}=\left({\frac {V}{V_{R}}}\right)^{2}=n^{2}}$. More commonly in geophysical literature, the eikonal equation is written in terms of medium velocity only, ${\displaystyle V(\mathbf {x} )}$, where ${\displaystyle \mathbf{x} = (x_1,x_2,x_3)}$, as
2021-05-15 18:05:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 4, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 1, "math_score": 0.7135590314865112, "perplexity": 1235.888393950705}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243990551.51/warc/CC-MAIN-20210515161657-20210515191657-00054.warc.gz"}
http://mathoverflow.net/revisions/56765/list
There are several examples in set theory; the three I mention are related so I will include them in a single answer rather than three.

1) Large cardinal notion. I have seen in print many times that there is no precise definition of what a large cardinal is, but I must disagree, since "weakly inaccessible cardinal" covers it. Of course, if you retreat to set theories without choice then there may be some room for discussion, but this is a technical point. People seem to mean something different when they say that "large cardinal" is not defined. It looks to me like they mean that the word should be used in reference to significant sign posts within the large cardinal hierarchy (such as "weakly compact", "strong", but not "the third Mahlo above the second measurable") and, since "significant" is not well defined, then... However, it seems clear that nowadays we are more interested in large cardinal notions rather than the large cardinals per se. To illustrate the difference, "$0^\sharp$ exists" is obviously a large cardinal notion, but I do not find it reasonable to call it (or $0^\sharp$) a large cardinal. And large cardinal notion is not yet a precisely defined concept. A very interesting approximation to such a notion is based on the hierarchy of inner model operators studied by Steel and others. But their meaningful study requires somewhat strong background assumptions, and so many of the large cardinal notions at the level of $L$ or "just beyond" do not seem to be properly covered under this umbrella.

2) The core model. This was mentioned by Henry Towsner. I do not think it is accurate that we were proving results about it without a precise definition. What happens is that all the results about it have additional assumptions beyond ZFC, and we would like to be able to remove them. More precisely, we cannot show its existence without additional assumptions, and these additional assumptions are also needed to establish its basic properties. The core model is intended to capture the "right analogue" of $L$ based on the background universe. If the universe does not have much large cardinal structure, this analogue is $L$ itself. If there are no measurable cardinals in inner models, the analogue is the Dodd-Jensen core model, and the name comes from their work. Etc. In each situation we know what broad features we expect the core model to have (this is the "not clearly defined" part). Once in each situation we formalize these broad features, we can proceed, and part of the problem is in showing its existence. Currently, we can only prove it under appropriate "anti-large cardinal assumptions", saying that the universe is not too large in some sense. One of the issues is that we want the core model to be a fine structural model, but we do not have a good inner model theory without anti-large cardinal assumptions. Another more serious issue is that as we climb through the large cardinal hierarchy, the properties we can expect of the core model become weaker. For example, if $0^\sharp$ does not exist, we have a full covering lemma. But this is not possible once we have measurables, due to Prikry forcing. We still have a version of it (weak covering), and this is one of the essential properties we expect. (There are additional technical issues related to correctness.) But it is fair to expect that as we continue developing inner model theory, we will find that our current notions are too restrictive.
As a technical punchline, currently the most promising approach to a general notion seems to be in terms of Sargsyan's hod-models. But it looks to me this will only take us as far as determinacy or Universal Baireness can go. 3) Definable sets of reals. We tend to say that descriptive set theory studies definable sets of reals as opposed to arbitrary such sets. This is a useful but not precise heuristic. It can be formalized in wildly different ways, depending of context. A first approximation to what we mean is "Borel", but this is too restrictive. Sometimes we use definability in terms of the projective hierarchy. Other times we say that a definable set is one that belongs to a natural model of ${\sf AD}^{+}$. But it is fair to say that these are just approximations to what we would really like to say.
2013-05-22 04:16:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7814118266105652, "perplexity": 294.7597505658989}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701281163/warc/CC-MAIN-20130516104801-00032-ip-10-60-113-184.ec2.internal.warc.gz"}
https://denisegaskins.com/2012/01/01/2012-mathematics-game/?replytocom=34358
# 2012 Mathematics Game photo by Creativity103 via flickr For our homeschool, January is the time to assess our progress and make a few New Semester’s Resolutions. This year, we resolve to challenge ourselves to more math puzzles. Would you like to join us? Pump up your mental muscles with the 2012 Mathematics Game! ## Rules of the Game Use the digits in the year 2012 to write mathematical expressions for the counting numbers 1 through 100. Bonus Rules You may use the overhead-bar (vinculum), dots, or brackets to mark a repeating decimal. You may use multifactorials: • n!! = a double factorial = the product of all integers from 1 to n that have the same parity (odd or even) as n. • n!!! = a triple factorial = the product of all integers from 1 to n that are equal to n mod 3 [Note to teachers: Math Forum modified their rules to allow double factorials, but as far as I know, they do not allow repeating decimals or triple factorials.] ## How To Play With only three distinct digits to work with this year, we will need every trick in the book to create variety in our numbers. Experiment with decimals, double-digit numbers, and factorials of all sorts. Remember that dividing (or using a negative exponent) creates the reciprocal of a fraction, which can flip the denominator up where it may be more helpful. Use the comments section below to share the numbers you find, but don’t spoil the game by telling us how you made them. You may give relatively cryptic hints, but be warned: Many teachers use this puzzle as a classroom assignment, and there will always be students looking for people to do their homework for them. • Do not post your solutions. I will delete them. There is no authoritative answer key for the year game, so we will rely on our collective wisdom to decide when we’re done. We’ve had some lively discussions the last few years. I’m looking forward to this year’s fun! ## Keeping Score As players report their game results below, I will keep a running tally of confirmed results (numbers found by two or more players). Today is Kitten’s birthday, however, so I won’t spend much time at my computer. Also, I’ll be traveling a lot this month, so this tally will lag a few days behind the results posted in the comments. Percent confirmed = 97%. Reported but not confirmed = 77, 92. Numbers we are still missing = 93. And if you would like to join me in the “extended edition” game… Middle school rules = 68%. Old Math Forum rules, no repeating decimals or multifactorials: 1-32, 34-44, 48-52, 58-65, 70, 72, 74, 80, 90, 94-95, 97-100. New Math Forum rules, confirmed = 77%. NOT Math Forum: 33, 55-57, 66-67, 69, 71, 73, 77-79, 81-84, 86-89, 91-92. Needed multi-digit numbers: 44, 67-68. Could NOT keep the digits in order: 29, 31, 33, 37, 39, 41, 44, 55, 59, 65, 67, 69, 71, 73, 76-78, 89, 91, 95. Math Forum will begin publishing student solutions after February 1, 2012. Remember, you may not submit answers with triple (or higher) factorials or repeating decimals to the Math Forum site. ## Clarifying the Do’s and Don’ts Finally, here are a few rules that players have found confusing in past years. These things ARE allowed: • $0! = 1$ . [See Dr. Math’s Why does 0 factorial equal 1?] • The only digits that you can use to build 2-or-more-digit numerals or decimals are the standard base-10 digits 2, 0, 1, 2. • Unary negatives count. That is, you may use a “-” sign to create a negative number. • You may use (n!)!, a nested factorial — a factorial of a factorial. Nested square roots are also allowed. 
• The multifactorial $n !^k$ = the product of all integers from 1 to n that are equal to n mod k. You may write the double factorial and triple factorial as !! and !!!, respectively, but for higher multifactorials BOTH n and k must be constructed from the year digits. These things are NOT allowed: • “0!” is not a digit, so it cannot be used to create a base-10 numeral. • The decimal point is not an operation that can be applied to other mathematical expressions: “.(0!)” does not make sense. • You may not use any exponent unless you create it from the digits 2, 0, 1, 2. You may not use a square function, but you may use “^2”. You may not use a cube function, but you may use “^(2+1)”. You may not use a reciprocal function, but you may use “^(-1)”. • You have to “hit” each number from 1 to 100 exactly, without rounding off or truncating decimals. You may not use the integer, floor, or ceiling functions. For more tips, check out this comment from the 2008 game. Heiner Marxen has compiled hints and results for past years (and for the related Four 4’s puzzle). Dave Rusin describes a related card game, Krypto, which is much like my Target Number game. And Alexander Bogomolny offers a great collection of similar puzzles on his Make An Identity page. Want to help your kids learn math? Claim your free 24-page problem-solving booklet, and you’ll be among the first to hear about new books, revisions, and sales or other promotions. ## 31 thoughts on “2012 Mathematics Game” 1. Great! The more, the merrier. Kitten has spent the morning playing with her slumber party friends, so I’ve had time to do a little math puzzling. I think the double factorial option will be very handy for Math Forum students — and since it’s based on odd and even numbers, it should be easy to explain. 2. John says: 1, 2, 3, 4, 5, 6, 8, 9, 10 3. I can confirm John’s numbers. I’ve been trying to think about this as my middle-school math club students will, and I don’t think they’ll have too much trouble with the numbers from 1-24, except 15. I’ve got all those under the old Math Forum rules, but 15 needs the new rule. (I also got them all with single digits and in order, but I don’t think my students will be comfortable enough with decimals and powers to do that.) 4. Hi, Climbing Gecko! It’s good to “see” you. 🙂 I can confirm your numbers (including 50 — did you get that, too?), though I didn’t get all of them the same way. In fact, the only one of those I got by starting with 50 was 52, but it’s fun to collect several different ways to do the numbers. I can think of two ways to get 50 with a leftover 2. One of them keeps the digits in order. 5. More from my quest for numbers my middle school students may be able to find: 25-27, 34-37, 39, 41, 44, 97-100. 6. nth_x says: Hi All. I’ve been using this with some of my high school classes to get them thinking “mathematically” again after the break…so I’ve had a lot of time to look at this. So far, I’ve got (or can confirm): 1-52, 60, 63-65, 72, 79-81, 83, 89-91, 95, 98-100 (68/100) I’ve got several students who have really had their imagination captured with this game, and are pushing me to check and verify new numbers several times a day 🙂 7. Sara says: I can confirm some of the above, including: 1 – 26, 29 – 33, 35 – 41, 43, 45, 47, 49 – 51 and 63 – 65. 8. Wow, nth_x, your classes have been busy! My co-op class doesn’t meet until mid-January, but I hope they enjoy the puzzle as much as yours. 
I can confirm all of nth_x and Sara’s numbers, and I’ll add the following: 59, 61-62, 70, 74, 94. 9. nth_x says: Here’s some more from me and/or my students: 53-55, 58, 62, 66-69, 74-76, 82, 96-97 10. Ah, you’ve passed me up now. I can confirm some of those, but I’m still missing 53 and 66-69. Still, I can add a few you didn’t list: 56, 71, 85. 11. Sara says: I can confirm 61 of the updated list of missings/unconfirmed. 12. I couldn’t sleep last night, so I lay on my pillow staring at the ceiling and solving numbers. I can confirm 53, 66, and 68-69. I’m pretty sure I thought of a way to get 67, too, but I can’t remember it this morning. 😦 And I found two new numbers: 57, 78. Edited to add: Aha! I remember 67. 13. I can confirm 56. I can also get 26 and 35 with Math Forum rules and the digits in order. 14. GT says: I can confirm 59, 70, 71, 78, and 94. I also have a solution for 73 (single digits, not in order, using double factorial and repeating decimal). 15. Hi, GT! I can confirm 73, which brings us up to an amazing 91% confirmed. Wow! I also figured out how to get 61 and 63 in order. 16. laura says: this is just way too hard and torture! & i have to do this for math credits! 17. laura says: ive only gotten 1-14 , 17-25, 32, 39-42, 51, 60, 98, 100 .. 18. Hi Yeargame Puzzlers, I administer the yeargame over at the Math Forum these days, and I wanted to write to say I *will* be accepting solutions with multifactorials this year. I was educated about them in early January and have updated the rules to include them. I love that mathematicians see a good idea and extend it as far as possible… If someone could help me understand how repeating decimals might be used, I’ll see if we can accept those answers as well. Thanks for playing! Max 19. Hi, Max! Thank you for stopping by. The repeating decimal is most useful as a way to access bigger numbers than are otherwise possible. For example, something ÷ .(1) — using brackets to indicate the repeating 1, since I can’t type a vinculum — gives me the equivalent of multiplying by 9. Handy! 20. Hmm… I’m a bit skeptical about repeating decimals. It does unlock some new numbers and is a great teachable moment. But “repeat” isn’t really an operation, or if it is it’s on the level of Int, Trunc, Rnd, etc. And it’s sort of like introducing “divide by 9” as a fair operation which is an odd one to include. But I could still be convinced. 7 years from now, in 2019, I definitely would stay away from repeating decimals since we have a 9 in there already. 21. I don’t think using repeating decimals is any more artificial than allowing a decimal point or multi-digit numbers. Neither of those are operations, either. And according to Wikipedia, some people disallow the square root symbol (in the Four 4’s Puzzle) because it adds an implied “2”. I prefer the use of repeating decimals over multfactorials, because I think repeating decimals are solidly within the standard prealgebra curriculum topics — but it’s a matter of taste, like preference in ice cream. Multifactorials feel to me like an artificial trick. Not that that has kept me from using them, but I avoid them when possible. 22. Back from out of town, I’m ready to update the game count. New numbers that need to be confirmed: 77, 84, 86-88, 92. Also still waiting for confirmation on 57 and 85. There is only ONE number I haven’t been able to make this year: 93. I added a new category: “Middle School Rules” are the same as the old Math Forum rules, without repeating decimals or multifactorials. 
My students are still struggling with regular decimals, and normal factorials are a new idea for them — I don’t want to confuse them further. Still, I’ve found 63 64 numbers that they can calculate, if they are persistent. 23. Lew says: In the spirit of Denise I wrote a Prolog program that allows repeating decimals but not multifactorials. It found an expression of depth 8 for 57. Of interest (to Chinese students) it solved 88 (maintaining the order: 2 0 1 2). It found solutions for all numbers from 0 to 100 except for 67 68 69 77 92 93. 24. Hi, Lew! I’m not sure what “an expression of depth 8” means, but I agree that 57 is a toughie. Your program found expressions for quite a few of the numbers I needed multifactorials for. That gives me some new puzzles to shoot for — finding the nonmultifactorial versions. 67 and 69 are possible without multifactorials, too. 25. MathMom says: I’m coming to the game late, but I have 28-31 solved with the old MS rules (without needing multi-digit numbers). 26. Thanks, MathMom! I guess I need to get back to it. My math class did find a solution for 30, which I forgot to note above, but I’ll have to play around with 28, 29, and 31… 27. Krona says: From where should i play this game? 1. You play the game in your own mind and on scratch paper. The challenge is to make as many of the numbers as you can figure out. You can ask questions in the comment section here, and see which numbers other people have made, and report any new numbers that you find. But the game itself is solitaire — just you and the math. This site uses Akismet to reduce spam. Learn how your comment data is processed.
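Lew's comment above mentions a Prolog search program; for anyone who wants to experiment, here is a much smaller Python sketch of the same brute-force idea (added here, not from the original post or its comments). It keeps the digits 2, 0, 1, 2 in order and only uses +, -, *, / and multi-digit concatenation, so it finds far fewer targets than the full rules allow; decimals, exponents, and factorials are left to the reader.

```python
from fractions import Fraction
from functools import lru_cache

DIGITS = (2, 0, 1, 2)

@lru_cache(maxsize=None)
def values(digits):
    """All exact values buildable from `digits` (kept in order) using
    +, -, *, / and multi-digit concatenation."""
    results = set()
    # read the whole group as one base-10 numeral, e.g. (2, 0) -> 20
    number = 0
    for d in digits:
        number = 10 * number + d
    results.add(Fraction(number))
    # or split the group in two and combine the halves with an operation
    for i in range(1, len(digits)):
        for a in values(digits[:i]):
            for b in values(digits[i:]):
                results.add(a + b)
                results.add(a - b)
                results.add(a * b)
                if b != 0:
                    results.add(a / b)
    return frozenset(results)

reachable = sorted({int(v) for v in values(DIGITS)
                    if v.denominator == 1 and 1 <= v <= 100})
print(reachable)
```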
2019-11-16 02:11:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4064405858516693, "perplexity": 1087.0857659781732}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668716.69/warc/CC-MAIN-20191116005339-20191116033339-00377.warc.gz"}
http://ieeexplore.ieee.org/xpl/tocresult.jsp?reload=true&isnumber=6287626
By Topic # IEEE Transactions on Microwave Theory and Techniques ## Filter Results Displaying Results 1 - 25 of 36 Publication Year: 2012, Page(s):C1 - C4 | PDF (164 KB) • ### IEEE Transactions on Microwave Theory and Techniques publication information Publication Year: 2012, Page(s): C2 | PDF (45 KB) • ### Finite-Element Eigenvalue Analysis of Propagating and Evanescent Modes in 3-D Periodic Structures Using Model-Order Reduction Publication Year: 2012, Page(s):2677 - 2683 Cited by:  Papers (4) | | PDF (1034 KB) | HTML Eigenvalue analysis of a periodic structure by the finite-element method gives its Floquet propagation constant at a given frequency. Using this method directly to find the dispersion curve is computationally expensive, particularly in 3-D, because a large matrix eigenproblem must be solved at each frequency. The cost can be lowered by applying model-order reduction. A full-size eigenproblem at on... View full abstract» • ### SPICE Lumped Circuit Subcell Model for the Discontinuous Galerkin Finite-Element Time-Domain Method Publication Year: 2012, Page(s):2684 - 2692 Cited by:  Papers (7) | | PDF (1612 KB) | HTML A SPICE lumped circuit subcell model is formulated within the discontinuous Galerkin finite-element time-domain (DGFETD) discretization of Maxwell's equations. A fourth-order exponential time difference (ETD) algorithm is used for circuits that lead to stiff systems. The ETD method reduces to a standard fourth-order Runge-Kutta (RK4) time-integration for nonstiff regions. A number of test cases, i... View full abstract» • ### MPIE/MoM Acceleration With a General-Purpose Graphics Processing Unit Publication Year: 2012, Page(s):2693 - 2701 Cited by:  Papers (5) | | PDF (1737 KB) | HTML In this paper, we describe an accelerated implementation of the Method of Moments (MoM). A framework is proposed, exploiting the graphics processing unit (GPU) computing power by means of the software platform compute unified device architecture (CUDA). The mixed-potential integral-equation formulation, applied to microstrip circuit modeling, is adopted, and both the impedance matrix computation a... View full abstract» • ### An Instrumental Variable Vector-Fitting Approach for Noisy Frequency Responses Publication Year: 2012, Page(s):2702 - 2712 Cited by:  Papers (2) | | PDF (2548 KB) | HTML This paper presents an efficient methodology to improve the convergence properties of vector fitting (VF) when the frequency data is contaminated by noise. The proposed algorithm uses an instrumental variable approach, which minimizes the biasing effect of the least squares solution caused by the noise of the data samples. These instruments are generated using the rational approximation of the pre... View full abstract» • ### Analytical Adjoint Sensitivity Formula for the Scattering Parameters of Metallic Structures Publication Year: 2012, Page(s):2713 - 2722 Cited by:  Papers (9) | | PDF (2077 KB) | HTML A novel sensitivity-analysis method is proposed to compute the S-parameter Jacobian with respect to metallic shape parameters. The formulation is analytical and is derived from Maxwell's equations directly. It is independent of the field solution method and the respective system matrix. It requires only the field solution on the object surface. The computation is a post-process and its over... 
View full abstract» • ### Numerical Stability and Dispersion Analysis of the Precise-Integration Time-Domain Method in Lossy Media Publication Year: 2012, Page(s):2723 - 2729 Cited by:  Papers (1) | | PDF (1396 KB) | HTML In this paper, both the numerical stability condition and dispersion relation of the precise-integration time-domain (PITD) method in lossy media are presented. It is found that the time step size of the PITD method is limited by both the spatial step size of the PITD method and the ratio of permittivity to conductivity. In numerical dispersion investigations, it is shown that: the numerical loss ... View full abstract» • ### Full-Wave Analysis of Dielectric-Loaded Cylindrical Waveguides and Cavities Using a New Four-Port Ring Network Publication Year: 2012, Page(s):2730 - 2740 Cited by:  Papers (6) | | PDF (2428 KB) | HTML In this paper, a full-wave method for the electromagnetic analysis of dielectric-loaded cylindrical and coaxial waveguides and cavities is developed. For this purpose, a new four-port ring network is proposed, and the mode-matching method is applied to calculate the generalized admittance matrix of this new structure. A number of analyses on dielectric-loaded waveguide structures and cavities have... View full abstract» • ### Exact and Closed-Form Cutoff Wavenumbers of Elliptical Dielectric Waveguides Publication Year: 2012, Page(s):2741 - 2751 Cited by:  Papers (2) | | PDF (3164 KB) | HTML The cutoff wavenumbers of the elliptical dielectric waveguide are calculated exactly and analytically. Two separate methods are used to solve this problem. The first method is based on the separation of variables technique using Mathieu functions and gives the exact cutoff wavenumbers. The system matrices of which the roots of their determinant should be determined are complicated because of the n... View full abstract» • ### Fe-Rich Ferromagnetic Wires for Mechanical-Stress Self-Sensing Materials Publication Year: 2012, Page(s):2752 - 2759 Cited by:  Papers (2) | | PDF (1580 KB) | HTML The possibility of using Fe-rich wires in mechanical stress self-sensing materials is investigated. To this end, a retrieval technique aimed to characterize the high-frequency magneto-impedance effect in ferromagnetic wires under mechanical stresses is proposed. The technique is based on the measurement of the wires inside a metallic rectangular waveguide, and it is validated through numerical sim... View full abstract» • ### Broadband 90$^{circ}$ Differential Phase Shifter Constructed Using a Pair of Multisection Radial Line Stubs Publication Year: 2012, Page(s):2760 - 2767 Cited by:  Papers (13) | | PDF (1358 KB) | HTML The current paper proposes a broadband 90° differential phase shifter using a pair of multisection radial transmission-line (TL) stubs. The scattering parameters of the differential phase shifter are calculated based on the radial TL theory to evaluate the differential phase shifter's performance. Global optimization is performed using the TL model followed by a local optimizatio... View full abstract» • ### A Modified Wilkinson Power Divider With Isolation Bandwidth Improvement Publication Year: 2012, Page(s):2768 - 2780 Cited by:  Papers (27) | | PDF (3410 KB) | HTML This paper proposes a novel modified Wilkinson power divider with wide isolation bandwidth. The isolation bandwidth can be extended by an additional isolation network (INW) in the circuits. 
An equal and an unequal 4-GHz modified Wilkinson power divider on FR4 with compact circuit sizes are designed and measured to verify the new design concept. The measurement results show the operation bandwidth ... View full abstract» • ### Design of Multiway Power Divider by Using Stepped-Impedance Transformers Publication Year: 2012, Page(s):2781 - 2790 Cited by:  Papers (14) | | PDF (2410 KB) | HTML In this paper, the design of multiway power dividers by interconnecting power dividers with fewer output ports is studied by transforming them into multisection stepped-impedance transformers. By using this approach, it is easy to design multiway power dividers with required equal ripples of input reflection (S11 ) within a wide passband. The interconnecting lines can be used as additional ... View full abstract» • ### A New Balanced-to-Balanced Power Divider/Combiner Publication Year: 2012, Page(s):2791 - 2798 Cited by:  Papers (38) | | PDF (2540 KB) | HTML In this paper, a new balanced-to-balanced power divider/combiner is proposed. By using matrix transformation, two three-port networks for the odd- and even-mode circuit models are deduced, based on the constraint rules of the mixed-mode S -parameters. In order to satisfy the two required scattering matrices simultaneously, the resistances of lumped elements, the characteristic impedances an... View full abstract» • ### Miniature Quasi-Lumped-Element Wideband Bandpass Filter at 0.5–2-GHz Band Using Multilayer Liquid Crystal Polymer Technology Publication Year: 2012, Page(s):2799 - 2807 Cited by:  Papers (16) | | PDF (1409 KB) | HTML Miniature wideband bandpass filters are proposed using multilayer liquid crystal polymer (LCP) technology to cover the very low-frequency band of 0.5-2 GHz. To reduce the filter size at such low frequencies, lumped-element theory is used for the filter design and a value extraction process is developed to accurately get the capacitive or inductive values of different multilayer microstrip quasi-lu... View full abstract» • ### Dual-Mode Ring Resonator Bandpass Filter With Asymmetric Inductive Coupling and Its Miniaturization Publication Year: 2012, Page(s):2808 - 2814 Cited by:  Papers (15) | | PDF (1631 KB) | HTML Dual-mode ring resonator filters are implemented with asymmetric inductive perturbation for creating transmission zeros on both sides of the passband. In analysis, dependence of the resonance modes and the zeros on positions and sizes of both the inductive and capacitive perturbations is investigated. Under certain conditions, the even- and odd-mode frequencies for a capacitively perturbed ring ar... View full abstract» • ### A Highly Reconfigurable Low-Power CMOS Directional Coupler Publication Year: 2012, Page(s):2815 - 2822 Cited by:  Papers (8)  |  Patents (1) | | PDF (2077 KB) | HTML This paper presents a highly reconfigurable, low-power, and compact directional coupler. The coupler uses varactors and novel active inductors to achieve wide tuning ranges of operating frequencies and coupling coefficients. The use of a low-pass circuit architecture with only two inductors minimizes chip area, power consumption, and noise. The coupler is implemented in a 0.13-μm CMOS proce... 
View full abstract» • ### A 1.1-V Regulator-Stabilized 21.4-GHz VCO and a 115% Frequency-Range Dynamic Divider for $K$ -Band Wireless Communication Publication Year: 2012, Page(s):2823 - 2832 Cited by:  Papers (8) | | PDF (2013 KB) | HTML A 21.4-GHz 1.1-V regulator-stabilized voltage-controlled oscillator (VCO) with a dual-transformer configuration and a 115% frequency-range dynamic divider-both based on 0.18-μm SiGe BiCMOS technology-were developed. As for the VCO, the combination of two types of transformers, which exhibit high input impedance and capacitive-input impedance, respectively, provides both wide frequency-tunin... View full abstract» • ### Wideband Inductorless Balun-LNA Employing Feedback for Low-Power Low-Voltage Applications Publication Year: 2012, Page(s):2833 - 2842 Cited by:  Papers (23)  |  Patents (1) | | PDF (2220 KB) | HTML A wideband inductorless low-noise-amplifier (LNA) with single-to-differential conversion for multistandard radio applications is proposed. Noise-suppressed current-mirror-based biasing is utilized to ensure stable operation under process, voltage, and temperature variations. The inherent gain of the common-source (CS) stage is re-used to boost the trans-conductance of the common-gate (CG) stage, a... View full abstract» • ### A Precise Decibel-Linear Programmable Gain Amplifier Using a Constant Current-Density Function Publication Year: 2012, Page(s):2843 - 2850 Cited by:  Papers (12) | | PDF (1832 KB) | HTML In this paper, a compensation technique for realizing a precise decibel-linear CMOS programmable gain amplifier (PGA) is described. The proposed PGA, employing an auxiliary pair, not only retains a constant current density but also offers a gain-independent bandwidth (BW). For verification, a compact PGA (0.1 mm2) is fabricated using a 0.13-μm CMOS process and measured. The measu... View full abstract» • ### A Highly Linear and Efficient CMOS RF Power Amplifier With a 2-D Circuit Synthesis Technique Publication Year: 2012, Page(s):2851 - 2862 Cited by:  Papers (1) | | PDF (2532 KB) | HTML A 2-D circuit synthesis technique (2DCST) is introduced that simultaneously linearizes the AM-AM and AM-PM distortions of CMOS RF power amplifiers (PAs). A class-AB nMOS RF PA fabricated in a 0.18-μm CMOS process is reported. With a WCDMA signal, the amplifier achieved 41.6% power-added efficiency (PAE) with -33-dBc single-adjacent channel power ratio (ACPR1) and 38.5% PAE with -40-dBc ACPR... View full abstract» • ### A Transformer-Less Load-Modulated (TLLM) Architecture for Efficient Wideband Power Amplifiers Publication Year: 2012, Page(s):2863 - 2874 Cited by:  Papers (24)  |  Patents (1) | | PDF (2015 KB) | HTML An architecture and design procedure for transformer-less load-modulated power amplifiers (PAs) having high efficiency at power back-off is presented. This architecture utilizes a comparable load modulation concept as in the Doherty PA; however, contrary to the Doherty PA, it neither requires an output impedance transformer, nor offset lines, which are the main limiting factors in designing wideba... View full abstract» • ### Mitigation of Bandwidth Limitation in Wireless Doherty Amplifiers With Substantial Bandwidth Enhancement Using Digital Techniques Publication Year: 2012, Page(s):2875 - 2885 Cited by:  Papers (24) | | PDF (2691 KB) | HTML This paper proposes a new method for extending the bandwidth of Doherty power amplifiers (PAs) in the digital domain. 
The bandwidth enhancement is achieved through a frequency-selective pre-compensation mechanism that is derived to prevent the efficiency degradation that naturally occurs as the frequency of operation deviates from the center frequency. A methodical analysis of the frequency respon... View full abstract» • ### Microwave Chemical Sensing at Room Temperature Using an Overmoded Waveguide Design Publication Year: 2012, Page(s):2886 - 2893 Cited by:  Papers (2) | | PDF (1093 KB) | HTML Microwave spectrometers have unique advantages in the ability to determine high-resolution features that are specific to a given chemical. Very sharp lines which correspond to quantum states of the chemical allow for unique identification of the chemical. Recent advances have shown the possibility of room-temperature microwave spectroscopy analysis in which the data are collected in a short amount... View full abstract» ## Aims & Scope The IEEE Transactions on Microwave Theory and Techniques focuses on that part of engineering and theory associated with microwave/millimeter-wave components, devices, circuits, and systems involving the generation, modulation, demodulation, control, transmission, and detection of microwave signals. This includes scientific, technical, and industrial, activities. Microwave theory and techniques relates to electromagnetic waves usually in the frequency region between a few MHz and a THz; other spectral regions and wave types are included within the scope of the Society whenever basic microwave theory and techniques can yield useful results. Generally, this occurs in the theory of wave propagation in structures with dimensions comparable to a wavelength, and in the related techniques for analysis and design.. Full Aims & Scope ## Meet Our Editors Editor-in-Chief Luca Perregrini [email protected] Editor-in-Chief Jose Carlos Pedro [email protected]
2017-06-28 09:16:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2638216018676758, "perplexity": 7804.427745758406}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128323588.51/warc/CC-MAIN-20170628083538-20170628103538-00159.warc.gz"}
https://intelligencemission.com/free-weekends-electricity-houston-free-electricity-meter-hack.html
Not one of the dozens of cult heroes has produced Free Power working model that has been independently tested and show to be over-unity in performance. They have swept up generations of naive believers who hang on their every word, including believing the reason that many of their inventions aren’t on the market is that “big oil” and Government agencies have destroyed their work or stolen their ideas. You’ll notice that every “free energy ” inventor dies Free Power mysterious death and that anything stated in official reports is bogus, according to the believers. You need Free Power solid main bearing and you need to fix the “drive” magnet/s in place to allow you to take measurements. With (or without shielding) you find the torque required to get two magnets in Free Power position to repel (or attract) is EXACTLY the same as the torque when they’re in Free Power position to actually repel (or attract). I’m not asking you to believe me but if you don’t take the measurements you’ll never understand the whole reason why I have my stance. Mumetal is Free Power zinc alloy that is effective in the sheilding of magnetic and electro magnetic fields. Only just heard about it myself couple of days ago. According to the company that makes it and other emf sheilding barriers there is Free Power better product out there called magnet sheild specifically for stationary magnetic fields. Should have the info on that in Free Power few hours im hoping when they get back to me. Hey Free Power, believe me i am not giving up. I have just hit Free Power point where i can not seem to improve and perfect my motor. It runs but not the way i want it to and i think Free Power big part of it is my shielding thats why i have been asking about shielding. I have never heard of mumetal. What is it? I have looked into the electro mag over unity stuff to but my feelings on that, at least for me is that it would be cheeting on the total magnetic motor. Your basicaly going back to the electric motor. As of right now i am looking into some info on magnets and if my thinking is correct we might be making these motors wrong. You can look at the question i just asked Free Electricity on magnets and see if you can come up with any answers, iam looking into it my self. Ex FBI regional director, Free Electricity Free Energy, Free Power former regional FBI director, created Free Power lot of awareness about ritualistic abuse among the global elite. It goes into satanism, pedophilia, and child sex trafficking. Free energy Free Electricity Free Electricity is Free Power former Marine, CIA case Free Power and the co-founder of the US Marine Corps Intelligence Activity has also been quite active on this issue, as have many before him. He is part of Free Power group that formed the International Tribunal for Natural Free Power (ITNJ), which has been quite active in addressing this problem. Here is Free Power list of the ITNJs commissioners, and here’s Free Power list of their advocates. Why? Because I didn’t have the correct angle or distance. It did, however, start to move on its own. I made Free Power comment about that even pointing out it was going the opposite way, but that didn’t matter. This is Free Power video somebody made of Free Power completed unit. You’ll notice that he gives Free Power full view all around the unit and that there are no wires or other outside sources to move the core. Free Power, the question you had about shielding the magnetic field is answered here in the video. 
One of the newest materials for the shielding, or redirecting, of the magnetic field is mumetal. You can get neodymium magnets via eBay really cheaply. That way you won’t feel so bad when it doesn’t work. Regarding shielding – all Free Power shield does is reduce the magnetic strength. Nothing will works as Free Power shield to accomplish the impossible state whereby there is Free Power reduced repulsion as the magnets approach each other. There is Free Power lot of waffle on free energy sites about shielding, and it is all hogwash. Electric powered shielding works but the energy required is greater than the energy gain achieved. It is Free Power pointless exercise. Hey, one thing i have not seen in any of these posts is the subject of sheilding. The magnets will just attract to each other in-between the repel position and come to Free Power stop. You can not just drop the magnets into the holes and expect it to run smooth. Also i have not been able to find magnets of Free Power large size without paying for them with Free Power few body parts. I think magnets are way over priced but we can say that about everything now can’t we. If you can get them at Free Power good price let me know. I have had many as time went by get weak. I am Free Power machanic and i use magnets all the time to pick up stuff that i have dropped or to hold tools and i will have some that get to where they wont pick up any more, refridgerator mags get to where they fall off. Dc motors after time get so they don’t run as fast as they used to. I replaced the mags in Free Power car blower motor once and it ran like it was new. now i do not know about the neo’s but i know that mags do lose there power. The blower motor might lose it because of the heat, i don’t know but everything i have read and experienced says they do. So whats up with that? Hey Free Electricity, ok, i agree with what you are saying. There are alot of vid’s on the internet that show Free Power motor with all it’s mags strait and pointing right at each other and yes that will never run, it will do exactly what you say. It will repel as the mag comes around thus trying to stop it and push it back the way it came from. I realised that the force required to push two magnets together is the same (exactly) as the force that would be released as they move apart. Therefore there is no net gain. I’ll discuss shielding later. You can test this by measuring the torque required to bring two repelling magnets into contact. The torque you measure is what will be released when they do repel. The same applies for attracting magnets. The magnetizing energy used to make Free Power neodymium magnet is typically between Free Electricity and Free Power times the final strength of the magnet. Thus placing magnets of similar strength together (attracting or repelling) will not cause them to weaken measurably. Magnets in normal use lose about Free Power of their strength in Free energy years. Free energy websites quote all sorts of rubbish about magnets having energy. They don’t. So Free Power magnetic motor (if you want to build one) can use magnets in repelling or attracting states and it will not shorten their life. Magnets are damaged by very strong magnetic fields, severe mechanical knocks and being heated about their Curie temperature (when they cease to be magnets). Quote: “For everybody else that thinks Free Power magnetic motor is perpetual free energy , it’s not. 
The magnets have to be made and energized thus in Free Power sense it is Free Power power cell and that power cell will run down thus having to make and buy more. Not free energy. ” This is one of the great magnet misconceptions. Magnets do not release any energy to drive Free Power magnetic motor, the energy is not used up by Free Power magnetic motor running. Thinks about how long it takes to magnetise Free Power magnet. The very high current is applied for Free Power fraction of Free Power second. Yet inventors of magnetic motors then Free Electricity they draw out Free energy ’s of kilowatts for years out of Free Power set of magnets. The energy input to output figures are different by millions! A magnetic motor is not Free Power perpetual motion machine because it would have to get energy from somewhere and it certainly doesn’t come from the magnetisation process. And as no one has gotten one to run I think that confirms the various reasons I have outlined. Shielding. All shield does is reduce and redirect the filed. I see these wobbly magnetic motors and realise you are not setting yourselves up to learn. And solar panels are extremely inefficient. They only CONVERT Free Power small percentage of the energy that they collect. There are energies in the “vacuum” and “aether” that aren’t included in the input calculations of most machines by conventional math. The energy DOES come from Free Power source, but that source is ignored in their calculations. It can easily be quantified by subtracting the input from conventional sources from the total output of the machine. The difference is the ZPE taken in. I’m up for it and have been thinking on this idea since Free Electricity, i’m Free energy and now an engineer, my correction to this would be simple and mild. think instead of so many magnets (Free Power), use Free Electricity but have them designed not flat but slated making the magnets forever push off of each other, you would need some seriously strong magnets for any usable result but it should fix the problems and simplify the blueprints. Free Power. S. i don’t currently have the money to prototype this or i would have years ago. Impulsive gravitational energy absorbed and used by light weight small ball from the heavy ball due to gravitational amplification + standard gravity (Free Power. Free Electricity) ;as output Electricity (converted)= small loss of big ball due to Impulse resistance /back reactance + energy equivalent to go against standard gravity +fictional energy loss + Impulsive energy applied. ” I can’t disclose the whole concept to general public because we want to apply for patent:There are few diagrams relating to my idea, but i fear some one could copy. Please wait, untill I get patent so that we can disclose my engine’s whole concept. Free energy first, i intend to produce products only for domestic use and as Free Power camping accessory. Next you will need to have Free Power clamp style screw assembly on the top of the outside sections. This will allow you to adjust how close or far apart they are from the Free Energy. I simply used Free Power threaded rod with the same sized nuts on the top of the sections. It was Free Power little tricky to do, but I found that having Free Power square piece of aluminum going the length helped to stabilize the movement. Simply drill Free Power hole in the square piece that the threaded rod can go through. Of course you’ll need Free Power shaft big enough to support the Free Energy and one that will fit most generator heads. 
Of course you can always adapt it down if needed. I found that the best way to mount this was to have Free Power clamp style mount that uses bolts to hold it onto the Free Energy and Free Power “set bolt/screw” to hold it onto the shaft. That takes Free Power little hunting, but I did find something at Home Depot that works. If you’re handy enough you could create one yourself. Now mount the Free Energy on the shaft away from the outside sections if possible. This will keep it from pushing back and forth on you. Once you have it mounted you need to position it in between outside sections, Free Power tricky task. The magnets will cause the Free Energy to push back Free Power little as well as try to spin. The best way to do this is with some help or some rope. Why? Because you need to hold the Free Energy in place while tightening the set bolt/screw. By the way, do you know what an OHM is? It’s an Englishman’s.. OUSE. @Free energy Lassek There are tons of patents being made from the information on the internet but people are coming out with the information. Bedini patents everything that works but shares the information here for new entrepreneurs. The only thing not shared are part numbers. except for the electronic parts everything is home made. RPS differ with different parts. Even the transformers with Free Power different number of windings changes the RPFree Energy Different types of cores can make or break the unit working. I was told by patent infringer who changed one thing in Free Power patent and could create and sell almost the same thing. I consider that despicable but the federal government infringes on everything these days especially the democrats. By the way, do you know what an OHM is? It’s an Englishman’s.. OUSE. @Free energy Lassek There are tons of patents being made from the information on the internet but people are coming out with the information. Bedini patents everything that works but shares the information here for new entrepreneurs. The only thing not shared are part numbers. except for the electronic parts everything is home made. RPS differ with different parts. Even the transformers with Free Power different number of windings changes the RPFree Energy Different types of cores can make or break the unit working. I was told by patent infringer who changed one thing in Free Power patent and could create and sell almost the same thing. I consider that despicable but the federal government infringes on everything these days especially the democrats. They do so by helping to break chemical bonds in the reactant molecules (Figure Free Power. Free Electricity). By decreasing the activation energy needed, Free Power biochemical reaction can be initiated sooner and more easily than if the enzymes were not present. Indeed, enzymes play Free Power very large part in microbial metabolism. They facilitate each step along the metabolic pathway. As catalysts, enzymes reduce the reaction’s activation energy , which is the minimum free energy required for Free Power molecule to undergo Free Power specific reaction. In chemical reactions, molecules meet to form, stretch, or break chemical bonds. During this process, the energy in the system is maximized, and then is decreased to the energy level of the products. The amount of activation energy is the difference between the maximum energy and the energy of the products. This difference represents the energy barrier that must be overcome for Free Power chemical reaction to take place. 
Catalysts (in this case, microbial enzymes) speed up and increase the likelihood of Free Power reaction by reducing the amount of energy , i. e. the activation energy , needed for the reaction. Enzymes are usually quite specific. An enzyme is limited in the kinds of substrate that it will catalyze. Enzymes are usually named for the specific substrate that they act upon, ending in “-ase” (e. g. RNA polymerase is specific to the formation of RNA, but DNA will be blocked). Thus, the enzyme is Free Power protein catalyst that has an active site at which the catalysis occurs. The enzyme can bind Free Power limited number of substrate molecules. The binding site is specific, i. e. other compounds do not fit the specific three-dimensional shape and structure of the active site (analogous to Free Power specific key fitting Free Power specific lock).
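The activation-energy picture in the preceding paragraphs can be made quantitative with the Arrhenius equation, k = A·exp(−Ea/RT), which relates a rate constant to the energy barrier. The short sketch below is added purely for illustration and is not part of the text above; the barrier heights are made-up example numbers, and the point is only to show how strongly a modest reduction in Ea speeds up a reaction at room temperature.

```python
# Illustrative Arrhenius-equation sketch (assumed model and hypothetical
# example numbers, not taken from the passage above): how lowering the
# activation energy Ea changes the rate constant k = A * exp(-Ea / (R*T)).
import math

R = 8.314          # gas constant, J/(mol*K)
T = 298.0          # room temperature, K
A = 1.0            # pre-exponential factor (arbitrary units)

Ea_uncatalyzed = 75_000.0   # J/mol, hypothetical barrier without a catalyst
Ea_catalyzed   = 50_000.0   # J/mol, hypothetical barrier lowered by an enzyme

k_uncat = A * math.exp(-Ea_uncatalyzed / (R * T))
k_cat   = A * math.exp(-Ea_catalyzed / (R * T))

print(f"rate enhancement ~ {k_cat / k_uncat:.2e}")   # ~2.4e+04 for these numbers
```

For these assumed values, a 25 kJ/mol drop in the barrier gives roughly a 24,000-fold rate increase, which is why even partial catalysis matters so much.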
2021-01-22 19:31:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4045066833496094, "perplexity": 1248.1631341351033}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703531335.42/warc/CC-MAIN-20210122175527-20210122205527-00433.warc.gz"}
http://mathandmultimedia.com/category/software-tutorials/microsoft-mathematics-software-tutorials/page/2/
## Microsoft Mathematics Tutorial 4 – Plotting Graphs

This is the fourth tutorial of the Microsoft Mathematics Tutorial Series. In this tutorial, we learn how to plot 2- and 3-dimensional Cartesian graphs and 2-dimensional polar graphs. We also learn how to modify the settings of the Graphing window, such as the plotting range and proportional display.

1. Open Microsoft Mathematics.
2. Select the Graphing tab.
3. Under Equations and Functions, be sure that 2D and Cartesian are selected.
4. Type $y = x^2 + 2x - 3$ and $y = 3x$. Use the ^ symbol for exponents.
5. After the equations have been entered, click the Graph button.

## Microsoft Mathematics Tutorial 3 – Equations and Inequalities

This is the third tutorial of the Microsoft Mathematics Tutorial Series. The first tutorial is about the Introduction to the User Interface and the second tutorial is about Performing Basic Numerical Computations. Aside from being a scientific calculator, Microsoft Mathematics is also a computer algebra system. It is capable of simplifying or expanding expressions, solving equations and inequalities, and performing other algebraic manipulations. In this post, we discuss some of the most-used commands for solving the equations and inequalities encountered in high school mathematics. To try the examples below, open Microsoft Mathematics and be sure that you are on the Worksheet tab.

Solving Equations

To solve the equation $2x - 3 = 5$, type 2x - 3 = 5 in the Input text box and then press the ENTER/RETURN key on your keyboard. In the output of the command, notice that, first, the input is reformatted to solve(2x - 3 = 5, x) and, second, the solution is shown at the bottom. The Solution steps link, which can be expanded, is also shown. Microsoft Mathematics is capable of generating solution steps with complete explanations for some algebraic problems. Clicking the Solution steps link will show the complete steps.

## Microsoft Mathematics Tutorial 2 – Performing Basic Computations

This is the second tutorial in the Microsoft Mathematics Tutorial Series. In this tutorial, we learn how to perform basic mathematical computations using Microsoft Mathematics. There are three parts of Microsoft Mathematics used in numeric computations: the calculator pad, where the command buttons are located; the input box, where the commands are typed; and the output boxes, where the input, the step-by-step computations (if applicable), and the output of the computations are displayed. The input text box and the output boxes are located in the Worksheet tab. Open Microsoft Mathematics to perform the computations below.
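For readers who don't have Microsoft Mathematics installed, the same worked examples can be reproduced with the open-source SymPy library. The short sketch below is not part of the original tutorials; it simply solves the Tutorial 3 equation and finds where the two curves plotted in Tutorial 4 intersect.

```python
# Minimal SymPy sketch (added for illustration, not part of the Microsoft
# Mathematics tutorials): reproduce the tutorials' worked examples.
from sympy import symbols, Eq, solve

x = symbols('x')

# Tutorial 3 example: solve 2x - 3 = 5
print(solve(Eq(2*x - 3, 5), x))            # [4]

# Tutorial 4 example: where the two plotted curves meet, x^2 + 2x - 3 = 3x
print(solve(Eq(x**2 + 2*x - 3, 3*x), x))   # [1/2 - sqrt(13)/2, 1/2 + sqrt(13)/2]
```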
2018-11-13 18:36:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4459749460220337, "perplexity": 1833.2245756043612}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741340.11/warc/CC-MAIN-20181113173927-20181113195346-00005.warc.gz"}
https://www.physicsforums.com/threads/relation-between-inverse-trigonometric-function.746290/
# Relation between inverse trigonometric function 1. Mar 31, 2014 ### Jhenrique Digging in the wiki, I found this relation between 'arc-functions' and 'arc-functions-hyperbolics" $$\\ arcsinh(x)= i \arcsin(-ix) \\ arccosh(x)= i \arccos(+ix) \\ arctanh(x)= i \arctan(-ix)$$ https://it.wikipedia.org/wiki/Funzioni_iperboliche#Funzioni_iperboliche_di_argomento_complesso Happens that I never see in anywhere a relation between those functions. This relationship is correct? 2. Mar 31, 2014 ### D H Staff Emeritus The second one is incorrect, and the other two are obvious. 3. Apr 1, 2014 ### Jhenrique And which is the correct form for the second? Also, where can I find a full list (and correct)? 4. Apr 3, 2014 ### Jhenrique Hey man, you'll let me in the doubt!? 5. Apr 3, 2014 ### craigi cosh(ix) = cos(x) therefore: arcosh(x) = i arccos(x) It's in the link you provided in the first post. You transcribed it incorrectly, that is all. All you need to prove the others is: sinh(ix) = i sin(x) and tanh(ix) = i tan(x) Give it a go, if can't work it out - ask again. Last edited: Apr 4, 2014 6. Apr 3, 2014 ### Jhenrique So, following your ideia, I got: asin(x) = -i asinh(+i x) acos(x) = -i acosh( x) atan(x) = -i atanh(+i x) acot(x) = -i acoth(-i x) asec(x) = -i asech( x) acsc(x) = -i acsch(-i x) asinh(x) = -i asin(+i x) acosh(x) = -i acos( x) atanh(x) = -i atan(+i x) acoth(x) = -i acot(-i x) asech(x) = -i asec( x) acsch(x) = -i acsc(-i x) Correct? File size: 5 KB Views: 109 7. Apr 4, 2014 ### Jhenrique I started with sin(z) = -i sinh(iz) (1) and I applied the arcsin for get z arcsin(sin(z)) = z So I realized that z should appears in the right side of equation (1) and the way this happen is aplying -i arcsinh(ix) in the right side, so: arcsin(sin(z)) = - i arcsinh(i · -i sinh(iz)) = - i arcsinh(sinh(iz)) = -i·iz = z 8. Apr 4, 2014 ### craigi Check this one. Last edited: Apr 4, 2014 9. Apr 4, 2014 ### D H Staff Emeritus And that one is not correct with many definitions of inverse hyperbolic cosine and inverse cosine. Jhenrique, you are ignoring the problems of branch cuts. You have not even defined your definitions of the analytic continuations of the inverse functions. There are many choices; infinitely many. What choices have you made? 10. Apr 5, 2014 ### Jhenrique 1st I was trying undertand how create the relation between arc functions and arc functions hyp... x = cos(z) = cosh(iz) acosh(cosh(iz)) = -i acos(cos(z)) iz = -iz ..... hummm the formula worked for x = cosh(z) = cosh(iz) So, which are the correct relations? Last edited: Apr 5, 2014 11. Apr 5, 2014 ### Curious3141 No, this is obviously wrong. If you end up with a mathematical absurdity like $x = -x$ for nonzero $x$, you've made a mistake. If you want to go from $\cos z = \cosh iz$ to a relationship between the inverse circular and hyperbolic functions, here's one way to proceed: Put $iz = \cosh^{-1} x$, where $z = \frac{1}{i}\cosh^{-1} x = -i\cosh^{-1} x$. Then the RHS becomes $x$. The LHS is $\cos(-i\cosh^{-1}x)$. You now have $\cos(-i\cosh^{-1}x) = x$. Take the inverse cosine on both sides and you end up with $-i\cosh^{-1} x = \cos^{-1}(x)$ Multiply both sides by $i$ to get: $\cosh^{-1} x = i\cos^{-1}(x)$ which is the exact relationship mentioned in the Italian Wiki page. Last edited: Apr 5, 2014 12. Apr 10, 2014 ### Jhenrique So, how would be the complete list? Know someone interested in this topic? Share this thread via Reddit, Google+, Twitter, or Facebook
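The identities discussed in this thread are easy to spot-check numerically. The sketch below is mine, not from the thread; it uses Python's cmath module to evaluate both sides at a sample point, and illustrates the branch-cut caveat raised later in the thread for the inverse-cosine relation (the branch choices here are simply cmath's defaults).

```python
# Numerical spot-check of the identities discussed above (illustrative only;
# branch-cut conventions are those of Python's cmath module).
import cmath

x = 0.3 + 0.0j

# arcsinh(x) = i * arcsin(-i*x)
print(cmath.asinh(x), 1j * cmath.asin(-1j * x))   # both ~0.29567

# arctanh(x) = i * arctan(-i*x)
print(cmath.atanh(x), 1j * cmath.atan(-1j * x))   # both ~0.30952

# The arccosh relation depends on branch choices: for 0 < x < 1,
# cmath.acosh(x) is purely imaginary and equals 1j * cmath.acos(x).
print(cmath.acosh(x), 1j * cmath.acos(x))         # both ~1.26610j
```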
2017-05-26 23:06:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7768201231956482, "perplexity": 5176.485137081294}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608686.22/warc/CC-MAIN-20170526222659-20170527002659-00341.warc.gz"}
http://utsu-byo.biz/ewf0yc1/archive.php?1dae21=large-sample-theory-problem-solution
5.1 Externality Theory 5.2 Private-Sector Solutions to Negative Externalities 5.3 Public-Sector Remedies for Externalities 5.4 Distinctions Between Price and Quantity Approaches to Addressing Externalities 5.5 Conclusion 2. An independent testing agency was hired prior to the November 2010 election to study whether or not the work output is different for construction workers employed by the state and receiving prevailing wages versus construction workers in the private sector who are paid rates determined by the free market. The best way to explain how the Venn diagram works and what its formulas show is to give 2 or 3 circles Venn diagram examples and problems with solutions. Now, use the problem to set up an equation. random ariablesv with common distribution P ˘ i= +1 = p; P ˘ i= 1 = q:= 1 p; and F n= ˙(˘ j;0 j n), n 0, their natural ltration. 1. The book is intended as a first year graduate course in large sample theory for statisticians. Calculate the Bayes estimate for the improper prior ˇ( )=1; 0 < <1 Verify whether the Bayes estimate is consistent for . Social systems theories help social workers understand a wide array of social problems including family problems, child abuse, community dysfunction, as well as problems affecting individuals such as anxiety, low self-esteem, and relationship problems. Large sample theory, also called asymptotic theory, is used to approximate the distribution of an estimator when the sample size n is large. 6.825 Exercise Solutions, Decision Theory 1 Decision Theory I Dr. No has a patient who is very sick. Number Theory. THOMAS CALCULUS EARLY TRANSCENDENTAL&SSM PK (12th Edition) Edit edition. Findasolutionof621m+483n=k,whereisthegcdof621and483.k Solution: Buildinguponproblem1,weextendthetable: 1 10621 1 01483 3 1 −1 138 −3469 … Each person is a vertex, and a handshake with another person is an edge to that person. Examples of the "collection of equations" include algebraic equations, differential equations (e.g., the equations of motion and commonly wave equations), thermodynamic free energy in statistical mechanics, radiative transfer, and Hamiltonian operators in quantum mechanics. As graphical representations of complex or simple problems and questions, decision trees have an important role in business, in finance, in project management, and in any other areas. De ne the function f : (0;1) !R by f(x) = tan(ˇ(x 1=2)). Statistics Solutions can assist with determining the sample size / power analysis for your research study. The given \hard" problem is transformed into a \simple" equation. This theory is extremely useful if the exact sampling distribution of the estimator is complicated or unknown. The only treatment alternative is a risky operation. 2. Let’s look at five workplace-related problem-solution topics to get you started on your paper. The proof was heuristic, in that it depended on an … 1.1. Table of Contents. In probability theory, the theory of large deviations concerns the asymptotic behaviour of remote tails of sequences of probability distributions. If n= 1, zero edges are required, and 1(1 0)=2 = 0. In theory, at least, these problems may be resolved by establishing the fiutilitiesfl of the con-sequences, subjectively estimating the probabilities of the possible events, and selecting the act with the highest expected utility. 
Examples and exercises on Nash equilibrium in games in which each player has finitely many actions Procedure Check each action pair to see if it has the property that each player's action maximizes her payoff given the other players' actions. What's inside. 2 W. R. Hamilton and Thomas Kirkman devised mathematical formulations of the problem in the 1800s. Solution of exercise Confidence Interval Solutions Solution of exercise 1. Here is a compilation of top three accounting problems on cash flow statement with its relevant solutions. The patient is expected to live about 1 year if he survives the operation; however, the probability that the patient will not survive the operation is 0.3. Note that in the second identity, we show the number of elements in each set by the corresponding shaded area. However, you can tell this by directly looking at the graph. Figure 1.16 pictorially verifies the given identities. A linear programming problem is said to have unbounded solution if its solution can be made infinitely large without violating any of its constraints in the problem. Perturbation theory has been used in a large number of different settings in physics and applied mathematics. 1. Qualifying Exam Statistical Theory Problem Solutions August 2005 1. Thus in this experiment each time we sample, the probability of choosing a red ball is $\frac{30}{100}$, and we repeat this in $20$ independent trials. If one number is three times as large as another number and the smaller number is increased by 19, the result is 6 less than twice the larger number. Let’s explain decision tree with examples. Practice Problems SOLUTIONS . Problem 46E from Chapter 10.3: Theory and Examples (Continuation of Exercise 45. ) Problem-solving using Venn diagram is a widely used approach in many areas such as statistics, data science, business, set theory, math, logic and etc. Basic probability. EXTERNALITIES: PROBLEMS AND SOLUTIONS Market failure: A problem that violates one of the assump-tions of the 1st welfare theorem and causes the market econ … This is achieved with 4 large and 5 small buses. Prove that a complete graph with nvertices contains n(n 1)=2 edges. 26. It is known that 2,500 children, 7,000 adults and 500 elderly live in the neighborhood. Sogcd(621,483)=69. There are so many solved decision tree examples (real-life problems with solutions) that can be given to help you understand how decision tree diagram works. The coordinate (5,4) comes under the feasible region and is the minimum point of it. The rest will come soon. These compilations provide unique perspectives and applications you won't find anywhere else. 69. Let Xbe an arbitrary set; then there exists a set Y Df u2 W – g. Obviously, Y X, so 2P.X/by the Axiom of Power Set.If , then we have Y2 if and only if – [SeeExercise 3(a)]. The solution of the simple equation is transformed back to obtain the so-lution of the given problem. It has been used by graduate students in statistics, biostatistics, mathematics, and related fields. d) Monitor, advise and motivate the students with brilliant marks and praise. a) find the topic challenging the age group of your students; b) practice the new vocabulary, use different aids to support all types of learners; c) change group members to balance their group work, avoid close friends in the group. Problem 2: Prepare Cash Flow Statement of Suryan … An unbounded solution of a linear programming problem is a situation where objective function is infinite. First, circle what you must find— the larger number. 
A study is conducted in a neighborhood to better understand the types of recreational activities. Throughout the book there are many examples and exercises with solutions. Example 3.2 You are considering buying a ticket for a certain lottery. Problem This simple equation is solved by purely algebraic manipulations. We substituted the points (0,9), (0,8), and (5,4) in the equation to determine the minimum cost. Without further treatment, this patient will die in about 3 months. Assume that a complete graph with kvertices has k(k 1)=2. Denote S n:= P n j=1 ˘ j, n 0. Therefore, the smaller number is 17. The origins of the traveling salesman problem are obscure; it is mentioned in an 1832 manual for traveling salesman, which included example tours of 45 German cities but gave no mathematical consideration. Find solutions for your homework or get textbooks Search. (a) Assume a quadratic loss function. Take a guided, problem-solving based approach to learning Number Theory. Here any time we take a sample from the urn we put it back before the next sample (sampling with replacement). home / study / math / calculus / calculus solutions manuals / THOMAS CALCULUS EARLY TRANSCENDENTAL&SSM PK / 12th edition / chapter 10.3 / problem 46E. What's inside. Why the movements and transformations of information, just like those of a fluid, are law-governed. It is an ideal text for self study. A problem and its solution might look very different depending on whether you’re looking at it from an employee’s perspective or an employer’s perspective. Proof: This is easy to prove by induction. The standard deviation of a sample is generally designated by the Greek letter sigma (σ). Introduction; Factorization; GCD and LCM; Modular Arithmetic I; Modular Arithmetic II; Exploring Infinity; Number Bases. Solution. Problem 1: From the following summary of Cash Account of X Ltd., prepare Cash Flow Statement for the year ended 31st March 2007 in accordance with AS-3 using the direct method. Probability theory - Probability theory - The birthday problem: An entertaining example is to determine the probability that in a randomly selected group of n people at least two have the same birthday. Solution of exercise 3 Proof: See problem 2. Contents 1 Chapter 1 - Preliminaries 3 2 Chapter 2 - Basic Concepts 9 3 Chapter 3 - Infrastructure 20 4 Chapter 4 - Applications 29 5 Chapter 5 - Virtualization 38 6 Chapter 6 - Resource Management 49 7 Chapter 7 - Networking 56 8 Chapter 8 - Storage 65 9 Chapter 9 - Security 73 10 … Cloud Computing: Theory and Practice Solutions to Exercises and Problems Dan C. Marinescu July 8, 2013 1. Two examples will illustrate the nature of the problem and the method of resolution. Exercise Problems: Information Theory and Coding Prerequisite courses: Mathematical Methods for CS; Probability Overview and Historical Origins: Foundations and Uncertainty. What is the larger number? 3.1 Let ˘ j, j= 1;2;::: be i.i.d. Let X1, X2, ..., Xn be iid uniform U(0; ), 0 < <1. This is exactly the binomial experiment. Example 3. 100 individuals are selected at random and surveyed. In a previous large-sample treatment of sequential estimation (1), it was shown that in certain circumstances, when there was only one unknown parameter in the distribution of the observations, an estimation formula valid for fixed sample sizes remained valid when the sample size was determined by a sequential stopping rule. Home . Solution. Fig.1.16 - … It can also be defined as the square root of the variance present in the sample. 
Basic Statistical Large Sample Theory. The company does not have any cash equivalents. 2 It is believed that the general form was first studied by Karl Menger in … Figure 1.16 pictorially verifies the given identities. Solutions. A problem-solution essay about the workplace should keep its audience in mind. 4. Martingale Theory Problem set 3, with solutions Martingales The solutions of problems 1,2,3,4,5,6, and 11 are written down. You can change your ad preferences anytime. We use your LinkedIn profile and activity data to personalize ads and to show you more relevant ads. The coordinate ( 5,4 ) in the neighborhood 3.1 let ˘ j, n.! The asymptotic behaviour of remote tails of sequences of probability distributions 1,2,3,4,5,6 and. Is extremely large sample theory problem solution if the exact sampling distribution of the problem and the method of resolution formulations the. The types of recreational activities determining the sample Problems on cash flow statement with its Solutions! Cs ; probability Overview and Historical Origins: Foundations and Uncertainty problem is a compilation of top three Problems.: Information Theory and Coding Prerequisite courses: Mathematical Methods for CS ; probability Overview and Historical:... The Theory of large deviations concerns the asymptotic behaviour of remote tails of sequences of probability distributions this Theory extremely... Can tell this by directly looking at the graph I ; Modular Arithmetic II ; Exploring Infinity ; Bases. We take a guided, problem-solving based approach to learning number Theory function is.! Illustrate the nature of the problem and the method of resolution 500 elderly live in the sample size / analysis. Students in statistics, biostatistics, mathematics, and 11 are written down small.... Of a fluid, are law-governed is the minimum cost relevant Solutions you wo n't anywhere! ), 0 < < 1 II ; Exploring Infinity ; number Bases number. On your paper 0,9 ), ( 0,8 ), ( 0,8 ) and. Theory, the Theory of large deviations concerns the asymptotic behaviour of remote tails of sequences of probability distributions problem... Relevant Solutions behaviour of remote tails of sequences of probability distributions your homework or get textbooks Search and ( )... Conducted in a neighborhood to large sample theory problem solution understand the types of recreational activities find Solutions for your homework get. Complicated or unknown..., Xn be iid uniform U ( 0 ; ) 0... A large number of elements in each set by the Greek letter sigma ( σ ) 1 ) =2 contains... Dan C. Marinescu July 8, 2013 1 the types of recreational activities many and... A fluid, are law-governed to Exercises and Problems Dan C. Marinescu July 8, 2013 1 workplace keep... And is the minimum cost are law-governed comes under the feasible region and the... Keep its audience in mind ) Monitor, advise and motivate the students with brilliant marks praise! Related fields s explain Decision tree with examples 2 ;:: be i.i.d about! Applied mathematics set 3, with Solutions Martingales the Solutions of Problems,... Of probability distributions 3, with Solutions Martingales the Solutions of Problems 1,2,3,4,5,6 and! Of exercise 1 where objective function is infinite of recreational activities programming problem is vertex! Martingale Theory problem Solutions August 2005 1 and 11 are written down X2.... 500 elderly live in the equation to determine the minimum cost, Decision Theory I No. 
2022-08-12 15:52:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5929996967315674, "perplexity": 1282.719443240699}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571719.48/warc/CC-MAIN-20220812140019-20220812170019-00705.warc.gz"}
https://en-academic.com/dic.nsf/enwiki/2139814
# Equations for a falling body

Under normal earth-bound conditions, when objects move owing to a constant gravitational force, a set of dynamical equations describes the resultant trajectories. For example, Newton's law of universal gravitation simplifies to F = mg, where m is the mass of the body. This assumption is reasonable for objects falling to earth over the relatively short vertical distances of our everyday experience, but is very much untrue over larger distances, such as spacecraft trajectories. Please note that in this article any resistance from air (drag) is neglected.

History

Galileo was the first to demonstrate and then formulate these equations. He used a ramp to study rolling balls, the ramp slowing the acceleration enough to measure the time taken for the ball to roll a known distance. He measured elapsed time with a water clock, using an "extremely accurate balance" to measure the amount of water.

The equations ignore air resistance, which has a dramatic effect on objects falling an appreciable distance in air, causing them to quickly approach a terminal velocity. For example, a person jumping headfirst from an airplane will never exceed a speed of about 200 km/h (approximately 124 mph) due to air resistance. The effect of air resistance varies enormously depending on the size and geometry of the falling object; for example, the equations are hopelessly wrong for a feather, which has a low mass but offers a large resistance to the air. (In the absence of an atmosphere all objects fall at the same rate, as astronaut David Scott demonstrated by dropping a hammer and a feather on the surface of the Moon.) The equations also ignore the rotation of the Earth, failing to describe the Coriolis effect, for example. Nevertheless, they are usually accurate enough for dense and compact objects falling over heights not exceeding the tallest man-made structures.

Overview

Near the surface of the Earth, use g = 9.8 m/s² (metres per second squared, which might be thought of as "metres per second, per second") or 32 ft/s² ("feet per second per second"), approximately. For other planets, multiply g by the appropriate scaling factor. It is essential to use a coherent set of units for g, d, t and v. Assuming SI units, g is measured in metres per second squared, so d must be measured in metres, t in seconds and v in metres per second.

In all cases the body is assumed to start from rest, and air resistance is neglected; in other words, the equations assume constant acceleration. The two equations used below are the distance fallen from rest after time t, $d = \tfrac{1}{2}gt^2$, and the instantaneous velocity after time t, $v = gt$. Generally, in Earth's atmosphere, this means all results below will be quite inaccurate after only 5 seconds of fall, after which an object's velocity will be 49 m/s (9.8 m/s² × 5 s). On an airless body like the Moon, or a relatively airless body like Mars, with appropriate changes in g, these equations will yield accurate results over much longer times and much higher velocities.

Example: the first equation shows that, after one second, an object will have fallen a distance of 1/2 × 9.8 × 1² = 4.9 metres. After two seconds it will have fallen 1/2 × 9.8 × 2² = 19.6 metres; and so on.

NOTE for other astronomical bodies: For astronomical bodies other than Earth, and for short distances of fall at other than "ground" level, g in the above equations may be replaced by GM/r², where G is the gravitational constant, M is the mass of the astronomical body, and r is the radius from the falling object to the center of the body.
Values obtained are correct only in cases where the distance of fall d is small compared with r.

Gravitational potential

For any mass distribution there is a scalar field, the gravitational potential (a scalar potential), which is the gravitational potential energy per unit mass of a point mass, as a function of position. It is $-G \int \frac{1}{r}\, dm$, where the integral is taken over all mass. Minus its gradient is the gravity field itself, and minus its Laplacian is the divergence of the gravity field, which is everywhere equal to −4πG times the local density. Thus when outside masses the potential satisfies Laplace's equation (i.e., the potential is a harmonic function), and when inside masses the potential satisfies Poisson's equation with, as right-hand side, 4πG times the local density.

Acceleration relative to the rotating Earth

The acceleration measured on the rotating surface of the Earth is not quite the same as the acceleration that is measured for a free-falling body, because of the centrifugal force. In other words, the apparent acceleration in the rotating frame of reference is the total gravity vector minus a small vector toward the north-south axis of the Earth, corresponding to staying stationary in that frame of reference.

Notes

* See the works of Stillman Drake for a comprehensive study of Galileo and his times, the Scientific Revolution.
* Gravitation
* [http://www.gravitycalc.com Falling body equations calculator]
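A minimal sketch (not part of the original article) evaluating the constant-acceleration fall formulas quoted in the Overview section, d = ½gt² and v = gt; the printed numbers reproduce the worked example above.

```python
# Illustrative only: constant-acceleration free fall from rest, no air drag.
g = 9.8  # m/s^2 near the Earth's surface; replace with GM/r**2 for other bodies

def fall_distance(t):
    """Metres fallen after t seconds: d = 1/2 * g * t**2."""
    return 0.5 * g * t ** 2

def fall_speed(t):
    """Speed in m/s after t seconds: v = g * t."""
    return g * t

print(fall_distance(1), fall_distance(2))  # 4.9 19.6  (matches the example)
print(fall_speed(5))                       # 49.0 m/s after 5 seconds of fall
```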
2023-02-07 12:25:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.694320797920227, "perplexity": 1018.526309028535}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500456.61/warc/CC-MAIN-20230207102930-20230207132930-00655.warc.gz"}
https://study.com/academy/answer/suppose-f-x-to-100-and-g-x-to-0-with-g-x-0-as-x-to-2-determine-lim-limits-x-to-2-frac-f-x-g-x.html
# Suppose f(x) \to 100 and g(x) \to 0 with g(x) > 0 as x \to 2. Determine \lim\limits_{x \to 2}...

## Question:

Suppose $f(x) \to 100$ and $g(x) \to 0$ with $g(x) > 0$ as $x \to 2$. Determine $\lim\limits_{x \to 2} \dfrac{f(x)}{g(x)}$.

## Limit

The limit of a function is the value the function approaches as its independent variable approaches a given point. Continuity and discontinuity are judged by comparing this value with the function's value at the point, and derivatives and integrals are themselves defined through limits.

Given Data

• The first function satisfies $f(x) \to 100$ as $x \to 2$.
• The second function satisfies $g(x) \to 0$, with $g(x) > 0$, as $x \to 2$.

We must evaluate $\lim\limits_{x \to 2} \dfrac{f(x)}{g(x)}$.

As $x \to 2$, the numerator approaches $100$ while the denominator approaches $0$ through positive values, so the quotient grows without bound:

$$\lim\limits_{x \to 2} \dfrac{f(x)}{g(x)} = \dfrac{100}{0^{+}} = +\infty$$

Thus, the value of $\lim\limits_{x \to 2} \dfrac{f(x)}{g(x)}$ is $+\infty$.
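A quick symbolic check (not part of the original solution), using a hypothetical concrete pair of functions that satisfies the hypotheses: f(x) = 100 and g(x) = (x − 2)², which is positive near x = 2.

```python
# Illustrative check with SymPy; f(x) = 100 and g(x) = (x - 2)**2 are assumed
# stand-ins satisfying f -> 100 and g -> 0 with g > 0 as x -> 2.
import sympy as sp

x = sp.symbols('x')
print(sp.limit(100 / (x - 2)**2, x, 2))  # oo, i.e. the limit is +infinity
```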
2019-12-08 18:48:00
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000087022781372, "perplexity": 4881.659141310747}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540514475.44/warc/CC-MAIN-20191208174645-20191208202645-00113.warc.gz"}
https://docs.cupy.dev/en/stable/reference/generated/cupy.linalg.eigvalsh.html
# cupy.linalg.eigvalsh

cupy.linalg.eigvalsh(a, UPLO='L')[source]

Calculates eigenvalues of a symmetric matrix. This method calculates the eigenvalues of a given symmetric matrix. Note that cupy.linalg.eigh() calculates both eigenvalues and eigenvectors.

Note: Currently only 2-D matrices are supported.

Parameters:
a (cupy.ndarray) – A symmetric 2-D square matrix.
UPLO (str) – Select from 'L' or 'U'. It specifies which part of a is used. 'L' uses the lower triangular part of a, and 'U' uses the upper triangular part of a.

Returns: Eigenvalues as a vector.
Return type: cupy.ndarray

Warning: This function calls one or more cuSOLVER routine(s) which may yield invalid results if input conditions are not met. To detect these invalid results, you can set the linalg configuration to a value that is not 'ignore' in cupyx.errstate() or cupyx.seterr().
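A minimal usage sketch (not part of the original reference page), assuming CuPy is installed with a working CUDA device:

```python
# Illustrative example: eigenvalues of a small symmetric matrix on the GPU.
import cupy as cp

a = cp.array([[2.0, 1.0],
              [1.0, 2.0]])       # symmetric 2-D square matrix
w = cp.linalg.eigvalsh(a)        # uses the lower triangle by default (UPLO='L')
print(w)                         # [1. 3.]
```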
2020-07-15 07:48:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4541933238506317, "perplexity": 3527.7501903236675}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657163613.94/warc/CC-MAIN-20200715070409-20200715100409-00594.warc.gz"}
http://mathhelpforum.com/trigonometry/207907-cos-2x-sinxcosx.html
# Math Help - cos^2x=sinxcosx

1. ## cos^2x=sinxcosx

Hello

I am struggling with a simple trig equation, again. The question is to solve, on the interval $-\pi \leq x \leq \pi$, $\cos^2x = \sin x\cos x$

Using my graphing calculator I get four solutions. Dividing through by $\cos^2x$ I get: $\tan x = 1$, for which there are only two solutions. Two of the four I should have. I have come across this before, where decreasing the power reduces the number of solutions, which makes sense, although I don't understand it. I'd be grateful if someone would explain where I'm going wrong.

Thank you

2. ## Re: cos^2x=sinxcosx

Originally Posted by Furyan
The question is to solve, on the interval $-\pi \leq x \leq \pi$, $\cos^2(x) = \sin(x)\cos(x)$

That can be written as $\cos(x)(\cos(x)-\sin(x))=0$. Now solve these two $\cos(x)=0~\&~\cos(x)=\sin(x)~.$

3. ## Re: cos^2x=sinxcosx

Originally Posted by Furyan
Hello I am struggling with a simple trig equation, again. The question is to solve, on the interval $-\pi \leq x \leq \pi$, $\cos^2x = \sin x\cos x$ Using my graphing calculator I get four solutions. Dividing through by $\cos^2x$ I get: $\tan x = 1$, for which there are only two solutions. Two of the four I should have. I have come across this before, where decreasing the power reduces the number of solutions, which makes sense, although I don't understand it. I'd be grateful if someone would explain where I'm going wrong. Thank you

$\cos^2(x) = \sin(x)\cos(x)$

Factoring

$[\cos(x)] \cdot \cos(x) = [\cos(x)] \cdot \sin(x)$

Notice that there will be solutions when cos(x) = 0 and when cos(x) = sin(x)

-Dan

4. ## Re: cos^2x=sinxcosx

Hello Plato and topsquark,

Thank you both very much. I actually understand that now and will look out for it in the future.

5. ## Re: cos^2x=sinxcosx

Originally Posted by Furyan
Hello I am struggling with a simple trig equation, again. The question is to solve, on the interval $-\pi \leq x \leq \pi$, $\cos^2x = \sin x\cos x$ Using my graphing calculator I get four solutions. Dividing through by $\cos^2x$ I get: $\tan x = 1$, for which there are only two solutions. Two of the four I should have. I have come across this before, where decreasing the power reduces the number of solutions, which makes sense, although I don't understand it. I'd be grateful if someone would explain where I'm going wrong. Thank you

The reason you ended up with fewer solutions than you should have is because \displaystyle \begin{align*} \cos^2{x} \end{align*} CAN equal 0. You can not divide by 0.
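A quick numerical confirmation (not from the original thread) that the factored equation has four solutions on [-π, π], namely x = -3π/4, -π/2, π/4, π/2 (the two from tan x = 1 plus the two where cos x = 0):

```python
# Verify the four candidate solutions of cos^2(x) = sin(x)cos(x) on [-pi, pi].
import numpy as np

candidates = np.array([-3*np.pi/4, -np.pi/2, np.pi/4, np.pi/2])
print(np.allclose(np.cos(candidates)**2,
                  np.sin(candidates) * np.cos(candidates)))  # True
```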
2015-11-28 18:12:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 19, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9715082049369812, "perplexity": 615.5496987631886}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398453576.62/warc/CC-MAIN-20151124205413-00003-ip-10-71-132-137.ec2.internal.warc.gz"}
https://physics.stackexchange.com/questions/328036/w-vertex-factor-in-weak-interaction
# W vertex factor in weak interaction

I am puzzled by the $W^{\pm}$ vertex factor in weak interactions. In Griffiths' textbook "Introduction to Elementary Particles", the $W^{\pm}$ vertex factor is given by (10.92) on page 324: $$\frac{-ig_{w}}{2\sqrt{2}}\gamma^{\mu}(1 - \gamma^{5}) \tag{10.92}$$ However, in Srednicki's textbook "Quantum Field Theory", Problem 88.6 (on page 538) asks us to compute rates for the decay processes $W^{+}\rightarrow e^{+} \nu_{e}$, ... etc. The answer is given in the solutions manual. On page 146 of the solutions manual, it is stated

Consider a massive vector field $Z^{\mu}$ and a Dirac fermion field $\Psi$ with $\mathcal{L}_{int} = Z^{\mu}\overline{\Psi}(g_{v} - g_{A}\gamma^{5})\Psi$; then the amplitude for $Z\rightarrow e^{+}e^{-}$ is $\mathcal{T} = \varepsilon^{*\mu}\overline{v}_{2}\gamma_{\mu}(g_{v} - g_{A}\gamma^{5})u_{1}$. ... ... The amplitude is the same if $\overline{\Psi}$ is a different Dirac field that is unrelated to $\Psi$, so it also holds for a process like $W^{+}\rightarrow e^{+} \overline{\nu}$.

My question is: Why is there no $\varepsilon^{*\mu}$ in (10.92), whereas there is an $\varepsilon^{*\mu}$ (which seems to account for the external $W^{+}$) in the amplitude $\mathcal{T} = \varepsilon^{*\mu}\overline{v}_{2}\gamma_{\mu}(g_{v} - g_{A}\gamma^{5})u_{1}$ for the decay process $W^{+}\rightarrow e^{+} \overline{\nu}$?

Your equation $(10.92)$ indicates the value of a vertex, while $\mathcal T$ in Srednicki's book represents an amplitude. Basically, the vertex is one of the two building blocks of Feynman diagrams. A diagram is a multiplication of vertices and propagators, and becomes a complex amplitude for the process when you multiply that amplitude with the external particle factors, such as $\epsilon^\mu$.

An example: Feynman's QED vertex is given by $-ie\gamma^\mu$ (the sign depends on conventions, I'll follow Michele Maggiore's textbook). Now, let's take the typical first order contribution to the process $e^-\gamma\to e^-$: the relevant graph is [figure omitted] (in the figure, future and past are messed up; that graph is just for reference). Now, the graph is composed of a vertex and three external legs: the vertex has value $-ie\gamma^\mu$, and the amplitude can be written as $$\mathcal T=\epsilon_\mu(k)\bar u(p_1)(-i e\gamma^\mu)u(p_2),$$ where $k$ is the momentum of the photon, $p_1$ the momentum of the incoming electron and $p_2$ the momentum of the outgoing electron. The modulus of the amplitude squared, $|\mathcal T|^2$, is proportional to decay rates and cross sections (more generally, it enters the $S$-matrix), and is used to understand "how much" a process happens.

P.s.: as a nice addendum, note that, if you consider the process $\gamma\to e^+e^-$, you can use the same rotated graph, so you have to change some external-leg factors, obtaining $$\mathcal T=\epsilon_\mu(k)\bar {u}(p_1)(-ie\gamma^\mu)v(p_2).$$ If you calculate the square of this amplitude, you obtain a value that is different from zero. But, from elementary considerations about 4-momentum conservation, you know that this process can't happen, as there is no way to sum the timelike momenta of the matter particles to obtain a lightlike momentum. So the amplitude can be different from zero even if a process is not observed: in this case, the $\delta$ that expresses momentum conservation in the $S$ matrix takes care of that, and the process $\gamma\to e^+e^-$ cannot happen, even if the amplitude is non zero.
2019-07-17 15:19:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9046075344085693, "perplexity": 233.7046250374489}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525312.3/warc/CC-MAIN-20190717141631-20190717163631-00383.warc.gz"}
http://math.stackexchange.com/questions/191605/inequality-fracabcab-fracacbac-fracbcabc-geq-2
# Inequality. $\frac{ab+c}{a+b}+\frac{ac+b}{a+c}+\frac{bc+a}{b+c} \geq 2.$

Let $a,b,c$ be positive real numbers such that $a+b+c=1$. Prove that (using rearrangements inequalities; you can also view this exercise here, exercise number 3.1.8) $$\frac{ab+c}{a+b}+\frac{ac+b}{a+c}+\frac{bc+a}{b+c} \geq 2.$$ thanks.

@AlexBecker. I try using $\displaystyle \left(\frac{1}{b+c}, \frac{1}{a+c}, \frac{1}{a+b}\right)$ & $(a,b,c)$ and I suppose that $a \leq b \leq c$ and then when I applied the rearrangement inequality I added $\displaystyle \frac{bc}{b+c}+\frac{ab}{a+b}+\frac{ac}{a+c}$. –  Iuli Sep 5 '12 at 20:13

## 4 Answers

Observe $a + b = 1 - c$, $a + c = 1 - b$, and $b + c = 1 - a$, so the desired inequality is $$\frac{ab+c}{1 - c}+\frac{ac+b}{1 - b}+\frac{bc+a}{1 - a} \geq 2$$ Similarly, we substitute $c = 1 - a - b$, $b = 1 - a - c$, and $a = 1 - b - c$ in the numerator, and the desired inequality becomes $$\frac{ab + 1 - a - b }{1 - c}+\frac{ac+1 - a - c}{1 - b}+\frac{bc+ 1 - b - c}{1 - a} \geq 2$$ This can be rewritten as $$\frac{(1 - a)(1-b)}{1 - c}+\frac{(1 - a)(1 - c)}{1 - b}+\frac{(1 - b)(1 - c)}{1 - a} \geq 2$$ It's natural to let $A = 1 - a$, $B = 1 - b$, and $C = 1 - c$ here. So we want to show under the condition that $A + B + C = 2$ that we have the following. $$\frac{AB}{C}+\frac{AC}{B}+\frac{BC}{A} \geq 2 {\hspace 1 in}(*)$$ Without loss of generality, we may assume $A \leq B \leq C$. Then the rearrangement inequality says the left-hand side of $(*)$ is at least as large as what you get by any permutation of the denominators. So you have $$\frac{AB}{C}+\frac{AC}{B}+\frac{BC}{A} \geq \frac{AB}{A}+\frac{AC}{C}+\frac{BC}{B}$$ $$= B + A + C$$ $$= 2$$

Using the identity $\displaystyle\frac{ab+c}{a+b}=\frac{ab+c^2}{a+b}+c,$ it suffices to check that $$\sum_{cyc}\frac{ab}{a+b}+\sum_{cyc}\frac{c^2}{a+b}\geq a+b+c.$$ Note that the sequences $\{a^2,b^2,c^2\}$ and $\left\{\dfrac{1}{b+c},\dfrac{1}{c+a},\dfrac{1}{a+b}\right\}$ are similarly sorted, so that we obtain $$\sum_{cyc}\frac{c^2}{a+b}\geq\sum_{cyc}\frac{a^2}{a+b},$$ which, in accordance with $\dfrac{ab}{a+b}+\dfrac{a^2}{a+b}=a,$ leads us to our desired result. Equality occurs in the original inequality if and only if $a=b=c.$ $\Box$

One approach (which is probably not what you mean by «using rearrangements inequalities»...) is to find the minimum of the left hand side of your inequality subject to the condition that $a+b+c=1$, using the method of Lagrange multipliers. A straightforward computation —most importantly, a very uneventful one!— shows there is a unique extreme point, which has to be a minimum, and evaluating there shows that the extreme value is $2$. We invented computers to do this sort of thing for us: using Mathematica, I get

    In[27]:= f = (a b + c)/(a + b) + (a c + b)/(a + c) + (b c + a)/(b + c);

    In[28]:= sol = Solve[
      {D[f, a] == k, D[f, b] == k, D[f, c] == k, a + b + c == 1, a > 0, b > 0, c > 0},
      {a, b, c, k}
    ]

    Out[28]= {{a -> 1/3, b -> 1/3, c -> 1/3, k -> 1/2}}

    In[29]:= f /. sol[[1]]

    Out[29]= 2

I wonder what percentage of the inequalities in the link in the question can be obtained by the same approach, simply doing minimization/maximization using Lagrange multipliers... –  Mariano Suárez-Alvarez Sep 5 '12 at 20:18

The inequality is equivalent to $$\frac{(b+c)(c+a)}{a+b}+\frac{(a+c)(a+b)}{b+c}+\frac{(a+b)(b+c)}{a+c}\geq 2$$ Let $a+b=z,b+c=x,c+a=y$; then $x+y+z=2$, and the inequality becomes $\dfrac{xy}{z}+\dfrac{yz}{x}+\dfrac{zx}{y}\geq x+y+z$. Multiplying through by $xyz$, it is therefore enough to show $$x^{2}y^{2}+y^{2}z^{2}+z^{2}x^{2}\geq xyz(x+y+z)$$ This is true by AM-GM.

P/s: Sorry for my bad English.
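For what it's worth, here is a numerical counterpart of the Lagrange-multiplier answer above (not from the original thread), using SciPy's constrained minimizer; the minimum over the simplex comes out at a = b = c = 1/3 with value 2.

```python
# Numerically minimize the left-hand side subject to a + b + c = 1, a, b, c > 0.
import numpy as np
from scipy.optimize import minimize

def lhs(v):
    a, b, c = v
    return (a*b + c)/(a + b) + (a*c + b)/(a + c) + (b*c + a)/(b + c)

res = minimize(lhs, x0=[0.2, 0.3, 0.5],
               bounds=[(1e-9, 1.0)] * 3,
               constraints={"type": "eq", "fun": lambda v: np.sum(v) - 1.0})
print(res.x, res.fun)   # approximately [1/3, 1/3, 1/3] and 2.0
```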
2015-10-04 21:21:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9448373317718506, "perplexity": 287.69688385780853}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736676092.10/warc/CC-MAIN-20151001215756-00128-ip-10-137-6-227.ec2.internal.warc.gz"}
http://scipy.github.io/devdocs/generated/scipy.interpolate.approximate_taylor_polynomial.html
# scipy.interpolate.approximate_taylor_polynomial

scipy.interpolate.approximate_taylor_polynomial(f, x, degree, scale, order=None)[source]

Estimate the Taylor polynomial of f at x by polynomial fitting.

Parameters

f : callable
    The function whose Taylor polynomial is sought. Should accept a vector of x values.
x : scalar
    The point at which the polynomial is to be evaluated.
degree : int
    The degree of the Taylor polynomial.
scale : scalar
    The width of the interval to use to evaluate the Taylor polynomial. Function values spread over a range this wide are used to fit the polynomial. Must be chosen carefully.
order : int or None, optional
    The order of the polynomial to be used in the fitting; f will be evaluated order+1 times. If None, use degree.

Returns

p : poly1d instance
    The Taylor polynomial (translated to the origin, so that for example p(0)=f(x)).

Notes

The appropriate choice of "scale" is a trade-off; too large and the function differs from its Taylor polynomial too much to get a good answer, too small and round-off errors overwhelm the higher-order terms. The algorithm used becomes numerically unstable around order 30 even under ideal circumstances. Choosing order somewhat larger than degree may improve the higher-order terms.

Examples

We can calculate Taylor approximation polynomials of the sin function with various degrees:

>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from scipy.interpolate import approximate_taylor_polynomial
>>> x = np.linspace(-10.0, 10.0, num=100)
>>> plt.plot(x, np.sin(x), label="sin curve")
>>> for degree in np.arange(1, 15, step=2):
...     sin_taylor = approximate_taylor_polynomial(np.sin, 0, degree, 1,
...                                                order=degree + 2)
...     plt.plot(x, sin_taylor(x), label=f"degree={degree}")
>>> plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left')
>>> plt.show()
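As an additional sanity check (not part of the SciPy documentation), the degree-3 fit to exp at 0 should land close to the true Taylor coefficients 1, 1, 1/2, 1/6:

```python
# Compare the fitted polynomial's coefficients with exp's Taylor series at 0.
import numpy as np
from scipy.interpolate import approximate_taylor_polynomial

p = approximate_taylor_polynomial(np.exp, 0, 3, 1, order=5)
print(p.coeffs)   # roughly [0.1667, 0.5, 1.0, 1.0], highest degree first
```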
2020-11-29 17:32:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6744524240493774, "perplexity": 3747.92286540773}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141201836.36/warc/CC-MAIN-20201129153900-20201129183900-00183.warc.gz"}
https://dsp.stackexchange.com/questions/8890/how-to-compute-fundamental-frequency-from-a-list-of-overtones
# How to compute fundamental frequency from a list of overtones? Given a list of overtones (F1, F2, F3, etc), how do I compute the fundamental frequency? Can I do something like F2/F1=F1/F0? Is it the correct method to use? • It's the GCD of the overtones, but where did the overtones come from? If they are measured from an FFT, there will be error which ruins the GCD. Also for certain sources (plucked string instruments) there will be inharmonicity to consider, and what exactly you then mean by "fundamental". – endolith Apr 29 '13 at 17:44 The frequencies of the harmonics are integer multiples of the fundamental frequency $f_0$, i.e. $f_n = (n+1)f_0$. The fundamental frequency $f_0$ is the greatest common divisor of the harmonics $f_n$. If you are sure that there is no other unknown harmonic between two known harmonics, e.g. you know that you have the fourth and the fifth harmonic, then $f_0$ is of course the difference between the two. But if you just have a collection of harmonics and you don't know anything else about them, then you need to determine $f_0$ as the gcd of $f_n$. • I don't quite believe $f_n = n f_0$. What happens if $n=0$? $f_0 = 0. f_0 = 0$! :-) I think you mean $f_{n-1} = n f_0$ for $n=1\ldots$. – Peter K. Apr 28 '13 at 23:06 • $n=0$ is simply an unfortunate choice ;) OK, of course you're right, even though I also believe that the concept is so simple that even my sloppy (and incorrect!) notation won't cause any confusion. Anyway, thanks for clearing it up! – Matt L. Apr 29 '13 at 6:52
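A toy illustration (not from the original answers) of the GCD idea for exactly known, integer-valued harmonics; real measurements carry error, so in practice an approximate GCD with a tolerance is needed, as the first comment points out. The frequencies below are hypothetical.

```python
# Fundamental frequency as the greatest common divisor of exact harmonics.
from functools import reduce
from math import gcd

harmonics_hz = [440, 660, 880, 1100]   # hypothetical measured partials
f0 = reduce(gcd, harmonics_hz)
print(f0)                              # 220 Hz
```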
2020-01-25 20:54:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8299713134765625, "perplexity": 337.2908916077172}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251681412.74/warc/CC-MAIN-20200125191854-20200125221854-00499.warc.gz"}
https://blender.stackexchange.com/questions/86344/how-to-get-the-angular-position-of-a-the-rotating-object-of-a-motor-constraint-i
# How to get the angular position of the rotating object of a motor constraint in Python scripting during animation playback? (Blender 2.78)

I have a simple scene with a motor and its shaft on which a spool is attached. The animation works, the shaft is rotating and I can control its value from Python using:

bpy.data.objects['Constraint.motor'].rigid_body_constraint.motor_ang_target_velocity = 0.5

It must be possible to get the actual rotation (angular position) of the shaft, but when I do:

bpy.data.objects['Spool'].rotation_euler

it only gives me the Euler angles at keyframe 0, but not the actual value as the animation is being played. Is there a way to access this value?

The final goal is to simulate a motor encoder for a robotics simulation. Thanks!

bpy.data.objects['Spool'].matrix_world.to_euler('XYZ')
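The accepted one-liner reads the object's evaluated world-space rotation. A minimal sketch (mine, not from the answer) of wiring it into a per-frame readout to mimic an encoder; the object name 'Spool' and the choice of the Z axis are assumptions carried over from the question.

```python
import bpy

def report_spool_angle(scene):
    # Evaluated world-space rotation of the spool at the current frame.
    eul = bpy.data.objects['Spool'].matrix_world.to_euler('XYZ')
    print(scene.frame_current, eul.z)  # encoder-style angle readout, in radians

# Run on every frame change during playback (Blender 2.78 handler signature).
bpy.app.handlers.frame_change_post.append(report_spool_angle)
```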
2019-11-14 22:34:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2914387881755829, "perplexity": 989.7339228046825}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668539.45/warc/CC-MAIN-20191114205415-20191114233415-00554.warc.gz"}
https://quuxplusone.github.io/blog/2019/09/26/uglification-doesnt-stop-adl/
ADL can interfere even with uglified names

Back in September 2017, on libc++ review D37538, Eric Fiselier showed me the following piece of code.

    struct Incomplete;
    template<class T> struct Holder { T value; };

    void __private_foo(...) {}

    int main() {
        Holder<Incomplete> *p = nullptr;
        ::__private_foo(p);  // OK.
        __private_foo(p);    // Error: Incomplete is incomplete.
    }

Library writers know that you should never make an unqualified call to a function that your user might hijack via ADL. For example, if your algorithm invokes rotate(x, y, z) unqualified, you're inviting the user to provide their own customized rotate via ADL. However, the above snippet demonstrates an even worse situation!

Here, the name __private_foo is standing in for some STL helper whose name is reserved to the implementation namespace. It begins with two underscores, so we know that the user cannot legally provide their own customized __private_foo. So can we make an unqualified call to __private_foo?

No, we cannot! An unqualified call to __private_foo definitely will not find anything via ADL; but the compiler doesn't know that. The compiler must still go through the motions of building the lists of associated namespaces and associated entities for the ADL call. (For more, see "What is ADL?" (2019-04-26).) The argument type is Holder<Incomplete>*, which means that ADL must consider any __private_foo functions which are friends of Holder<Incomplete>. The compiler must instantiate Holder<Incomplete> in order to find out whether it has any friends named __private_foo. Instantiating Holder<Incomplete> gives a hard compiler error.

To repeat the punch line: When you make a qualified call to ::__private_foo(p), it works and calls the function you expected. When you make an unqualified call to __private_foo(p), for this particular type, it gives a hard compiler error — despite the fact that the user never attempted to provide an ADL version of __private_foo! Merely invoking ADL at all can cause hard errors, in situations like this.

The conclusion for many standard library functions is that you must namespace-qualify calls to helper functions, even if those helpers' names are uglified. Uglifying a name prevents users from actually ADL-overloading it; but it doesn't prevent hard errors in cases like this.

This is the source of at least one family of bugs in libc++, as of this writing. Godbolt:

    #include <algorithm>

    struct Incomplete;
    template<class T> struct Holder { T t; };

    int main()
    {
        using Elt = Holder<Incomplete>*;
        Elt a[100];
        Elt *p = a;
        return std::distance(p, p);
    }

Libraries other than libc++ are happy with this code: sizeof(Elt) is definitely known, so std::distance(p, p) should be well-defined. But libc++ pipes std::distance through a helper function __distance, whose name is properly uglified but improperly unqualified, and so we get a hard error:

    <source>:4:37: error: field has incomplete type 'Incomplete'
    template<class T> struct Holder { T t; };
                                        ^
    c++/v1/iterator:632:12: note: in instantiation of template class 'Holder<Incomplete>' requested here
        return __distance(__first, __last, typename iterator_traits<_InputIter>::iterator_category());
               ^
    <source>:11:10: note: in instantiation of function template specialization 'std::__1::distance<Holder<Incomplete> **>' requested here
        std::distance(p, p);
             ^

That __distance should have said _VSTD::__distance — and I'm sure that within a few days of this post, it will!

There's something else noteworthy here.
Consider that the definition of std::distance(p, p), even on libstdc++ or MSVC, must ultimately involve a subtraction of the form

    template<class _It>
    auto __distance(_It __first, _It __last, random_access_iterator_tag) {
        return __last - __first;
    }

Isn't this, also, an unqualified-call scenario? That is, shouldn't we be looking up candidates for operator- in the associated namespaces of type _It? Why doesn't this unqualified use of operator- run afoul of the same ADL trap?

Blame [over.match.oper]/1: "If no operand of an operator in an expression has a type that is a class or an enumeration, the operator is assumed to be a built-in operator and interpreted according to [expr.compound]."

That's right — when _It is Holder<Incomplete>**, the subtraction __last - __first is assumed to be a built-in operator! Built-in operators are not functions. (See also "Pointer comparisons with std::less: a horror story" (2019-01-20).) Therefore there is no function call and no name lookup; therefore there is no ADL; therefore there is no trap!

    SomeType t;
    SomeType *p;
    p - p;  // no ADL; the built-in operator is assumed

Again (Godbolt):

    namespace N {
        struct A { A(A *) {} };
        void operator<(A,A);
    }

    int main() {
        N::A *pa = nullptr;
        operator<(pa, pa);  // OK: ADL finds N::operator<
        pa < pa;            // OK: built-in operator, no ADL happens
    }

This is surprising. But it seems that without [over.match.oper]/1, std::distance would never have worked at all, at least not on any case involving an incomplete associated type.

Posted 2019-09-26
2019-12-14 05:35:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19319748878479004, "perplexity": 8678.139131850976}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540584491.89/warc/CC-MAIN-20191214042241-20191214070241-00436.warc.gz"}
http://perfectpuddle.blogspot.com/2014/05/planck-2014-liveblog-day-five-session-2.html
## Friday, 30 May 2014

### Planck 2014 Liveblog: Day Five Session 2

We come at last to the end of the conference. Last night was the banquet, so I regrettably missed the first session this morning. The second morning session looks to be related to leptonic physics.

11:00 am: Same-Sign Tetra-Leptons from Type II Seesaw, Eung Jin Chun

Type II Seesaw is the one where neutrino masses come from a scalar SU(2) triplet with a small VEV. Decay patterns of the doubly-charged scalar to two-lepton final states directly give information on the associated Yukawa couplings; as with the SM Higgs, neutrino masses are generated by a single Yukawa. This is an obvious collider probe.

A lot of discussion on various constraints on the model (low energy flavour, collider searches, Higgs widths, EWPO, vacuum stability). Main phenomenological feature: small mass splittings among the (non-SM-like-Higgs) scalars (less than the W mass).

The SS4L signal of the title comes from oscillations among the neutral triplet scalars, mediated by the (small) doublet-triplet mixing. The charge offset is carried by Ws that decay to jets. Cross sections are small but the (irreducible) backgrounds are essentially zero.

11:30 am: What does gravity do with axions?, Sacha Davidson

The title of this talk has changed a lot. Indeed, the topic has changed.

The core of the question is: can we distinguish WIMP CDM and axion CDM (using LSS)? Answer is ... maybe. The stress-energy tensor is different. But it is hard to be quantitative.

Assume inflation before the PQ phase transition. It follows that we have U(1) topological defects, leading to one PQ string per horizon at the PQ phase transition. These persist to the QCD phase transition, at which point mixing with axions triggers axion oscillations and the strings become cold particles.

12:00 pm: Reading low energy neutrino data with leptogenesis, Pasquale Di Bari

In Copenhagen, Pasquale was too loud for me to actually follow. Here, he is less painful than many speakers have been this week.

Can we probe LG with neutrino physics? The answer (for high-scale LG) was traditionally thought to be no. However, with no evidence for TeV LG and the measurement of θ13, should we reconsider? In particular, the moderately sized reactor angle makes it easier to measure CPV in the lepton sector.

The Planck upper limit on neutrino masses means we are approaching the regime where we can actually distinguish NH and IH (quasi-degenerate spectra becoming disfavoured).

Even the simplest seesaw models have too many parameters to say things easily. How to deal with this? Old idea: assume the flavour structure is unimportant and the spectrum hierarchical (so N2 to N1 decay dominates). This idea is still feasible.

One may be worried about a pre-existing asymmetry. However, based on the solar mass splitting, RH neutrinos will generically wash out "any" such asymmetry.
2018-09-26 01:31:20
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8737502694129944, "perplexity": 13673.087182468702}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267162809.73/warc/CC-MAIN-20180926002255-20180926022655-00313.warc.gz"}
https://www.givewell.org/international/technical/programs/surgery-to-repair-obstetric-fistula
# In a nutshell This page discusses surgery for the treatment of obstetric fistula. An obstetric fistula is an abnormal opening between the vagina and the bladder or rectum, typically caused by tissue death from prolonged obstructed labor. Obstetric fistula can lead to physical complications and poor psychosocial outcomes. Our preliminary estimate suggests that surgery to treat obstetric fistula has the potential to be as cost-effective as our priority programs. However, several major unanswered questions remain, especially regarding the total costs of fistula surgery and the long-term outcomes of fistula surgery. Our investigation of fistula surgery is ongoing. GiveWell and IDinsight are currently in discussions with charities that fund fistula surgery about ways in which we might work with them to improve their monitoring and to answer some of our outstanding questions. It is possible that we may review one or more of these charities for top charity status in the future. Published: June 2017 ## What is the problem? An obstetric (or gynecologic) fistula is an abnormal opening between the vagina and the bladder (vesicovaginal fistula) or rectum (rectovaginal fistula), typically caused by prolonged obstructed labor.1 A fistula forms when the sustained pressure of a fetus's presenting part (usually its head) against the mother's pelvic bone cuts off blood flow to soft tissues, which necrotize and form a hole between body cavities. Obstetric fistula causes continuous and uncontrollable leakage of urine and/or feces through the vagina, which can lead to physical complications and poor psychosocial and economic outcomes.2 • Physical complications of fistula can include:3 • Dermatological conditions • Unpleasant odor • Constipation • Psychosocial consequences can include:4 • Divorce and ostracism from familial and social activities • Depression and other psychological complications • Decreased economic outcomes • Additional complications associated with obstructed labor but not caused by fistula (and therefore not ameliorated by fistula surgery) can include:5 • Fetal loss and associated psychosocial consequences, such as mourning • Reproductive organ damage, such as uterine rupture • Amenorrhea and loss of fertility • Neurological damage resulting in weakness in the leg, limb contracture, and foot drop • Renal damage resulting in decreased kidney function • Vaginal stenosis and painful intercourse Obstetric fistula from prolonged labor typically does not occur in countries where women have access to obstetric care and emergency obstetric procedures (such as caesarean section to prevent prolonged labor) through developed health systems.6 Obstetric fistula can be considered one symptom of a larger obstructed labor injury complex.7 Our understanding is that living with obstetric fistula is highly detrimental to well-being. The Global Burden of Disease Study 2013 assigned vesicovaginal fistula a disability weight (a measure of the size of the negative impact of a fistula on a woman's life) of 0.342, similar to the disability weight assigned to moderate dementia (0.377) or the amputation of both arms without treatment (0.383).8 ## What is the program? 
Surgery to repair vesicovaginal or rectovaginal fistula is a complex procedure, and surgical method may vary depending on the characteristics of the injury and the experience of the surgeon.9 Fistula surgery generally involves making an incision in the vaginal mucosa around the fistula and the suture of tissue to cover the fistula in either a single or double layer.10 After surgery for vesicovaginal fistula, a transurethral drainage catheter is used for an average of 14 days and high fluid intake is advised. Patients are advised against sexual contact for three months to allow the tissues to heal.11 Postoperative care may also include social reintegration via counseling and life skills training.12 In some cases, surgery is not the most advisable method of fistula management. Some small vesicovaginal fistulas may close spontaneously if managed with catheter use.13 In some cases, the damage is extensive enough that surgery is unlikely to result in improved function, and fistula symptoms may be managed with urinary diversion.14 Organizations that support surgery to repair obstetric fistula include the Fistula Foundation, EngenderHealth's Fistula Care Plus program, the United Nations Population Fund (UNFPA), Hamlin Fistula Ethiopia, Worldwide Fistula Fund, and Operation Fistula, among others. These organizations conduct the following activities: • Identifying patients via community outreach efforts and referring and transporting these patients to health facilities for surgery.15 • Funding the training of fistula surgeons.16 • Providing health facilities with equipment needed to perform fistula surgeries,17 for example designing and funding the creation and distribution of fistula repair kits.18 • Funding fistula centers, hospitals, and other partners.19 • Operating fistula centers and hospitals.20 • Providing post-operative support, including physical care, counseling, social reintegration, and life skills training.21 • Preventing fistula by funding training of OB/GYNs and community health advocates, increasing awareness of fistula and access to family planning, and advocating for policy changes.23 • Researching ways to improve the quality of fistula surgery, for example piloting a pay-for-performance model of fistula surgery.24 • Improving the monitoring and evaluation of fistula surgery, for example by developing a tool to allow surgeons to centrally report data on patients and outcomes.25 ## Does the program work? Success of fistula surgery consists of two components: 1. Physical surgical success, measured by fistula closure as reported by the surgeon, continence at discharge from the hospital (for example, as measured by a dye test), and long-term continence. The limited available literature on surgical outcomes suggests average surgical success rates of approximately 86% for fistula closure and 70% for continence at discharge.26 We have found insufficient follow-up data to determine long-term continence rates and are uncertain about the degree to which continence at discharge is predictive of long-term continence. 2. Psychosocial life outcomes. We have not seen strong evidence that life outcomes are improved post-surgery, in part due to a lack of follow-up data in this area. Some weak evidence suggests that psychosocial life outcomes may not be entirely dependent on the physical outcome of the surgery.27 Some fistula centers provide post-surgical psychological care or reintegration training; we are uncertain about the effect of these programs on life outcomes of patients. 
As part of an evaluation of a fistula organization, we would examine monitoring of patient outcomes, including physical outcomes at discharge and other outcomes if available. Due to the paucity of data on long-term continence and psychosocial life outcomes, we find it likely that even after further investigation and engagement with fistula organizations, we would remain more uncertain about the outcomes of fistula surgery than we are about the outcomes of our current priority programs.

In a 2012 letter to GiveWell, which is cited in the DCP3: Essential Surgery, 2015 as its source for fistula cost data, the Fistula Foundation estimated the cost per surgery at $1,000.[28] We are uncertain about the accuracy of this estimate, though it seems to be supported by a small amount of additional cost data of uncertain quality.29 In order to more accurately estimate the cost per surgery, we would want to see estimates based on data from hospitals and centers where fistula surgery is performed.

## Is there room for more funding?

It seems plausible to us that there is room for more funding for fistula surgery globally. Given an estimated 1 million existing cases of obstetric fistula30 and the rough cost estimate of $1,000 per surgery, we estimate global capacity for fistula surgery funding on the order of $1 billion. While we do not have a comprehensive sense of the available global funding for fistula repair, it is our impression that the annual budgets of major funders of this work represent a small portion of this global funding need.31 However, it is plausible that funding is not the only constraint to providing more fistula surgeries, and that additional funding would therefore not necessarily lead to additional surgeries. Additionally, we expect that as fistula identification and management improves, remaining cases are increasingly the most difficult or expensive to identify and treat.

## Cost-effectiveness

We are highly uncertain about whether the cost-effectiveness of fistula surgery is competitive with that of our priority programs. We are highly uncertain about the full costs of fistula surgery, and about the effect of fistula surgery on life outcomes. We very roughly estimate the cost of a physically successful fistula surgery at $1,400.[32] If this estimate is roughly accurate, it is possible that fistula surgery could be competitively cost-effective with our priority programs. Our very preliminary cost-effectiveness model of fistula surgery illustrates the effect of individuals' moral valuation of the benefit of averting fistula burden compared to the benefits of other GiveWell top charities.33 Additional questions that may impact our cost-effectiveness estimate include the rate at which fistulas reopen after surgery, the rate of residual incontinence and its impact on life outcomes, and the rate of adverse effects of fistula surgery.

## Our process

We have spoken with representatives of several organizations that support fistula surgery, including the Fistula Foundation, EngenderHealth, Operation Fistula, and Hamlin Fistula.34 As part of our Incubation Grants program, GiveWell is partnering with IDinsight to support the identification or development of a GiveWell top charity focused on fistula surgery.

## Questions for further investigation

There are several major questions that we were not able to resolve in our review of the academic literature:

• How much does a fistula program cost per patient treated?
• Is there room for more funding to cause additional fistula surgeries?
• What monitoring and evaluation is collected by fistula programs? • How much does fistula closure and lack of incontinence impact a woman's life? In what percentage of cases, and to what extent, does fistula surgery reduce ostracization or otherwise cause major reductions in psychological distress? • Is fistula surgery successful at closing fistula and reducing incontinence in the long term? ## Sources Document Source Adler et al. 2013 Key Informant Method Source (archive) Adler et al. 2013 Prevalence Review Source (archive) Ahmed and Holtz 2007 Source (archive) Arrowsmith, Barone, and Ruminjo 2013 Source (archive) Arrowsmith, Hamlin, and Wall 1996 Source (archive) AusAID/USAID Review of Support to Hamlin Fistula Ethiopia 2013 Source (archive) DCP3: Essential Surgery, 2015 Source (archive) EngenderHealth 2012 "Estimating Costs to Provide Fistula Services in Nigeria and Ethiopia" Source (archive) EngenderHealth website, Fistula Source (archive) Fistula Foundation Annual Report 2015 Source (archive) Fistula Foundation, Letter to GiveWell 2012 Source GiveWell's non-verbatim summary of a conversation with Operation Fistula, May 3, 2016 Source GiveWell's preliminary cost-effectiveness model of fistula surgery Source Hamlin Fistula Ethiopia website, About Us Source (archive) Hancock 2009 Source (archive) Lombard et al. 2015 Source (archive) Operation Fistula website, GOFER Source (archive) Operation Fistula website, Pay-for-Performance to the Point-of-Care Source (archive) Salomon et al. 2015 Source (archive) UNFPA MHTF Annual Report 2015 Source (archive) UNFPA/EngenderHealth Obstetric Fistula Needs Assessment Report 2003 Source (archive) USAID 2017 Midterm evaluation of Fistula Care Plus Source (archive) Worldwide Fistula Fund website, Our Programs Source (archive) • 1. "A gynecologic fistula refers to an abnormal communication between the urinary tract or the gastrointestinal tract and the genital tract, produced by obstetric causes, usually prolonged and obstructed labor." DCP3: Essential Surgery, 2015, p. 95. • 2. "In prolonged labor, which frequently results in delivery of a stillborn, the bladder and/or rectal tissue is compressed between the pelvic bones and the fetal head, cutting off blood flow and causing ischemic pressure necrosis (Husain and others 2005). In the hours or days following such a prolonged labor, the fistula forms and leakage of urine, stool, or both appears." DCP3: Essential Surgery, 2015, p. 95. • 3. "Additional major complications can include reproductive organ damage, such as uterine rupture, amenorrhea, and uterine scarring resulting in secondary infertility; dermatological conditions, resulting in excoriations and infections; neurological damage, resulting in weakness in the leg and foot drop (Arrowsmith, Hamlin, and Wall 1996); and renal damage, resulting in decreased kidney function. Women also report genital soreness; painful intercourse; constipation; and unpleasant odor, despite frequent washing and pad changes (Turan, Johnson, and Polan 2007)." DCP3: Essential Surgery, 2015, p. 96. Some complications listed above are complications of the obstetric event that causes the fistula, whereas others are reversible physical complications of the fistula itself. • 4. "...the woman may be abandoned by her husband and family to live as a social outcast without the ability to earn a living (Wall and others 2002). 
In many cultures, the woman either blames herself or is blamed by the community for the fistula, which is seen as a mark of punishment for some wrong-doing (Johnson and others 2010). She endures social isolation, economic deprivation, and depression (Turan, Johnson, and Polan 2007; Weston and others 2011)." DCP3: Essential Surgery, 2015, p. 97. • 5. • "Additional major complications can include reproductive organ damage, such as uterine rupture, amenorrhea, and uterine scarring resulting in secondary infertility; dermatological conditions, resulting in excoriations and infections; neurological damage, resulting in weakness in the leg and foot drop (Arrowsmith, Hamlin, and Wall 1996); and renal damage, resulting in decreased kidney function. Women also report genital soreness; painful intercourse; constipation; and unpleasant odor, despite frequent washing and pad changes (Turan, Johnson, and Polan 2007)." DCP3: Essential Surgery, 2015, p. 96. Some complications listed above are complications of the obstetric event that causes the fistula, whereas others are reversible physical complications of the fistula itself. • See also Figure 6.1, DCP3: Essential Surgery, 2015, p. 96 which lists possible consequences of “Obstructed labor injury complex” including: fetal death, fistula formation, complex urological injury, vaginal scarring and stenosis, secondary infertility, musculoskeletal injury, foot drop, chronic skin irritation, offensive odor. • 6. "The advent of anesthesia and safe, effective surgical procedures for cesarean sections have made the occurrence of obstetric fistula a rare event in the developed world; when they do occur, they are typically due to a congenital anomaly, surgical complication, malignancy, or radiation damage." DCP3: Essential Surgery, 2015, p. 95. • 7. • "Arrowsmith and colleagues coined the phrase 'obstructed labor injury complex' to encompass the extent of physical and social injury caused by fistulas." Ahmed and Holtz 2007, p. S11, referring to Arrowsmith, Hamlin, and Wall 1996. • "The field injury that is produced by prolonged obstructed labor may result in multiple birth-related injuries in addition to (or instead of) a vesicovaginal fistula. Focusing simply on the 'hole' between the bladder and the vagina ignores the multifaceted nature of the injury that many of these patients have sustained. These injuries may include total urethral loss, stress incontinence, hydroureteronephrosis, renal failure, rectovaginal fistula formation, rectal atresia, anal sphincter incompetence, cervical destruction, amenorrhea, pelvic inflammatory disease, secondary infertility, vaginal stenosis, osteitis pubis, and foot-drop. In addition to their physical injuries, women who have experienced prolonged obstructed labor often develop serious social problems, including divorce, exclusion from religious activities, separation from their families, worsening poverty, malnutrition, and almost unendurable suffering." Arrowsmith, Hamlin, and Wall 1996, p. 568. • 8. Salomon et al. 2015, pp. e717, e718, e720. • 9. "The surgical approach can be vaginal, abdominal, or combined, based on the location of the fistula and the preference and experience of the surgeon. The vaginal route seems to be associated with less blood loss and pain (Chigbu and others 2006). However, the evidence on the difference in operative complications and speed of recovery is limited." DCP3: Essential Surgery, 2015, p. 102. • 10. 
"An incision is made over the vaginal mucosa all around the fistula about 3 millimeters away from the junction of the bladder (rectum in RVF [rectovaginal fistula]) and vaginal skin (epithelium). Lateral extension of the incision, at the 3:00 and 9:00 o’clock positions, is made bilaterally. These incisions over the vaginal mucosa should be just deep enough to cut only the vaginal mucosa. The bladder (rectum in RVF) should be mobilized adequately to avoid tension on the closure of the defect. Bladder or rectal muscle should be approximated, avoiding the bladder or rectal mucosa. The closure of bladder fistulas can be in either a single or a double layer based on individual preference. Closure of rectal fistula is preferable in two layers, to avoid rectal mucosal interposition between the sutures. In patients who had had a diverting colostomy and repair of an RVF, a dye test must be done to confirm success of repair before planning for colostomy closure." DCP3: Essential Surgery, 2015, p. 102. For details of surgical technique, see the textbook "Practical Obstetric Fistula Surgery", Hancock 2009, especially Chapter 6. • 11. "The main concern in VVF [vesicovaginal] patients in the postoperative period is the maintenance of free and continuous bladder drainage. High fluid intake is widely advised; women should be encouraged to drink four to five liters a day (Hancock 2009b) and the color of the urine should be watched as the indicator of the adequacy of hydration. A blocked catheter signals an emergency. Transurethral drainage catheters are generally kept for an average of 14 days (up to 21 days following new urethral reconstruction) and should be removed without clamping. Some suggest that postoperative catheterization for 10 days may be sufficient for less complicated cases of VVF repair (Nardos, Browning, and Member 2008). Women are advised not to resume sexual contact for three months to give adequate time for the tissues to heal." DCP3: Essential Surgery, 2015, p. 102. • 12. "For women who have lived with fistula for many years, reintegration into society involves redefinition of self and transition from being identified as filthy, dependent, and unworthy to being seen as clean, feminine, and active in family and community life. Thus, reintegration into family and community life is a major adjustment and goal after surgery. This need for reintegration requires that surgical programs dedicated to fistula repair consider and implement counseling for social integration and training in life skills to help these women return to gainful employment after repair. Most women live an agrarian lifestyle, and returning to farming is important to them. One paper identifies the most important factor helping them feel normal again is the ability to return to farming after surgical repair (Pope, Bangser, and Requejo 2011). However, most women felt that they needed more time after surgery to fully recover their strength; the authors recommend having an alternate non-labor-intensive form of income for the first year after repair before most women return to their routine work. The full reintegration of a patient postrepair should also include her sexual and reproductive health needs (Mselle and others 2012). Preoperative and postoperative counseling for 47 Eritrean fistula patients was shown to increase their self-esteem (Johnson and others 2010)." DCP3: Essential Surgery, 2015, pp. 102-103. • 13. 
"Women with bladder fistulas can sometimes be treated conservatively if the injury is recent and the hole is small. Continuous bladder drainage with Foley catheters for four to six weeks has been reported to result in the spontaneous closure of small fistulas with fresh edges in 15 percent to 20 percent of cases (Waaldijk 1994). However, the majority of VVFs [vesicovaginal fistulas] require surgical treatment." DCP3: Essential Surgery, 2015, p. 101. • 14. "In some cases, the damage to the urethra and bladder is so severe that conventional repair methods are not successful. In specialized centers, these patients are sometimes offered urinary diversion in which the ureters are implanted in the lower bowel (Morgan and others 2009)." DCP3: Essential Surgery, 2015, p. 102. • 15. • "Fistula Foundation funds patient outreach to educate communities about the condition and to help identify, refer, and transport women to life-changing treatment." Fistula Foundation Annual Report 2015, p. 4. • "WFF [Worldwide Fistula Fund] works to identify women who need fistula treatment and transports them to surgery performed by Expert Fistula Surgeons." Worldwide Fistula Fund website, Our Programs • "As obstetric fistula largely affects poorer, marginalized women and girls, often living in remote areas, it can be a challenge to identify them, either in health facilities or communities, and then to connect them to treatment. In 2015, UNFPA in Ethiopia supported the training of 240 health extension workers and 129 nurses, midwives and doctors in fistula case identification to strengthen referrals to surgical treatment. Other assistance helped the Ghana Health Services to develop a good practice document on fistula case identification and referral. It catalogues existing practices that have yielded promising results and will inform the establishment of a national fistula identification mechanism. In the Democratic Republic of the Congo, UNFPA partners with local public, private and civil society entities to raise awareness on fistula and connect women to treatment. Fistula survivors who have undergone treatment help identify other women with fistula in their communities, and assist them to seek medical care. Media and community outreach campaigns spread prevention and treatment messages, and in 2015 reached an estimated 100,000 people in one province." UNFPA MHTF Annual Report 2015, p. 44. • 16. • "A lack of trained surgeons throughout sub-Saharan Africa and Southeast Asia means that capacity to treat the growing backlog of fistula patients is limited. Compounding this challenge, no two fistulas are identical—it can take years of training for a single surgeon to be sufficiently prepared to treat a complex injury. To meet this need, Fistula Foundation funds a comprehensive fistula surgeon training program, directed by the International Federation of Gynecology and Obstetrics (FIGO)." Fistula Foundation Annual Report 2015, p. 4. • "At global, regional and national levels, UNFPA works with several partner organizations, such as EngenderHealth/Fistula Care Plus, Fistula Foundation, Freedom From Fistula Foundation, the International Society of Obstetric Fistula Surgeons, the International Federation of Gynecology and Obstetrics, and Operation Fistula to promote high-quality training in fistula surgical repair. 
At the national level, the MHTF endorses the training of surgeons in a standardized curriculum for fistula repair developed by the International Federation of Gynecology and Obstetrics, the International Society of Obstetric Fistula Surgeons, UNFPA, EngenderHealth, and the Royal College of Obstetricians and Gynecologists." UNFPA MHTF Annual Report 2015, p. 45. • 17. "Many facilities lack even the most basic equipment. Our partners have become accustomed to working in conditions that are less than ideal, performing surgery with aging equipment, or making do with tools that may not be the most appropriate for fistula surgery. We listen and respond to the needs of our partners and help provide support that will enable them to perform surgery in the safest environment possible." Fistula Foundation Annual Report 2015, p. 4. • 18. "In 2012, UNFPA, in partnership with expert fistula surgeons, designed kits with with all the necessary instruments and medical supplies for performing surgical repairs. In 2015, the MHTF supported the procurement of 568 kits for use at health facilities in 17 countries." UNFPA MHTF Annual Report 2015, p. 42. • 19. For example, see Fistula Foundation Annual Report 2015, p.5, "Fistula Foundation 2015 Partners": "The above is a list of all organizations that received 2015 grants from Fistula Foundation, and is not an exhaustive list of current partners." • 20. "Hamlin Fistula Ethiopia directs the work of the Addis Ababa Fistula Hospital, its five regional hospitals, the Hamlin College of Midwives and Desta Mender, a farm and training centre for long term patients." Hamlin Fistula Ethiopia website, About Us • 21. • "WFF offers Recovery and Ongoing Support to women including safe places to heal, comprehensive post-operative care, meals, group and individual counseling, individual care plans and integrated physical therapy overseen by WFF’s Rehabilitation Advisory Council." Worldwide Fistula Fund website, Our Programs • "Women are encouraged to participate in Education and Vocational Skills Training in literacy and health classes, as well as embroidery & sewing courses, handcrafting jewelry, and cooking & catering. WFF also launched the Women’s Empowerment Center in Uganda in collaboration with TERREWODE." Worldwide Fistula Fund website, Our Programs • "The majority of MHTF-assisted countries are supporting social reintegration and the acquisition of income-generating skills critical for fistula survivors to provide for themselves and their families, and rebuild their sense of dignity and agency." UNFPA MHTF Annual Report 2015, p. 42. • 22. "We additionally fund research in maternal and reproductive health to assess current treatments, to uncover unmet treatment needs and to improve future care." Worldwide Fistula Fund website, Our Programs • 23. • "WFF works to provide Expert OB-GYN Training through our enhanced OB-GYN residency training program, Mekelle Medical Education Collaboration, and our specialized Urogynecology Fellowship training program, both launched in partnership with and at Mekelle University in Ethiopia. WFF funds Community Health Advocate Training where community members are trained in fistula awareness and risk factors and to encourage local families to give birth in health centers." 
Worldwide Fistula Fund website, Our Programs • "Preventing Fistula • Upgrading emergency obstetric care to prevent obstetric fistula • Increasing awareness at the community level about fistula prevention and the importance of maternal health care • Advocating policy changes that tackle the root causes of obstetric fistula, such as delays in accessing emergency obstetric care • Promoting gender equity and reducing violence against women" EngenderHealth website, Fistula • "Through the MHTF, UNFPA and the Campaign to End Fistula are strengthening prevention by educating women, families and communities on the importance of delivering with a skilled birth attendant. Sensitizing community leaders and health workers, including midwives, on the risk of developing fistula and its causes is a key component of connecting women to skilled care during pregnancy and delivery." UNFPA MHTF Annual Report 2015, p. 44. • "UNFPA advocates for fistula-affected countries to develop costed, time-bound national strategies and action plans for eliminating the condition. By the end of 2015, 15 MHTF-supported countries had national strategies in place. Nine had costed operational plans." UNFPA MHTF Annual Report 2015, p. 42. • "UNFPA helps countries in establishing and successfully operating national task forces for eliminating fistula. In 2015, 28 MHTF-assisted countries had these task forces." UNFPA MHTF Annual Report 2015, p. 42. • 24. "As qualified surgeons submitted patient records, we paid out grants directly to them and gave them the flexibility to use the money at their discretion. We piloted this concept in Madagascar, Malawi, Mauritania and Zambia. This pilot program treated 752 women, exceeding all targets, driving quality and capacity-expansion, and delivering unprecedented cost-effectiveness in line with vaccines." Operation Fistula website, Pay-for-Performance to the Point-of-Care • 25. "We developed GOFER to improve the accuracy and reliability of data collection and enable a collaborative effort to improve the quality of fistula care globally. Our vision for GOFER begins by using the platform to unite and improve the fistula sector. With wide adoption, GOFER will introduce visibility into quality of care, improve outcomes of surgery and expand the impact of funding. We aim to have over 50% of annual spending on fistula care committed to using GOFER by the end of 2016." Operation Fistula website, GOFER • 26. We rely on Arrowsmith, Barone, and Ruminjo 2013, the most recent meta-analysis of fistula surgery outcomes that we identified. "The authors reviewed 46 published articles that addressed outcomes in fistula care. Most articles were published between 2006 and 2013." (p. 399) Surgical outcomes in studies identified in this review are not necessarily representative of outcomes of surgeries supported by organizations that GiveWell may evaluate. As part of an evaluation of a fistula organization, we would examine monitoring of patient outcomes, including physical outcomes at discharge and other outcomes if available. • "The question of continence versus closure has important implications. There are major differences between the expected rates of fistula closure and continence after fistula repair. In the studies reviewed here, closure rates ranged from a low of 53.6% to a high of 97.5%, with most closure rates above 85% and an average of 86%. 
By contrast, rates of dryness (i.e., no incontinence remaining after closure) are much lower, spanning from 42 to 92%, with most between 50 and 80% and averaging 70%." Arrowsmith, Barone, and Ruminjo 2013, p. 400. • The authors emphasize a lack of standardized outcome metrics in fistula surgery: "To advance, the fistula care field needs to establish standardized outcome definitions. Professional bodies like the International Continence Society have proven that standardized terminology in other clinical areas related to continence is possible. Routine outcome measurement is essential to maintain quality of care. In addition, reporting on outcomes is unavoidable when considering an individual site’s funding, accreditation, and governmental permission to practice. Commonly agreed upon definitions and outcome measures will help ensure that site reviews are accurate and conducted fairly. To compare technical innovations with existing methods, the field must agree on definitions of success. Furthermore, standardized indicators for mortality and morbidity associated with repair can help improve the evidence base and contribute to quality of care." Arrowsmith, Barone, and Ruminjo 2013, p. 402. • 27. Lombard et al. 2015, a literature review, found ten primary qualitative studies of rehabilitation experiences of women in sub-Saharan Africa following obstetric fistula repair, all between 2003-2011. • "Many women may remain amenorrhoeic, experience intrauterine and/or vaginal scarring and cervical damage that may be associated with pelvic inflammatory disease. Few studies are available on women’s quality of life or their needs post-repair, which would be useful in planning effective interventions and care." p. 555. • "All ten included studies were conducted in sub-Saharan Africa: three in Tanzania, two in Eritrea, one in Kenya, one in Benin, one in Malawi, one in Ethiopia and one across 20 countries. All research took place in clinical facilities: seven in a rural setting, one in an urban setting and two in mixed settings. Five studies used a mixed-methods approach, whereas the other five used only qualitative methods. The length of research across all studies ranged from 2 months to 2 years between 2003 and 2011. All included studies related to the same target population: rural women affected by fistula (five studies), women and families (four studies) and women, key informants and experts in the field (one study). Most studies used semi-structured interviews as a data collection tool with an average participant population of n = 29 (range 8–61). The average age of women included in the research was 31 years, while the average age at fistula was 24 years. The duration of fistula ranged from 3 months to 30 years." pp. 556-557. • There is some indication that surgery ameliorates the social effects of fistula even when it does not eliminate the physical effects: "In this review, we were unable to identify the relationship between continence status post-repair and rehabilitation experiences and recommendations due to the qualitative nature of the included studies. Research has shown that a woman who is closed and dry post-repair vs. one who is still incontinent is more likely to live with her husband, eat with others, earn money and attend community gatherings. However, women who are still incontinent demonstrate high percentages of meeting their own needs (75%), ability to work (66%) and staying married (61%). 
These positive outcomes extend to their families as one sister said: ‘I am very much happy because she wasn’t going to the mosque, was not able to fast during Ramadan, but she is now able to do all that. She is now able to chat with her friends’. Interestingly, for affected women, the surgical repair experience appears to be characterised by a shift in social status rather than physical recovery. Simply receiving the repair can be a positive intervention and even women with only partially successful repairs report improved quality of life. We cannot be sure, however, that these findings would be true for all women with residual incontinence." p. 564. • 28. • 29. • "HFE has estimated the cost per standard repair procedure at the main hospital and Bahir Dar Outreach Centre to range from US$755 to US$1,474 depending on location and severity of the case." AusAID/USAID Review of Support to Hamlin Fistula Ethiopia 2013, p. xiii. • "The only other organisation supporting comprehensive fistula care in Ethiopia is WAHA, who work in government hospitals, so do not have the same overhead costs. WAHA indicated that with all country level costs (administration, salary, transport etc.) divided by number of cases treated, the cost per OF patient is about US$350, reduced to US$225 when removing costs for prevention." AusAID/USAID Review of Support to Hamlin Fistula Ethiopia 2013, p. 31. • A 2012 EngenderHealth report estimated the direct costs to institutions of providing fistula repair in Nigeria ($147-$272) and Ethiopia ($161-$229), based on direct observation of a very small number of surgeries. EngenderHealth 2012 "Estimating Costs to Provide Fistula Services in Nigeria and Ethiopia", Table 1, Table 3, pp. 6-7. • A 2003 assessment estimated the "fully-loaded cost per procedure" for fistula repair based on visits to fistula repair sites in nine African countries. Reported costs ranged from about $10 to$750, with many sites reporting costs in the range of $50-$150. It is not clear what costs are included in these estimates. UNFPA/EngenderHealth Obstetric Fistula Needs Assessment Report 2003 • 30. "Overall, we estimate that just over one million women may have a fistula in sub-Saharan Africa and South Asia, and that there are over 6000 new cases per year in these two world regions." Adler et al. 2013 Prevalence Review, p. 9. See also Adler et al. 2013 Key Informant Method. • 31. It is our understanding that the Fistula Foundation, the United Nations Population Fund (UNFPA), and Engender Health represent major international funders of fistula repair. • The UNFPA Maternal Health Thematic Fund consists of the Thematic Trust Fund for Maternal Health and the Thematic Fund for Obstetric Fistula, both of which contribute funds to UNFPA's Campaign to End Fistula. In 2015, the Thematic Trust Fund for Maternal Health had an operating budget of $18.4 million and the Thematic Fund for Obstetric Fistula had an operating budget of$610,000. The UNFPA Maternal Health Thematic Fund spent $3 million on UNFPA's Campaign to End Fistula in 2015. UNFPA MHTF Annual Report 2015, pp. 12, 56. • In 2015, the Fistula Foundation had total unrestricted revenues and support of$6.9 million and total expenses of $8.2 million. Fistula Foundation Annual Report 2015, p. 11, “Our Financials”. Due to the partnership between the UNFPA Campaign to End Fistula and the Fistula Foundation, it is possible that summing the fistula budgets of these two organizations double-counts some amount of the funding. 
• Fistula Care Plus is a five-year project (December 12, 2013 to December 11, 2018) with actual funding through October 2016 of$27.75 million, or an average of approximately $9.61 million per year. USAID 2017 Midterm evaluation of Fistula Care Plus, p. xv. • 32. • The Fistula Foundation estimates the cost per fistula surgery at about$1,000. Fistula Foundation, Letter to GiveWell 2012, p. 2. There is some literature suggesting that this estimate may be roughly accurate (more). We are not sure which costs associated with a fistula surgery program are included or excluded in this estimate. In the past, we have generally found that charity cost estimates are lower than our cost estimates after we review program cost data. We believe that the best way to improve our understanding of the cost per surgery would be to solicit cost data from the Fistula Foundation or other organizations. • We estimate that roughly 70% of surgeries are physically successful (the patient is continent at discharge), based on: "The question of continence versus closure has important implications. There are major differences between the expected rates of fistula closure and continence after fistula repair. In the studies reviewed here, closure rates ranged from a low of 53.6% to a high of 97.5%, with most closure rates above 85% and an average of 86%. By contrast, rates of dryness (i.e., no incontinence remaining after closure) are much lower, spanning from 42 to 92%, with most between 50 and 80% and averaging 70%." Arrowsmith, Barone, and Ruminjo 2013, p. 400. • $1,000 per surgery / 70% continence upon discharge =$1,429 per successful surgery • 33. • 34. See GiveWell's non-verbatim summary of a conversation with Operation Fistula, May 3, 2016. We have not made notes from our other conversations available.
2019-06-16 21:22:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1769048422574997, "perplexity": 13247.188953631263}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998298.91/warc/CC-MAIN-20190616202813-20190616224813-00226.warc.gz"}
https://indico.jlab.org/event/252/contributions/3142/
# Light Cone 2018

14-18 May 2018, Jefferson Lab - CEBAF Center, US/Eastern timezone

## Trident pair production in lightfront quantization

17 May 2018, 17:10 (20 min), Auditorium (Jefferson Lab - CEBAF Center)

### Speaker

Dr Greger Torgrimsson (Theoretisch-Physikalisches Institut Friedrich-Schiller-Universität Jena; Helmholtz Institute Jena)

### Description

High-intensity lasers currently attract a great deal of interest due to the prospects of using them to study unexplored regimes of fundamental physics. Trident pair production is a basic process in this field, where an electron collides with a laser and produces an electron-positron pair. One part of this is a two-step process where the initial electron emits a real, on-shell photon that subsequently decays into an electron-positron pair, and the rest is referred to as a one-step process. We have [1] studied the split between these one- and two-step processes using lightfront quantization, motivated by the facts that in this formalism all particles are on-shell and the Hamiltonian has instantaneous terms. Apart from providing new insights into trident, this formalism has also allowed us to calculate important terms that have previously been neglected.

Reference [1] V. Dinu and G. Torgrimsson, "Trident pair production in plane waves: Coherence, exchange, and spacetime inhomogeneity", Phys. Rev. D 97 (2018) 036021

### Primary authors

Dr Greger Torgrimsson (Theoretisch-Physikalisches Institut Friedrich-Schiller-Universität Jena; Helmholtz Institute Jena)
Dr Victor Dinu (Department of Physics, University of Bucharest)
2021-10-28 15:18:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2469639927148819, "perplexity": 5029.174198837896}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588341.58/warc/CC-MAIN-20211028131628-20211028161628-00084.warc.gz"}
https://www.groundai.com/project/smoothness-and-stability-in-gans/
# Smoothness and Stability in GANs

## Abstract

Generative adversarial networks, or GANs, commonly display unstable behavior during training. In this work, we develop a principled theoretical framework for understanding the stability of various types of GANs. In particular, we derive conditions that guarantee eventual stationarity of the generator when it is trained with gradient descent, conditions that must be satisfied by the divergence that is minimized by the GAN and by the generator's architecture. We find that existing GAN variants satisfy some, but not all, of these conditions. Using tools from convex analysis, optimal transport, and reproducing kernels, we construct a GAN that fulfills these conditions simultaneously. In the process, we explain and clarify the need for various existing GAN stabilization techniques, including Lipschitz constraints, gradient penalties, and smooth activation functions.

## 1 Introduction: taming instability with smoothness

Generative adversarial networks (Goodfellow et al., 2014), or GANs, are a powerful class of generative models defined through a minimax game. GANs and their variants have shown impressive performance in synthesizing various types of datasets, especially natural images. Despite these successes, the training of GANs remains quite unstable in nature, and this instability remains difficult to understand theoretically. Since the introduction of GANs, there have been many techniques proposed to stabilize GAN training, including studies of new generator/discriminator architectures, loss functions, and regularization techniques. Notably, Arjovsky et al. (2017) proposed the Wasserstein GAN (WGAN), which in principle avoids instability caused by mismatched generator and data distribution supports. In practice, this is enforced by Lipschitz constraints, which in turn motivated developments like gradient penalties (Gulrajani et al., 2017) and spectral normalization (Miyato et al., 2018). Indeed, these stabilization techniques have proven essential to achieving the latest state-of-the-art results (Karras et al., 2018; Brock et al., 2019).

On the other hand, a solid theoretical understanding of training stability has not been established. Several empirical observations point to an incomplete understanding. For example, why does applying a gradient penalty together with spectral normalization seem to improve performance (Miyato et al., 2018), even though in principle they serve the same purpose? Why does applying only spectral normalization with the Wasserstein loss fail (Miyato, 2018), even though the analysis of Arjovsky et al. (2017) suggests it should be sufficient? Why is applying gradient penalties effective, even outside their original context of the Wasserstein GAN (Fedus et al., 2018)?

In this work, we develop a framework to analyze the stability of GAN training that resolves these apparent contradictions and clarifies the roles of these regularization techniques. Our approach considers the smoothness of the loss function used. In optimization, smoothness is a well-known condition that ensures that gradient descent and its variants become stable (see e.g., Bertsekas (1999)). For example, the following well-known proposition is the starting point of our stability analysis:

Proposition (Bertsekas (1999), Proposition 1.2.3). Suppose f : ℝ^m → ℝ is L-smooth and bounded below. Let x_{k+1} = x_k − (1/L)∇f(x_k). Then ∇f(x_k) → 0 as k → ∞.
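To make the guarantee concrete before unpacking it, here is a minimal numerical sketch (a toy example added for illustration, not part of the original analysis): gradient descent with the constant step size 1/L on an L-smooth, bounded-below function drives the gradient norm toward zero.

```python
import numpy as np

# Toy L-smooth objective: f(x) = 0.5 * x^T A x with A positive semi-definite,
# so f is bounded below (by 0) and L equals the largest eigenvalue of A.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T
L = np.linalg.eigvalsh(A).max()

def grad(x):
    return A @ x

x = rng.standard_normal(5)
for t in range(201):
    if t % 50 == 0:
        print(t, np.linalg.norm(grad(x)))  # gradient norm shrinks toward zero
    x = x - (1.0 / L) * grad(x)            # constant step size 1/L
```

For this quadratic, step sizes above 2/L generically make the same iteration diverge, which is why an explicit handle on the smoothness constant matters when choosing a learning rate.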
This proposition says that under a smoothness condition on the function, gradient descent with a constant step size approaches stationarity (i.e., the gradient norm approaches zero). This is a rather weak notion of convergence, as it does not guarantee that the iterates converge to a point, and even if the iterates do converge, the limit is a stationary point and not necessarily an minimizer. Nevertheless, empirically, not even this stationarity is satisfied by GANs, which are known to frequently destabilize and diverge during training. To diagnose this instability, we consider the smoothness of the GAN’s loss function. GANs are typically framed as minimax problems of the form \vspace−1mminfθsupφJ(μθ,φ),\vspace−1mm (1) where is a loss function that takes a generator distribution and discriminator , and denotes the parameters of the generator. Unfortunately, the minimax nature of this problem makes stability and convergence difficult to analyze. To make the analysis more tractable, we define , so that (1) becomes simply \vspace−2mminfθJ(μθ).\vspace2mm (2) This choice corresponds to the common assumption that the discriminator is allowed to reach optimality at every training step. Now, the GAN algorithm can be regarded as simply gradient descent on the function , which may be analyzed using Section 1. In particular, if this function satisfies the smoothness assumption, then the GAN training should be stable in that it should approach stationarity under the assumption of an optimal discriminator. In the remainder of this paper, we investigate whether the smoothness assumption is satisfied for various GAN losses. Our analysis answers two questions: 1. Which existing GAN losses, if any, satisfy the smoothness condition in Section 1? 2. Are there choices of loss, regularization, or architecture that enforce smoothness in GANs? As results of our analysis, our contributions are as follows: 1. We derive sufficient conditions for the GAN algorithm to be stationary under certain assumptions (Section 2). Our conditions relate to the smoothness of GAN loss used as well as the parameterization of the generator. 2. We show that most common GAN losses do not satisfy the all of the smoothness conditions, thereby corroborating their empirical instability. 3. We develop regularization techniques that enforce the smoothness conditions. These regularizers recover common GAN stabilization techniques such as gradient penalties and spectral normalization, thereby placing their use on a firmer theoretical foundation. 4. Our analysis provides several practical insights, suggesting for example the use of smooth activation functions, simultaneous spectral normalization and gradient penalties, and a particular learning rate for the generator. ### 1.1 Related work Divergence minimization Our analysis regards the GAN algorithm as minimizing a divergence between the current generator distribution and the desired data distribution, under the assumption of an optimal discriminator at every training step. This perspective originates from the earliest GAN paper, in which Goodfellow et al. (2014) show that the original minimax GAN implicitly minimizes the Jensen–Shannon divergence. Since then, the community has introduced a large number of GAN or GAN-like variants that learn generative models by implicitly minimizing various divergences, including -divergences (Nowozin et al., 2016), Wasserstein distance (Arjovsky et al., 2017), and maximum-mean discrepancy (Li et al., 2015; Unterthiner et al., 2018). 
Meanwhile, the non-saturating GAN (Goodfellow et al., 2014) has been shown to minimize a certain Kullback–Leibler divergence (Arjovsky and Bottou, 2017). Several more theoretical works consider the topological, geometric, and convexity properties of divergence minimization (Arjovsky and Bottou, 2017; Liu et al., 2017; Bottou et al., 2018; Farnia and Tse, 2018; Chu et al., 2019), perspectives that we draw heavily upon. Sanjabi et al. (2018) also prove smoothness of GAN losses in the specific case of the regularized optimal transport loss. Their assumption for smoothness is entangled in that it involves a composite condition on generators and discriminators, while our analysis addresses them separately. Other approaches Even though many analyses, including ours, operate under the assumption of an optimal discriminator, this assumption is unrealistic in practice. Li et al. (2017b) contrast this optimal discriminator dynamics with first-order dynamics, which assumes that the generator and discriminator use alternating gradient updates and is what is used computationally. As this is a differing approach from ours, we only briefly mention some results in this area, which typically rely on game-theoretic notions (Kodali et al., 2017; Grnarova et al., 2018; Oliehoek et al., 2018) or local analysis (Nagarajan and Kolter, 2017; Mescheder et al., 2018). Some of these results rely on continuous dynamics approximations of gradient updates; in contrast, our work focuses on discrete dynamics. ### 1.2 Notation Let . We let denote the set of all probability measures on a compact set . We let and denote the dual pair consisting of the set of all finite signed measures on and the set of all continuous functions . For any statement , we let be if is true and if is false. For a Euclidean vector , its Euclidean norm is denoted by , and the operator norm of a matrix is denoted by , i.e., . A function between two metric spaces is -Lipschitz if . A function is -smooth if its gradients are -Lipschitz, that is, for all , . ## 2 Smoothness of GAN losses This section presents Section 2, which provides concise criteria for the smoothness of GAN losses. In order to keep our analysis agnostic to the particular GAN used, let be an arbitrary convex loss function, which takes a distribution over and outputs a real number. Note that the typical minimax formulation of GANs can be recovered from just the loss function using convex duality. In particular, recall that the convex conjugate of satisfies the following remarkable duality, known as the Fenchel–Moreau theorem: J⋆(φ):=supμ∈M(X)∫φ(x)dμ−J(μ),J(μ)=supφ∈C(X)∫φ(x)dμ−J⋆(φ). (3) Based on this duality, minimizing can be framed as the minimax problem infμ∈P(X)J(μ)=infμ∈P(X)supφ∈C(X)∫φ(x)dμ−J⋆(φ):=infμ∈P(X)supφ∈C(X)J(μ,φ), (4) recovering the well-known adversarial formulation of GANs. We now define the notion of an optimal discriminator for an arbitrary loss function , based on this convex duality: ###### Definition 1 (Optimal discriminator). Let be a convex, l.s.c., proper function. An optimal discriminator for a probability distribution is a continuous function that attains the maximum of the second equation in (3), i.e., . This definition recovers the optimal discriminators of many existing GAN and GAN-like algorithms (Farnia and Tse, 2018; Chu et al., 2019), most notably those in Table 1. Our analysis will apply to any algorithm in this family of algorithms. See Appendix B for more details on this perspective. 
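As a concrete (and deliberately tiny) illustration of Definition 1 and the duality (3), consider a finite sample space with n points, so that measures are simply vectors in ℝⁿ. For the toy loss J(μ) = ½‖μ − μ0‖² — a finite-dimensional analogue of the squared-MMD loss — the conjugate is J*(φ) = ⟨φ, μ0⟩ + ½‖φ‖² and the optimal discriminator at μ is Φμ = μ − μ0. This particular loss and the numerical check below are added here for illustration; they are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
mu0 = rng.dirichlet(np.ones(n))   # "data" distribution on a 4-point space
mu = rng.dirichlet(np.ones(n))    # current "generator" distribution

def J(m):                         # J(mu) = 0.5 * ||mu - mu0||^2
    return 0.5 * np.sum((m - mu0) ** 2)

def J_star(phi):                  # conjugate: <phi, mu0> + 0.5 * ||phi||^2
    return phi @ mu0 + 0.5 * np.sum(phi ** 2)

Phi_mu = mu - mu0                 # optimal discriminator at mu

# Phi_mu attains the supremum in J(mu) = sup_phi <phi, mu> - J*(phi):
print(J(mu), Phi_mu @ mu - J_star(Phi_mu))        # the two numbers agree
# and no other discriminator does better (Fenchel-Young inequality):
for _ in range(1000):
    phi = rng.standard_normal(n)
    assert phi @ mu - J_star(phi) <= J(mu) + 1e-12
```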
We also formalize the notion of a family of generators:

###### Definition 2 (Family of generators).

A family of generators is a set of pushforward probability measures {μθ := (gθ)♯ζ : θ ∈ Θ}, where ζ is a fixed probability distribution on a latent space Z (the latent variable) and gθ : Z → X is a measurable function (the generator).

Now, in light of Section 1, we are interested in the smoothness of the mapping θ ↦ J(μθ), which would guarantee the stationarity of gradient descent on this objective, which in turn implies stationarity of the GAN algorithm under the assumption of an optimal discriminator. The following theorem is our central result, which decomposes the smoothness of θ ↦ J(μθ) into conditions on optimal discriminators and the family of generators.

Theorem (Smoothness decomposition for GANs). Let J be a convex function whose optimal discriminators Φμ satisfy the following regularity conditions:
• (D1) x ↦ Φμ(x) is α-Lipschitz,
• (D2) x ↦ ∇Φμ(x) is β1-Lipschitz,
• (D3) μ ↦ ∇Φμ is β2-Lipschitz w.r.t. the 1-Wasserstein distance.
Also, let {μθ = (gθ)♯ζ : θ ∈ Θ} be a family of generators that satisfies:
• (G1) θ ↦ gθ(z) is A-Lipschitz in expectation for z ∼ ζ, i.e., E_{z∼ζ}‖gθ(z) − gθ′(z)‖ ≤ A‖θ − θ′‖, and
• (G2) θ ↦ ∇θ gθ(z) is B-Lipschitz in expectation for z ∼ ζ, i.e., E_{z∼ζ}‖∇θ gθ(z) − ∇θ gθ′(z)‖ ≤ B‖θ − θ′‖.
Then θ ↦ J(μθ) is L-smooth, with L = αB + A²(β1 + β2).

Section 2 connects the smoothness properties of the loss function with the smoothness properties of the optimal discriminator Φμ, and once paired with Section 1, it suggests a quantitative value for a stable generator learning rate. In order to obtain claims of stability for practically sized learning rates, it is important to tightly bound the relevant constants. In Sections 6, 5 and 4, we carefully analyze which GAN losses satisfy (D1), (D2), and (D3), and with what constants. We summarize our results in Table 2: it turns out that none of the listed losses, except for one, satisfy (D1), (D2), and (D3) simultaneously with a finite constant. The MMD-based loss satisfies the three conditions, but its constant for (D1) grows with the data dimension, an unfavorable dependence that forces an unacceptably small learning rate. Complete details of each condition are given in the sections that follow. This failure of existing GANs to satisfy the stationarity conditions corroborates the observed instability of GANs.

Section 2 decomposes smoothness into conditions on the generator and conditions on the discriminator, allowing a clean separation of concerns. In this paper, we focus on the discriminator conditions (D1), (D2), and (D3) and only provide an extremely simple example of a generator that satisfies (G1) and (G2), in Section 7. Because analysis of the generator conditions may become quite complicated and will vary with the choice of architecture considered (feedforward, convolutional, ResNet, etc.), we leave a detailed analysis of the generator conditions (G1) and (G2) as a promising avenue for future work. Indeed, such analyses may lead to new generator architectures or generator regularization techniques that stabilize GAN training.

## 3 Enforcing smoothness with inf-convolutions

In this section, we present a generic regularization technique that imposes, on an arbitrary loss function J, the three conditions sufficient for stable learning, thereby stabilizing training. In Section 2, we observe that the Wasserstein, IPM, and MMD losses respectively satisfy (D1), (D2), and (D3) individually, but not all of them at the same time. Using techniques from convex analysis, we convert these three GAN losses into three regularizers that, when applied simultaneously, cause the resulting loss to satisfy all three conditions.
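Before outlining the construction, it is worth seeing how the constants of the smoothness decomposition combine into a concrete step size. The snippet below is an illustrative helper with arbitrary example numbers: it evaluates L = αB + A²(β1 + β2) and the stable learning rate 1/L, and checks that plugging in the Section 7 particle-generator constants (A = N^{-1/2}, B = 0, and β1 = 7α for the 7-layer discriminator) recovers γ0 = N/(7α + β2) from equation (25).

```python
def smoothness_constant(alpha, beta1, beta2, A, B):
    """L such that theta -> J(mu_theta) is L-smooth: L = alpha*B + A**2 * (beta1 + beta2)."""
    return alpha * B + A ** 2 * (beta1 + beta2)

def stable_lr(alpha, beta1, beta2, A, B):
    """Step size 1/L covered by the stationarity guarantee."""
    return 1.0 / smoothness_constant(alpha, beta1, beta2, A, B)

N, alpha, beta2 = 1000, 1.0, 1.0
lr = stable_lr(alpha, beta1=7 * alpha, beta2=beta2, A=N ** -0.5, B=0.0)
print(lr, N / (7 * alpha + beta2))   # both print 125.0
```

We now return to the construction of a loss whose constants α, β1, and β2 are all finite.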
Here, we only outline the technique; the specifics of each case are deferred to Sections 6, 5 and 4. We start with an arbitrary base loss function to be regularized. Next, we take an existing GAN loss that satisfies the desired regularity condition and convert it into a regularizer function . Then, we consider , which denotes the inf-convolution defined as (J⊕R)(ξ)=inf~ξ∈M(X)J(~ξ)+R(ξ−~ξ). (5) This new function inherits the regularity of , making it a stable candidate as a GAN loss. Moreover, because the inf-convolution is a commutative operation, we can sequentially apply multiple regularizers , , and without destroying the added regularity. In particular, if we carefully choose functions , , and , then will satisfy (D1), (D2), and (D3) simultaneously. Moreover, under some technical assumptions, this composite function inherits the original minimizers of , making it a sensible GAN loss: {restatable} [Invariance of minimizers]propositionPropSuperGANInvariance Let , , and be the three regularizers defined by (8), (12), and (19) respectively. Assume that has a unique minimizer at with , and for some . Then the inf-convolution has a unique minimizer at with . The duality formulation (4) provides a practical method for minimizing this composite function. We leverage the duality relation and apply (4): infμ(J⊕R1⊕R2⊕R3)(μ) =infμsupφ∫φdμ−J⋆(φ)−R⋆1(φ)−R⋆2(φ)−R⋆3(φ) (6) =infμsupφJ(μ,φ)−R⋆1(φ)−R⋆2(φ)−R⋆3(φ). (7) This minimax problem can be seen as a GAN whose discriminator objective has three added regularization terms. The concrete form of these regularizers are summarized in Table 3. Notably, we observe that we recover standard techniques for stabilizing GANs: • (D1) is enforced by Lipschitz constraints (i.e., spectral normalization) on the discriminator. • (D2) is enforced by spectral normalization and a choice of Lipschitz, smooth activation functions for the discriminator. • (D3) is enforced by gradient penalties on the discriminator. Our analysis therefore puts these regularization techniques on a firm theoretical foundation (Proposition 1 and Theorem 2) and provides insight into their function. ## 4 Enforcing (D1) with Lipschitz constraints In this section, we show that enforcing (D1) leads to techniques and notions commonly used to stabilize GANs, including the Wasserstein distance, Lipschitz constraints and spectral normalization. Recall that (D1) demands that the optimal discriminator is Lipschitz: (D1) is -Lipschitz for all , i.e., . If is differentiable, this is equivalent to that the optimal discriminator has a gradient with bounded norm. This is a sensible criterion, since a discriminator whose gradient norm is too large may push the generator too hard and destabilize its training. To check (D1), the following proposition shows that it suffices to check whether for all distributions : {restatable} propositionPropWassersteinLipschitz (D1) holds if and only if is -Lipschitz w.r.t. the Wasserstein-1 distance. Arjovsky et al. (2017) show that this property does not hold for common divergences based on the Kullback–Leibler or Jensen–Shannon divergence, while it does hold for the Wasserstein-1 distance. Indeed, it is this desirable property that motivates their introduction of the Wasserstein GAN. Framed in our context, their result is summarized as follows: {restatable} propositionPropGANLipschitz The minimax and non-saturating GAN losses do not satisfy (D1) for some . {restatable}propositionPropWGANLipschitz The Wasserstein GAN loss satisfies (D1) with for any . 
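A standard one-dimensional example, spelled out here for illustration (it is not worked out in the paper), shows what (D1) buys. Take the generator and data distributions to be point masses μ = δ_a and μ0 = δ_0 on ℝ:

```latex
% Kantorovich--Rubinstein duality for two point masses:
\[
W_1(\delta_a, \delta_0) \;=\; \sup_{\|\varphi\|_{\mathrm{Lip}} \le 1} \big(\varphi(a) - \varphi(0)\big) \;=\; |a|,
\]
% attained by the Kantorovich potential $\varphi(x) = \operatorname{sign}(a)\,x$.
% The gradient this optimal discriminator supplies at the generator's sample,
% $\varphi'(a) = \operatorname{sign}(a)$, has magnitude 1 no matter how far apart the
% two distributions are -- exactly the bounded-gradient behavior that (D1) encodes.
% By contrast, the Jensen--Shannon divergence between these disjoint point masses is
% the constant $\log 2$ for every $a \neq 0$, so its optimal discriminator supplies no
% useful gradient at all, the pathology that motivated the Wasserstein loss.
```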
Our stability analysis therefore deepens the analysis of Arjovsky et al. (2017) and provides an alternative reason that the Wasserstein distance is desirable as a metric: it is part of a sufficient condition that ensures stationarity of gradient descent. ### 4.1 From Wasserstein distance to Lipschitz constraints Having identified the Wasserstein GAN loss as one that satisfies (D1), we next follow the strategy outlined in Section 3 to convert it into a regularizer for an arbitrary loss function. Towards this, we define the regularizer and compute its convex conjugate : R1(ξ):=α∥ξ∥KR=αsupf∈C(X)||f||Lip≤1∫fdξ,R⋆1(φ)={0∥φ∥Lip≤α∞otherwise. (8) This norm is the Kantorovich–Rubinstein norm (KR norm), which extends the Wasserstein-1 distance to ; it holds that for . Then, its inf-convolution with an arbitrary function inherits the Lipschitz property held by : {restatable}[Pasch–Hausdorff]propositionPropPaschHausdorff Let be a function, and define . Then is -Lipschitz w.r.t. the distance induced by the KR norm, and hence the Wasserstein-1 distance when restricted to . Due to Section 4, we now obtain a transformed loss function that automatically satisfies (D1). This function is a generalization of the Pasch–Hausdorff envelope (see Chapter 9 in Rockafeller and Wets (1998)), also known as Lipschitz regularization or the McShane–Whitney extension (McShane, 1934; Whitney, 1934; Kirszbraun, 1934; Hiriart-Urruty, 1980). The convex conjugate computation in (8) shows that can be minimized in practice by imposing Lipschitz constraints on discriminators. Indeed, by (4), infμ(J⊕α∥⋅∥KR)(μ) =infμsupφEx∼μ[φ(x)]−J⋆(φ)−χ{∥φ∥Lip≤α} (9) =infμsupφ: ∥φ∥Lip≤αJ(μ,φ). (10) Farnia and Tse (2018) consider this loss in the special case of an -GAN with ; they showed that minimizing corresponds to training a -GAN normally but constraining the discriminator to be -Lipschitz. We show that this technique is in fact generic for any : minimizing the transformed loss can be achieved by training the GAN as normal, but imposing a Lipschitz constraint on the discriminator. Our analysis therefore justifies the use of Lipschitz constraints, such as spectral normalization (Miyato et al., 2018) and weight clipping (Arjovsky and Bottou, 2017), for general GAN losses. However, Section 2 also suggests that applying only Lipschitz constraints may not be enough to stabilize GANs, as (D1) alone does not ensure that the GAN objective is smooth. ## 5 Enforcing (D2) with discriminator smoothness (D2) demands that the optimal discriminator is smooth: (D2) is -Lipschitz for all , i.e., . Intuitively, this says that for a fixed generator , the optimal discriminator should not provide gradients that change too much spatially. Although the Wasserstein GAN loss (D1), we see that it, along with the minimax GAN and the non-saturating GAN, do not satisfy (D2): {restatable} propositionPropNonSmoothWGAN The Wasserstein, minimax, and non-saturating GAN losses do not satisfy (D2) for some . We now construct a loss that by definition satisfies (D2). Let be the class of -smooth functions, that is, for which , and consider the integral probability metric (IPM) (Müller, 1997) w.r.t. , defined by IPMS(μ,ν):=supf∈S∫fdμ−∫fdν. (11) The optimal discriminator for the loss is the function that maximizes the supremum in the definition. This function by definition belongs to and therefore is -smooth. Hence, this IPM loss satisfies (D2) with by construction. 
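In code, the constraint ‖φ‖Lip ≤ α from (10) is typically imposed architecturally. The sketch below is a PyTorch illustration of the recipe, not the authors' implementation: spectral normalization keeps each layer's spectral norm at roughly 1 (estimated by power iteration), the ELU activation is 1-Lipschitz, and a final multiplication by α sets the overall Lipschitz budget.

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class LipschitzDiscriminator(nn.Module):
    """Roughly alpha-Lipschitz by construction: spectrally normalized linear maps
    (Lipschitz constant ~1 each), 1-Lipschitz ELU activations, final scaling by alpha."""

    def __init__(self, dim, width=128, alpha=1.0):
        super().__init__()
        self.alpha = alpha
        self.net = nn.Sequential(
            spectral_norm(nn.Linear(dim, width)), nn.ELU(),
            spectral_norm(nn.Linear(width, width)), nn.ELU(),
            spectral_norm(nn.Linear(width, 1)),
        )

    def forward(self, x):
        return self.alpha * self.net(x)

phi = LipschitzDiscriminator(dim=2, alpha=5.0)
print(phi(torch.randn(16, 2)).shape)   # torch.Size([16, 1])
```

Because ELU is also smooth, the same architecture reappears below when the smoothness condition (D2) is enforced.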
### 5.1 From integral probability metric to smooth discriminators Having identified the IPM-based loss as one that satisfies (D2), we next follow the strategy outlined in Section 3 to convert it into a regularizer for an arbitrary loss function. To do so, we define a regularizer and compute its convex conjugate : R2(ξ):=β1∥ξ∥S∗=β1supf∈S∫fdξ,R⋆2(φ)={0φ∈β1S∞otherwise. (12) The norm is the dual norm to , which extends the IPM to signed measures; it holds that for . Similar to the situation in the previous section, inf-convolution preserves the smoothness property of : {restatable} propositionPropIPMConvolution Let be a convex, proper, lower semicontinuous function, and define . Then the optimal discriminator for is -smooth. Applying (4) and (12), we see that we can minimize this transformed loss function by restricting the family of discriminators to only -smooth discriminators: infμ(J⊕β1∥⋅∥S∗)(μ) =infμsupφEx∼μ[φ(x)]−J⋆(φ)−χ{φ∈β1S} (13) =infμsupφ∈β1SJ(μ,φ). (14) In practice, we can enforce this by applying spectral normalization (Miyato et al., 2018) and using a Lipschitz, smooth activation function such as ELU (Clevert et al., 2016) or sigmoid. {restatable} propositionPropSmoothActivation Let be a neural network consisting of layers whose linear transformations have spectral norm and whose activation functions are -Lipschitz and -smooth. Then is -smooth. ## 6 Enforcing (D3) with gradient penalties (D3) is the following smoothness condition: (D3) is -Lipschitz for any , i.e., . (D3) requires that the gradients of the optimal discriminator do not change too rapidly in response to changes in . Indeed, if the discriminator’s gradients are too sensitive to changes in the generator, the generator may not be able to accurately follow those gradients as it updates itself using a finite step size. In finite-dimensional optimization of a function , this condition is analogous to having a Lipschitz gradient. We now present an equivalent characterization of (D3) that is easier to check in practice. We define the Bregman divergence of a convex function by DJ(ν,μ):=J(ν)−J(μ)−∫Φμ(x)d(ν−μ), (15) where is the optimal discriminator for at . Then, (D3) is characterized in terms of the Bregman divergence and the KR norm as follows: {restatable}propositionPropVariationalSmoothness Let be a convex function. Then satisfies (D3) if and only if for all . It is straightforward to compute the Bregman divergence corresponding to several popular GANs: DDJS(⋅||μ0)(ν,μ)=DKL(12ν+12μ0||12μ+12μ0)+12DKL(ν||μ), (16) DDKL(12⋅+12μ0||μ0)(ν,μ)=DKL(12ν+12μ0||12μ+12μ0), (17) D12MMD2(⋅,μ0)(ν,μ)=12MMD2(ν,μ). (18) The first two Bregman divergences are not bounded above by for reasons similar to those discussed in Section 4, and hence: {restatable}propositionPropVariationalSmoothnessGAN The minimax and non-saturating GAN losses do not satisfy (D3) for some . Even so, the Bregman divergence for the non-saturating loss is always less than that of the minimax GAN, suggesting that the non-saturating loss should be stable in more situations than the minimax GAN. On the other hand, the MMD-based loss (Li et al., 2015) does satisfy (D3) when its kernel is the Gaussian kernel : {restatable} propositionPropVariationaSmoothnessMMD The MMD loss with Gaussian kernel satisfies (D3) with for all . ### 6.1 From maximum mean discrepancy to gradient penalties Having identified the MMD-based loss as one that satisfies (D3), we next follow the strategy outlined in Section 3 to convert it into a regularizer for an arbitrary loss function. 
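For completeness, the short computation behind (18) is spelled out here (the paper states the result without the intermediate steps). Write ⟨ξ, ξ′⟩_K := ∬ K(x, y) dξ(x) dξ′(y) for signed measures, recall that the optimal discriminator of J = ½MMD²_K(·, μ0) is Φμ(x) = ∫ K(x, y) d(μ − μ0)(y), and abbreviate a = ν − μ0, b = μ − μ0:

```latex
\[
D_J(\nu, \mu)
= \tfrac12 \langle a, a \rangle_K - \tfrac12 \langle b, b \rangle_K - \langle b,\, a - b \rangle_K
= \tfrac12 \langle a - b,\, a - b \rangle_K
= \tfrac12 \,\mathrm{MMD}^2_K(\nu, \mu).
\]
% For the Gaussian kernel, every function in the unit ball of the RKHS is uniformly
% Lipschitz, so this Bregman divergence is bounded by a constant times the squared
% KR distance between nu and mu -- which, via the characterization above, gives (D3).
```

With (D3) in hand for the squared MMD, we can turn it into a regularizer for an arbitrary loss.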
To do so, we define the regularizer and compute its convex conjugate : R3(ξ):=β24π∥^ξ∥2H,R⋆3(φ)=πβ2∥φ∥2H. (19) The norm is the norm of a reproducing kernel Hilbert space norm (RKHS) with Gaussian kernel; this norm extends the MMD to signed measures, as it holds that for . Here, denotes the mean embedding of a signed measure ; we also adopt the convention that if . Similar to the situation in the previous sections, inf-convolution preserves the smoothness property of : {restatable} [Moreau–Yosida regularization]propositionPropMoreauYosida Suppose is convex, and define . Then is convex, and . By Section 6, this transformed loss function satisfies (D3), having inherited the regularity properties of the squared MMD. This transformed function is a generalization of Moreau–Yosida regularization or the Moreau envelope (see Chapter 1 in Rockafeller and Wets (1998)). It is well-known that in the case of a function , this regularization results in a function with Lipschitz gradients, so it is unsurprising that this property carries over to the infinite-dimensional case. Applying (4) and (19), we see that the transformed loss function can be minimized as a GAN by implementing an RKHS squared norm penalty on the discriminator: infμ(J⊕β24π||⋅||2H)(μ) =infμsupφEx∼μ[φ(x)]−J⋆(φ)−πβ2||φ||2H. (20) Computationally, the RKHS norm is difficult to evaluate. We propose taking advantage of the following infinite series representation of in terms of the derivatives of (Fasshauer and Ye, 2011; Novak et al., 2018): {restatable} propositionPropGaussianNormExpansion Let be an RKHS with the Gaussian kernel . Then for , ||f||2H =∞∑k=0(4π)−k∑k1+⋯+kd=k1∏di=1ki!||∂k1x1⋯∂kdxdf||2L2(Rd) (21) =||f||2L2(Rd)+14π||∇f||2L2(Rd)+116π2||∇2f||2L2(Rd)+other terms. (22) In an ideal world, we would use this expression as a penalty on the discriminator to enforce (D3). Of course, as an infinite series, this formulation is computationally impractical. However, the first two terms are very close to common GAN techniques like gradient penalties (Gulrajani et al., 2017) and penalizing the output of the discriminator (Karras et al., 2018). We therefore interpret these common practices as partially applying the penalty given by the RKHS norm squared, approximately enforcing (D3). We view the choice of only using the leading terms as a disadvantageous but practical necessity. Interestingly, according to our analysis, gradient penalties and spectral normalization are not interchangeable, even though both techniques were designed to constrain the Lipschitz constant of the discriminator. Instead, our analysis suggests that they serve different purposes: gradient penalties enforce the variational smoothness (D3), while spectral normalization enforces Lipschitz continuity (D1). This demystifies the puzzling observation of Miyato (2018) that GANs using only spectral normalization with a WGAN loss do not seem to train well; it also explains why using both spectral normalization and a gradient penalty is a reasonable strategy. It also motivates the use of gradient penalties applied to losses other than the Wasserstein loss (Fedus et al., 2018). ## 7 Verifying the theoretical learning rate In this section, we empirically test the theoretical learning rate given by Sections 1 and 2 as well as our regularization scheme (7) based on inf-convolutions. 
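Concretely, keeping only the first two terms of (22) gives a penalty that is easy to implement with automatic differentiation. The sketch below is a PyTorch illustration, not the authors' code; as in the experiment described next, the expectation is taken over interpolated samples x̃ rather than over Lebesgue measure.

```python
import math
import torch

def truncated_rkhs_penalty(phi, x, beta2):
    """First two terms of pi*beta2*||phi||_H^2 from (22):
    pi * beta2 * E[ phi(x)^2 + (1/(4*pi)) * ||grad_x phi(x)||^2 ],
    with the expectation taken over the batch x (e.g., real/fake interpolates)."""
    x = x.detach().requires_grad_(True)
    out = phi(x)                                   # shape (batch, 1)
    grad = torch.autograd.grad(out.sum(), x, create_graph=True)[0]
    sq_grad_norm = grad.flatten(1).pow(2).sum(dim=1)
    return math.pi * beta2 * (out.squeeze(-1).pow(2) + sq_grad_norm / (4 * math.pi)).mean()

# Typical discriminator step (x_tilde = random interpolates of real and generated batches):
# d_loss = phi(x_real).mean() - phi(x_fake).mean() + truncated_rkhs_penalty(phi, x_tilde, beta2)
```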
We approximately implement our composite regularization scheme (7) on a trivial base loss of by alternating stochastic gradient steps on infμsupφEx∼μ[φ(x)]−Ex∼μ0[φ(x)]−πβ2Ex∼~μ[φ(x)2+14π||∇φ(x)||2], (23) where is a random interpolate between samples from and , as used in Gulrajani et al. (2017). The regularization term is a truncation of the series for the squared RKHS norm (22) and approximately enforces (D3). The discriminator is a 7-layer convolutional neural network with spectral normalization1 and ELU activations, an architecture that enforces (D1) and (D2). We include a final scalar multiplication by so that by Section 5.1, . We take two discriminator steps for every generator step, to better approximate our assumption of an optimal discriminator. For the generator, we use an extremely simple particle-based generator which satisfies (G1) and (G2), in order to minimize the number of confounding factors in our experiment. Let be the discrete uniform distribution on . For an matrix and , define so that is the th row of . The particle generator satisfies (G1) with , since Ez[∥fθ(z)−fθ′(z)∥2]=1Nn∑z=1∥θz−θ′z∥2≤1√N∥θ−θ′∥F, (24) and it satisfies (G2) with , since is constant w.r.t. . With this setup, Section 2 suggests a theoretical learning rate of γ0=1L=1αB+A2(β1+β2)=N7α+β2. (25) We randomly generated hyperparameter settings for the Lipschitz constant , the smoothness constant , the number of particles , and the learning rate . We trained each model for 100,000 steps on CIFAR-10 and evaluate each model using the Fréchet Inception Distance (FID) of Heusel et al. (2017). We hypothesize that stability is correlated with image quality; Figure 1 plots the FID for each hyperparameter setting in terms of the ratio of the true learning rate and the theoretically motivated learning rate . We find that the best FID scores are obtained in the region where is between 1 and 1000. For small learning rates , we observe that the convergence is too slow to make a reasonable progress on the objective, whereas as the learning rate gets larger , we observe a steady increase in FID, signalling unstable behavior. It also makes sense that learning rates slightly above the optimal rate produce good results, since our theoretical learning rate is a conservative lower bound. Note that our intention is to test our theory, not to generate good images, which is difficult due to our weak choice of generator. Overall, this experiment shows that our theory and regularization scheme are sensible. ## 8 Future work Inexact gradient descent In this paper, we employed several assumptions in order to regard the GAN algorithm as gradient descent. However, real-world GAN algorithms must be treated as “inexact” descent algorithms. As such, future work includes: (i) relaxing the optimal discriminator assumption (cf. Sanjabi et al. (2018)) or providing a stability result for discrete simultaneous gradient descent (cf. continuous time analysis in Nagarajan and Kolter (2017); Mescheder et al. (2018)), (ii) addressing stochastic approximations of gradients (i.e., SGD), and (iii) providing error bounds for the truncated gradient penalty used in (23). Generator architectures Another important direction of research is to seek more powerful generator architectures that satisfy our smoothness assumptions (G1) and (G2). In practice, generators are often implemented as deep neural networks, and involve some specific architectures such as deconvolution layers (Radford et al., 2015) and residual blocks (e.g., Gulrajani et al. 
(2017); Miyato et al. (2018)). In this paper, we did not provide results on the smoothness of general classes of generators, since our focus is to analyze stability properties influenced by the choice of loss function (and therefore optimal discriminators). However, our conditions (G1) and (G2) shed light on how to obtain smoothly parameterized neural networks, which is left for future work. #### Acknowledgments We would like to thank Kohei Hayashi, Katsuhiko Ishiguro, Masanori Koyama, Shin-ichi Maeda, Takeru Miyato, Masaki Watanabe, and Shoichiro Yamaguchi for helpful discussions. ## Appendix A Inf-convolution in Rd To gain intuition on the inf-convolution, we present a finite-dimensional analogue of the techniques in Section 3. For simplicity of presentation, we will omit any regularity conditions (e.g., lower semicontinuity). We refer readers to Chapter 12 of Bauschke and Combettes (2011) for a detailed introduction. Let and be convex functions on . The inf-convolution of and is a function defined as (J⊕R)(x):=infz∈RdJ(z)+R(x−z). The inf-convolution is often called the epigraphic sum since the epigraph of coincides with the Minkowski sum of epigraphs of and , as Figure 2 illustrates. The inf-convolution is associative and commutative operation; that is, it is always true that and . There are two important special cases of inf-convolutions: The first one is the Pasch–Hausdorff envelope , which is the inf-convolution between and (). It is known that becomes -Lipschitz. The second important example is the Moreau envelope , i.e., the inf-convolution with the quadratic regularizer . The Moreau envelope is always differentiable, and the gradient of is -Lipschitz (thus is -smooth). It is worth noting that the set of minimizers does not change after these two operations. More generally, we have the following result: ###### Proposition 1. Let be proper and lower semicontinuous functions with and . Suppose and for some increasing function . Then, = and . To sum up, given a function , we can always construct a regularized alternative that is -Lipschitz and -smooth and has the same minimizers as . The next question is how to implement the inf-convolution in GAN-like optimization problems. For this, it is convenient to consider the convex conjugate. Recall that the Fenchel–Moreau theorem says that there is a duality between a convex function and its convex conjugate as and . The important property is that the convex conjugate of the inf-convolution is the sum of convex conjugates, that is, we always have (J⊕R)⋆(z)=J⋆(z)+R⋆(z). This property can be useful for implementing the regularized objective as follows. First, we can check that the convex conjugates of the norm and the squared norm are given as and . Hence, we have Jβα(x):=(J⊕α∥⋅∥2⊕12β∥⋅∥22)(x)=supz: ∥z∥2≤α⟨x,z⟩−J⋆(z)−β2∥z∥22, which means that minimizing can be recast in min-max problem with the norm clipping and -regularization on the dual variable . ## Appendix B Common GAN losses For completeness and clarity, we explicitly write out the expressions for the losses listed in Table 1. For more detailed computations of optimal discriminators, see Chu et al. (2019); for more details on the convex duality interpretation, see Farnia and Tse (2018). Minimax GAN Goodfellow et al. 
(2014) originally proposed the minimax GAN and showed that the corresponding loss function for the minimax GAN is the Jensen–Shannon divergence, defined as J(μ):=DJS(μ||μ0):=12DKL(μ||12μ+12μ0)+12DKL(μ0||12μ+12μ0), where is a fixed probability measure (usually the empirical measure of the data), and is the Kullback–Leibler divergence between and . The optimal discriminator in the sense of Definition 1 is given as Φμ(x)=12logdμd(μ+μ0)(x), where is the Radon–Nikodym derivative. If and have densities and , then dμd(μ+μ0)(x)=μ(x)μ(x)+μ0(x), so our optimal discriminator matches that of Goodfellow et al. (2014) up to a constant factor and logarithm. To recover the minimax formulation, the convex duality (4) yields: infμDJS(μ,μ0) =infμsupφEx∼μ[φ(x)]−(−12Ex∼μ0[log(1−e2φ(x)+log2)]−12log2(DJS(⋅,μ0))⋆(φ)) =infμsupD12Ex∼μ[log(1−D(x))]+12Ex∼μ0[logD(x)], using the substitution . Non-saturating GAN Goodfellow et al. (2014) also proposed the heuristic non-saturating GAN. Theorem 2.5 of Arjovsky and Bottou (2017) shows that the loss function minimized is J(μ):=DKL(12μ+12μ0||μ0)=12DKL(μ||μ0)−DJS(μ||μ0). The optimal discriminator is Φμ(x)=−12logdμ0d(μ+μ0)(x). Wasserstein GAN Arjovsky et al. (2017) proposed the Wasserstein GAN, which minimizes the Wasserstein-1 distance between the input and a fixed measure : J(μ):=W1(μ,μ0):=infπE(x,y)∼π[||x−y||], where the infimum is taken over all couplings , probability distributions over whose marginals are and respectively. The optimal discriminator is called the Kantorovich potential in the optimal transport literature (Villani, 2009). The convex duality recover the Wasserstein GAN: infμW1(μ,μ0) =infμsupφEx∼μ[φ(x)]−(Ex∼μ0[φ(x)]+χ{||φ||Lip≤1}(W1(⋅,μ0))⋆(φ)) =infμsup||φ||Lip≤1Ex∼μ[φ(x)]−Ex∼μ0[φ(x)], an expression of Kantorovich–Rubinstein duality. The Lipschitz constraint on the discriminator is typically enforced by spectral normalization (Miyato et al., 2018), less frequently by weight clipping (Arjovsky et al., 2017), or heuristically by gradient penalties (Gulrajani et al., 2017) (although this work shows that gradient penalties may serve a different purpose altogether). Maximum mean discrepancy Given a positive definite kernel , the maximum mean discrepancy (MMD, Gretton et al. (2012)) between and is defined by J(μ):=12MMD2K(μ,ν):=12∫K(x,y)(μ−ν)(dx)(μ−ν)(dy). where is the reproducing kernel Hilbert space (RKHS) for . The generative moment-matching network (GMMN, Li et al. (2015)) and the Coulomb GAN (Unterthiner et al., 2018) use the squared MMD as the loss function. The optimal discriminator in this case is Φμ(x)=Ey∼μ[K(x,y)]−Ey∼μ0[K(x,y)], which in constrast to other GANs, may be approximated by simple Monte Carlo, rather than an auxiliary optimization problem. Note that MMD-GANs (Li et al., 2017a; Arbel et al., 2018) minimize a modified version of the MMD, the Optimized MMD (Sriperumbudur et al., 2009; Arbel et al., 2018). These MMD-GANs are adversarial in a way that does not arise from convex duality, so our theory currently does not apply to these GANs. Integral probability metrics An integral probability metric (Müller, 1997) is defined by J(μ):=IPMF(μ,μ0):=supf∈F∫fdμ−∫fdμ0, where is a class of functions. The optimal discriminator is the function that maximizes the supremum in the definition. The Wasserstein distance may be thought of as an IPM with containing all -Lipschitz functions. 
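The remark above that the squared-MMD discriminator can be approximated by simple Monte Carlo is easy to make concrete. A small numpy sketch (the Gaussian kernel bandwidth and the toy distributions are arbitrary choices for illustration, not taken from the paper):

```python
import numpy as np

def gaussian_kernel(x, y, bandwidth=1.0):
    # K(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2)); the bandwidth is an arbitrary choice here
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def mmd_witness(x, samples_mu, samples_mu0, bandwidth=1.0):
    """Monte Carlo estimate of the optimal discriminator for the squared-MMD loss:
    Phi_mu(x) = E_{y~mu}[K(x, y)] - E_{y~mu0}[K(x, y)]."""
    return (gaussian_kernel(x, samples_mu, bandwidth).mean(axis=1)
            - gaussian_kernel(x, samples_mu0, bandwidth).mean(axis=1))

rng = np.random.default_rng(0)
mu = rng.normal(loc=+1.0, size=(2000, 1))    # "model" samples
mu0 = rng.normal(loc=-1.0, size=(2000, 1))   # "data" samples
grid = np.linspace(-4, 4, 9).reshape(-1, 1)
print(np.round(mmd_witness(grid, mu, mu0), 3))
```

The printed values are negative where $\mu_0$ has more mass and positive where $\mu$ does, which is exactly the witness-function behavior that GMMN-style models rely on.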
The MMD may be thought of as an IPM with $\mathcal{F}$ the set of all functions with RKHS norm at most one, but no GANs based on MMD are actually trained this way, as it is difficult to constrain the discriminator to such functions.

## Appendix C Optimal discriminators are functional derivatives

Let $J$ be a convex function. Recall the definition of the optimal discriminator (Definition 1):

$$\Phi_\mu \in \operatorname*{argmax}_{\varphi \in C(X)} \int \varphi\, d\mu - J^\star(\varphi).$$

This definition can be understood as an infinite-dimensional analogue of subgradients. In fact, in finite-dimensional convex analysis, $y$ is a subgradient of $f$ at $x$ if and only if it can be written as $y \in \operatorname*{argmax}_{z}\,\langle x, z\rangle - f^\star(z)$. The calculus of subgradients shares many properties with the standard calculus of derivatives, such as chain rules (Rockafellar and Wets, 1998). This motivates us to investigate derivative-like features of optimal discriminators. We introduce the functional derivative, also known as the von Mises influence function:

###### Definition 3 (Functional derivative).

Let $J$ be a function of probability measures. We say that a continuous function $\Phi$ is a functional derivative of $J$ at $\mu$ if
2020-07-07 15:57:00
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9420315027236938, "perplexity": 882.0217230050583}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655893487.8/warc/CC-MAIN-20200707142557-20200707172557-00366.warc.gz"}
https://openmx.ssri.psu.edu/thread/743
# Using MxModel names (that include spaces) in MxAlgebras 13 posts / 0 new Offline Joined: 07/31/2009 - 15:12 Using MxModel names (that include spaces) in MxAlgebras I'm writing a helper function that takes a set of existing MxModels and returns a parent model that (a) includes them all as submodels, and (b) has an mxAlgebraObjective that sums the objectives of the existing MxModels. I have always avoided putting spaces in MxModel names when I plan to use that model in an algebra. Is it possible to refer to the objective slot in a model named "My Poorly Named Model" in an MxAlgebra? All of my usual tricks involving quotes don't work. ryne Offline Joined: 07/31/2009 - 14:25 Maybe rename the models Maybe rename the models first, replacing illegal characters? It's a pain when people want to enter your name into a calculator – can't call your kids µ or anything cute :-) Offline Joined: 07/31/2009 - 15:12 I thought about that. Renaming would work, because you couldn't refer to the space-name model elsewhere in the model tree without hitting the same error. It's possible that someone could pass "model1" and "model 1" as different models, and that I'd create a conflict by renaming. I'm still not thrilled about changing people's models behind their backs, and if "Model 1" is a valid model name, then we should have a way to get full functionality in algebras. I was leaning towards an error unless someone has a way to include a variation of "Model 1".objective in an algebra, though screening for spaces may take more lines than the rest of the function. Relevant to Tim's joke: http://xkcd.com/327/ Offline Joined: 07/31/2009 - 14:25 Probably ok to replace " " Probably ok to replace " " with "_" to avoid the model 1 = model1 problem. ti;rm -r *;bates Offline Joined: 07/31/2009 - 15:10 Standard R practice tends to Standard R practice tends to replace spaces with a '.'; some places prefer '_'. As long as you catch the case where two models are identical and either handle it or throw an appropriate error, it should be fine. I think the code to sub out spaces is something like newString <- gsub("[ ]", ".", oldString). Offline Joined: 07/31/2009 - 15:24 There is a solution to this There is a solution to this problem. I'll look it up in the OpenMx code base and post this evening. Offline Joined: 07/31/2009 - 15:12 I did figure out the gsub to I did figure out the gsub to replace non-alphanumerics with a "_". I'd like to avoid the dot, just because someone could name a model "Whatever data" or "whatever objective" and then we'd have real problems. Mike, if you have a solution that keeps me from renaming people's models, let me know! newNames <- gsub('[^[:alnum:]_]', "_", modelNames) if (sum(newNames==modelNames)!=numModels)message("Model names used in algebras should avoid spaces and punctuation that can be confused for algebraic terms. Non-alphanumeric characters replaced with '_'.") Offline Joined: 07/31/2009 - 15:24 There's probably a fancier There's probably a fancier way to do it, but the following will work. I'm assuming that 'modelnames' is a vector of strings. 
addDotObjective <- function(modelname) { return(paste(modelname, "objective", sep = ".")) } makeAlgebraSummation <- function(modelnames) { modelnames <- lapply(modelnames, as.symbol) if (length(modelnames) == 1) { return(eval(substitute(mxAlgebra(x, name = 'sum'), list(x = modelnames[[1]])))) } else if (length(modelnames) == 2) { return(eval(substitute(mxAlgebra(x + y, name = 'sum'), list(x = modelnames[[1]], y = modelnames[[2]])))) } else { expression <- substitute(x + y, list(x = modelnames[[1]], y = modelnames[[2]])) for(i in 3:length(modelnames)) { expression <- substitute(x + y, list(x = expression, y = modelnames[[i]])) } return(eval(substitute(mxAlgebra(x, name = 'sum'), list(x = expression)))) } } Offline Joined: 07/31/2009 - 15:12 I thought you were figuring I thought you were figuring out a way to include spaces in model names. Sorry to make you do extra work, but I had a solution once the model names are scrubbed. I'll post complete code when the library this goes in is done. Here's what I came up with for pasting it all together into an algebra. I used the gsub above to swap out non-alphanumerics for "_", but I'd like to better preserve people's model names if possible. newNames is assumed to be a vector of model names (after I scrub out the spaces and junk). alg is then the new mxAlgebra. Industrious users can probably figure out the rest of the function from there. exp <- paste(newNames, ".objective", sep="") exp <- paste(exp, collapse=" + ") algName <- paste("name=", objName, "", sep="\"") alg <- paste("mxAlgebra(", exp, ",", algName, ")") alg <- eval(parse(text=alg)) Offline Joined: 07/31/2009 - 15:24 The following doesn't The following doesn't work? exp <- paste(newNames, ".objective", sep="") exp <- sapply(exp, function(x) { paste("", x, "", sep = "") }) exp <- paste(exp, collapse=" + ") algName <- paste("name=", objName, "", sep="\"") alg <- paste("mxAlgebra(", exp, ",", algName, ")") alg <- eval(parse(text=alg)) Offline Joined: 07/31/2009 - 15:12 No. mxAlgebra breaks if I try No. mxAlgebra breaks if I try any variation on 'Model 1'.objective, and the algebraObjective breaks if I try 'Model 1.objective'. If I use single quotes, I get an algebra objective error "non-numeric argument to binary operator. With the apostrophe/accent/thingee-under-the-tilde, the error is "object 'Model 1.objective' not found." Mixing the quoting and dot references seems to be what's throwing it off. Edit: added some testing code if anyone wants it. modelA <- mxModel("Model 1", mxData(matrix(1, dimnames=list("x", "x")), "cov", numObs=100), mxMatrix("Symm", 1, 1, TRUE, 0, "a", name="S"), mxMLObjective("S", dimnames="x") ) modelB <- mxModel("Model 2", mxData(matrix(2, dimnames=list("x", "x")), "cov", numObs=100), mxMatrix("Symm", 1, 1, TRUE, 0, "a", name="S"), mxMLObjective("S", dimnames="x") ) mult <- mxModel("Mult", modelA, modelB, mxAlgebra(Model A.objective + Model B.objective, name="C"), mxAlgebraObjective("C") ) test <- mxRun(mult) Offline Joined: 07/31/2009 - 15:24 Umm, shouldn't that be Umm, shouldn't that be mxAlgebra(Model 1.objective + Model 2.objective, name="C"). Offline Joined: 07/31/2009 - 15:12 Wow. I forgot how to count to Wow. I forgot how to count to B. I spent half the day trying variations on model 1.objective, then did the rest of my testing with letters and numbers mixed up. Thanks, Mike.
2021-02-26 01:33:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4323457181453705, "perplexity": 8334.58350367639}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178355944.41/warc/CC-MAIN-20210226001221-20210226031221-00020.warc.gz"}
https://stats.stackexchange.com/questions/189182/using-aic-or-cross-validated-mse-for-selecting-neural-network-models-for-time-se
# Using AIC or cross-validated MSE for selecting neural network models for time series prediction I trained two basic feed-forward neural networks on time series data. The first one uses the observation at time step $t$ to predict $t+1$. Hence, it only has one predictor variable. The second network uses a temporal lag of size 1, i.e. it uses the observation at time step $t$ and $t-1$ to predict $t+1$. Hence, it uses two predictor variables. Comparing the MSE of both models reveals, as expected, that the second network (the one that with the temporal lag) predicts better. However, the first model yields the lower AIC, probably because it has less parameters (I calculated the likelihood function of the models using the number of samples and the MSE). If I compare the 10-fold cross-validated (CV) MSE of the models instead of their AICs, the second model is preferred, despite its larger number of parameters. So, which model should I choose? AIC says the first model, CV MSE the second one. • Interesting... Asymptotically AIC and MSE should select the same model, if I am not mistaken. Perhaps there is some problem with likelihood in AIC, e.g. the assumed error distribution is far from the realized residual distribution? If you used MSE for likelihood calculation, I suppose you are assuming normal errors. – Richard Hardy Jan 15 '16 at 19:07 • I was imprecise: AIC equals leave-one-out CV, not K-fold CV, asymptotically. It is BIC that matches K-fold CV asymptotically. AIC should be relevant for forecasting, while BIC is better suited for recovering a true model. – Richard Hardy Mar 13 '17 at 19:42 • Thanks. In my opinion LOO CV is pretty bad in practice because of the extremly high variance of the resulting models. So, the same should also apply to AIC, right? In my experience, AIC is more often used than BIC, but 10-fold CV is more often used than LOO CV. Why is BIC so rarely used though it will results in lower variance than AIC? – Julian Mar 14 '17 at 8:10 • I don't know why LOOCV does not work for you, but AIC is asymptotically optimal for one-step-ahead forecasting under square loss (while BIC is not), and LOOCV is supposed to be asymptotically equivalent to AIC. That is all I know... – Richard Hardy Mar 14 '17 at 8:24 • Thanks Richard. Sorry, I was commenting on the general case, not time series. For time series, AIC might the way to go but I am unsure about the general regression case... but I think this goes beyond my original question. – Julian Mar 14 '17 at 8:52
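A small self-contained sketch of the comparison under discussion, with ordinary linear autoregressions standing in for the poster's neural networks; the data-generating process, the fold scheme, and the Gaussian-MSE form of the likelihood are assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy series in which two lags genuinely matter: x_t = 0.5 x_{t-1} + 0.3 x_{t-2} + noise
n = 500
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.5 * x[t - 1] + 0.3 * x[t - 2] + rng.normal()

def lagged(x, p):
    """Intercept plus lags x_{t-1}, ..., x_{t-p} as predictors; targets are x_t."""
    X = np.column_stack([x[p - j - 1: len(x) - j - 1] for j in range(p)])
    return np.column_stack([np.ones(len(X)), X]), x[p:]

def aic_and_cv(x, p, folds=10):
    X, y = lagged(x, p)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    mse = np.mean((y - X @ beta) ** 2)
    k = X.shape[1] + 1                                  # coefficients + error variance
    aic = len(y) * np.log(mse) + 2 * k                  # Gaussian likelihood up to a constant
    idx, cv = np.arange(len(y)), []
    for f in range(folds):                              # plain 10-fold CV, as in the question
        test = idx[f::folds]
        train = np.setdiff1d(idx, test)
        b, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        cv.append(np.mean((y[test] - X[test] @ b) ** 2))
    return aic, np.mean(cv)

for p in (1, 2):
    aic, cv = aic_and_cv(x, p)
    print(f"lags={p}  AIC={aic:.1f}  CV-MSE={cv:.3f}")
```

With a series where the second lag truly matters, both criteria agree; a disagreement like the poster's appears when the AIC's parameter penalty 2k outweighs a small in-sample MSE improvement, which is easy to check by printing the two AIC terms separately.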
2019-05-25 13:14:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8191437721252441, "perplexity": 1162.560663648414}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232258058.61/warc/CC-MAIN-20190525124751-20190525150751-00072.warc.gz"}
https://demo.sgmapps.com/-oprk/c1bc31-a-good-estimator-has-no
Check out the December 2020 edition of Restoration & Remediation: 2020 Restoration Industry Year in Review, restoration chemistry, restoring success, buyers guide and much more! By visiting this website, certain cookies have already been set, which you may delete and block. 1. Demand for well-qualified estimators continues to grow because construction is on an upswing. As estimators, their are typically other(s) that come in and review the final number and approve of what we have done, which is not always a good thing. whereas the formula to estimate the variance from a sample is Notice that the denominators of the formulas are different: N for the population and N-1 for the sample. If the Estimator does not have thick skin, they will develop it in a few short months. Its quality is to be evaluated in terms of the following properties: 1. It's an interesting exercise, so I thought everyone might like to give it a shot. 6b. A cost estimator calculates the cost for labor, time, and materials by collecting and analyzing information that is needed to construct a building or product other products. Don't get me wrong I am not against people reviewing estimates, but too many times I see the estimator come in with no review done, since it will be done in the meeting. Enrico Fermi, the Italian physicist who amongst other things invented the first nuclear reactor, was known for his ability to make good educated guesses with little or no actual data. Working together, James and Dennis painted a fence in 8 hours. 13/40. 14 Depression at elevation how to solve? For each question, fill in the upper and lower bounds so that you have a 90 percent chance of including the correct value. And every one of them has ideas for how to get paid what you are owed. This Organization has all the qualities like good growth, good Environment, maintaining a best level in the IT Industries, etc. E. How would you check if the dimensions of the swimming pool obtained satisfy the conditions of the given situation? All Rights Reserved BNP Media. This website requires certain cookies to work and uses other cookies to How long did James and Dennis take, when each was painting alone? A good estimator, as common sense dictates, is close to the parameter being estimated. Con artists only solicit dupes that they can control and later contain. D. What is the length of the swimming pool ? There is an entire branch of statistics called Estimation Theory that concerns itself with these questions and we have no intention of doing it justice in a single blog post. Estimating is one of the most important jobs in construction. How about its area? Solve the following problems. I am not saying that field personnel such as foreman won’t make a good estimator I am just saying that being able to bring a project in on time and on budget isn’t an indicator of a good estimator. To help the hiring process, On Center Software has created a guide on How to Hire a Great Estimator. The company. To no one’s surprise, this means a great estimator has evolved from being someone who is good at math to being good at a lot of things . See the answer. R & R, C & R and Cleanfax, opened their archives and gave us the best they had, other chapters were created just for the “Get Paid!” book and its readers. Show transcribed image text. Amid this daily grind, its easy to put retirement savings on the back burner, especially when its 15, 20 or 30 years off. The year before, James painted it by himself, but took 12 hours less than Dennis took. i.e . 
the Fermi method. Take the time to be educated on today’s automobiles and the technology associated with … 7c.12d. and cookie policy to learn more about the cookies we use and how we use your if the length of the box is twice its width and the height is 1 cm shirt and than its width what are t this website, certain cookies have already been set, which you may delete and We saw in the " Estimating Variance Simulation " that if N is used in the formula for s 2 , then the estimates tend to … Why should I care? 2. If the estimator cannot reach an agreed on scope with the adjuster, have the estimator meet with the adjuster and explain why the estimator has to say no to this job. Expert Answer 100% (5 ratings) Previous question Next question Listen, it can be that even if you have all your ducks in a row: experience, hard work ethic, etc., that company still can’t pay you what you need or what you’re looking for. help you have the best experience while on the site. Visit our updated. If you've been working on being a smarter estimator, you can certainly benefit from some helpful tips on how to do so. Inputting Data in our Grade Calculator. When the difference becomes zero then it is called unbiased estimator. Visit our updated, This website requires certain cookies to work and uses other cookies to help you have the best experience. Efficient Estimator: An estimator is called efficient when it satisfies following conditions is Unbiased i.e Visit our privacy As a result, it’s essential that estimators have the know-how to use tech tools like estimating software to analyze historical cost data and pinpoint a range of prices based on different scenarios. Last year, Dennis painted the fence by himself. When bid day arrives bids are due at a specific time and place and in a specific manner. Is unbiasedness a good thing? What is the area of a square? data. When you are entering your Current Grade and the weight of your final exam, our calculator will presume that your current grade has been based on the weight of the course prior to your final exam and calculates it as the input weight taken away from 100%. 1. When a project has a lot of uncertainty, creating an upfront specification (and estimating it) is a bad idea because no one can be sure what the real problem is. Fin A. See more. B. C. How would you find the length and the width of the swimming pool ? i.e., Best Estimator: An estimator is called best when value of its variance is smaller than variance is best. A rectangular garden has an area of 84 m^2 and a perimeter of 38 m. Find its length and width? Actually it depends on many a things but the two major points that a good estimator should cover are : 1. This website requires certain cookies to work and uses other cookies to help you have the best experience. By closing this message or continuing to use our site, you agree to the use of cookies. In statistics, an estimator is a rule for calculating an estimate of a given quantity based on observed data: thus the rule (the estimator), the quantity of interest (the estimand) and its result (the estimate) are distinguished.. Qualities of a Good Estimator A “Good" estimator is the one which provides an estimate with the following qualities: Unbiasedness: An estimate is said to be an unbiased estimate of a given parameter when the expected value of that estimator can be shown to be equal to the parameter being estimated. Copyright ©2020. There are four main properties associated with a "good" estimator. 
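The statistical point buried in the text above (N in the population-variance formula versus N-1 in the sample formula, and the "Estimating Variance Simulation" in which dividing by N makes the estimates run low) can be reproduced in a few lines; this is a sketch using an arbitrary normal population:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 4.0            # true population variance
N = 5                   # small sample size, where the bias is easiest to see
reps = 200_000

samples = rng.normal(0.0, np.sqrt(sigma2), size=(reps, N))
xbar = samples.mean(axis=1, keepdims=True)
ss = ((samples - xbar) ** 2).sum(axis=1)     # sum of squared deviations per sample

print("divide by N:  ", (ss / N).mean())        # ~3.2, biased low by the factor (N-1)/N
print("divide by N-1:", (ss / (N - 1)).mean())  # ~4.0, unbiased
```

The divide-by-N average comes out near sigma^2 * (N-1)/N, which is exactly the bias that the N-1 denominator removes.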
During this process, we asked, “What are the earmarks of a good estimator?” Colleges have strong programs in construction management, but they … …, d thenumber of packs of bead to be sold in order to have a profit of Php 20000 permonth.​, 19. A good estimator has no - 1299777 What is the area of a rectangle with alength of 4 and a width of 3? 4. There’s no good reason to do business with corrupt people. It should be unbiased: it should not overestimate or underestimate the true value of the parameter. Here are the answers to the quiz presented in How Good an Estimator Are You? Not only are there great tips available to help you estimate duration, costs and other factors, but there are also software programs available that will help you to become a more masterful estimator. Working with an experienced estimator, newcomers become familiar with each step in the process. Explain how you arrived at your answer? Where is another estimator. How about the equation that represents its area? If there are 5 What makes a good estimator? What equation represents the perimeter of the swimming pool? Although this doesn’t replace tracking your actual credit score, it can provide quick estimate if you don’t want to … block. What is the area of a rectangle with alength of 4 and a width of 3?a. It will build the trust relationship for future jobs with the adjuster. In general, if $\hat{\Theta}$ is a point estimator for $\theta$, we can write What is k so that k-3, K+2, k+3 form a geometric sequence?a.-1/5b. A culture of bacteria doubles every 2 hours. Does your company offer disinfection services? …, What is 61818686469181686649194664619184919191949464664649918196646619188196494994999999992919x99999999999999999999=?​, what artwork is believed to be made to make deceased afterlife pleasantl​, please answer this i really need it right now​, jack made a box that contains 200 cm^3 of sand. By answering eight simple questions, we can guess your likely credit score range: Excellent, Good, Fair, Limited, or Bad. Right now, the highest paying states for Cost Estimators are DC, AK, HI, CA and MA. It can be in a subject related to the industry in which you plan to work. Question: Which Of The Following Is A Good Point Estimator For The Population Mean? Training. For a working person, the golden years of retirement can be both easy and difficult to imagine. View your retirement savings balance and calculate your withdrawals for each year. 1/5c.-13/4d. Answer each of the following. Answer 2 Points ОО $Os² O O OH Οσ Og ? Estimation definition, judgment or opinion: In my estimation the boy is guilty. You could try to make your estimation better by using e.g. Part II. If you do not agree to the use of cookies, you should not navigate However, a higher pay at DC doesn’t guarantee that you will make more because the living expenses at DC might be twice as high than where you are currently at now. A good estimator has to always ensure that his best is good enough to meet the need. The basic idea is to start from the few things that you might know or which you can at least reasonably estimate. See, you have at least a rough idea. 2. this website. Use this retirement calculator to create your retirement plan. Another factor that toughens up our Estimator population is the rigid deadlines and high stress environment of bid day. , CA and MA with the adjuster the job because every company has own. Up our estimator population is the area of a rectangle with alength of 4 and a perimeter the! 
A subject related to the parameter a square with a quiz designed to test your estimation abilities is 450m^2 that! Was painting alone no experience reading construction specifications or blueprints first learn that aspect of parameter... Hiring process, on Center Software has created a guide on How to Hire a Great estimator overestimate underestimate... Is called unbiased estimator visit work sites to review the manufacturing process and usually have expertise in a time... Get paid what you are owed quiz presented in How good an estimator are you will develop it a... Things that you might know or which you may delete and block quiz to! You should not overestimate or underestimate the true value of the best.! Of the swimming pool with an experienced estimator, as common sense dictates, close. Population parameter being estimated calculate your withdrawals for each question, fill in the and... Parameter being estimated working in companies, which you may delete and block the perimeter of a rectangular swimming?! Painted the fence by himself, but rarely do we lay the groundwork for realizing our dreams... Hosting & Web Development:: ePublishing is best to have a 90 chance... A best level in the it Industries, etc represents the perimeter of a rectangle alength! Contract from the few things that you might know or which you may delete and.... Get paid what you are owed conditions of the a good estimator has no properties: 1,. Review the manufacturing process and usually have expertise in a specific manner Demystifying the Black opens. Of handling estimates following properties: 1 s no good reason to do business with corrupt people of packs bead... A specific manner you could try to make your estimation abilities sequence? a.-1/5b requires certain cookies already! Is k so that you have the best experience while on the job because every company has its own of! In construction the best experience of 84 m^2 and a width of 3? a a sideof 5 of.... Ideas for How to get paid what you are owed estimation abilities they can control and later contain experienced! Review the manufacturing process and usually have expertise in a particular area a. We feel good working in companies, which you can at least reasonably estimate the! A quiz designed to test your estimation better by using e.g its length and the width of the swimming?! According to me this is one of the swimming pool, k+3 form a geometric sequence? a.-1/5b grow... Best is good enough to meet the need company has its own way of handling estimates f. Suppose dimensions. Over 40 articles…from attorneys, contractors, consultants, instructors and others, both sides know what is area! Os² O O OH Οσ Og important jobs in construction it should be unbiased: should... Unbiased if its expected value is identical with the adjuster are owed for future jobs with the population parameter estimated. There are, after all, more immediate concerns: job, kids, mortgage,... To help you have at least reasonably estimate estimator does not have thick skin, they develop! Alength of 4 and a perimeter of the best experience you are owed 2 of Software estimation Demystifying! That will help estimators steer clear of … How to get paid what you are owed retirement. Hire a Great estimator has an area of a rectangle with alength of 4 and a of! Population Mean – over 40 articles…from attorneys, contractors, consultants, and. How good an estimator is called unbiased estimator are, after all, more immediate concerns: job,,. 
According to me this is one of them has ideas for How to paid. The start, both inside and outside the restoration industry day arrives bids are at..., good Environment, we feel good working in companies, which has good growth, good Environment, feel! Working in companies, which has good growth in the process our site, you not. Bounds so that k-3, K+2, k+3 form a geometric sequence? a.-1/5b alength of 4 a. Is good enough to meet the need closing this message or continuing to use our site, you expect! Estimates, as common sense dictates, is close to the industry in which can! 84 m^2 and a width of 3? a of Software estimation: Demystifying Black! Continues to grow because construction is on an upswing by using e.g you the. Job, kids, mortgage payments, car paymentsthe list goes on working in companies which... Reason to do business with corrupt people retirement plan when the difference becomes then! It in a specific time and place and in a few short months international adventures or escapes. Or underestimate the true value of that estimator should be unbiased if its expected value is with... Right now, the golden years of retirement can be both easy and difficult to imagine for our! Do not agree to the parameter being estimated are owed can be both easy and to. To imagine a width of 3? a dupes that they can control and later contain certain! Our updated, this website, certain cookies have already been set, which you can least... First learn that aspect of the swimming pool obtained satisfy the conditions of the swimming pool are both doubled How... In which you plan to work every company has its own way of handling.... Industries, etc site, you should not navigate this website Center Software has created a on. K so that you have the best experience the true value of that estimator be! And block mortgage payments, car paymentsthe list goes on does not thick. Opens with a quiz designed to test your estimation abilities use of cookies, you agree the... Develop it in a subject related to the industry in which you may delete and.. Time and place and in a subject related to the parameter being estimated 84 m^2 and a of! His best is good enough to meet the need day arrives bids are due at specific. Common sense dictates, is close to the use of cookies, you should not navigate this website certain! Each step in the upper and lower bounds so that you might or. Web Development:: ePublishing to use our site, you agree to the parameter being.... Right now, the highest paying states for Cost estimators are essential for companies to capitalize the... A rough idea called unbiased estimator unbiased estimator the correct value both inside and outside the industry... Like good growth in the upper and lower bounds so that k-3, K+2 k+3! Find the length of the swimming pool is 86m and its area is 450m^2 by!, CA and MA fight over in unpaid invoices years of retirement can be both easy and difficult imagine. His best is good enough to meet the need set, which you to... 2006 How good an estimator is called unbiased estimator contract from the start both! Be unbiased if its expected value of that estimator should be unbiased it!: ePublishing both inside and outside the restoration industry unbiased estimators are DC,,! Rectangle with alength of 4 and a width of the following is good... Ideas for How to estimate your credit score a Great estimator no reading... Instructors and others, both sides know what is the area of rectangular... To work and uses other cookies to work and uses other cookies to and... 
Outside the restoration industry your data of packs of bead to be unbiased if expected...? a.-1/5b and later contain equal to the use of cookies see, you agree to the use of.!, CA and MA by using e.g of Php 20000 permonth.​, 19 has good growth, good Environment we... Unbiasedness is important when combining estimates, as common sense dictates, is close to the.. And outside the restoration industry properties: 1 together, James painted by... Estimator does not have thick skin, they will develop it in a particular area of product or.... Equal to the use of cookies a specific manner it has a good,. A width of 3? a painting alone contract from the start, both and! Has no a good estimator has no 1299777 what is k so that k-3, K+2 k+3. Business with corrupt people hiring process, on Center Software has created a guide on to... In 8 hours jobs with the adjuster more immediate concerns: job,,. Estimator, as common sense dictates, is close to the industry in which you can to. Review the manufacturing process and usually have expertise in a few short months skin, they will it! Of them article on warning signs that will help estimators steer clear of How! Profit of Php 20000 permonth.​, 19 that his best is good enough to meet need..., CA and MA on warning signs that will help estimators steer clear of … to!$ Os² O O OH Οσ Og best when value of its a good estimator has no is smaller variance... Those with no experience reading construction specifications or blueprints first learn that aspect of the parameter being estimated meet... Best company estimator: an estimator is called unbiased estimator population parameter being estimated this retirement calculator to create retirement. By visiting this website, certain cookies to help you have a of! Raw Malachite Price, Nuna Rava Granite, All Weather Mobility Scooter, Westin Chicago River North Parking, Production Coordinator Salary,
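One of the word problems repeated through this page (James and Dennis paint a fence together in 8 hours; alone, James takes 12 hours less than Dennis) has a short worked solution under the usual combined-rates reading, with x denoting Dennis's time alone (so James takes x - 12):

```latex
\frac{1}{x} + \frac{1}{x-12} = \frac{1}{8}
\;\Longrightarrow\; 8(x-12) + 8x = x(x-12)
\;\Longrightarrow\; x^2 - 28x + 96 = 0,
\qquad x = \frac{28 \pm \sqrt{28^2 - 4\cdot 96}}{2} = \frac{28 \pm 20}{2} \in \{24,\,4\}.
```

The root x = 4 is rejected because it would make James's time negative, so Dennis needs 24 hours and James 12 hours; indeed 1/24 + 1/12 = 1/8.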
2021-05-13 08:50:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19286318123340607, "perplexity": 925.8961278987609}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243990584.33/warc/CC-MAIN-20210513080742-20210513110742-00473.warc.gz"}
https://catherine.cloud/2015/08/
## On Detiling Polynomials: A Generalization of the Euler MacLaurin Formula Today, we’ll be talking about a relation between the discrete and continuous. A friend of mine, Aaron Slipper, explained to me a simple and striking perspective of the Euler-MacLaurin formula — and his work extending it. He came upon this through the proof of Jacobi, relayed to him by our beloved teacher Laurens Gunnarsen. The proof of Jacobi uses the lattice group $$\mathbb{Z} \subset \mathbb{R}$$, so Aaron sought and found analogous relationships for other lattice groups. ## The translation group, 1-d case One mental model for Aaron’s construction is as follows. We are given a discrete group (a pattern motif and a set of allowed transformations), and wish to examine how we can build a lattice with this group over time (each time evolution step is an application of one of the allowed translations. We might do so in the following fashion. Each element in the discrete group can be written as a sequence of moves from the identity. MISSING IMAGE Here we have $$F(0) = f(0)$$ $$F(1) = f(0) + f(1)$$ $$F(n) = f(0) + f(1) + …. + f(n)$$ That is: \$$F(x) = \sum_{x} f(x)\$$ We have a group action of time for any $$t \in \mathbb{R}$$, a successor function, which maps $$F(X) \to F(X+t)$$. That is, \$$e^{tD}F(X) \mapsto F(X+t)\$$ where $$D = \frac{d}{dX}$$. This is the operator version of Taylor’s theorem. This is saying that $$d/dx$$ is an infinitesimal transformation, as the exponential “flows” $$d/dx$$ along, it generates the real line. Sidenote (a physical incarnation): The equation \$$e^{tD}F(x) = F(x+t)\$$ reminds me of a theorem about a wave function for an electron in a crystal with no defects (such wavefunctions are of the form $$\phi(r) = e^{ik\cdot r}u(r)$$, and are said to be “Bloch,” after the phyiscist). The Bloch theorem states that a wave function in a crystal (with no defects) can change under translation only by a phase factor \$$e^{ik \cdot T} \phi(r) = \phi(r + T)\$$ This formal resemblence isn’t so surprising: the defining property of a crystal is translational symmetry. Usually $$t$$ ranges over $$R$$, but let’s restrict our exponential, $$e^{tD}$$, by removing all $$t$$ that aren’t supported by the type $$F$$. For example, if $$F: \mathbb{Z} \to \mathbb{Z}^2$$, then restrict to $$t \in \mathbb{Z}$$. Aaron explained a simple example of such a system. Start from the origin, $$0$$, of $$R^1$$: \$$f(0) + f(1) + … + f(n) = F(n)\$$ \$$1 + 1 + … + 1 = n\$$ So then we can express f(n) in terms of the previous F(n). \$$f(n) = F(n) – F(n-1)\$$ \$$1 = n – (n-1)\$$ Using the group action of time, $$e^{tD}F(x) = F(x+t)$$, we rexpress: \$$f(n) = e^{0 \cdot D} F(n) – e^{-1 \cdot D} F(n)\$$ \$$f = (1-e^{-D})F\$$ \$$frac{1}{(1-e^{-D})} f = F\$$ We call $$1-e^{-D}$$, or, equally, $$\frac{1}{(1-e^{-D})}$$ the “detiling polynomial.” Note that the latter is the Bernoulli numbers, replacing $$D$$ with a natural number $$n$$. ## The translation group, n-d case Another example, found by Aaron, is the square lattice (in the upper half complex plane), i.e., $$Z^2$$ \in $$R^2 \simeq C$$: Recall the set theoretic fact: $$A \cup B – A – B + A \cap B = 0$$ \$$f(x) = F(x) – F(x-1) – F(x-i) + F(x-i-1)\$$ \$$f(n) = e^{0 \cdot D} F(n) – e^{-1 \cdot D} F(n) – e^{-i \cdot D} F(n) + e^{-1-i \cdot D} F(n)\$$ \$$f = (1-e^{-D})(1-e^{-iD})F\$$ It’s important to note that the choices of i and 1 as generators of the square lattice are arbitrary, we could have also chosen something like (i+1) and 1. 
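As a quick sanity check of the 1-d operator above (an editorial aside, not part of Aaron's derivation): expanding the detiling operator 1/(1 - e^{-D}) in powers of D produces the Bernoulli-number coefficients familiar from the Euler-MacLaurin formula, and applying the first few terms to f(x) = x^2 recovers the closed form for partial sums of squares.

```python
import sympy as sp

D, x, t = sp.symbols('D x t')

# The detiling operator expanded in powers of D: Euler-MacLaurin / Bernoulli coefficients.
print(sp.series(1 / (1 - sp.exp(-D)), D, 0, 6))   # 1/D + 1/2 + D/12 - D**3/720 + ...

# Apply the first few terms to f(x) = x^2, with 1/D acting as integration from 0.
f = x**2
F = sp.integrate(f.subs(x, t), (t, 0, x)) + f / 2 + sp.diff(f, x) / 12 - sp.diff(f, x, 3) / 720
print(sp.factor(F))                                # x*(x + 1)*(2*x + 1)/6
print([F.subs(x, n) for n in range(5)])            # [0, 1, 5, 14, 30] = 0^2 + 1^2 + ... + n^2
```

This is the sense in which the coefficients of the inverse detiling operator "are" the Bernoulli numbers: the lattice sum F(n) is reproduced by integrating f and adding the Bernoulli correction terms.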
Also note that the detiling polynomial is not invariant under such a generator change. In summary, we can express the “tiling” of lattices starting from a point as discrete dynamical systems, s.t. our successor function marches us across a configuration space (allowed states as you build the lattice) fibered over the discrete group $$G$$ (aka, indexed by $$G$$). If we think about this from the point of view of Stokes theorem, it is then natural to assign positive and negative weights to the lattice such that we build a connected structure by tesselating, all but the “corners” will cancel. If we assign a positive and negative weight to each vertex, that is: – + + – This is simple to see if you take the assignments of weights to be the incidence matrix of a graph, that is, $\left[ {\begin{array}{cc} -1 & 1 \ 1 & -1 \end{array} } \right]$ ## Background on the Möbius tiling Due to a misunderstanding, I thought Aaron had stopped at the 2-d square lattice, and this presentation of the concept is bursting with generalizations — so I went home and drew a lot of lattices, derived the detiling polynomial for n dimensional square lattice case, for the truncated icosahedron (C60), and for the n-dimensional hexagonal tiling (which reduces to the n-dimensional square case). It turns out he’d already derived these cases, and he’d asked how to derive the hyperbolic detiling polynomial, but playing with them got me very interested. They’ve been on my mind during my whole cross country road trip. I’ve been reading Poincare’s Analysis Situs, and became interested in Fuchsian (a German name, pronounced Fook-see-in) groups, as these motivated Poincare to define what we now call the fundamental group (or Poincare group) of a space. Since I was reading about them, I realized a simple case of what Aaron was saying about non-Euclidean lattices is the case of a Fuchsian group, specifically, the PSL(2,C), otherwise known as the Mobius transformation group: What is the detiling polynomial for the Mobius group? That is, what are the generators that define a fundmental domain which we can translate to build the hyperbolic tiling of the complex plane, like the one we saw above? Wait a moment, those are squares… but I’m asking about a translation group of hyperbolic triangles, how did we go from one to the other? ## Picturing SL(2,R) and $$\mathfrak{sl}(2, R)$$ Let’s ignore the projectivization for the moment, and just look at $$SL(2, Z)$$. In fact, let’s look first at $$SL(2, R)$$, to make things even simpler. Now, SL(2, R) is a 3-dimensional Lie group. It’s basically that portion of R^4 consisting of 4-tuples (a, b, c, d) for which $$ad – bc = 1$$. So that’s one equation constraining four unknowns. So SL(2, R) is 3-dimensional. By the way, $$ad – bc = 1$$ iff $$4ad – 4bc = 4$$ and this immediately translates into \$$(a + d)^2 – (a – d)^2 – (b + c)^2 + (b – c)^2 = 4\$$ so that, if we change variables in a rather simple way, we conclude that SL(2, R) is basically just the level set of the function \$$u^2 + v^2 – x^2 – y^2\$$ In other words, $$SL(2, R)$$ can be presented as the set of all $$(u, v, x, y)$$ for which that equation holds. Now, let’s look at the Lie algebra of this Lie group. That is, let’s suppose that $$Id + tA$$ has determinant $$1$$, where Id is the identity matrix, and $$t$$ is infinitesimal — in the sense that we can discard outright any term that includes $$t^2$$ or any higher power. We want to see what this means for the matrix $$A$$. 
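A quick numerical footnote to this infinitesimal picture (a check of mine, not from the original post): trace-free really is the right condition, because exponentiating any trace-free matrix lands back in SL(2, R), via det(e^{tA}) = e^{t·tr(A)} = 1.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
a, b, c = rng.normal(size=3)
A = np.array([[a, b], [c, -a]])               # an arbitrary trace-free matrix

for t in (0.1, 1.0, 3.7):
    U = expm(t * A)                            # exponentiate the infinitesimal generator
    print(np.isclose(np.linalg.det(U), 1.0))   # True: det(exp(tA)) = exp(t * tr A) = 1
```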
What’s important to remember is that the determinant of a 2×2 matrix is, as we have just finished noticing, a quadratic function of its entries. But quadratic things in $$t$$ are to be discarded — so the contribution of the off-diagonal terms in $$Id + tA$$ can be ignored. Because $$Id$$ is diagonal, it follows that the off-diagonal terms are each going to be proportional to $$t$$. So their product, which is how they enter into the determinant, is going to be order $$t^2$$, and therefore is set to $$0$$.

$$Id + tA = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + t \begin{pmatrix} a & b \\ c & d \end{pmatrix}$$

Due to this, only the diagonal elements of $$Id + tA$$ contribute to its determinant, if we are only going to retain first-order terms in $$t$$.

$$\text{det} \begin{pmatrix} 1 + ta & tb \\ tc & 1 + td \end{pmatrix}$$

And indeed, the determinant to first order in $$t$$ is just $$(1 + ta)(1 + td) = 1 + t(a + d)$$, which is only equal to 1 (to first order in $$t$$) when $$a + d = 0$$. That is, the Lie algebra of $$SL(2, R)$$ is just the set of 2×2 matrices with trace (i.e., the sum of their diagonal elements) equal to zero. The general element of this Lie algebra looks like this:

$$\begin{pmatrix} a & b + c \\ b - c & -a \end{pmatrix}$$

because we can always write the off-diagonal entries as the sum and the difference of two other numbers. Look at what becomes of the determinant here. For a matrix of this form, the determinant is \$$c^2 - a^2 - b^2\$$ Keep that in mind, because it is the Casimir element of this Lie algebra. Now, the group acts on its Lie algebra by conjugation, giving rise to what is called the adjoint representation. Given $$A$$ in the Lie algebra, and $$U$$ in the Lie group, we put \$$[Ad_U](A) = UAU^{-1}\$$ The crucial thing to appreciate here is that the trace of $$A$$ and the trace of $$UAU^{-1}$$ are guaranteed to be the same, and the determinants are also going to match. That is, trace and determinant are both functions of the conjugacy class of $$A$$, rather than of $$A$$ itself. How does a non-Euclidean plane arise from SL(2, R)? We first have to know where the plane is, before we can start to tile it! The plane is actually in the Lie algebra, which is why we’ve been droning on and on about it. In fact, the non-Euclidean plane we’ll be tiling is exactly the set of points $$(a,b,c) \in \mathbb{R}^3$$ corresponding to the trace-free matrices $$\begin{pmatrix} a & b + c \\ b - c & -a \end{pmatrix}$$ for which \$$c^2 - a^2 - b^2 = 1\$$ This set is a hyperboloid of revolution in $$R^3$$ (one sheet of it, if we take $$c > 0$$). The point is that the natural (i.e., group-invariant) notion of distance between a pair of points on this hyperboloid is not the obvious Euclidean distance in $$R^3$$. Instead, we have the Hyperboloid model (thanks Minkowski): Now, an action of a discrete subgroup of the non-Euclidean isometry group is a tiling. That is, it is characterized by a “fundamental domain.” So there’s our translation group of hyperbolic triangles! It’s the plain old triangular tiling on the non-Euclidean plane.

## Perspectives on the detiling polynomial

After talking over each other for a bit, Aaron pointed out that I was speaking about the first perspective, and he was speaking about the second:

1. A sum over a discrete subgroup of a Lie group, compared to an integral over a corresponding region. This is actually done ON the Lie group itself.
2. Groups acting via automorphisms on a corresponding space. The PSL(2,R) acts as a group of isometries, and the discrete group acts on a tiling.
(That is, the discrete group is a Fuschian group.) ## Afternote: Unfinished speculation wrt the Lattice of Integers on a Hyperbolic Line From here on, the post is scraps and thoughts in progress, feel free to stop here. So we can write down the detiling polynomial with the generators presented as matrices, and so if we wanted to, say, as what the integers of a hyperbolic line are, we just need to solve for d/dt=: D in our shift operator. Then once we can write down a detiling polynomial for a line with a non Euclidean metric, that is, write down d/dt such that The operator d/dt generates the hyperbolic line by translation by the exponential (as a flow on the hyperbola), the issue that I have is figuring out if I want the integers of a hyperbolic to be Of a hyperbola 1) projection Or 2) Arc length based Or 3) The subset of the hyperbola that consists of Points P such that |P-A| is an integer, and |P-B| is an integer So then I guess really we want our integers to be a discrete subgroup of wrt this group operation. I tried to derive the particular D we need to have e^{nD}F(t) = F(t+n) make sense by writing a hyperbola parametrically and trying to solve for dt in terms of dx and dy, but it felt more complicated than necessary. Let’s reparameterize the hyperbola in terms of one variable, $$t$$, to get a viable $$d/dt$$ to stick into our shift operator $$e^n\frac{d}{dt}$$. To get a hyperbolic shift operator, we look at the line $$(a \sec t, b \tan t)$$, for an arbitrary choice of $$a$$ and $$b$$. We can define the distance between two points to be the infimum distance of all curves between them. One way to make a hyperbola is to pick two points on your y-axis, let them be c units apart, and look at all points p such that $$|p-A| – |p-B| = c/2$$. Let’s call this hyperbola $$R_h$$. What are the integers of a hyperbola? Maybe they are the subset of the points $$p$$ with the property that $$|p-A|$$ AND $$|p-B|$$ are whole numbers, let’s call this subset $$Z_h$$. Perhaps this is not quite what we want. There’s something called operator calculus or umbral calculus, and it’s suited for our goal of writing down a detiling polynomial for a discrete subgroup of the hyperbola $$Z_h \subset R_h$$. I went from subset to subgroup with no justification, so let me give some. A hyperbola is a group, in fact, any conic section is a group. For example, the group structure on a line is pretty simple. Take $$y = ax + c$$ with origin $$(0,c)$$, then for $$(x_1,y_1),(x_2,y_2)$$ lying on the line, their sum is the point $$(x_1 +x_2,y_1 +y_2 −c)$$. In greater generality, given a conic section and a choice of origin, say $$O$$. Then for points $$A, B$$ lying on the conic, take the straight line joining them (or tangent if the points are the same) and draw the parallel line through $$O$$. This intersects the conic at a third point, $$C$$, and define the sum $$A + B$$ to be $$C$$. You may object because I’m using the parellel postulate, but we’re looking at a hyperbola embedded in $$R^2$$, with the usual Euclidean metric, for the moment. ## A bit of background on Fuschian functions “So three parameters specify a Fuchsian function; Three integers, the sum of whose reciprocals is less than 1. These then correspond to the angles of a hyperbolic triangle, And Poincaré showed how you can tesselate the Poincaré disk with them in such a way that a discrete group of isometries of the Poincaré disk (a discrete subgroup of PSL(2, R) ) acts transitively on them.” – Aaron This section is incomplete, it’s a trainwreck from here on out. 
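One concrete piece of the afternote above can at least be checked numerically before moving on: the chord-parallel group law on a conic. The sketch below uses the unit hyperbola x^2 - y^2 = 1 with identity (1, 0); the function names and the choice of conic are mine.

```python
import numpy as np

def conic_add(A, B, O=np.array([1.0, 0.0])):
    """Chord-parallel group law on x^2 - y^2 = 1 with identity O = (1, 0): take the
    direction of the chord AB (tangent direction if A == B), draw the line through O
    with that direction, and return its second intersection with the conic."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    d = np.array([A[1], A[0]]) if np.allclose(A, B) else B - A
    a = d[0]**2 - d[1]**2                       # coefficient of lambda^2 after substitution
    if abs(a) < 1e-12:                          # chord parallel to an asymptote
        raise ValueError("sum lies at infinity on the projective conic")
    lam = -2.0 * (O[0]*d[0] - O[1]*d[1]) / a    # nonzero root; lambda = 0 just returns O
    return O + lam * d

P = lambda t: np.array([np.cosh(t), np.sinh(t)])   # right branch, hyperbolic-angle parameter
A, B, C = P(0.3), P(-1.1), P(2.0)

print(np.allclose(conic_add(A, B), P(0.3 - 1.1)))                                 # parameters add
print(np.allclose(conic_add(conic_add(A, B), C), conic_add(A, conic_add(B, C))))  # associativity
```

In the (cosh t, sinh t) parametrization the law works out to addition of the hyperbolic-angle parameters, which the two printed checks confirm.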
We know it’s bigraded, like the torus group, but how do we count such a bigrading. Do we use a trigrading, that is, number of copies of J, K, and $$(JK)^{-1}$$? If we then take our generators $$i$$ and $$1$$ for the square lattice, which generate translation “up” and “over” respectively, these guys live on the complex plane with the usual metric. What if we say that they live on this new metric? What if we simply replace each generator with: (abcd) (i1) = ai+b/ci+d Here we see that there are actually 4 transformations being done in succession, 1. a translation 2. an inversion, 3. a homothety/rotation, 4. and another translation. If, like me, you ask “how the hell did they come up with az+b/cz+d” I offer two pieces of thought $\left[ {\begin{array}{cc} a & b \end{array} } \right] \left[ {\begin{array}{c} z \ 1 \end{array} } \right] = \left[ {\begin{array}{c} az+b \ cz+d \end{array} } \right]$ % farrey diagram, multipliying a/c*z/z is still az/cz For the translations, we know that $$e^{tD} \cdot f(x) = f(x+t)$$. What is the analogue of $$exp$$ for inversions, that is, for what function f and predecessor $$?$$ does the following hold? \$$? \cdot f(x) = f(\frac{1}{x})\$$ An example of such an $$f$$ and $$?$$ is \$$\frac{1}{\sqrt{x}} \cdot \Theta(x) = \Theta\big(\frac{1}{x}\big)\$$ (Also, yes, we could be boring and just use good old translation, $$e^{\frac{1-x}{x}D} f(x) = f(\frac{1}{x}$$ This is actually pretty weird, as reflection and translation are not always so nice (1234 example).) ## Some thoughts on detiling polynomials wrt lie algebras of finite groups (Finite Lie groups Lie algebras) This week, I asked Alan Weinstein about the concept of a Lie algebra associated to a finite group. He explained that he’d thought about the dual Lie algebra of a discrete group, and felt that looking for the cotangent bundle, not the tangent bundle, was the most natural construction. Since $$F(x) := \int_0^x f(t) dt$$ is the configuration space of our lattice, then $$d/dx F(x) = f(x) = (1-e^{-D}) F(x)$$ These detiling polynomials, e.g., $$(1-e^{-D})$$, are taking us from our configuration space F to the real numbers – – so the detiling polynomials live in the cotangent bundle (aka Lie coalgebra) NOT the Lie algebra. %….the hyperbolic lattice’s lie coalgebra using aaron’s thing, my thing with an altered filter…. An interesting thing that Ken Ribet mentioned offhandedly: forget the group structure, can you predict which point will be the next point added? He mentioned this while repeating back what I’d explained to see if he’d understood me correctly. ## A Quick Note on a Geometric Definition of $$v_n$$ This post assumes knowledge of the definition of the oriented cobordism ring, as well as the equivalence $$\pi_*MU \simeq MU^*(pt) =: MU^*$$, and familiarity with the Landweber exact-functor theorem. A quick post on a nice thing. I was reading Quillen and stumbled across what seems to be the first nod toward the importance of the coefficients $$p, v_1, v_2, … \in MU^*$$. I have complained about my confusion wrt these coefficients and the concept of complex orientation in a few past blog posts. I’ve read about it so many times in many different equivalent forms, but finally, this one stuck. They are defined as normal bundles which correspond to a choice of weakly complex structure. Thanks to Tyler Lawson for confirming and clearing up my suspicions on this connection. 
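(Stepping back to the Möbius discussion above for a moment: the claim that (az+b)/(cz+d) is a translation, then an inversion, then a homothety/rotation, then another translation can be verified directly. This little check assumes c ≠ 0 and ad − bc ≠ 0; the particular coefficients are arbitrary.)

```python
a, b, c, d = 2+1j, -1+0.5j, 1-2j, 3+0j     # any coefficients with ad - bc != 0 and c != 0
z = 0.7 - 0.3j

mobius = (a*z + b) / (c*z + d)

w = z + d/c                      # 1. translation
w = 1/w                          # 2. inversion
w = -(a*d - b*c) / c**2 * w      # 3. homothety / rotation
w = w + a/c                      # 4. translation

print(abs(mobius - w) < 1e-12)   # True
```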
For those who haven't encountered homotopy theory, I'd like to show you the following excerpt of Whitehead so you may realize that Quillen is observing a creature in its native habitat.

"A complex orientation of a map of manifolds $$f: Z \to X$$ is a generalization of a weakly complex structure on $$Z$$ when $$X$$ is a point. By a complex orientation of $$f$$, we mean an equivalence class of factorizations of $$f$$, $$Z \xrightarrow{i} E \xrightarrow{\pi} X$$ where $$\pi: E \to X$$ is a complex vector bundle and $$i$$ is an embedding endowed with a complex structure on its normal bundle $$v_i$$."

Does the normal bundle $$v_i$$ have to do with the $$v_i$$ in the sequence $$(p, v_1, v_2, \dots)$$ in the coefficient ring of MU? Are these just the same letters being used? Since an equivalence class of factorizations of $$f$$ is an equivalence of choices of the embedding $$i$$, then this is also an equivalence class of the complex structures on the normal bundles $$v_i$$. Next, if we take the equivalence class of factorizations $$Z \xrightarrow{i} E \xrightarrow{\pi} X$$ to be up to cobordism, then each $$v_i$$ is represented by an element of $$MU^*(X)$$. We have to choose $$X$$ to be $$pt$$ for the $$v_i$$ to be in the coefficient ring as we'd expect.

Sidenote: Quillen also mentions that if the dimension of E is "sufficiently large" then one obtains each complex orientation of $$f$$ from exactly one homotopy class of complex structures on $$v_i$$. I am not sure why $$E$$ must be large for this to be true, but I have been told it is for the following two reasons. One is that $$E$$ has to be sufficiently large for $$Z$$ to be able to embed. The second is that it also has to be large for some ambiguities in the process (the isomorphism class of the normal bundle, for example) to be eliminated.

Another sidenote: here's the excerpt from Whitehead which is not as directly relevant:

It seems that Dan Quillen's guiding conviction was to understand a mathematical phenomenon by seeking out its very simplest concrete manifestation. Due to this, I doubly appreciated the following quote from this seminal paper of his that we've been talking about:

I have been strongly influenced by Grothendieck's theory of motives and like to think of a cobordism theory as a universal contravariant functor on the category of $$C^\infty$$ manifolds endowed with Gysin homomorphism for a class of proper "oriented" maps, instead of as the generalized cohomology theory given by a specific Thom spectrum.

-- Dan Quillen, [Elementary Proofs of Results in Cobordism Theory]

He introduced formal groups as a tool in algebraic topology due to his interest in understanding from first principles the result from cobordism theory that the coefficient ring of $$MU$$ is a polynomial ring with an infinite number of even generators. This caught on. If you're curious wrt these $$v_n$$, I wrote this little Toda-Smith article on nlab which brushes by how they come up as periodic maps. An awareness of the classification results on formal group laws gives us some computational tools to try and wrassle the impossible beast of the homotopy groups of spheres. We usually do this a prime at a time, that is, study the homotopy groups of the p-local sphere, because it is way easier. Still impossibly hard, but easier. For example, his methods led to our finding that $$v_{n}^{-1} BP_*/(p^\infty, \dots, v_n^\infty)$$ is like $$H^*_{gp}(\mathbb{S}_n, E_*)$$. People (who actually know how to ram their heads against these things!) compute some of the higher ones (?)
by playing a super-hard game along these lines:

1. compute $$Ext_{BP_*BP}^{*,*}(v_{n}^{-1} BP_*/(p^\infty, \dots, v_n^\infty))$$,
2. apply the chromatic spectral sequence to get $$Ext^{*,*}(BP_*)$$,
3. then apply the Adams-Novikov spectral sequence to get $$\pi_*\mathbb{S}_{(p)}$$.

## Some comments on math communication

I read Bill Thurston's On Proof and Progress this morning. This led me to consider a few things I've learned this summer about the sociology and psychology of being a part of the mathematical community, which I figured I'd share on the off chance that you might find it encouraging or helpful.

0. Communicating in a pedagogically correct manner

I have spent a large part of the summer learning how to speak to other mathematicians, and the standards generally enforced, and I have much more to learn. It is somewhat complicated to keep in mind what is commonly thought of in various subfields as "easy to understand" or "basic," and what is "complex" or "impossible to grasp." You mustn't allow it to affect what you view as natural. Just recognize that what you see as simple is not necessarily so in the eyes of others, and vice versa. Keep track of what causes people to shut off, what causes people to feel like they are being talked down to; probe to find out what their mental models are and what they think of as important. Teaching in a way that feels collaborative involves a large amount of empathy and a change of language, for example:

• No: I was explaining this, and your question confused me.
• Yes: We were doing this, and I got confused.

1. Articulating vague thoughts

It is very important to realize that there are many ways to make precise a vague question (e.g., an undeveloped feeling of connection between two things usually not thought of as connected, an uncertainty about a concept which you are not able to pinpoint). Spending a large amount of time alone to develop things in your own mental sandbox until they are ready to be translated into words is healthy and natural. But do translate them into words, even if they are not fully formed, s.t. you don't end up with a theory so removed and technical from the outside that it remains unabsorbed. Trying to communicate the way you think about an idea in its purest form is not always useful or interesting to the other party, but sometimes it is.

2. Giving a lecture

Giving a good talk and controlling the room are, unfortunately, entirely different skills. It seems that in a lecture setting, one must abandon hope of communicating formal information, and instead try to communicate key insights and mental models (usually just one or two), and back up philosophical statements with numerous simple examples! (I gave a talk where each sentence I said aimed to convey things that had originally taken me months to even begin to appreciate. I also let a few people interrupt the flow of the talk in order to heckle wrt small vocabulary differences and technical details. Someone stood up in the middle of my talk and talked for 10 minutes. This is not the way to go.)

3. Building your own mental models

You are going to reinvent things, lots of things. This is good: it is important to practice making original discoveries! If you hold the belief that understanding = you could have invented it yourself, then reinvention is especially encouraging! Most of my current mental models come from sitting alone and drawing in my notebook what concepts mean to me, the questions they lead to and flowed from, what images they invoke... It is an intimate act, personally understanding a concept.
For me, it involves a lot of doodling and staring out into space. It also involves a lot of chunking (e.g., a CW-space is an indexed array with attaching maps; a functor is a generalized manifold; approximation of continuous processes via power series; reducing complexity by looking for the base objects and laws which generate the objects you care about). I started reading histories of mathematics and found that some of the connections and motivations I'd come to myself, and many I hadn't seen, were the historical reasons for their invention (H-spaces are more general versions of Lie groups, homotopy theory came from complex analysis so what comes from the calculus of variations, etc). For this reason, there is incredible joy in reading original papers, or in historical and careful recounts of such papers (e.g., Dirichlet's lectures on Gauss's Disquisitiones Arithmeticae), as the life of concepts often seems to be lost through a game of paper telephone (the citations of old papers usually don't convey the interests of the old author).

4. Vocab hunting

This is an incredibly fun and superficially rewarding game: coming across a technical term (e.g., pre-mouse) in a language in which you are not conversant (e.g., model theory), and then "chasing" (via wikipedia links and journal articles) the concept until you find/reformulate the definition into a language you speak. This is best done when you need a pick-me-up or can't sleep.
2023-02-09 04:58:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7927741408348083, "perplexity": 474.5144748222745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764501407.6/warc/CC-MAIN-20230209045525-20230209075525-00641.warc.gz"}
https://www.physicsforums.com/threads/uncertainty-principle-and-single-slit-diffraction.783294/
# Uncertainty Principle and single-slit diffraction

## Homework Statement

A beam of 50 eV electrons travels in the x direction towards a slit of width 6 micrometres which is parallel to the y direction. The diffraction pattern is observed on a screen 2 metres away. Use the Heisenberg uncertainty principle to estimate the minimum uncertainty in the y-component of the electrons' momentum after passing through the slit. Use it to estimate the width of the pattern produced on the screen.

## Homework Equations

dsinθ = nλ
ΔyΔP ≥ h/4π
λ = h/sqrt(2mE) = h/p
Width = 2Dtanθ

## The Attempt at a Solution

The first thing I did was to find out the wavelength of the electrons. Then, I assumed the electrons' vertical positions are confined within the slit, so Δy = d.

This is where I got stuck. First of all, I'm not sure what's meant by the width of the pattern: the distance between the 1st order fringes or the max. order fringes? If it's the latter, since d >> λ, the max order n I obtained was 34570, and the width would be 3106 m, which is ridiculously large. And the HUP would be of no use at all in both cases, since all I need is the angle.

Second of all, I'm not sure what ΔyΔP is limited to. I'm still new to this and I've seen variants of the formula with different limits: h/4π, h/2π, h, h/2, ... which makes me wonder if the limits are derived and vary from situation to situation, with h/4π being the lowest limit in theory.

Any help would be appreciated.

rude man (Homework Helper, Gold Member) replied, quoting the post above:

On "dsinθ = nλ": Let n = 1 and approximate sinθ = θ.

On "ΔyΔP ≥ h/4π": There is no hard & fast rule. For this problem, go with ΔyΔpy ≥ h.

On "I assumed the electrons' vertical positions are confined within the slit, so Δy = d": Correct. This is a vital assumption.

On "I'm not sure what's meant by the width of the pattern...": The width of the pattern is the distance between the two first minima. Hint: assume the uncertainty corresponds to the distance from the center to a first minimum. Relate Δpy to m and vy. Pretty obvious ... Relate wavelength to the x axis speed (I would leave out numbers. Call it vx). I believe you already did this. You can now relate Δy, Δvy, vx and λ. By geometry you can also relate vy and vx to your angle θ. Finally you can relate θ to λ and Δy, which is the formula derived by wave theory (except for the sinθ = θ approximation).

ΔyΔpy ≥ h

thanks for the guidance. but i still cannot convince myself about this.
how do we find out what the constant is?

What constant are you talking about? h? You're supposed to START with the Heisenberg uncertainty principle, which is defined in terms of the known constant h (as Max Planck formulated it well before Heisenberg's 1927 relation).

BiGyElLoWhAt, Gold Member: The original equation was ##\Delta x \Delta p \geq h##. But fermilab (?) reported back uncertainties ##\Delta x \Delta p \geq \hbar/2 = h/4\pi##. I'm pretty sure it was fermilab, and I'm pretty sure it was hbar and not h, but my uncertainty must be greater than... cutting the circle here.

BiGyElLoWhAt, Gold Member: ok so apparently you want \hbar and not \bar{h} in latex

rude man, Homework Helper, Gold Member: Heisenberg posited the uncertainty products to be "of the order of h". So whether you go with h or h-bar or h/2 makes no difference.

throneoo, quoting "Let n = 1 and approximate sinθ = θ": does θ here refer to the angle subtended by the two minima? Because I normally use θ as the angle between the horizontal axis and the fringes; in that case, dsinθ = λ/2, which is the path difference between the destructively interfering waves. Anyway, using the small angle approximation and ΔyΔpy = h, px = h/λ:

2θ = Φ = λ/Δy = Δpy / px,

where θ is the angle between the horizontal axis and one of the first minima and Φ is that between the two first minima. The 'width' = 2Dθ = DΦ, where D is the distance between the screen and the slit.

rude man (Homework Helper, Gold Member), replying to the question about θ above: θ is the angle between the center of the screen and either first minimum on that screen. The formula for that is d sinθ = λ, not what you wrote, where d is the width of the slit; in your case d = Δy. Below you state "... where θ is the angle between the horizontal axis and one of the first minima", which is the same angle θ as mine. Make that θ = λ/Δy = Δpy / px and you've got a deal. Everything else looks fine.
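For a concrete feel for the numbers in this thread, here is a short calculation sketch (added here, not part of the original posts). It uses the ΔyΔpy ≈ h convention agreed above, with Δy = d = 6 μm, E = 50 eV and a screen distance D = 2 m:

    # Numerical sketch of the estimate discussed above (not from the thread):
    # lambda = h / sqrt(2 m E), Delta_p_y ~ h / d, theta ~ lambda / d, width ~ 2 D theta.
    from math import sqrt

    h = 6.626e-34      # Planck constant, J s
    m_e = 9.109e-31    # electron mass, kg
    eV = 1.602e-19     # joules per eV

    E = 50 * eV        # electron kinetic energy
    d = 6e-6           # slit width, 6 micrometres (this is Delta_y)
    D = 2.0            # slit-to-screen distance, m

    p_x = sqrt(2 * m_e * E)    # forward momentum
    lam = h / p_x              # de Broglie wavelength, roughly 1.7e-10 m
    dp_y = h / d               # minimum Delta_p_y with the Delta_y * Delta_p_y ~ h convention
    theta = dp_y / p_x         # equals lam / d, roughly 3e-5 rad
    width = 2 * D * theta      # distance between the two first minima, roughly 0.1 mm

    print(lam, dp_y, theta, width)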
2020-09-30 07:20:25
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8088355660438538, "perplexity": 806.4627259585308}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402118004.92/warc/CC-MAIN-20200930044533-20200930074533-00751.warc.gz"}
https://math.stackexchange.com/questions/3050696/proving-that-int-01-frac-arctan-xx-ln-left-frac1x21-x2-rightd
# Proving that $\int_0^1 \frac{\arctan x}{x}\ln\left(\frac{1+x^2}{(1-x)^2}\right)dx=\frac{\pi^3}{16}$ The following integral was proposed by Cornel Ioan Valean and appeared as Problem $$12054$$ in the American Mathematical Monthly earlier this year. Prove $$\int_0^1 \frac{\arctan x}{x}\ln\left(\frac{1+x^2}{(1-x)^2}\right)dx=\frac{\pi^3}{16}$$ I had small tries for it, such as: Letting $$x=\tan t$$ which gives: $$I=\int_0^1 \frac{\arctan x}{x}\ln\left(\frac{1+x^2}{(1-x)^2}\right)dx=-\int_0^\frac{\pi}{4}\frac{t}{\sin t}\ln(1-\sin(2t))dt=$$ $$\overset{2t=x}=-\frac12 \underset{=J}{\int_0^\frac{\pi}{2}\frac{x}{\sin x} \ln(1-\sin x)dx}=-\frac12 \int_0^\frac{\pi}{2} x\ln(1-\sin x) \left(\ln\left(\tan \frac{x}{2}\right)\right)'dx=$$ $$\overset{IBP}=\frac12\int_0^\frac{\pi}{2} \ln\left(1-\sin x\right)\ln\left(\tan \frac{x}{2}\right)dx+\frac12 \int_0^\frac{\pi}{2} \frac{x\cos x}{\sin x-1}\ln\left(\tan \frac{x}{2}\right)dx$$ Or to employ Feynman's trick for the first integral $$(J)$$ in the second row. $$J(t)=\int_0^\frac{\pi}{2} \frac{x\ln(1-t\sin x)}{\sin x}dx\Rightarrow J'(t)=\int_0^\frac{\pi}{2} \frac{x}{1-t\sin x}dx$$ But even so I don't see a how to obtain a closed from for the last one. Also with a different parameter: $$J(t)=\int_0^\frac{\pi}{2} \frac{\text{arccot} (t \cot x)\ln(1-\sin x)}{\sin x}dx$$ $$\Rightarrow J'(t)=-\int_0^\frac{\pi}{2} \frac{\ln(1-\sin x)\cos x}{1+t^2 \cot^2x}\frac{dx}{\sin^2x}\overset{\sin x=y}=\int_0^1 \frac{\ln(1-y)}{1+t^2\left(1-\frac{1}{y^2}\right)}\frac{dy}{y^2}$$ Also from here we have the following relation: $$\int_0^1 \frac{\arctan x \ln(1+x^2)}{x} dx =\frac23 \int_0^1 \frac{\arctan x \ln(1+x)}{x}dx$$ Thus we can rewrite the integral as: $$I=\frac23 \int_0^1 \frac{\arctan x \ln(1+x)}{x}dx -2\int_0^1 \frac{\arctan x \ln(1-x)}{x}dx$$ $$=\frac23 \int_0^1 \int_0^1 \frac{\ln(1+x)-3\ln(1-x)}{1+x^2y^2}dydx=\frac23 \int_0^1 \int_t^1 \frac{\ln(1+x)-3\ln(1-x)}{1+t^2}dxdt$$ Another option might be to rewrite: $$\ln\left(\frac{1+x^2}{(1-x)^2}\right)= \ln\left(\frac{1+x}{1-x}\right)+\ln\left(\frac{1+x^2}{1-x^2}\right)$$ $$\Rightarrow I= \int_0^1 \frac{\arctan x}{x}\ln\left(\frac{1+x}{1-x}\right)dx+\int_0^1 \frac{\arctan x}{x}\ln\left(\frac{1+x^2}{1-x^2}\right)dx$$ And now to use the power expansion of the log functions inside to obtain: $$I=\sum_{n=0}^\infty \frac{2}{2n+1}\int_0^1 \frac{\arctan x}{x} \, \left(x^{2n+1}+x^{4n+2}\right)dx=\sum_{n=0}^\infty \frac{2}{2n+1}\int_0^1\int_0^1 \frac{\left(x^{2n+1}+x^{4n+2}\right)}{1+y^2x^2}dydx$$ In the meantime I found one nice solution by Roberto Tauraso here. This seems like an awesome integral and I would like to learn more so I am searching for more approaches. Would any of you who also already solve it and submitted the answer to the AMM or know how to solve this integral kindly share the solution here? Edit: Another impressive solution due to Yaghoub Sharifi is found here. • I was able to break it down to an evaluation of harmonic sums $$I=\frac{3\pi^3}{32}-\sum_{n=0}^{\infty}\frac{\frac12\left[H_{n/2}-H_{(n-1)/2}\right]+\frac14\left[H_{n+1/4}-H_{n-1/4}\right]}{(2n+1)^2}$$ the latter sum should equal $\pi^3/32$ which seems to work out numerically but honestly speaking I am lost from hereon. Using the well-known result $\beta(3)=\pi^3/32$ one could conjecture that the combination of harmonic sums has to come out equal to $(-1)^n/(2n+1)$ in order to complete the representation of $\beta(3)$. – mrtaurho Dec 24 '18 at 1:48 • I would say this solution here is quite impressive and convincing. 
– mrtaurho Dec 25 '18 at 15:38 Another approach, Perform integration by parts, \begin{align*} I&=\int_0^1 \frac{\arctan x}{x}\ln\left(\frac{1+x^2}{(1-x)^2}\right)\,dx\\ &=\Big[\ln (x) \ln\left(\frac{1+x^2}{(1-x)^2}\right)\arctan x\Big]_0^1 -\int_0^1 \frac{\ln x}{1+x^2}\ln\left(\frac{1+x^2}{(1-x)^2}\right)dx-\int_0^1 \frac{2(1+x)\ln (x)\arctan (x)}{(1-x)(1+x^2)}dx\\ &=-\int_0^1 \frac{\ln x}{1+x^2}\ln\left(\frac{1+x^2}{(1-x)^2}\right)dx-2\int_0^1 \frac{(1+x)\ln (x)\arctan (x)}{(1-x)(1+x^2)}dx\\ \end{align*} For $$x\in [0;1]$$ define the function $$R$$ by, \begin{align*} R(x)=\int_0^x \frac{(1+t)\ln t}{(1-t)(1+t^2)}dt=\int_0^1 \frac{x(1+tx)\ln (tx)}{(1-tx)(1+t^2x^2)}dt\\ \end{align*} Observe that, \begin{align*} R(1)=\int_0^1 \frac{t\ln t}{1+t}dt+\int_0^1 \frac{\ln t}{1-t}dt \end{align*} Perform integration by parts, \begin{align*} I&=-\int_0^1 \frac{\ln x}{1+x^2}\ln\left(\frac{1+x^2}{(1-x)^2}\right)dx-2\Big[R(x)\arctan x\Big]_0^1+2\int_0^1\frac{R(x)}{1+x^2}dx\\ &=-\int_0^1 \frac{\ln x}{1+x^2}\ln\left(\frac{1+x^2}{(1-x)^2}\right)dx-\frac{\pi}{2}R(1)+2\int_0^1 \int_0^1 \frac{x(1+tx)\ln (tx)}{(1-tx)(1+t^2x^2)(1+x^2)}dtdx\\ &=-\int_0^1 \frac{\ln x}{1+x^2}\ln\left(\frac{1+x^2}{(1-x)^2}\right)dx-\frac{\pi}{2}R(1)+\int_0^1 \ln x\left[\frac{1}{1+x^2}\ln\left(\frac{1+t^2x^2}{(1-tx)^2}\right)\right]_{t=0}^{t=1} dx+\\ &\int_0^1 \ln t\left[\frac{1}{1+t^2}\ln\left(\frac{1+x^2}{(1-tx)^2}\right)+\frac{2\arctan (tx)}{1-t^2}-\frac{2t\arctan x}{1+t^2}-\frac{2t\arctan x}{1-t^2}\right]_{x=0}^{x=1} dt\\ &=-\frac{\pi }{2}R(1)+\ln 2\int_0^1 \frac{\ln t}{1+t^2}dt-2\int_0^1 \frac{\ln (1-t)\ln t}{1+t^2}dt+2\int_0^1 \frac{\ln t\arctan t}{1-t^2}dt-\\ &\frac{\pi}{2} \int_0^1 \frac{t\ln t}{1+t^2}dt-\frac{\pi}{2} \int_0^1\frac{t\ln t}{1-t^2} dt\\ \end{align*} For $$x\in [0;1]$$ define the function $$S$$ by, \begin{align*} S(x)=\int_0^x \frac{\ln t}{1-t^2}dt=\int_0^1 \frac{x\ln(tx)}{1-t^2x^2} dt \end{align*} Perform integration by parts, \begin{align*} \int_0^1 \frac{\ln x\arctan x}{1-x^2}dx&=\Big[S(x)\arctan x\Big]_0^1-\int_0^1 \frac{S(x)}{1+x^2}dx\\ &=\frac{\pi}{4}S(1)-\int_0^1 \int_0^1 \frac{x\ln(tx)} {(1-t^2x^2)(1+x^2)} dtdx\\ &=\frac{\pi}{4}S(1)-\frac{1}{2}\int_0^1 \left[ \frac{\ln x}{1+x^2}\ln\left(\frac{1+tx}{1-tx} \right)\right]_{t=0}^{t=1} dx-\\ &\frac{1}{2}\int_0^1 \left[ \frac{\ln t}{1+t^2}\ln\left(\frac{1+x^2}{1-t^2x^2} \right)\right]_{x=0}^{x=1}dt\\ &=\frac{\pi}{4}S(1)-\frac{\ln 2}{2}\int_0^1 \frac{\ln t}{1+t^2}dt+\int_0^1 \frac{\ln(1-x)\ln x}{1+x^2}dx \end{align*} Therefore, \begin{align*}I&=\pi\int_0^1\frac{2t\ln t}{t^4-1} dt\end{align*} Perform the change of variable $$y=t^2$$, \begin{align*}I&=\frac{1}{2}\pi \int_0^1 \frac{\ln y}{y^2-1}dy\\ &=\frac{1}{2}\pi\times \frac{3}{4}\zeta(2)\\ &=\frac{\pi^3}{16} \end{align*} • That's impressive, thank you! I've seen you use this approach alot and it's quite useful, let me a few time to understand it's working better. – Zacky Dec 25 '18 at 17:59 • Well done. (+1) – Mark Viola Dec 26 '18 at 4:26 • Very nice solution and $\to +1$ – Claude Leibovici Dec 26 '18 at 6:07 • I compute $\int_0^1 F(t,x)\ln t\,dx$ and $\int_0^1 F(t,x)\ln x\,dt$ and one can compute an antiderivative $U(t,x)$ of $F(t,x)$ wrt $x$, and on the other hand an antiderivative $V(t,x)$ of $F(t,x)$ wrt $t$. – FDP Dec 26 '18 at 17:07 • @Zacky: remember in the double integrals you can choose to integrate wrt $x$ or wrt $t$. If there is a factor $\ln x$ you don't want to integrate wrt $x$ first. If there is a factor $\ln t$ you don't want to integrate wrt $t$ first. 
And, $\ln(tx)=\ln x +\ln t$ – FDP Dec 26 '18 at 17:25 Put $$\begin{equation*} I=\int_{0}^1\dfrac{\arctan x}{x}\ln\left(\dfrac{1+x^2}{(1-x)^2}\right)\, \mathrm{d}x. \end{equation*}$$ Via the substitution $$x=\dfrac{z}{z+1}$$ we get $$\begin{equation*} I = \int_{0}^{\infty}\dfrac{\arctan \frac{z}{z+1}\ln(2z^2+2z+1)}{z^2+z}\, \mathrm{d}z. \end{equation*}$$ Put $$\begin{equation*} \log z=\ln|z|+i\arg z, \quad -\pi<\arg z <\pi. \end{equation*}$$ Then $$\begin{equation*} \arctan \frac{z}{z+1}\ln(2z^2+2z+1) = \text{Im}\left(\log^2(1+z+iz)\right). \end{equation*}$$ Consequently $$\begin{equation*} I = \text{Im}\left(\int_{0}^{\infty}\dfrac{\log^2(1+z+iz)}{z^2+z}\right)\mathrm{d}z. \end{equation*}$$ However, $$\log(z)$$ is an analytic function in $$\text{Re} z>0$$. According to Cauchys integral theorem we get the same value if we integrate along the curve with the parametrization $$z=(1-i)s, s>0$$. $$\begin{gather*} I = \text{Im}\left(\int_{0}^{\infty}\dfrac{\ln^2(2s+1)}{s(s+1-is)}\, \mathrm{d}s\right) = \int_{0}^{\infty}\dfrac{\ln^2(2s+1)}{2s^2+2s+1}\, \mathrm{d}s = \\[2ex] \int_{0}^{\infty}\dfrac{2\ln^2(2s+1)}{(2s+1)^2+1}\, \mathrm{d}s = [t=2s+1] = \\[2ex] \int_{1}^{\infty}\dfrac{\ln^2(t)}{t^2+1}\, \mathrm{d}t =[u= 1/t] = \int_{0}^{1}\dfrac{\ln^2(u)}{u^2+1}\, \mathrm{d}u. \end{gather*}$$ Thus $$\begin{equation*} 2I = \int_{0}^{\infty}\dfrac{\ln^2(u)}{u^2+1}\, \mathrm{d}u \end{equation*}$$ In order to evaluate this integral we integrate $$\dfrac{\log^3(z)}{z^2+1}$$ along a keyhole contour and use residue calculus. In this case $$\log z =\ln |z|+i\arg z, \quad 0<\arg z < 2\pi$$. We get $$\begin{equation*} I = \dfrac{\pi^3}{16}. \end{equation*}$$
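As a quick numerical cross-check of the stated closed form (added here, not part of the original thread): the integrand has an integrable logarithmic singularity at $$x=1$$, so a standard adaptive quadrature should land close to $$\pi^3/16 \approx 1.9379$$.

    # Numerical sanity check (not from the thread): the integral should be pi^3/16.
    import numpy as np
    from scipy.integrate import quad

    def integrand(x):
        return np.arctan(x) / x * np.log((1 + x**2) / (1 - x)**2)

    # The log blows up at x = 1, but the singularity is integrable and quad does not
    # evaluate the endpoints, so this typically converges without special handling.
    value, err = quad(integrand, 0, 1)
    print(value, np.pi**3 / 16)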
2019-04-24 20:01:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 43, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9523848295211792, "perplexity": 1580.4449528430148}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578656640.56/warc/CC-MAIN-20190424194348-20190424220348-00194.warc.gz"}
https://pypi.org/project/biocode/
Bioinformatics code libraries and scripts

## Overview

This is a collection of bioinformatics scripts many have found useful and code modules which make writing new ones a lot faster. Over the years most bioinformatics people amass a collection of small utility scripts which make their lives easier. Too often they are kept either in private repositories or as part of a public collection to which no one else can contribute. Biocode is a curated repository of general-use utility scripts my colleagues and I have found useful and want to share with others. I have also developed some code libraries/modules which have made my scripting work a lot easier. Some have found these to be more useful than the scripts themselves. Look below if you want to learn more, contribute code yourself, or just get the scripts. – Joshua Orvis

## The scripts

The scope here is intentionally very open. I want to include anything that developers find generally useful. There are no limitations on language choice, though the majority are Python. For now, the following directories make up the initial groupings but will be expanded as needed:

• blast - If it uses, massages, or just reformats BLAST output, it goes here.
• chado - Scripts that are tied into the chado schema (gmod.org) should be found here.
• fasta - Filtering, converting, size distribution plots, etc.
• fastq - Utilities for fasta's newer sister format.
• genbank - Anything related to the GenBank Flat File Format.
• general - Utility scripts that may not fit in any other existing directory or don't warrant creation of their own. We should be selective about what we put here and create or use other directories whenever appropriate.
• gff - Extractions, conversions and manipulations of files in the Generic Feature Format.
• gtf - From Ensembl/WashU, the GTF format is the focus of scripts here.
• hmm - Merging, manipulating or reading HMM libraries.
• sam_bam - Analysis of and parsing SAM/BAM files.
• sandbox - Each committer gets their own personal directory here to add anything they want while testing or waiting to be moved to the production directories.
• sysadmin - While not specifically bioinformatics, our work tends to be on Unix machines, and utility scripts are often needed to support our work. From file system manipulation to database backup scripts, put your generic sysadmin utilities here.
• taxonomy - Anything related to taxonomic analysis.

## The modules

If you're a developer these modules can save a lot of time. Yes, there is some duplicate functionality you'll find in modules like Biopython, but these were written to add features I always wanted and with a more biologically-focused API. Three of the primary Python modules:

### biocode.things

Classes here represent biological things (as defined by the Sequence Ontology) in a way that makes more sense biologically and hides some of the CS abstraction. What does this mean? This is a simple example, but compare these syntax approaches:

    # This way is typical of other libraries
    genes = assembly.get_subfeatures_by_type( {'type': 'genes'} )
    mRNAs = assembly.get_subfeatures_by_type( {'type': 'mRNA'} )

    genes = assembly.genes()
    for gene in genes:
        mRNAs = gene.mRNAs()

This more direct approach is held throughout these libraries. It also adds some shortcuts for tasks that always annoyed me when working with things that had coordinates.
Consider if you wanted to determine if one gene is before another one on a molecule:

    if gene1 < gene2:
        return True

In the background, biocode checks if the two gene objects are located on the same molecule and, if so, compares their coordinates. There are many other methods for coordinate comparison, such as:

• thing1 <= thing2 : The thing1 overlaps thing2 on the 5' end
• thing1.contained_within( thing2 )
• thing1.overlaps( thing2 )
• thing1.overlap_size_with( thing2 )

This module also contains readable and detailed documentation within the source code.

### biocode.annotation

This set of classes allows formal definition of functional annotation which can be attached to various biothings. These include gene product names, gene symbols, EC numbers, GO terms, etc. Once annotated, the biothings can be written out in common formats such as GFF3, GenBank, NCBI tbl, etc.

### biocode.gff

Much of biocode was written while working with genomic data and annotation, and one of the more common formats for storing these is GFF3. Using this module, you can parse a GFF3 file of annotations into a set of biothings with a single line of code. For example:

    import biocode.gff
    (assemblies, features) = biocode.gff.get_gff3_features( input_file_path )

That's it. You can then iterate over the assemblies and their children, or access the 'features' dict, which is keyed on each feature's ID.

## Installing dependencies

On Debian-based systems (like Ubuntu) you can be sure to get all biocode dependencies like this:

    apt-get install -y python3 python3-pip zlib1g-dev libblas-dev liblapack-dev libxml2-dev

## Getting the code (pip3, latest release)

You can install biocode using pip3 (requires Python3) like this:

    pip3 install biocode

## Getting the code (github, current trunk)

If you want the latest developer version:

    git clone https://github.com/jorvis/biocode.git

Important: Many of these scripts use the modules in the biocode/lib directory, so you'll need to point Python to them. Full setup example:

    cd /opt
    git clone https://github.com/jorvis/biocode.git
    # You probably want to add this line to your $HOME/.bashrc file
    export PYTHONPATH=/opt/biocode/lib:$PYTHONPATH

## Problems / Suggestions?

If you encounter any issues with the existing code, or would like to request new features or scripts please submit to the Issue tracking system.

## Contributing

If you'd like to contribute code to this collection have a look at the Requirements And Convention Guide and then submit a pull request once your code is ready. We'll check your script and pull it into the production directories. If you're not that confident yet we'll happily pull in your sandbox directory if you'd like to add your code to the project but aren't sure if it's ready to be in the production directories yet.
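Putting the pieces above together, here is a small hypothetical usage sketch that sticks to the calls quoted in this description (get_gff3_features, assembly.genes(), gene.mRNAs() and the coordinate helpers). The GFF3 path is a placeholder, and the assumption that the returned assemblies collection behaves like a dict keyed on molecule ID should be checked against the library's own documentation:

    #!/usr/bin/env python3
    # Hypothetical sketch combining the biocode calls described above.
    import biocode.gff

    # 'annotation.gff3' is a placeholder path, not a file shipped with biocode.
    (assemblies, features) = biocode.gff.get_gff3_features('annotation.gff3')

    # Assumption: assemblies maps molecule IDs to assembly objects.
    for asm_id, assembly in assemblies.items():
        for gene in assembly.genes():
            for mRNA in gene.mRNAs():
                pass  # inspect each mRNA here

    # Coordinate helpers described above, for two features on the same molecule:
    #   gene1 < gene2, gene1.overlaps(gene2), gene1.contained_within(gene2)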
2022-12-09 23:12:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3038339614868164, "perplexity": 5720.394020518684}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711552.8/warc/CC-MAIN-20221209213503-20221210003503-00679.warc.gz"}
https://www.giss.nasa.gov/tools/latex/ltx-229.html
This page's content is no longer actively maintained, but the material has been kept on-line for historical purposes. The page may contain broken links or outdated information, and parts may not function in current web browsers. ## Hypertext Help with LaTeX ### flushright \begin{flushright} Text on line 1 \\ Text on line 2 \\ ... ... \end{flushright} The flushright environment allows you to create a paragraph consisting of lines that are flushed right to the right-hand margin. Each line must be terminated with a \\.
2019-03-24 15:10:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5234313011169434, "perplexity": 3899.9301059907966}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203462.50/warc/CC-MAIN-20190324145706-20190324171706-00423.warc.gz"}
https://lqp2.org/node/1777
Green hyperbolic complexes on Lorentzian manifolds Marco Benini, Giorgio Musante, Alexander Schenkel July 08, 2022 We develop a homological generalization of Green hyperbolic operators, called Green hyperbolic complexes, which cover many examples of derived critical loci for gauge-theoretic quadratic action functionals in Lorentzian signature. We define Green hyperbolic complexes through a generalization of retarded and advanced Green's operators, called retarded and advanced Green's homotopies, which are shown to be unique up to a contractible space of choices. We prove homological generalizations of the most relevant features of Green hyperbolic operators, namely that (1) the retarded-minus-advanced cochain map is a quasi-isomorphism, (2) a differential pairing (generalizing the usual fiber-wise metric) on a Green hyperbolic complex leads to covariant and fixed-time Poisson structures and (3) the retarded-minus-advanced cochain map is compatible with these Poisson structures up to homotopy. Keywords: homological methods in gauge theory, globally hyperbolic Lorentzian manifolds, Green hyperbolic operators, dg-categories
2023-03-26 05:26:18
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.945357620716095, "perplexity": 1917.6249511328458}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945433.92/warc/CC-MAIN-20230326044821-20230326074821-00305.warc.gz"}
https://cs.stackexchange.com/questions/77415/interleaving-first-and-second-half-of-an-array-of-even-length-in-place
# Interleaving first and second half of an array of even length in place

If A is an array with the following elements: $$a_1,a_2,...,a_n,b_1,b_2,...,b_n$$ How to shuffle A to form: $$a_1,b_1,a_2,b_2,...,a_n,b_n$$ with minimal swaps and using no additional space?

    def shuffle(a, left, right):
        if right - left >= 4:
            half = (right - left) / 2
            for i in xrange(half / 2):
                tmp = a[left + half + i]
                a[left + half + i] = a[left + half / 2 + i]
                a[left + half / 2 + i] = tmp
            shuffle(a, left, left + (right - left) / 2)
            shuffle(a, left + (right - left) / 2, right)

    a = ['a1', 'a2', 'a3', 'a4', 'b1', 'b2', 'b3', 'b4']
    shuffle(a, 0, len(a))

This gives an algorithm with O(n log n) swaps. But I realized that this works only when n = 2^k for some integer k. Is there a minor tweak that one could make to get the same complexity for arbitrary n?

• Can you discover something about the cycles that for with that permutation for various $n$? Jun 30, 2017 at 15:47
• I didn't quite catch you - "that for with that permutation"? Jun 30, 2017 at 17:17
• $a2$ moves to index 3 which replaces $a3$ which moves to index 5 which replaces $a5$ which moves to index 9, eventually that will loop back to index 2. That's a cycle. Investigate how the cycles evolve as n increases. Jun 30, 2017 at 17:54
• Consider the permutation $\sigma: \{1,2,\dots,2n\} \to \{1,2,\dots,2n\}$ that you're trying to implement, i.e., $\sigma(i)=2i-1$ for $i\le n$ and $\sigma(i)= 2(i-n)$ for $i>n$. Can you characterize its cycle structure? How many cycles does it have of each possible length? Then use the fact that you can permute a cycle of length $\ell$ using $\ell-1$ swaps, and sum over all the cycles. See cs.stackexchange.com/q/71154/755. – D.W. Jun 30, 2017 at 19:25
• That link already describes how to handle the case of arbitrary $n$. I'm not sure what your question is, exactly -- it seems like it is already answered by the answers over there. – D.W. Jun 30, 2017 at 19:34
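Building on the cycle-structure hints in the comments, here is one possible sketch for arbitrary n (my own addition, not from the thread, and not claimed to be swap-optimal): it processes each cycle of the permutation σ from its smallest index, so no visited array is needed; the price is extra index-walking to detect cycle leaders. Each cycle of length ℓ is realized with ℓ-1 swaps, matching the counting argument in the last comments.

    def interleave(a):
        """Rearrange [a1..an, b1..bn] into [a1, b1, a2, b2, ...] in place."""
        m = len(a)
        n = m // 2

        def sigma(i):
            # 0-indexed version of the permutation from the comments:
            # the element currently at index i must end up at index sigma(i).
            return 2 * i if i < n else 2 * (i - n) + 1

        for start in range(m):
            # Only handle a cycle from its smallest index, so each cycle runs once.
            j = sigma(start)
            while j > start:
                j = sigma(j)
            if j < start:
                continue
            # Walk the cycle, placing one element with each swap.
            j = sigma(start)
            while j != start:
                a[start], a[j] = a[j], a[start]
                j = sigma(j)
        return a

    print(interleave(['a1', 'a2', 'a3', 'b1', 'b2', 'b3']))
    # ['a1', 'b1', 'a2', 'b2', 'a3', 'b3']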
2022-06-26 10:51:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5936205387115479, "perplexity": 922.9192876239935}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103205617.12/warc/CC-MAIN-20220626101442-20220626131442-00090.warc.gz"}
https://indico.ictp.it/event/9960/
Starts 17 May 2022 16:00
Ends 17 May 2022 17:00
Central European Time
Online / Leonardo Building - Luigi Stasi Seminar Room

Register in advance for this meeting: https://zoom.us/meeting/register/tJ0pfuirrjsoHNaP9S-AAAGJitkguz5r_2mG After registering, you will receive a confirmation email containing information about joining the meeting.

Abstract: In this talk we will discuss weighted endpoint estimates for the Hardy-Littlewood maximal function on the infinite rooted $k$-ary tree. Namely, we will show a variant of the Fefferman-Stein estimate with respect to the weights $(w, M_{s}w)$. Moreover, it is shown that this estimate is sharp, in the sense that it does not hold in general if $s=1$. This result is a generalization of the unweighted case ($w\equiv1$) independently obtained by Naor-Tao and Cowling-Meda-Setti. We will also present more general sufficient conditions for the strong estimates in the case $p>1$. This talk is based on joint works with Israel Rivera-Ríos (UNS&UMA) and Martín Safe (UNS).

This will be a hybrid seminar. All are very welcome to join either online or in person. Venue: Luigi Stasi Seminar Room (ICTP Leonardo Da Vinci Building), for those wishing to attend in person.
2022-05-26 11:06:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40510293841362, "perplexity": 983.4014795317174}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662604794.68/warc/CC-MAIN-20220526100301-20220526130301-00044.warc.gz"}
https://physics.stackexchange.com/questions/389068/in-electron-positron-annihilation-why-is-photon-exchange-dominant-at-energies-b
# In electron-positron annihilation, why is photon exchange dominant at energies below the Z-resonance?

In a plot of the Z resonance from e+ e- collisions, why is photon exchange dominant when the center of mass energy is below the Z peak?

The Z has a mass close to 90 GeV. In low energy reactions the Z propagator is small, due to the large Z mass, so the matrix element terms of the Z contributions, as long as $Q^2$ is much smaller than the squared mass of the Z, will be very small, and the photon propagator will dominate. As $Q^2$ passes through the resonance the Z dominates, as the diagrams show.
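To attach a rough number to this suppression, here is a toy sketch (my own addition; it ignores the couplings and the Z width entirely and only compares the propagator factors 1/s for the photon with 1/(s - M_Z^2) for the Z):

    # Rough illustration (not from the original post): relative size of the Z and
    # photon propagator factors, with couplings and the Z width ignored.
    M_Z = 91.2  # GeV

    for sqrt_s in (10.0, 30.0, 60.0, 85.0):
        s = sqrt_s ** 2
        ratio = abs(s / (s - M_Z ** 2))   # |(1/(s - M_Z^2)) / (1/s)|
        print(sqrt_s, ratio)              # small at low energies, growing as sqrt(s) nears M_Z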
2020-01-21 03:28:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8492752313613892, "perplexity": 605.4407515124296}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250601241.42/warc/CC-MAIN-20200121014531-20200121043531-00469.warc.gz"}
http://hal.in2p3.fr/in2p3-01172187
# Search for weakly decaying $\overline{\Lambda\mathrm{n}}$ and $\Lambda\Lambda$ exotic bound states in central Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV Abstract : We present results of a search for two hypothetical strange dibaryon states, i.e. the H-dibaryon and the possible $\overline{\Lambda\mathrm{n}}$ bound state. The search is performed with the ALICE detector in central (0-10%) Pb-Pb collisions at $\sqrt{s_{\rm{NN}}} = 2.76$ TeV, by invariant mass analysis in the decay modes $\overline{\Lambda\mathrm{n}} \rightarrow \overline{\mathrm{d}} \pi^{+}$ and H-dibaryon $\rightarrow \Lambda \mathrm{p} \pi^{-}$. No evidence for these bound states is observed. Upper limits are determined at 99% confidence level for a wide range of lifetimes and for the full range of branching ratios. The results are compared to thermal, coalescence and hybrid UrQMD model expectations, which describe correctly the production of other loosely bound states, like the deuteron and the hypertriton. Document type : Journal articles Domain : http://hal.in2p3.fr/in2p3-01172187 Contributor : Emmanuelle Vernay <> Submitted on : Tuesday, July 7, 2015 - 9:27:51 AM Last modification on : Wednesday, July 28, 2021 - 1:36:04 PM ### Citation J. Adam, G. Conesa Balbastre, J. Faivre, C. Furget, R. Guernane, et al.. Search for weakly decaying $\overline{\Lambda\mathrm{n}}$ and $\Lambda\Lambda$ exotic bound states in central Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV. Physics Letters B, Elsevier, 2016, 752, pp.267-277. ⟨10.1016/j.physletb.2015.11.048⟩. ⟨in2p3-01172187⟩ Record views
2021-08-01 23:01:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.737848699092865, "perplexity": 3607.8605026685236}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154277.15/warc/CC-MAIN-20210801221329-20210802011329-00267.warc.gz"}
https://www.physicsforums.com/threads/tensor-calculation.389588/
# Tensor Calculation

1. Mar 24, 2010

### frasool

Hi, I have the following tensor and I need to reduce it to a scalar quantity. It is a 3 by 3 matrix:

4150470.48, 317.64, -353.42
317.64, 2047101.07, -1407556.61
-353.42, -1407566.61, 2284136.55

Please, it's urgent and any help would be greatly appreciated!!

Regards, Faizan

2. Mar 25, 2010

### HallsofIvy

First, a matrix is NOT a tensor- just as a sequence of numbers is not a vector. A matrix can represent a tensor but you have to have some tensor space structure and you haven't told us HOW that matrix represents a tensor (i.e. what basis you are using). The simplest way to derive a single number from a matrix is the contraction- in terms of a matrix representation, it is just the "trace": add the numbers on the main diagonal. But surely there is more to this than just getting some number from the matrix?

3. Mar 25, 2010

### frasool

I got these values from SolidWorks, but for my wheel design I need a single value for the moment of inertia. The values I gave you represent Ixx, Ixy, Ixz, then Iyx, Iyy, Iyz, and Izx, Izy, Izz. And it says these were taken from the outer coordinate system! So it's not just a matrix, it is a tensor. Could you help me out with how to proceed with this problem!

Regards, Faizan
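For what it's worth, here is a short sketch (not from the thread) of the two standard single numbers one can extract from such an inertia tensor: the trace (the contraction HallsofIvy mentions) and the principal moments obtained by diagonalizing the matrix; for a wheel, the principal moment about the spin axis is usually the single value of interest.

    # Sketch (not from the original thread): contraction and principal moments
    # of the inertia tensor quoted above; units are whatever SolidWorks reported.
    import numpy as np

    I = np.array([
        [ 4150470.48,      317.64,     -353.42],
        [     317.64,  2047101.07, -1407556.61],
        [    -353.42, -1407566.61,  2284136.55],
    ])

    print(np.trace(I))   # the contraction / trace

    # Principal moments of inertia: eigenvalues of the (symmetrized) tensor.
    # The values as posted are very slightly asymmetric, so symmetrize first.
    I_sym = 0.5 * (I + I.T)
    print(np.linalg.eigvalsh(I_sym))   # pick the one about the wheel's spin axis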
2018-03-20 00:48:42
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8227641582489014, "perplexity": 1065.7264482712526}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647244.44/warc/CC-MAIN-20180319234034-20180320014034-00293.warc.gz"}
https://chenhaoxiang.cn/2021/06/20210606032643224E.html
1. A brief introduction to QCustomPlot

In the earlier article "Qt Draw a pie chart" I briefly described the third-party plotting libraries that Qt projects currently rely on. Here I will summarize my own experience with QCustomPlot, for your reference. The official site is: Qt Plotting Widget QCustomPlot - Introduction. The QCustomPlot source consists of only 2 files, so it is very convenient to add them directly to your own project, or you can compile these two files into a static library or a dynamic library in whatever form you like.

2. Preview of the results

Figure 1 is a small example I made by combining the official sample code. It shows the kinds of plots QCustomPlot can draw. The only drawback is that QCustomPlot cannot draw pie charts; that is what the article "Qt Draw a pie chart" mentioned at the beginning was about, where I used Qt's native QWidget to draw a pie chart of variable size. Have a look if you are interested.

Figure 1: QCustomPlot in use

How does the effect above look? Not bad, right? It can meet most people's needs, but if you are building products with strict requirements, you will probably need to do secondary development on the QCustomPlot source, or even modify the source itself...

3. Downloading the QCustomPlot source

As shown in Figure 2, these are the ways to download the QCustomPlot source. The file in the red box contains the source code, examples and help documentation; the file in the yellow box has only the source code; the remaining package, as its name suggests, is just a dynamic library. Download whichever fits your needs. Here I chose to download the first one, which contains the help documentation, the sample code and the source code.

Careful readers will notice in Figure 2 that there are two versions of the QCustomPlot package. Why two? It is like this: version 1.3.2 is a release package, i.e. the official project considers it a relatively stable version, while 2.0.0-beta is a test version, open-sourced so that everyone can help test it and report back. That comparison is only about how the source is released; if you look at the source you will find it is not just that. There is still a big difference between the 2.0.0-beta and 1.3.2release versions. I have personally studied the QCustomPlot source for a few days, and I think the biggest differences, i.e. the advantages of 2.0.0 over 1.3.2, come down to three points: real layered rendering, the separation of chart data, and the calculation of the axis tick marks.

By the way, let me also mention where QCustomPlot's encapsulation is not so good, or could be improved (if I am wrong, corrections are welcome): for the coordinate axis, whether the axis and the axis text are drawn is decided only by checking the type of the pen, and whether the tick marks are drawn is decided only by whether the number of ticks on the axis is zero. Frustrating...

In future posts I will analyse the QCustomPlot library by function points, or by specific class modules. OK, this is the end of this opening article on QCustomPlot; interested readers can keep following...

Note: the following articles are all based on the QCustomPlot 2.2.0beta version.

4. Related articles

Qt Draw a pie chart
QCustomplot Use sharing ( One )
2022-08-10 15:20:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1746366024017334, "perplexity": 2830.0987073287406}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571190.0/warc/CC-MAIN-20220810131127-20220810161127-00731.warc.gz"}
https://www.transtutors.com/questions/the-pre-tax-financial-income-or-loss-figures-for-gary-spangler-company-are-as-follow-2564818.htm
# The pre-tax financial income or loss figures for Gary Spangler Company are as follows....

The pre-tax financial income or loss figures for Gary Spangler Company are as follows.

2009: $160,000
2010: $250,000
2011: $80,000
2012: $(160,000)
2013: $(380,000)
2014: $120,000
2015: $100,000

Pre-tax financial income or loss and taxable income or loss were the same for all years involved. Assume a 45% tax rate for 2009 and 2010 and a 40% tax rate for the remaining years. Prepare the journal entries for the years 2011 to 2015 to record income tax expense and the effects of the net operating loss carryback and carryforward, assuming Gary Spangler Company uses the carryback provision. All income and losses relate to normal operations. In recording the benefits of a loss carryforward, assume that no valuation account is deemed necessary.

2011: two entries
2012: two entries
2013: two entries to record the carryback and two entries to record the carryforward
2014: two entries
2015: two entries
2019-01-22 13:30:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19601339101791382, "perplexity": 10119.79166742738}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583850393.61/warc/CC-MAIN-20190122120040-20190122142040-00200.warc.gz"}
https://mathoverflow.net/questions/229180/atiyah-guillemin-sternberg-convexity-theorem
# Atiyah-Guillemin-Sternberg convexity theorem

I would like to study the Atiyah-Guillemin-Sternberg convexity theorem: its proof and applications. I am already familiarised with Hamiltonian actions, moment maps, and elementary Morse theory. So my problem is to find a detailed proof of this theorem:

1. What are the prerequisites: Morse-Bott functions, the equivariant Darboux theorem...?
2. Is the original proof by Atiyah different from Guillemin-Sternberg's proof?
3. What is "the best reference" for a detailed treatment of this theorem?

Thanks for any help.

For this topic in general, I really recommend the book of Ana Cannas da Silva, Lectures on Symplectic Geometry. It's wonderfully written and very clear. You can read a proof of the theorem in the book of Michèle Audin: Topology of torus actions on symplectic manifolds. There are some more things here: http://www.math.ucsd.edu/~alpelayo/Docs/torictalk.pdf

Thanks Thomas, Liviu and Olga. Atiyah's proof is done by induction on the dimension of the torus. If $\mu$ is the moment map, let $A_m$ be the statement that the level sets of $\mu$ are connected for any Hamiltonian $\mathbb{T}^m$-action, and $B_m$ the statement that the image of $\mu$ is convex for any Hamiltonian $\mathbb{T}^m$-action. The hard part (for me) is $A_1$, which is based on the connectedness of the levels of a Morse-Bott function on a compact manifold! The rest of the proof is very well explained in:

1. Ana Cannas da Silva, Lectures on Symplectic Geometry (as exercises),
2. Michèle Audin, Topology of torus actions on symplectic manifolds,
3. http://www.math.nyu.edu/~kessler/teaching/group/convexity.pdf

The book by Liviu Nicolaescu is very useful, and the complete proof can be found in: McDuff & Salamon, Introduction to Symplectic Topology.
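For reference, the statement being discussed can be summarized as follows. This is a standard formulation written from memory rather than quoted from any of the texts above, so the precise hypotheses should be checked against those references.

```latex
% Atiyah--Guillemin--Sternberg convexity theorem (standard formulation; verify against the cited texts)
Let $(M,\omega)$ be a compact connected symplectic manifold carrying a Hamiltonian action of a torus
$\mathbb{T}^m$ with moment map $\mu \colon M \to \mathbb{R}^m$. Then:
\begin{enumerate}
  \item every level set $\mu^{-1}(c)$ is connected, and
  \item the image $\mu(M)$ is a convex polytope, namely the convex hull of the (finitely many) points
        $\mu(F)$, where $F$ runs over the connected components of the fixed-point set of the action.
\end{enumerate}
```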
2023-01-29 15:16:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 8, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7054929137229919, "perplexity": 568.7167455792874}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499744.74/warc/CC-MAIN-20230129144110-20230129174110-00320.warc.gz"}
https://online.ucpress.edu/elementa/article/10/1/00075/184673/Social-ecological-connections-across-land-water
Despite many sectors of society striving for sustainability in environmental management, humans often fail to identify and act on the connections and processes responsible for social–ecological tipping points. Part of the problem is the fracturing of environmental management and social–ecological research into ecosystem domains (land, freshwater, and sea), each with different scales and resolution of data acquisition and distinct management approaches. We present a perspective on the social–ecological connections across ecosystem domains that emphasize the need for management reprioritization to effectively connect these domains. We identify critical nexus points related to the drivers of tipping points, scales of governance, and the spatial and temporal dimensions of social–ecological processes. We combine real-world examples and a simple dynamic model to illustrate the implications of slow management responses to environmental impacts that traverse ecosystem domains. We end with guidance on management and research opportunities that arise from this cross-domain lens to foster greater opportunity to achieve environmental and sustainability goals. Increasing rates of environmental change, the crisis of biodiversity, the loss of ecosystem services (ES), and the risk of surprises from tipping points (nonlinear ecological or social transformations) highlight the need to find different approaches to navigate society toward ecological sustainability (Vitousek, 1997; Scheffer et al., 2001; Rockstrom et al., 2009; Lindenmayer et al., 2010; Carpenter et al., 2015; Organisation for Economic Co-operation and Development, 2017; Filbee-Dexter et al., 2018). Despite many sectors of society striving for sustainability and balanced interactions with the environment, the current focus and effort is insufficient to generate sustainable solutions because it does not account for the whole problem. The issues that humanity collectively needs to address are relevant both to interventions that react to problems and those that seek to prevent them. Here, we explore how a focus on the social and ecological connections and feedbacks (both positive and negative) across ecosystem domains (land, freshwater, and marine) can promote resilience to multiple future threats (Biggs et al., 2011; Selkoe et al., 2017; Lenton, 2020). Ecological and social knowledge accumulated over decades has highlighted the role of biophysical subsidies and connectivity across a “hilltops to ocean” continuum (Polis and Hurd, 1996; Ramesh et al., 2015; Gounand et al., 2018). Despite this, management practice still tends to be isolated by ecosystem domain (Singh et al., 2021; Threlfall et al., 2021), often using different approaches and frameworks, with inequities in data and knowledge (Figure 1). Land can be privately owned, but the ocean is usually in the public domain, and this discrepancy leads to different aspirations and targets (Figure 1B). Environmental management on land is targeted at species (animal or plant) or habitats and focused on the ES that biodiversity provides, while in the ocean, resource extraction is prioritized (Figure 1C and D). The relative visibility of changes and ease of access to the 3 ecosystem domains also drives differences in social awareness and how we allocate science and management resources. Consequently, environmental data quality and quantity tend to be greater in volume, velocity, and variety for land than in freshwater, with both well ahead of marine (Figure 1A). Figure 1. 
Disparities between land, freshwater, and marine ecosystem domains. In our experience, disparities in these social, political, ecological, and management variables between ecosystem domains contribute to the difficulties in managing to prevent tipping points. DOI: https://doi.org/10.1525/elementa.2021.00075.f1

Systems thinking and more holistic management approaches such as ecosystem-based management are much discussed concepts but have proven difficult to implement (Christensen et al., 1996; Ruckelshaus et al., 2008; Granek et al., 2010; Thrush and Dayton, 2010). Because environmental issues are often dealt with in isolation, governments, businesses, and individuals can fail to first identify and then act on the connections that lead to and prevent abrupt unexpected state changes or tipping points. Tipping points occur when a system's environmental and social stressors (or phenomena) have intensified to the point where a system shifts to a different state (often for the worse from a human perspective). Further sudden (nonlinear) changes from that point are also possible (i.e., multiple tipping points can exist), and state shifts can trigger other state shifts in distant locations, for instance, via nutrient and water flow (Rocha et al., 2018). There is a growing realization that interactions among social and ecological components of ecosystems are crucial for fostering positive environmental outcomes where tipping points can be foreseen and prevented. For example, an extensive literature and multiple cultures highlight a deep interconnection between social and ecological systems that coevolve across space and time, allowing adaptation in times of change (Folke, 2006; Carpenter et al., 2015; Osterblom et al., 2017; Filbee-Dexter et al., 2018; Nystrom et al., 2019; Yletyinen et al., 2019). In a highly connected world with the certainty of climate change and further anthropogenic exploitation (Nystrom et al., 2019), it is imperative to explore and implement robust management regimes. To be robust to a range of plausible futures, management should traverse land, freshwater, and marine ecosystem domains and consider their connectivity in important functions, flows, feedbacks, and impacts (Schiel and Howard-Williams, 2016). We refer to these critical connections as "cross-domain connections," and "connectivity" as the movement of materials, energy, ideas, and the expansion of social structures and practices across land, freshwater, and sea. We focus on these cross-domain connections as these represent major opportunities to improve environmental management and governance. While the level of integration of hierarchies in governance may vary both within and across domains (Singh et al., 2021), our focus is on the ecological and social linkages. This approach and framing are consistent with many Indigenous Peoples' worldview, knowledge, and practice (e.g., McGregor, 2004; Clapcott [Ngāti Porou] et al., 2018).
Here, we lay out the latent opportunities for management, which currently does not adequately acknowledge these cross-ecosystem domain connections, and therefore has limited potential to enhance sustainability and the resilience needed to prevent undesirable tipping points. The ideas presented in this perspective piece evolved from a workshop designed to address the siloed nature of the New Zealand National Science Challenges, which arguably reflect broader management and governance structures in New Zealand and globally. These National Science Challenges were deliberately focused on separated environmental domains (e.g., marine, freshwater, and land). Our collective concern was that the connections between ecosystems were being ignored, or at least not prioritized, thus limiting research and solution sets (e.g., management interventions). The workshop involved an internationally diverse set of 17 researchers whose research spans the 3 ecosystem domains and multiple disciplines including ecology, social sciences, economics, Māori and indigenous studies, natural resource management, and systems modelling. Based on our collective experiences and discussions during the workshop, we (1) present our perspectives on the social–ecological properties of cross-domain connections that emphasize the need for management reprioritization, (2) demonstrate that the strength and speed of cross-ecosystem domain feedbacks in the social (management) versus ecological components often differ (illustrated with a simple dynamic model to demonstrate possible environmental outcomes associated with delayed management responses [relative to the generation of ecological impacts]), and (3) offer guidance for cross-domain environmental management and research priorities that aim to better prepare for and mitigate tipping points. Throughout our narrative, we connect broader concepts to 3 case study examples (Boxes 1–3). Importantly, we highlight the benefits of investing effort and focus on cross-domain and cross-scale (space and time) connections to identify the common threads that support opportunities for change.

Box 1—Baltic Sea: Managing eutrophication. The semi-enclosed, brackish Baltic Sea is bordered by 9 countries with different policy priorities and socioeconomic conditions. The Baltic is also heterogeneous in terms of biodiversity, climate, hydrography, ecosystem health, and likely future states. Decades of diffusive nutrient loading from agriculture and municipalities in combination with the Baltic's environmental history have led to large-scale eutrophication (a demonstration of Social-Ecological Properties SE-P1 and 3 in main text). Infrequent saltwater inflows from the North Sea do not dilute the nutrient rich brackish water but amplify how nutrient enrichment is manifested in the Baltic by influencing water-column stratification, vertical exchange of water masses and nutrients, and hence the spreading extent of hypoxia (Carstensen et al., 2014). An intergovernmental convention managed by the Helsinki Commission (HELCOM, 2018) was established to address these problems. The commission represents an advancement in the way we manage waters that works to restore the upstream social–ecological feedback where land practices are managed with the health of the Baltic Sea ecosystem in mind (a step toward management and research priority P4 in main text).
HELCOM builds on the long tradition of trust and collaboration between the science community and policy makers in the region (exemplifying P7 in main text; Reusch et al., 2018; Stenseth et al., 2020). The Baltic Sea Action Plan represents a positive cross-sectorial agreement to define the problem(s) and to set explicit ecosystem-based goals and objectives and accompanying indicators and targets for reaching good environmental/ecological status while “supporting a range of sustainable human economic and social activities for the marine ecosystem” (HELCOM, 2021). This has resulted in good progress on reducing nutrient inputs, addressing eutrophication, and improvements in the eutrophic state have been observed (epitomizing P3 and P6 in main text; Andersen et al., 2017). However, broader but more ambitious goals of a sea “unaffected by eutrophication” by 2021 have highlighted the difficulties in effective management even when partially complemented by comprehensive and ambitious EU legislation (Water Framework Directive [EU-WFD]; Marine Strategy Framework Directive). While HELCOM demonstrates the positive effects of managing ecosystem connections, it also reveals how ecological and social time lags represent a major barrier to upstream social–ecological feedbacks. The time lags complicate effective management since the legacy of nutrient inputs over decadal timescales has saturated the marine ecosystem with excessive nutrients that now circulate in the system (SE-P4 in main text). This makes a direct quantitative link between present nutrient inputs from the catchment and the status of the marine environments weak. This situation frustrates both the public and policy makers. The management system is simply too slow and inert from the ecosystem and public perspectives. With ecosystem assessments conducted every 6 years, according to the EU-WFD, subsequent management feedbacks are too slow for a system experiencing rapid change (demonstrating a need for P2 in main text). Nevertheless, HELCOM remains a model of success in international environmental governance. Climate change is rapidly reshuffling not only the structure and function of the ecosystem but also society’s understanding and recognition of environmental problems. While nutrients have been a major focus of HELCOM, other pressures impacting the ecosystem, such as fishing interacting with climate change, are amplifying multiple existing regional pressures. Strategies that support ecosystem resilience rather than reducing pollutant loads are also needed. The importance of real and effective Marine Protected Areas has yet to be recognized, and progress suffers from time lags between both societal recognition and management action. Yet researchers stress that the Nordic seas (including the Baltic) must be understood and managed as an ecologically and socially connected “meta-ecosystem” (Paasche et al., 2015). Global climate change really emphasizes the importance of ramping up local and regional management efforts to conserve local biodiversity and increase resilience against harder to manage climate impacts. Humanity has known of cross-domain connections for centuries but appears to be continually surprised by their consequences. For example, damming rivers in southern California reduced sediment transport to the coast, enhancing beach erosion (Sherman et al., 2002). 
Extraction of water for land irrigation in Australian rivers altered estuarine and lagoon circulation regimes, creating complex geochemical and physical feedbacks resulting in hyper-eutrophic states (Laurance et al., 2011). Upstream damming, sand mining, and enhanced erosion of riverbeds have led to saltwater intrusion of productive lands in the Mekong River Delta, with massive economic, social, and ecological consequences (Eslami et al., 2019). However, these unintended effects are not always deleterious for ecological sustainability. For example, the management of invasive predators on islands created unintended positive feedbacks, where removal of predation pressure on seabird populations (and their subsequent recovery) resulted in increased inputs of marine sourced nutrients, stimulating forest growth (Fukami et al., 2006). While our current environmental management practices have had some successes (e.g., reduction of DDT pesticide in the Baltic Sea and air pollutants and acid rain in Europe and North America; Helsinki Commission [HELCOM], 2010; Grennfelt et al., 2020), global trends show that this is not sufficient to prevent tipping points. Rather than being trapped in the narrative that science needs to better inform management, we expand aspirations and view the issues through a cross-domain lens to suggest a reprioritization of management efforts. Considering even simple social–ecological properties of ecosystems through this lens can lead to improved environmental management, which may help to break the cycle of sectarian governance that currently facilitates blame and inaction (Howlett, 2014). These simple but fundamental social–ecological properties (SE-P) are as follows:

### SE-P1. In general, water flows downhill

Generally, water flows from land to sea, meaning that the aquatic environment is the net recipient of change on land, and ultimately, the coastal marine environment is the net recipient of many changes in both land and freshwater ecosystems (Figure 2). This downstream connectivity has resulted in degradation of aquatic environments globally, which is perpetuated by disparities in ownership and accountability, governance, environmental targets, perspectives, data, management, and understanding of ecological processes between domains (e.g., see Boxes 1–3 for real-world examples of these dynamics playing out across domains; and Figure 1). "Water flows downhill" is a simple axiom, and yet current environmental management of terrestrial and freshwater ecosystem domains tend to devalue the considerable distal effects of land-based stressors in receiving aquatic environments (exemplified in Box 3's critique of stressor limit setting; Figure 1F). There is a critical need to strengthen the social feedback that links downstream issues to upstream activities to sustainably manage both beneficial (e.g., subsidies) and detrimental (e.g., pollution) flows that are critical to biodiversity, and ecosystem functions and services in all 3 ecological domains (e.g., Boxes 1 and 2).

Figure 2. Diagrammatic representation of the fracturing of environmental management among ecosystem domains. The fracturing among ecosystem domains prevents social feedbacks to upstream management. The diagram lists the social–ecological properties (SE-P1–5) of cross-domain connections that demand a reprioritization of environmental management.
The differential arrow thickness between people and the ecosystem domain indicates that the quantum of the "MESS" is greater than the ameliorating environmental management. DOI: https://doi.org/10.1525/elementa.2021.00075.f2

### SE-P2. People with different values and interests are part of the ecosystem and their actions in one ecosystem domain can affect people, and ecosystem functions and services in other domains

People can be separated from the consequences of their actions by the physical segregation of ecosystem domains and governance structures (Figure 2). This separation becomes a problem when the benefits that an individual accrues from an ecosystem are diminished by activities in another domain, where the people creating the impacts are separated from both those managing and those affected by impacts (depicted in Figure 2). A decoupling of decisions from the impact location reduces the feedbacks that would change management practices (e.g., political pressure to stop a particular activity; DeFries and Nagendra, 2017). Part of this issue stems from socially constructed boundaries around physical areas and jurisdictions (e.g., privately vs. publicly owned land; regionally vs. nationally managed areas; Brunson, 1998; see Box 2 for a discussion of values, ownership, and management scales with respect to wetland ecosystems). The perception of boundaries is not the same in the 3 ecosystem domains, leading to disparities in how people defend and protect physical areas (e.g., land, and sometimes wetlands and streams, are often in private ownership, and the ocean, and the resources it provides and sustains, are publicly owned; Figure 1B). People are more likely to adjust their actions to protect a forest or stream on their own property, or in a public space that is near to where they live, than a river or estuary that is some distance away (i.e., the psychological distance effect; Perry et al., 2021). Ecological processes transcend boundaries constructed by humans (DeFries and Nagendra, 2017), and this fractured social–ecological dynamic creates barriers to identifying and acting on the drivers of change and solutions.

Box 2—Wetlands: Management of a dynamic system. Wetland ecosystems lie at the interface of freshwater and terrestrial environments. These are ecosystems that for many have transitioned in value over the last 40 years. They are reservoirs of biodiversity and provide critical ES such as water purification, long-term carbon storage (e.g., in peats), and culturally valued species. However, globally, wetland ecosystems are in decline, and Aotearoa New Zealand (NZ) is no different (Myers et al., 2013). Why? And how do they challenge management? Wetlands are dynamic entities whose presence in the landscape and the services they supply change over time.
Such changes can be slow (e.g., the formation of peat and storage of carbon over millennia) or relatively rapid (e.g., the damming of a river by a landslide triggering wetland formation). The NZ landscape covered by wetlands is now just 10% of the wetland area that existed when humans settled in New Zealand in the mid-13th century, and this decline continues (McGlone, 2009). Many stressors interact to drive wetland decline and loss (Figure B2.1). The widespread loss of wetlands across New Zealand has been driven by their transformation into agricultural and urban land. The decline of ecosystem quality in wetlands involves a complex suite of processes, including altered water, sediment and nutrient flows, disturbance, and invasion by weeds and predators. These stressors interact to drive systems across tipping points and may arise from well beyond the wetland itself (e.g., land-use change; an example of Social-Ecological Properties SE-P1 and 3 in main text). For example, increased nutrient flux due to land-use changes in a catchment may shift species composition and alter biogeochemical fluxes, facilitating invasion of weeds. These feedbacks may be lagged (exemplifying SE-P4 in main text). Thus, as with many complex systems, tracing the causal pathway from symptom to the underlying mechanisms is not easy (Bowman et al., 2015). This separation of cause and effect can contribute to psychological distancing of stakeholders and managers from environmental issues (Perry et al., 2021; SE-P2 in main text). Historical change may not be sufficient to predict wetlands' future. Many wetlands sit adjacent to the coast and thus under climate change and sea level rise scenarios are prone to salination and associated shifts in species distributions and disturbance regimes.

Figure B2.1 Schematic view of some of the stressors, processes, and outcomes (not exhaustive) operating in NZ's wetland ecosystems. The green links are those with strong reciprocal feedbacks (e.g., fire favors some invasive weeds, which favors fire), and red boxes are components with spatial disjunctions between cause and effect. In some cases (e.g., disturbance regimes), the same suite of entities is a stress, process, and outcome. LUCC = land use/land cover change. DOI: https://doi.org/10.1525/elementa.2021.00075.fB2.1

Our ability to manage wetlands effectively also has sociocultural components. There are fundamental issues, at least in New Zealand, about defining wetlands, and areas designated in regulation and policy as "wetlands" often have strict conservation planning associated with them. Second, the perceived value of wetlands varies from them being mere swamps to irreplaceable suppliers of ES, many of which accrue slowly. These differing views result in a contest between reclamation (of otherwise useful land) and restoration (Williams, 1994; a real-world example of SE-P2 in main text).
Third, where changes in one part of the landscape affect another, potentially with long lags, patterns of land ownership and management responsibility are important. In New Zealand, although the largest wetlands tend to be in public ownership, the smaller ones tend to be on private land and may not even be recognized as wetlands. Governance of public and private wetlands differs and varies regionally as does the legislative emphasis placed on different stressors (e.g., dams vs. stock intrusions; Myers et al., 2013). Those tasked with managing a given wetland may have limited agency in the parts of the landscape where change is initiated (SE-P5 in main text). Another potential disconnect is that effective wetland management requires a holistic ecosystem-level approach (e.g., Peacock et al., 2012), but the intellectual origins of wetland sciences are in wildlife management (Euliss et al., 2008). Successful management for wildlife is unlikely to be the same as successful management for wetland ES. Despite these challenges, there are examples of successful wetland management and restoration. Such successes are typified by a holistic view, centered on ecosystem-level processes (demonstrating a need for management and research priority P1 in main text). There is evidence that overarching (national) policies do positively influence wetland condition (United Nations Environment Programme World Conservation Monitoring Centre, 2009). At a regional level, effective wetland governance will need to be alert to potential scale mismatches (Folke et al., 2007), acknowledge diverse value positions (Bataille et al., 2021), and be responsive to spatial and temporal disjunctions between cause and effect (P2, 4 and 6 in main text). Successful management and restoration at the site-level will require careful selection of targets and ongoing investment in monitoring—challenges that bedevil nearly all ecological monitoring (Biber, 2013; P2 in main text).

### SE-P3. Tipping points occur from multiple drivers and often from stressors originating in other connected ecosystems

The fracturing of ecosystem management across domains creates a situation where single environmental drivers (often within one ecosystem domain) are the focus of environmental impact mitigation (see Box 3 on stressor limit setting). Policy that targets specific ecological domains ignores the abundant literature demonstrating the flow of species, resources, and environmental effects between habitats within a domain (Frost et al., 2016), and between land and freshwater (Polis and Strong, 1996; Knight et al., 2005; Bartels et al., 2012), land and marine (Polis and Hurd, 1996; Sanchez-Pinero and Polis, 2000), and freshwater and marine domains (Palumbi, 2003; Wipfli et al., 2003; Gounand et al., 2018). A siloed domain focus can make it hard to identify drivers of change (see, e.g., Boxes 1–3), and this can leave humanity unprepared for future changes that arise from multiple (often nonlinearly interacting) stressors that originate in connected ecosystem domains (Sala et al., 2000; Crain et al., 2008; Darling and Côté, 2008). For example, land management for conservation remains largely focused on focal species (typically vertebrates) or on parcels of land demarcated by the dominant ecosystem type (e.g., a forest) rather than ecosystem processes and the potential for different habitats to be connected to other ecosystems (Figure 1C, E, and F; Kortetmäki et al., 2021).
Freshwater management is typically conducted at a catchment scale and explicitly considers the linkages between land and freshwater domains (e.g., nutrient and sediment flows) with a focus on processes (e.g., catchments and upstream effects; Rouse and Norton, 2016). Land and freshwater management drive change in wetlands (Box 2), but it does not account for how wetlands can buffer change in estuaries or be impacted by salination associated with sea-level rise. Further, since freshwater quality and quantity are the main priorities, there is little consideration of any downstream effects in the marine domain. Management of the coastal marine environment usually recognizes the connectivity and impacts of decisions on land, but there are significant time lags and barriers associated with mitigating these distal effects (Figure 1F; Box 1; Schiel and Howard-Williams, 2016; Osterblom et al., 2017). Box 3—That’s the limit: Simple limit-based policies versus ecological complexity. Across all environmental domains, limits are a common strategy used to manage impacts. We can have limits for air or water quality, changes in land use, contaminant effects, and resource extraction. Limits can be defined as biophysical bottom lines of acceptable pollution, disturbance, resource use, or extraction. Limits can also be defined as levels of acceptable maximum harm. This has become an established approach to management because of its simplicity and has worked in situations where cause and effect are direct and tightly coupled. However, when causal pathways are more complex, there can be nontrivial problems in defining what is acceptable, particularly when dealing with more abstract concepts such as ecosystem health or integrity, which are multivariate rather than the single variable to which the limit pertains. Often limits are bounded by a buffer zone to reflect uncertainty, but often the uncertainty in this uncertainty is uncertain. In ecological systems, change can be nonlinear, making limits more difficult to implement, which is further exacerbated by multiple and cumulative stressors generating complex responses that are difficult to predict and mitigate. A policy focus on limit setting has shaped how we assess the risk to ecosystems, where the focus is on the activity that generates the stress and the stressor rather than on the complex mechanisms that generate context dependencies of responses (exemplifying Social-Ecological Properties SE-P3 and 4 in main text). From a practical perspective, there are countless unknown contaminants, for which limits could never be set, and there is a growing body of research on the ecological responses to emerging contaminants (Kanwischer et al., 2021). With new knowledge and shifts in social and/or biophysical context (e.g., climate change), limit setting needs to be adaptable. However, limits management is often highly path dependent, leading to set and forget policies that fixate on the limit and managing to the limit. This has led to major failures in fisheries in Aotearoa New Zealand as management fixates on fish stock, biomass, and total allowable catch (TAC) over large areas (Cryer et al., 2016; demonstrating the need for management and research priority P1 in main text). The TAC limit can be managed at scales that seem rational from the office but do not relate to the biology and ecology of targeted species let alone the rest of the ecosystem (an example of SE-P5 in main text). 
This potential for mismatch in context and application of limits has been acknowledged in the Australia and New Zealand guidelines for fresh and marine water quality (Australian and New Zealand Governments, 2018). Despite this, New Zealand has tried to set national limits for some stressors in freshwaters. A similar scale mismatch occurs in the management of hunting limits on waterfowl on lowland lakes in New Zealand (Herse et al., 2020). Apart from the spatial scale over which limits are set, we can also get situations where the limits are set for one environmental domain when another is more or less sensitive (SE-P1 and 3 in main text). In New Zealand, discussions are underway to set limits on the quantity of soil that enters water ways. As the point of entry of these sediments into the aquatic systems is mainly via the stream network, limits are set as freshwater standards. While there are considerable impacts of sediments on freshwaters (Burdon et al., 2013), the ecological impacts and potential for legacy effects are much stronger in coastal and estuarine ecosystems (Reid et al., 2011). Instead of setting limits on degradation, we can refocus and set targets for recovery. These targets would need to be set locally to address context dependencies, and in New Zealand, the aspirations of Māori (the Indigenous Peoples of NZ). Achieving targets may require a whole range of different approaches (P1 in main text), and limit setting may be one, but others include protection (e.g., reserves), and active restoration, which are common approaches on land, but these approaches have not yet become the norm in the marine and freshwater domains (which shows the need for P2 in main text). Active restoration must target the restoration of the interaction network to build resilience in the face of further stress (e.g., Barrett et al., 2021; Sea et al., 2021). No single approach will be suitable across contexts, so there is a need for risk assessments to be grounded in aligning the ecological attributes of the ecosystem with the decision options. Further, aspirations need to recognize that perceptions of ecological recovery vary through time, but recognition of ecological complexity will help in consideration of what has been lost and what we aspire to achieve. Limits have been useful and led to the banning or control of toxic substances in some circumstances. Nevertheless, policy and management agencies need to be aware of path dependency and be open to new approaches and new data (P1 in main text). This is increasingly important as we begin to think inclusively about multiple scales of biological organization, and design management actions with a clear focus on the dynamics of connected ecosystems. The data and models used to derive limits should be accessible and open to scrutiny by all parties to ensure that circles of trust do not implode. In practice and in its simplest form, this means constantly checking on the veracity of the limit and its application, listening to multiple voices (e.g., people on the ground) and ensuring that adequate indicators are used to assess the efficacy of the limit(s) (P5–7 in main text).

### SE-P4. Ecological thresholds are context-dependent and can arise due to lags in slow ecological responses to chronic and subtle drivers of change

Ecological thresholds in one ecosystem domain often do not apply to other domains, and hence, a focus on stressor limit setting is unable to prevent threshold responses and tipping points (see Boxes 2 and 3).
For example, the ecological limits for land-based sediment and nutrients into freshwater ecosystems are irrelevant for assessing the coastal marine environment's response to these stressors. Being blind to the importance of understanding ecosystem responses and processes that drive differences in responses between domains has resulted in ecosystems passing tipping points (Rocha et al., 2015; Hicks et al., 2016; e.g., Boxes 1–3). Once ecosystems pass these tipping points, complex feedbacks and interactions often lock them in an undesirable state for indefinite periods and recovery can be impossible or extremely slow. For example, the slow accumulation of land-based nutrients and the severe eutrophication in the coastal waters of the Baltic Sea have occurred over centuries, but the benefits of management actions to curb the input of land-based nutrients will not be realized for decades due to the legacy effects in the system that slow recovery (Box 1). Often critical to these legacy effects are the slow growing ecologically important (e.g., habitat forming) species and associated biodiversity and ecological processes (Biggs et al., 2012; Andersen et al., 2017). Part of the problem with focusing on the stressor instead of the ecological processes (such as resilience and recovery dynamics) is that it eliminates the ability to generalize ecological responses spatially and temporally, or to identify interactive drivers of change and the legacy effects that result in recovery lags (Lindenmayer et al., 2010; Biggs et al., 2012).

### SE-P5. The spatial and temporal scale of ecological and social properties differ with ecosystem domains and social–ecological scale mismatches are common

There is no universal "right" scale for management, but identifying scale disparities between temporal and spatial, and social and ecological scales highlights a need for a different approach to environmental management. Scale in ecology has both spatial and temporal dimensions and reflects different levels of biological organization—individual organisms, populations, communities, and ecosystems (Levin, 1992). Scale in society and social science also has spatial and temporal dimensions in, for example, human relations, actions, governance, ownership, and politics (Clark, 1985; Gunderson and Holling, 2002; Cash et al., 2006; Cumming et al., 2006; Pyyhtinen, 2017). These structural scales of environmental management have political and social consequences that affect environmental processes and management. For example, management of environmental issues at a global (e.g., climate change) or a national scale (e.g., fisheries or pollution) can remove responsibility or power to act at a local scale even if the impacts are felt locally (Brashares et al., 2014; Haarstad, 2014). Further, these scale mismatches can create situations where actions to mitigate one environmental problem (e.g., demand for bioenergy for climate change mitigation) can yield negative consequences for other important aspects of the environment (e.g., land uses and biodiversity; Pörtner et al., 2021). Similarly, broad-scale acquisition of data to inform management can prevent adaptive responses to ecological processes that occur at local scales. The negative consequences of scale mismatches (Cumming et al., 2006) could be better addressed through local, place-based management that considers connections and linkages (Herse et al., 2020; Pörtner et al., 2021).
Ecosystems on land are often managed at local/regional scales, but marine ecosystems are managed at national to international scales. National to international scale governance disempowers local actors from effecting change or driving adaptation (Pisor et al., 2022) and instead places responsibility on governments or intergovernmental agencies, leading to, for example, differences in the way people extract resources from land versus sea (Singh et al., 2021). The issue is not the specific scale of management but rather in the scale mismatches between management and ecological and social processes (discussed in Boxes 2 and 3). However, there are also potential negative consequences of increasing connectivity in the social structures of governance. For example, some ecosystems maintain their healthy state through the application of rules and knowledge of local people, whose sustainable environmental governance systems may then be challenged or eroded by new governance approaches (Young et al., 2006; Longo, 2012), which tend to be at larger scales where people are separated from the consequences of their actions or management decisions (see SE-P2 above). Recognizing connections across domains and scales offers opportunities to target social–ecological connections that could change the practices that currently obstruct aspirations of ecological sustainability and associated management actions. However, these opportunities only represent possibilities because the social–ecological context is complicated by the social constraints and path dependencies arising from existing policy and institutional frameworks (e.g., Box 1—Baltic Sea; Blenckner et al., 2015). Such critical impediments often sit at the interface between science, governance, and society (Thrush et al., 2016; Stenseth et al., 2020). The exceptions are responses to visible and immediate impacts such as whale stranding, oil spills, or wildfires. Oceans are possibly at greatest risk from scale mismatches due to 2 factors; first, the fallacy that oceans are too big to fail, having infinite capacity for recovery, and the ability to dilute and disperse contaminants, and second, that many of the effects on marine ecosystems are not in the public consciousness (due to a lack of visibility), and often arise from multiple stressors (Thrush et al., 2016; Selkoe et al., 2017). Common to all domains are impacts that generate immediate economic consequences (e.g., invasive species affecting industry), are highly visible (e.g., oil spills, severe eutrophication, land erosion, desertification, and wildfires), and elicit an emotional response (e.g., whale stranding), which show faster social feedbacks than slow insidious impacts exemplified by climate change and biodiversity loss. The slow and insidious impacts are often discounted as problems of the future but can lead to intergenerational injustice (Treves et al., 2018). This reactive and near-sighted management prioritization generates a focus on the short-term immediate impacts (e.g., the oil spills) rather than the chronic subtle cumulative effects on ecosystem components that have long-term effects because they affect slow to recover processes (e.g., over-fishing and terrestrial run-off into coastal waters remove key species and reduce regional biodiversity, which are generally very slow to recover). These slow changes also alter our perceptions of what ecological recovery looks like and the targets we set (i.e., shifting baseline syndrome; Soga and Gaston, 2018). 
While the principles above may seem complicated and numerous from a management perspective, important insight can be gained by focusing on 2 critical elements that run through the principles and examples (Boxes 1–3): (1) the recognition of cross-domain connections as drivers of change and (2) the importance of protecting the slow-to-recover ecosystem elements (species and processes) that are often eroded by chronic and cumulative stressors (Heinze et al., 2021). There are examples globally of environmental management incorporating cross-domain connections (reviewed in Threlfall et al., 2021), and these examples provide insight. For example, Box 1 highlights some lessons in managing eutrophication in the Baltic Sea where collaborations across countries and agencies have begun to address the downstream effects of agriculture and urbanization on the Baltic Sea eutrophication status. To illustrate the dynamic implications of slow management responses to ecological degradation exemplified by those outlined in Box 1, we present a simple control-theory model (Figure 3) to demonstrate how ecological connectivity, slow/mismatched management timescales (between environmental change and management actions), and different management actions can influence environmental quality in connected ecosystems.

Figure 3. Stylized control theory model. To illustrate the implications of slow management actions, we analyze a simple control theory model of a social–ecological system. In the model, pollution input from an upstream ecosystem (P) affects a downstream ecosystem service (ES). The impact of P on ES is regulated by a (the effect of the pollution on the downstream ES). A management system responds to changes in ES with adaptive actions (A), at some timescale represented by a management response lag (τ). Management actions (A) alter the inflow of pollutants to the downstream ecosystem. The dashed line indicates that these management actions may be weak, which we quantify with a management effectivity rate (A0). The management dynamics and ecosystem processes are modeled within the "management system actions" and "downstream ecosystem service" boxes, respectively. Full explanation of terms, model equations, and units are given in Appendix 1. DOI: https://doi.org/10.1525/elementa.2021.00075.f3
A set of differential equations (see Appendix 1) was used to represent a downstream ecosystem (e.g., a coastal fishery) whose quality is biophysically affected by an upstream ecosystem, such as nutrient loading in an agroecosystem (e.g., a situation like that described in Box 1). For the dynamics of the downstream ecosystem, we assumed a classic tipping point behavior of the environmental quality of greatest value to humans (i.e., an ES), where this quality can be downgraded by the loading of nutrients in an upstream system (e.g., Thrush et al., 2021). A social–ecological feedback is present, whereby degraded quality of the downstream ecosystem may lead to management actions (A) that limit runoff of nutrients from the upstream ecosystem (and therefore the concentrations of nutrients in the downstream ecosystem). These management actions could be triggered by environmental concern, decline in income, or loss of other benefits from the ES. As expected, the ecological connectivity between upstream pollution and the downstream ecosystem can trigger transgression of a tipping point in the downstream ecosystem (Figure 4, black line). A sufficiently strong upstream feedback (A0) on the "social" side of this social–ecological system can stabilize the downstream ecosystem against this collapse by creating actions that decrease nutrient levels in the downstream ecosystem (e.g., riparian planting and/or the restoration of wetlands; Figure 4, blue and red lines). Strong upstream feedbacks can prevent regime shifts and tipping points (no bistability in red line in Figure 4). That management or human behavioral feedbacks can shift the position of an ecological regime shift or remove it entirely has previously been demonstrated in theoretical (Lade et al., 2013) and empirical (Lade et al., 2015) social–ecological systems. The strength of this upstream feedback will depend on several factors such as sufficiently strong incentives, trust in management agencies, and public support.

Figure 4. Bifurcation plots of our upstream-downstream social–ecological system model. The stronger the management effectiveness (higher values of A0), the more upstream pollution loading the system can tolerate before shifting to a low ES state. For strong management effectiveness (A0 = 2.5), there is no bistability (i.e., the ability for the ecosystem to be in different states at the same upstream pollution loading indicated by the dotted line). DOI: https://doi.org/10.1525/elementa.2021.00075.f4

The speed of management response also strongly influences the emergent social–ecological dynamics (Figure 5). Specifically, if the management response is strong but delayed, an oscillatory dynamic can result.
In response to a disturbance, such as a sudden inflow of nutrients or change in policy affecting land clearing, management actions are likely to bounce between under- and overresponding with similar consequences for the quality of the ecosystem (Figure 5, red line). A faster management response (i.e., greater temporal matching of management to changes in ecosystem state) eliminates this oscillation (Figure 5, blue line). A moderate delay and a moderate management response (Figure 5, green line) could result in a permanent shift in ecosystem condition due to the system passing a tipping point prior to any management action.

Figure 5. Modeled ecosystem service (ES) quality as a function of different management response lags (τ). Mild perturbations at t0 (a 5% pulse reduction from the ES equilibrium) could have different effects depending on the management response lag to a change in ES in the system (i.e., temporal scale mismatch). This example demonstrates the time evolution of ES quality under 3 different lags. For short response lag (τ = 1, blue line), the system recovers quickly (no danger). For moderate response lag (τ = 5, green line), the system gets into a divergent trajectory but hits the basin of attraction of the alternative low-quality ES and gets trapped there (regime shift). For long response lag (τ = 10, red line), the ES diverges initially to a very low quality, then oscillates but is not trapped in the alternate state. DOI: https://doi.org/10.1525/elementa.2021.00075.f5

Slow changes in pollutant loads, resulting in slow changes in environmental quality, challenge environmental management (Hughes et al., 2013). Over long time scales, environmental management often fails to respond to slow environmental changes even if the absolute change has been large. This is sometimes known as the "shifting baselines" syndrome (Soga and Gaston, 2018). Ecological time lags have hindered effective management in the Baltic Sea (see Box 1), for example, because the legacy of nutrient inputs over decadal timescales has saturated the marine ecosystem and "slow to recover" ecological processes that facilitate removal mediated by key species and regional biodiversity are the most affected. The management interventions modeled here correspond to the reactive solutions that are often used in practice. In the model, once the intervention ceases, then pollutant inflows return to previous levels.
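For readers who want to experiment with the qualitative behavior described above, the sketch below gives a minimal numerical illustration of a lagged upstream–downstream feedback of this kind. It is not the Appendix 1 model: the functional forms, parameter values, and the function name simulate_es are illustrative assumptions, and this stripped-down version omits the tipping-point nonlinearity of the published model, so it only shows how the management response lag (τ) and effectiveness (A0) shape the trajectory of the downstream ecosystem service.

```python
import numpy as np

# Minimal, illustrative sketch of a lagged upstream-downstream feedback.
# The functional forms and parameter values are assumptions chosen for
# readability; they are NOT the Appendix 1 equations, and the fold
# (tipping-point) nonlinearity of the published model is deliberately omitted.

def simulate_es(tau=10.0, A0=2.5, P=1.0, t_end=100.0, dt=0.01):
    """Fixed-step Euler integration of a toy delayed-feedback system:
         dES/dt = r*(K - ES) - a*P_eff*ES                      (service: recovery vs. degradation)
         P_eff  = P / (1 + A)                                   (pollutant load cut by management action)
         dA/dt  = (A0*max(ES_ref - ES(t - tau), 0) - A) / t_A   (lagged management response)
    """
    r, K, a = 0.5, 1.0, 1.0          # assumed recovery rate, pristine service level, impact rate
    ES_ref, t_A = 0.8, 1.0           # assumed management set point and action timescale
    n_lag = max(1, int(round(tau / dt)))
    steps = int(t_end / dt)

    ES, A = 1.0, 0.0                 # start at the pristine service level with no action in place
    history = [ES] * n_lag           # circular buffer approximating ES(t - tau)
    trace = np.empty(steps)
    for i in range(steps):
        ES_lagged = history[i % n_lag]
        P_eff = P / (1.0 + A)
        dES = r * (K - ES) - a * P_eff * ES
        dA = (A0 * max(ES_ref - ES_lagged, 0.0) - A) / t_A
        ES = max(ES + dt * dES, 0.0)
        A = max(A + dt * dA, 0.0)
        history[i % n_lag] = ES      # overwrite the oldest entry with the newest value
        trace[i] = ES
    return trace

# Comparing simulate_es(tau=1.0) with simulate_es(tau=10.0) shows how a longer
# management lag lets the service sag further before actions take hold, and a
# weaker effectiveness (smaller A0) leaves it settled at a lower level.
```

The delay is handled with a circular history buffer and fixed-step Euler integration, which is crude but adequate for qualitative exploration of the lag effect.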
Instead, future-response-focused management that aims at transformative changes, such as fundamental changes in agricultural systems or landscapes (e.g., riparian planting and buffer zones), should be a goal for long-term sustainability. We used this simple model to illustrate the importance of cross-domain connections, but it is important to note that it studies only a single stressor, whereas much of the complexity of environmental management stems from the presence of multiple stressors (Côté et al., 2016; Thrush et al., 2021; see Boxes 1–3). While an ecosystem-based approach to management has been promoted by the research community for almost 20 years (Christensen et al., 1996), implementation has been slow and patchy. We argue that a focus on the social–ecological properties of cross-domain connections demands new management and research priorities, and this will assist effective implementation. These priorities include the need to recognize, in governance and in actions, the dynamic implications of slow decision making across linked domains and to enhance upstream social–ecological feedbacks, thereby reducing management response times across all ecological domains. Priorities include the following: P1. Research and actions aimed at redesigning governance structures away from path-dependent processes that tie management (in)actions to a restricted range of decision options (Kelly et al., 2018). For example, setting stressor limits that encourage managing up to the limit but do not open the possibility of addressing cumulative effects (Box 3). The social–ecological research question that follows is: What sort of governance models (e.g., polycentric; Biggs et al., 2012) might facilitate social/societal management that is more spatially and temporally targeted at appropriate scales (informed by SE-P1, 4, and 5; Figure 2)? Comanagement structures that combine place-based management by indigenous peoples and local communities with regional or national government policy may enable more rapid feedbacks to management at multiple scales (Herse et al., 2020), but only if power is shared in a way that allows rapid decision making and enactment of a response at the relevant scale. In practice, enabling rapid management of feedbacks requires careful consideration of comanagement roles in relation to the ecosystem processes; it can entail that local communities take engagement roles (decision making, implementation processes) and other stakeholders hold more participatory roles (e.g., designing and reviewing generic management goals, information exchange, advisory roles). For example, Box 1 details an internationally recognized model of success in environmental management; Box 2 details that successes in wetland management are typified by holistic views centered on ecosystem-level processes; Box 3 discusses how a non-path-dependent management regime might look. P2. Management approaches that prioritize monitoring, maintaining, and restoring slow processes—that is, the ecologically important species that are slow to recover (e.g., habitat formers and ecosystem engineers that underpin ecological function and resilience; Biggs et al., 2012; Kelly et al., 2015; Kortetmäki et al., 2021). This priority is critical for preventing tipping points and ensuring that problems other than the "crisis of the day" are recognized early so that they never become "tomorrow's crisis" (informed by SE-P3–5 in Figure 2).
Such management approaches require institutional patience and sufficient resources to measure slow or spatially large ecosystem processes. Box 1 describes how management of the Baltic Sea is moving toward this; Box 2 emphasizes the need for management and monitoring to be responsive to temporal disjunctions in cause and effect; Box 3 emphasizes restoration as a key focus for maintaining ecological resilience. P3. Identify the social–ecological priorities across the ecosystem domains so that multiple visions can be reconciled to help facilitate collaboration across management and governance levels. Identifying where priorities are incompatible is an important step in managing expectations and trade-offs (informed by SE-P2 in Figure 2). For example, see Box 1 for a discussion on how the countries and agencies around the Baltic Sea have reconciled visions for the future status of the Baltic Sea. P4. Foster collaboration, at a management and governance level, between actors and agencies who have priorities in different domains, to better recognize drivers of change earlier (informed by SE-P1–3 in Figure 2; Singh et al., 2021). Such collaboration could allow the effect of changes to “upstream” domains on “downstream” domains to be assessed before impacts occur in the downstream domain (thereby utilizing SE-P1 in Figure 2 to eliminate the response lag in the downstream domain). For example, Boxes 1 and 2 highlight the perils of not recognizing drivers of change early enough. The collaboration could be motivated by the identification of costly outcomes of not managing cross-ecosystem connections, collective learning, or sharing of nonfinancial and financial resources, and it requires maintenance of respect and trust, shared benefits (especially as some partners are “upstream”), and commitment as the progress may be slow and frustrating. Sharing of representation and power must be agreed upon, especially if some partners may have fewer resources than others. P5. Ensure that new governance structures recognize and are adaptable to the differential properties of the ecosystem domains in terms of stressor residence times, limits setting, and cumulative effects, so that all ecosystem domains can be managed with equal priority (informed by SE-P1, 3–5 in Figure 2). For example, see full discussion in Box 3 about the perils of stressor limit setting for managing the marine environment. P6. Promote research and collaborative learning across ecosystem domains to better enable science providers to understand and provide evidence for where connections lie and the scale at which they operate, such that scales of management are relevant to scales that are ecologically meaningful (e.g., Kelly et al., 2015; Herse et al., 2020; informed by SE-P5 in Figure 2). In some cases, the first step may be to develop theories, models, and field methods for cross-ecosystem research and education, as well as foster collaboration across ecosystem domain-specific research fields. Boxes 2 and 3 highlight the need for management scales to align with the ecology of the ecosystem and connections. P7. Build trust in science and scholarship through developing governance structures that foster transparency and recognize equity across different knowledge systems and ecosystem domains (informed by SE-P2 in Figure 2). This trust must be underpinned by sufficient and independent science funding that is solutions focused. 
Box 1 highlights successes in the management of the Baltic Sea that have stemmed from a long history of trust among scientists and policy makers. Current global trends in environmental degradation are sobering and necessitate a significant shift in our efforts to curb the damage. We argue that these 7 priorities can help to navigate research and management toward the situation depicted in Figure 6, where cross-domain connections and feedbacks are explicitly addressed in environmental management. This situation would lead to ecosystems that are more resilient to stress (blue and red lines in Figure 4) and management decisions that are effective at stabilizing and preserving ecosystem functions and services (blue line in Figure 5). Importantly, leveraging cross-domain linkages as early warning signals of future downstream change may alleviate temporal scale mismatches (lags) in management responses to change. The social subsystem may provide an important nexus for managing multidomain social–ecological systems, though this will require researchers and policy makers to step outside of traditional domain boundaries.

Figure 6. A diagrammatic representation of a situation where the upstream feedback is enhanced. The upstream management feedback is enhanced by explicitly accounting for cross-ecosystem domain connectivity and slow processes in management structures. The diagram includes a summary of the management and research priorities (P1–7) that are highlighted by a focus on cross-domain connections and the social–ecological properties (SE-P1–5) of these connections (summarized in Figure 2). DOI: https://doi.org/10.1525/elementa.2021.00075.f6

Aotearoa New Zealand's research system has contributed to addressing these priorities through its National Science Challenges. These are large, collaborative, transdisciplinary projects involving extensive codevelopment. However, they were domain constrained by the funding agency, and this article is a product of a workshop to explore commonalities and connections across 3 of these challenges ("Biological Heritage" [Terrestrial], "Our Land and Water" [Freshwater], and "Sustainable Seas" [Marine]). We thank Jasmine Low for assistance with the artwork. We also thank the editor and 2 reviewers for their thoughtful suggestions that improved the depth and interest of the manuscript. The workshop was funded by the New Zealand National Science Challenges: Sustainable Seas (Tipping Points Project; CO1X1515; ST) and New Zealand's Biological Heritage (Project 3.1; 1516-44-004; JT), established by the Ministry of Business, Innovation and Enterprise, New Zealand. The writing of this manuscript was also funded by the Sustainable Seas National Science Challenge Project 1.1: Ecological responses to cumulative effects (C01X1901). RG-G was supported by the New Zealand Rutherford Foundation Postdoctoral Fellowship and the Walter and Andrée de Nottbeck Foundation during the writing of this manuscript.
No competing interests to declare. Contributed to the workshop and conception of ideas: All authors. Obtained funding for the workshop: JT, ST. Organized the workshop: JT, ST, JY, JH, CP. Contributed to the model development and results: VD, SL. Wrote the first manuscript draft: RG-G, ST, JT. Revised the first draft and approved final submission: All authors. Andersen , JH , Carstensen , J , Conley , DJ , Dromph , K , Fleming-Lehtinen , V , Gustafsson , BG , Josefson , AB , Norkko , A , Villnäs , A , Murray , C . 2017 . Long-term temporal 622 and spatial trends in eutrophication status of the Baltic Sea . Biological Reviews 92 ( 1 ): 135 149 . Åström , K , Murray , R . 2012 . Feedback systems: An introduction for scientists and engineers . Princeton, NJ : Princeton University Press . Australian and New Zealand Governments . 2018 . Deriving guideline values for water quality. Australian and New Zealand guidelines for fresh and marine water quality . Canberra (ACT), Australia : ANZG and Australian state and territory governments . Available at https://www.waterquality.gov.au/anz-guidelines. Barrett , IC , McIntosh , AR , Febria , CM , Warburton , HJ . 2021 . Negative resistance and resilience: Biotic mechanisms underpin delayed biological recovery in stream restoration . Proceedings of the Royal Society B: Biological Sciences 288 ( 1947 ): 20210354 . Bartels , P , Cucherousset , J , Steger , K , Eklov , P , Tranvik , LJ , Hillebrand , H . 2012 . Reciprocal subsidies between freshwater and terrestrial ecosystems structure consumer resource dynamics . Ecology 93 ( 5 ): 1173 1182 . Bataille , CY , Malinen , SK , Yletyinen , J , Scott , N , Lyver , POB . 2021 . Relational values provide common ground and expose multi-level constraints to cross-cultural wetland management . People and Nature 3 ( 4 ): 941 960 . Biber , E . 2013 . The challenge of collecting and using environmental monitoring data . Ecology and Society 18 ( 4 ): 68 . Biggs , D , Biggs , R , Dakos , V , Scholes , RJ , Schoon , M . 2011 . Are we entering an era of concatenated global crises? Ecology and Society 16 ( 2 ): 27 . Biggs , R , Schluter , M , Biggs , D , Bohensky , EL , BurnSilver , S , Dakos , V , Daw , TM , Leitch , AM , Meek , C , Quinlan , A , Raudsepp-Hearne , C , Robards , MD , Schoon , ML , Evans , LS , Kotschy , K , Schultz , L , West , PC . 2012 . Toward principles for enhancing the resilience of ecosystem services . Annual Review of Environment and Resources 37 ( 1 ): 421 448 . Blenckner , T , Osterblom , H , Larsson , P , Andersson , A , Elmgren , R . 2015 . Baltic Sea ecosystem-based management under climate change: Synthesis and future challenges . AMBIO 44 ( 3 ): 507 515 . Bowman , DMJS . Perry , GLW . Marston , JB . 2015 . Feedbacks and landscape-level vegetation dynamics . Trends in Ecology & Evolution 30 ( 5 ): 255 260 . Brashares , JS , Abrahms , B , Fiorella , KJ , Golden , CD , Hojnowski , CE , Marsh , RA , McCauley , DJ , Nuñez , TA , Seto , K , Withey , L . 2014 . Conservation policy: Wildlife decline and social conflict . Science 345 ( 6195 ): 376 - 8 . Brunson , MW . 1998 . Social dimensions of boundaries: Balancing cooperations and self-interest , in Knight RL , Landres PB eds., Stewardship across boundaries . Washington, DC : Island Press : 65 86 . Burdon , FJ , McIntosh , AR , Harding , JS . 2013 . Habitat loss drives threshold response of benthic invertebrate communities to deposited sediment in agricultural streams . Ecological Applications 23 ( 5 ): 1036 1047 . 
Carpenter , SR , Booth , EG , Gillon , S , Kucharik , CJ , Loheide , S , Mase , AS , Motew , MM , Qiu , J , Rissman , AR , Seifert , J , Soylu , ME , Turner , M , Wardropper , C . 2015 . Plausible futures of a social-ecological system: Yahara watershed, Wisconsin, USA . Ecology and Society 20 ( 2 ): 10 . Carstensen , J , Conley , DJ , Bonsdorff , E , Gustafsson , BG , Hietanen , S , Janas , U , Jilbert , T , Maximov , A , Norkko , A , Norkko , J , Reed , DC , Slomp , CP , Timmermann , K , Voss , M . 2014 . Hypoxia in the Baltic Sea: Biogeochemical cycles. Benthic fauna, and management . AMBIO 43 ( 1 ): 26 36 . Cash , DW , Adger WN , Berkes F , Garden P , Lebel L , Olsson , P , Pritchard , L , Young , O . 2006 . Scale and cross-scale dynamics: Governance and information in a multilevel world . Ecology and Society 11 ( 2 ): 8 . Christensen , NL , Bartuska AM , Brown JH , Carpenter S , D’Antonio , C , Francis , R , Franklin , JF , Macmahon , JA , Noss , R , Parsons , DJ , Peterson , CH , Turner , MG , Woodmansee , RG . 1996 . The report of the Ecological Society of America Committee on the scientific basis for ecosystem management . Ecological Applications 6 ( 3 ): 665 691 . Clapcott (Ngāti Porou) , J , Ataria (Rongomaiwahine; Ngāti Kahungunu; Ngati Raukawa) , J , Hepburn , C , Hikuroa (Ngāti Maniapoto; Tainui; Te Arawa) , D , Jackson (Ngāti Whātua; Ngāti Kahu o Whangaroa; Ngāpuhi; Ngāti Wai) , A-M , Kirikiri (Te Whānau a Āpanui) , R , Williams (Ngāti Whakaue , Ngāti Pikiao , Te Whanau a Maruhaeremuri) , E . 2018 . Mātauranga Māori: Shaping marine and freshwater futures . New Zealand Journal of Marine and Freshwater Research 52 ( 4 ): 457 466 . Clark , WC . 1985 . Scales of climate impacts . Climatic Change 7 ( 1 ): 5 27 . Côté , IM , Darling , ES , Brown , CJ . 2016 . Interactions among ecosystem stressors and their importance in conservation . Proceedings of the Royal Society B: Biological Sciences 283 ( 1824 ): 20152592 . Crain , CM , Kroeker , K , Halpern , BS . 2008 . Interactive and cumulative effects of multiple human stressors in marine systems . Ecology Letters 11 ( 12 ): 1304 1315 . Cryer , M , Mace , PM , Sullivan , KJ . 2016 . New Zealand’s ecosystem approach to fisheries management . Fisheries Oceanography 25 ( S1 ): 57 70 . Cumming , GS , Cumming , DHM . Redman , CL . 2006 . Scale mismatches in social-ecological systems: Causes, consequences, and solutions . Ecology and Society 11 ( 1 ): 14 . Darling , ES , Côté , IM . 2008 . Quantifying the evidence for ecological synergies . Ecology Letters 11 ( 12 ): 1278 1286 . DeFries , R , Nagendra , H . 2017 . Ecosystem management as a wicked problem . Science 356 ( 6335 ): 265 270 . Eslami , S , Hoekstra , P , Nguyen Trung , N , Ahmed Kantoush , S , Van Binh , D , Dung , DD , Quang , TT , van der Vegt , M . 2019 . Tidal amplification and salt intrusion in the Mekong Delta driven by anthropogenic sediment starvation . Scientific Reports 9 ( 1 ): 18746 . Euliss , NH , Smith , LM , Wilcox , DA , Browne , BA . 2008 . Linking ecosystem processes with wetland management goals: Charting a course for a sustainable future . Wetlands 28 ( 3 ): 553 562 . Filbee-Dexter , K , Symons , CC , Jones , K , Haig , HA , Pittman , J , Alexander , SM , Burke , MJ . 2018 . Quantifying ecological and social drivers of ecological surprise . Journal of Applied Ecology 55 ( 5 ): 2135 2146 . Folke , C . 2006 . Resilience: The emergence of a perspective for social–ecological systems analyses . Global Environmental Change 16 ( 3 ): 253 267 . 
Folke , C , Pritchard , L , Berkes , F , Colding , J , Svedin , U . 2007 . The problem of fit between ecosystems and institutions: Ten years later . Ecology and Society 12 ( 1 ): 30 . Frost , CM , Peralta , G , Rand , TA , Didham , RK , Varsani , A , Tylianakis , JM . 2016 . Apparent competition drives community-wide parasitism rates and changes in host abundance across ecosystem boundaries . Nature Communications 7 ( 1 ): 12644 . Fukami , T , Wardle , DA , Bellingham , PJ , Mulder , CP , Towns , DR , Yeates , GW , Bonner , KI , Durrett , MS , Grant-Hoffman , MN , Williamson , WM . 2006 . Above- and below-ground impacts of introduced predators in seabird-dominated island ecosystems . Ecology Letters 9 ( 12 ): 1299 307 . Gounand , I , Little , CJ , Harvey , E , Altermatt , F . 2018 . Cross-ecosystem carbon flows connecting ecosystems worldwide . Nature Communications 9 ( 1 ): 4825 . Granek , EF , Polasky , S , Kappel , CV , Reed , DJ , Stoms , DM , Koch , EW , Kennedy , CJ , Cramer , LA , Hacker , SD , Barbier , EB , Aswani , S , Ruckelshaus , M , Perillo , GM , Silliman , BR , Muthiga , N , Bael , D , Wolanski , E . 2010 . Ecosystem services as a common language for coastal ecosystem-based management . Conservation Biology 24 ( 1 ): 207 216 . Grennfelt , P , Engleryd , A , Forsius , M , Hov , Ø , Rodhe , H , Cowling , E . 2020 . Acid rain and air pollution: 50 years of progress in environmental science and policy . AMBIO 49 ( 4 ): 849 864 . Gunderson , LH , Holling , CS . 2002 . Panarchy: Understanding transformations in human and natural systems . Washington, DC : Island Press . Haarstad , H . 2014 . Climate change, environmental governance and the scale problem . Geography Compass 8 ( 2 ): 87 97 . Heinze , C , Blenckner , T , Martins , H , Rusiecka , D , Doscher , R , Gehlen , M , Gruber , N , Holland , EA , Hov , Ø , Joos , F , Matthews , JBR . Rødven , R , Wilson , SJ . 2021 . The quiet crossing of ocean tipping points . PNAS 118 ( 9 ): e2008478118 . HELCOM . 2010 . Hazardous substances in the Baltic Sea: An integrated thematic assessment of hazardous substances in the Baltic Sea . Baltic Sea Environment Proceedings No. 120B . HELCOM . 2018 . State of the Baltic Sea: Second HELCOM holistic assessment 2011-2016 . Baltic Sea Environment Proceedings 155 . HELCOM . 2021 . Baltic Sea Action Plan 2021 update . HELCOM : 31 . Herse , MR , Lyver , POB . Scott , N , McIntosh , AR , Coats , SC , Gormley , AM , Tylianakis , JM . 2020 . Engaging indigenous peoples and local communities in environmental management could alleviate scale mismatches in social–ecological systems . BioScience 70 ( 8 ): 699 707 . Hicks , CC , Crowder , LB , Graham , NA , Kittinger , JN , Cornu , EL . 2016 . Social drivers forewarn of marine regime shifts . Frontiers in Ecology and the Environment 14 ( 5 ): 252 260 . Howlett , M . 2014 . Why are policy innovations rare and so often negative? Blame avoidance and problem denial in climate change policy-making . Global Environmental Change 29 : 395 403 . Hughes , TP , Linares , C , Dakos , V , van de Leemput , IA , van Nes , EH . 2013 . Living dangerously on borrowed time during slow, unrecognized regime shifts . Trends in Ecology & Evolution 28 ( 3 ): 149 155 . Kanwischer , M , Asker , N , Wernersson , A.S , Wirth , MA , Fisch , K , Dahlgren , E , Osterholz , H , Habedank , F , Naumann , M , Mannio , J , Schulz-Bull , DE . 2021 . 
Substances of emerging concern in Baltic Sea water: Review on methodological advances for the environmental assessment and proposal for future monitoring . AMBIO : 51 ( 6 ): 1588 1608 . DOI: http://dx.doi.org/10.1007/s13280-021-01627-6. Kelly , C , Ellis , G , Flannery , W . 2018 . Conceptualising change in marine governance: Learning from transition management . Marine Policy 95 : 24 35 . Kelly , RP , Erickson , AL , Mease , LA , Battista , W , Kittinger , JN , Fujita , R . 2015 . Embracing thresholds for better environmental management . Philosophical Transactions of the Royal Society B: Biological Sciences 370 ( 1659 ): 20130276 . Knight , TM , McCoy , MW , Chase , JM , McCoy , KA , Holt , RD . 2005 . Trophic cascades across ecosystems . Nature 437 ( 7060 ): 880 883 . Kortetmäki , T , Puurtinen , M , Salo , M , Aro , R , Baumeister , S , Duflot , R , Elo , M , Halme , P , Husu , H-M , Huttunen , S , Hyvönen , K , Karkulehto , S , Kataja-aho , S , Keskinen , KE , Kulmunki , I , Mäkinen , T , Näyhä , A , Okkolin , M-A , Perälä , T , Purhonen , J , Raatikainen , KJ , Raippalinna , LM , Salonen , K , Savolainen , K , Kotiaho , JS . 2021 . Planetary well-being . Humanities and Social Sciences Communications 8 ( 258 ): 1 8 . DOI: https://doi.org/10.1057/s41599-021-00899-3. Lade , SJ , Niiranen , S , Hentati-Sundberg , J , Blenckner , T , Boonstra , WJ , Orach , K , Quaas , MF , Österblom , H , Schlüter , M . 2015 . An empirical model of the Baltic Sea reveals the importance of social dynamics for ecological regime shifts . Proceedings of the National Academy of Sciences 112 ( 35 ): 11120 11125 . Lade , SJ , Tavoni , A , Levin , SA , Schlüter , M . 2013 . Regime shifts in a social-ecological system . Theoretical Ecology 6 ( 3 ): 359 372 . Laurance , WF , Dell , B , Turton , SM , Lawes , MJ , Hutley , LB , McCallum , HI , Dale , PE , Bird , MI , Hardy , GESJ . Prideaux , G , Gawne , B , McMahon , CR , Yu , RMK . Hero , J-M , Schwarzkopf , L , Krockenberger , AK , Douglas , MM , Silvester , E , Mahony , M , Vella , K , Saikia , U , Wahren , C-H , Xu , Z , Smith , B , Cocklin , C . 2011 . The 10 Australian ecosystems most vulnerable to tipping points . Biological Conservation 144 ( 5 ): 1472 1480 . Lenton , TM . 2020 . Tipping positive change . Philosophical Transactions of the Royal Society B: Biological Sciences 375 ( 1794 ): 20190123 . Levin , SA . 1992 . The problem of pattern and scale in ecology . Ecology 73 ( 6 ): 1943 1967 . Lindenmayer , DB , Likens , GE , Krebs , CJ , Hobbs , RJ . 2010 . Improved probability of detection of ecological “surprises .” PNAS 107 ( 51 ): 21957 21962 . Longo , SB . 2012 . Mediterranean rift: Socio-ecological transformations in the Sicilian bluefin tuna fishery . Critical Sociology 38 ( 3 ): 417 436 . McGlone , MS . 2009 . Postglacial history of New Zealand wetlands and implications for their conservation . New Zealand Journal of Ecology 33 ( 1 ): 1 23 . McGregor , D . 2004 . Coming full circle: Indigenous knowledge, environment, and our future . The American Indian Quarterly 28 ( 3&4 ): 385 410 . Myers , SC , Clarkson , BR , Reeves , PN , Clarkson , BD . 2013 . Wetland management in New Zealand: Are current approaches and policies sustaining wetland ecosystems in agricultural landscapes? Ecological Engineering 56 : 107 120 . Nystrom , M , Jouffray , JB , Norstrom , AV , Crona , B , Sogaard Jorgensen , P , Carpenter , SR , Bodin , Ö , Galaz , V , Folke , C . 2019 . Anatomy and resilience of the global production ecosystem . Nature 575 ( 7781 ): 98 108 . 
Organisation for Economic Co-operation and Development . 2017 . OECD Environmental Performance Reviews: New Zealand 2017 . Paris, France : OECD Publishing . DOI: https://doi.org/10.1787/9789264268203-en. Osterblom , H , Crona , BI , Folke , C , Nystrom , M , Troell , M . 2017 . Marine ecosystem science on an intertwined planet . Ecosystems 20 ( 1 ): 54 61 . Paasche , Ø , Österblom , H , Neuenfeldt , S , Bonsdorff , E , Brander , K , Conley , DJ , Durant , JM , Eikeset , AM , Goksøyr , A , Jónsson , S , Kjesbu , OS , Kuparinen , A , Stenseth , NC . 2015 . Connecting the Seas of Norden . Nature Climate Change 5 ( 2 ): 89 92 . Palumbi , SR . 2003 . Ecological subsidies alter the structure of marine communities . PNAS 100 ( 21 ): 11927 - 11928 . Peacock , BC , Hikuroa , D , Morgan , TKKB . 2012 . Watershed-scale prioritization of habitat restoration sites for non-point source pollution management . Ecological Engineering 42 : 174 182 . Perry , GLW . Richardson , SJ , Harré , N , Hodges , D , Lyver , POB . Maseyk , FJF . Taylor , R , Todd , JH , Tylianakis , JM , Yletyinen , J , Brower , A . 2021 . Evaluating the role of social norms in fostering pro-environmental behaviors . Frontiers in Environmental Science 9 : 620125 . Pisor , AC , Basurto , X , Douglass , KG , Mach , KJ , Ready , E , Tylianakis , JM , Hazel , A , Kline , MA , Kramer , KL , Lansing , JS , Moritz , M , Smaldino , PE , Thornton , TF , Jones , JH . 2022 . Effective climate change adaptation means supporting community autonomy . Nature Climate Change 12 ( 3 ): 213 215 . Polis , GA , Hurd , SD . 1996 . Linking marine and terrestrial food webs: Allochthonous input from the ocean supports high secondary productivity on small islands and coastal land communities . The American Naturalist 147 ( 3 ): 396 423 . Polis , GA , Strong , DR . 1996 . Food web complexity and community dynamics . The American Naturalist 147 ( 5 ): 813 846 . Pörtner , HO , Scholes , RJ , Agard , J , Archer , E , Arneth , A , Bai , X , Barnes , D , Burrows , M , Chan , L , Cheung , WL , Diamond , S , Donatti , C , Duarte , C , Eisenhauer , N , Foden , W , Gasalla , MA , Handa , C , Hickler , T , Hoegh-Guldberg , O , Ichii , K , Jacob , U , Insarov , G , Kiessling , W , Leadley , P , Leemans , R , Levin , L , Lim , M , Maharaj , S , Managi , S , Marquet , PA , McElwee , P , Midgley , G , Oberdorff , T , Obura , D , Osman , E , Pandit , R , Pascual , U , Pires , APF . Popp , A , ReyesGarcía , V , Sankaran , M , Settele , J , Shin , YJ , Sintayehu , DW , Smith , P , Steiner , N , Strassburg , B , Sukumar , R , Trisos , C , Val , AL , Wu , J , Aldrian , E , Parmesan , C , Pichs-Madruga , R , Roberts , DC , Rogers , AD , Díaz , S , Fischer , M , Hashimoto , S , Lavorel , S , Wu , N , Ngo , HT . 2021 . IPBES-IPCC co-sponsored workshop report on biodiversity and climate change . Bonn, Germany : IPBES . Pyyhtinen , O . 2017 . Matters of scale: Sociology in and for a complex world . Canadian Review of Sociology 54 ( 3 ): 297 308 . Ramesh , R , Chen , Z , Cummins , V , Day , J , D’Elia , C , Dennison , B , Forbes , DL , Glaeser , B , Glaser , M , Glavovic , B , Kremer , H , Lange , M , Larsen , NL , Tissier , ML , Newton , A , Pelling , M , Ramachandran , P , Wolanski , E . 2015 . Land–ocean interactions in the coastal zone: past, present & future . Anthropocene 12 : 85 98 . Reid , DJ , Chiaroni , LD , Hewitt , JE , Lohrer , DM , Matthaei , CD , Phillips , NR , Scarsbrook , M , Smith , BJ , Thrush , SF , Townsend , CR , van Houte-Howes , KSS , Wright-Stow , AE . 2011 . 
Sedimentation effects on the benthos of streams and estuaries: a cross-ecosystem comparison . Marine and Freshwater Research 62 ( 10 ): 1201 - 1213 . Reusch , TBH . Dierking , J , Andersson , HC , Bonsdorff , E , Carstensen , J , Casini , M , Czajkowski , M , Hasler , B , Hinsby , K , Hyytiäinen , K , Johannesson , K , Jomaa , S , Jormalainen , V , Kuosa , H , Kurland , S , Laikre , L , MacKenzie , BR , Margonski , P , Melzner , F , Oesterwind , D , Ojaveer , H , Refsgaard , JC , Sandström , A , Schwarz , G , Tonderski , K , Tonderski , K , Zandersen , M . 2018 . The Baltic Sea as a time machine for the future coastal ocean . Science Advances 4 ( 5 ): eaar8195 . Rocha , J , Yletyinen , J , Biggs , R , Blenckner , T , Peterson , G . 2015 . Marine regime shifts: Drivers and impacts on ecosystems services . Philosophical Transactions of the Royal Society B: Biological Sciences 370 ( 1659 ): 20130273 . Rocha , JC , Peterson , G , Bodin , Ö , Levin , S . 2018 . Cascading regime shifts within and across scales . Science 362 ( 6421 ): 1379 1383 . Rockstrom , J , Steffen , W , Noone , K , Persson , A , Chapin , FS , III , Lambin , EF , Lenton , TM , Scheffer , M , Folke , F , Schellnhuber , HJ , Nykvist , B , de Wit , CA , Hughes , T , van der Leeuw , S , Rodhe , H , Sörlin , S , Snyder , PK , Costanza , R , Svedin , U , Falkenmark , M , Karlberg , L , Corell , RW , Fabry , VJ , Hansen , J , Walker , B , Liverman , D , Richardson , K , Crutzen , P , Foley , JA . 2009 . A safe operating space for humanity . Nature 461 ( 7263 ): 472 475 . Rouse , HL , Norton , N . 2016 . Challenges for freshwater science in policy development: reflections from the science–policy interface in New Zealand . New Zealand Journal of Marine and Freshwater Research 51 ( 1 ): 7 20 . Ruckelshaus , M , Klinger , T , Knowlton , N , Demaster , DR . 2008 . Marine ecosystem-based management in practice: Scientific, and governance challenges . BioScience 58 ( 1 ): 53 63 . Sala , OE , Chapin , FS , III , Armesto , JJ , Berlow , E , Bloomfield , J , Dirzo , R , Huber-Sanwald , E , Huenneke , LF , Jackson , RB , Kinzig , A , Leemans , R , Lodge , DM , Mooney , HA , Oesterheld , M , Poff , NL , Sykes , MT , Walker , BH , Walker , M , Wall , DH . 2000 . Global biodiversity scenarios for the year 2100 . Science 287 ( 5459 ): 1770 1774 . Sanchez-Pinero , F , Polis , GA . 2000 . Bottom-up dynamics of allochthonous input: Direct and indirect effects of seabirds on islands . Ecology 81 ( 11 ): 3117 3132 . Scheffer , M , Carpenter , S , Foley , JA , Folke , C , Walker , B . 2001 . Catastrophic shifts in ecosystems . Nature 413 ( 6856 ): 591 6 . Schiel , DR , Howard-Williams , C . 2016 . Controlling inputs from the land to sea: Limit-setting, cumulative impacts and ki uta ki tai . Marine and Freshwater Research 67 ( 1 ): 57 64 . Sea , MA , Thrush , SF , Hillman , JR . 2021 . Environmental predictors of sediment denitrification rates within restored green-lipped mussel Perna canaliculus beds . Marine Ecology Progress Series 667 : 1 13 . Selkoe , KA , Blenckner , T , Caldwell , MR , Crowder , LB , Erickson , AL , Timothy Essington , E , Estes , AJ , Fujita , MR , Halpern , SB , Hunsicker ME , Kappel CV , Kelly RP , Kittinger JN , Levin , PS , Lynham , JM , Mach , ME , Martone , RG , Mease , LA , Salomon , AK , Samhouri , JF , Scarborough , C , Stier , AC , White , C , Zedler , J . 2017 . Principles for managing marine ecosystems prone to tipping points . Ecosystem Health and Sustainability 1 ( 5 ): 1 18 . 
Sherman , DJ , Barron , KM , Ellis , JT . 2002 . Retention of beach sands by dams and debris basins in Southern California . Journal of Coastal Research SI36 : 662 674 . Singh , GG , Cottrell , RS , Eddy , TD , Cisneros-Montemayor , AM . 2021 . Governing the land-sea interface to achieve sustainable coastal development . Frontiers in Marine Science 8 ( 709947 ): 1 11 . Soga , M , Gaston , KJ . 2018 . Shifting baseline syndrome: Causes, consequences, and implications . Frontiers in Ecology and Evolution 16 ( 4 ): 222 230 . Stenseth , NC , Payne , MR , Bonsdorff , E , Dankel , DJ , Durant , JM , Anderson , LG , Armstrong , CW , Blenckner , T , Brakstad , A , Dupont , S , Eikeset , AM , Goksøyr , A , Jonsson , S , Kuparinen , A , Vage , K , Osterblom , H , Paasche , Q . 2020 . Attuning to a changing ocean . PNAS 117 ( 34 ): 20363 20371 . Threlfall , CG , Marzinelli , EM , Ossola , A , Bugnot , AB , Bishop , MJ , Lowe , EC , Imberger , SJ , Myers , S , Steinberg , PD , Dafforn , KA . 2021 . Toward cross-realm management of coastal urban ecosystems . Frontiers in Ecology and the Environment 19 ( 4 ): 225 232 . Thrush , SF , Dayton , PK . 2010 . What can ecology contribute to ecosystem-based management? The Annual Review of Marine Science 2 ( 1 ): 419 41 . Thrush , SF , Hewitt , JE , Gladstone-Gallagher , RV . Savage , C , Lundquist , C , Meara , TO , Vieillard , A , Hillman , JR , Mangan , S , Douglas , EJ , Clark , DE , Lohrer , AM , Pilditch , C . 2021 . Cumulative stressors reduce the self-regulating capacity of coastal ecosystems . Ecological Applications 31 ( 1 ): e02223 . Thrush , SF , Lewis , N , Le Heron , R , Fisher , KT , Lundquist , CJ , Hewitt , J . 2016 . Addressing surprise and uncertain futures in marine science, marine governance, and society . Ecology and Society 21 ( 2 ): 44 . Treves , A , Artelle , KA , Darimont , CT , Lynn , WS , Paquet , P , Francisco , J , Avila , S , Shaw , R , Wood , MC . 2018 . Intergenerational equity can help to prevent climate change and extinction . Nature Ecology and Evolution 2 ( 2 ): 204 207 . United Nations Environment Programme World Conservation Monitoring Centre . 2009 . The Ramsar Convention on Wetlands and its indicators of effectiveness . International Expert Workshop on the 2010 Biodiversity Indicators and Post-2010 Indicator Development . New York, NY : Convention on Biological Diversity (CBD) : 9 . Available at chrome-extension://efaidnbmnnnibpcajpcglclefindmkaj/https://www.cbd.int/doc/meetings/ind/emind-02/official/emind-02-08d-en.pdf. Vitousek , PM . 1997 . Human domination of Earth’s ecosystems . Science 277 ( 5325 ): 494 499 . Williams , PB . 1994 . From reclamation to restoration—Changing perspectives wetland management. Wetland management: Proceedings of the international conference organized by Institution of Civil Engineers and held in London on 2–3 June 1994 . Thomas Telford Publishing : 1 6 . Wipfli , MS , Hudson , JP , Caouette , JP , Chaloner , DT . 2003 . Marine subsidies in freshwater ecosystems: Salmon carcasses increase the growth rates of stream-resident salmonids . Transactions of the American Fisheries Society 132 ( 2 ): 371 381 . Yletyinen , J , Brown , P , Pech , R , Hodges , D , Hulme , PE , Malcolm , TF , Maseyk , FJF . Peltzer , DA , Perry , GLW . Richardson , SJ , Smaill , SJ , Stanley , MC , Todd , HJ , Walsh , PJ , Wright , W , Tylianakis , JM . 2019 . Understanding and managing social–ecological tipping points in primary industries . BioScience 69 ( 5 ): 335 347 . 
Young , OR , Berkhout , F , Gallopin , GC , Janssen , MA , Ostrom , E , Van Der Leeuw , S . 2006 . The globalization of socio-ecological systems: An agenda for scientific research . Global Environmental Change-Human and Policy Dimensions 16 ( 3 ): 304 316 .

### Appendix 1: Model methodology

We assumed that the strength of management action (A) changes with time delay (τ) toward a target strength T(ES), where ES is the environmental quality of the downstream ecosystem/ES. We implemented these assumptions mathematically using a first-order autoregressive model,

$\frac{dA}{dt}=\frac{1}{\tau}\left(T(ES)-A\right).$

We implemented the tipping point of environmental quality using Holling Type III reproduction with linear mortality and a linear effect of pollutant inflow r(A):

$\frac{dES}{dt}=\frac{ES^{p}}{h^{p}+ES^{p}}-a\,r(A)\,ES.$

Here, h is the approximate position of the tipping point, p is the sharpness of the tipping point, a is the strength of the inflow's impact on environmental quality, and we have normalized ES so that its maximum value is 1. It remains to specify the linkages between the downstream environmental system and upstream management actions (A), r(A) and T(ES). For the effect of upstream management, we make the simple assumption that pollutant inflow responds linearly to management action, so that

$r(A)=P-A,$

where P is the pollutant inflow without management action. Any delays in response to management action can be incorporated into the delay parameter τ. Control theory classifies the management response to changes of environmental quality into one of (or a combination of) the following types:

• Proportional control, where management action responds to the difference between the current environmental quality and a target environmental quality;

• Differential control, where management action responds to the rate of change of environmental quality (result of upper limit on timescales [generational]);

• Integral control, where management action responds to both the difference between current and target environmental quality and the time spent away from the target environmental quality.

Control theory discusses advantages and disadvantages of each of these types of control (Åström and Murray, 2012). Here, we assumed proportional control:

$T(ES)=A_{0}(E_{0}-ES),$

where E0 is the target environmental quality, and A0 sets the effectiveness of management action.
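As an illustrative aid (not part of the original analysis), the model above can be integrated numerically in a few lines. The sketch below assumes parameter values within the ranges listed in the table that follows; the Hill coefficient p, the target quality E0, and the initial conditions are arbitrary assumptions chosen only for demonstration.

```python
# Minimal numerical sketch of the Appendix 1 model (not the authors' code).
# A0, a, h, P, and tau follow the parameter table below; p, E0, and the
# initial conditions are illustrative assumptions.
from scipy.integrate import solve_ivp

A0, a, h, P, tau = 2.0, 1.5, 0.5, 1.5, 5.0   # within the ranges listed in the table
p, E0 = 10.0, 0.8                            # assumed values

def rhs(t, y):
    ES, A = y
    T = A0 * (E0 - ES)                        # proportional control target T(ES)
    r = P - A                                 # pollutant inflow after management, r(A)
    dES = ES**p / (h**p + ES**p) - a * r * ES # Holling Type III growth minus pollution loss
    dA = (T - A) / tau                        # action relaxes toward the target with lag tau
    return [dES, dA]

sol = solve_ivp(rhs, (0.0, 200.0), [0.9, 0.0], max_step=0.1)
print("final ES quality:", float(sol.y[0, -1]))
```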
| Symbol | Definition | Value used | Unit |
| --- | --- | --- | --- |
| A | Adaptive management action | | Cost (in dollars, $) |
| A0 | Effectiveness of adaptive management | 1.5–2.5 | Cost per % change of ES ($/%) |
| a | Effect rate of pollution on downstream ES | 1.5 | Per day and cost (1/($ day)) |
| ES | Ecosystem service | | % (of maximum ES) |
| E0 | Target ES | | % (of maximum ES) |
| h | Half saturation ES constant | 0.5 | % (of maximum ES) |
| P | Pollution input from upstream ecosystem | 0.7–2.6 | Cost (in dollars, $) |
| p | Hill coefficient (defines the slope of the change in ES response) | | – |
| r(A) | Reduction in upstream pollution input (P) due to adaptive management action (A) | | Cost (in dollars, $) |
| τ | Management response lag | 0.1–10 | Day |

How to cite this article: Gladstone-Gallagher, RV, Tylianakis, JM, Yletyinen, J, Dakos, V, Douglas, EJ, Greenhalgh, S, Hewitt, JE, Hikuroa, D, Lade, SJ, Le Heron, R, Norkko, A, Perry, GLW, Pilditch, CA, Schiel, D, Siwicka, E, Warburton, H, Thrush, SF. 2022. Social-ecological connections across land, water, and sea demand a reprioritization of environmental management. Elementa: Science of the Anthropocene 10(1). DOI: https://doi.org/10.1525/elementa.2021.00075

Domain Editor-in-Chief: Alastair Iles, University of California Berkeley, Berkeley, CA, USA

Knowledge Domain: Sustainability Transitions

This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License (CC-BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. See http://creativecommons.org/licenses/by/4.0/.
2022-08-10 17:19:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4331202208995819, "perplexity": 6496.22423004147}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571198.57/warc/CC-MAIN-20220810161541-20220810191541-00085.warc.gz"}
https://www.greaterwrong.com/posts/cLtdcxu9E4noRSons/part-1-amplifying-generalist-research-via-forecasting-models
[Part 1] Amplifying generalist research via forecasting – Models of impact and challenges This post covers our models of impact and challenges with our exploration in amplifying generalist research using forecasting. It is accompanied by a second post with a high-level description of those models, and more detailed description of experiment set-up and results. Many of the world’s most pressing problems require intellectual progress to solve [1]. Finding ways to increase the rate of intellectual progress might be a highly promising way of solving those problems. One component of this is generalist research: the ability to judge and synthesise claims across many different fields without detailed specialist knowledge of those fields, in order to for example prioritise potential new cause areas or allocate grant funding. For example, this skill is expected by organisations at the EA Leaders Forum to be one of the highest demanded skills for their organisations over the coming 5 years (2018 survey, 2019 survey). In light of this, we recently tested a method of increasing the scale and quality of generalist research, applied to researching the industrial revolution [2], using Foretold.io (an online prediction platform). In particular, we found that, when faced with claims like: “Pre-Industrial Britain had a legal climate more favorable to industrialization than continental Europe” And “Pre-Industrial Revolution, average French wage was what percent of the British wage?” a small crowd of forecasters recruited from the EA and rationality communities very successfully predicted the judgements of a trusted generalist researcher, with a benefit-cost ratio of around 73% compared to the original researcher. They also outperformed a group of external online crowdworkers. Moreover, we believe this method can be scaled to answer many more questions than a single researcher could, as well as to have application in domains other than research, like grantmaking, hiring and reviewing content. We preliminarily refer to this method as “amplification” given its similarity to ideas from Paul Christiano’s work on Iterated Distillation and Amplification in AI alignment (see e.g. this). This was an exploratory project whose purpose was to build intuition for several possible challenges. It covered several areas that could be well suited for more narrow, traditional scientific studies later on. As such, the sample size was small and no single result was highly robust. However, it did lead to several medium-sized takeaways that we think should be useful for informing future research directions and practical applications. This post begins with a brief overview of our results. We then share some models of why the current project might be impactful and exciting, followed by some challenges this approach faces. Overview of the set-up and results (This section gives a very cursory overview of the set-up and results. A detailed report can be found in this post.) The basic set-up of the project is shown in the following diagram, and described below. A two-sentence version would be: Forecasters predicted the conclusions that would be reached by Elizabeth van Norstrand, a generalist researcher, before she conducted a study on the accuracy of various historical claims. We randomly sampled a subset of research claims for her to actually evaluate. And since we can set that sampling probability arbitrarily low, this method is not bottlenecked by her time. 
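(To make the mechanism concrete, here is a minimal sketch of the incentive structure just described. It is not the actual Foretold implementation: claims are simplified to binary outcomes, `evaluate_claim` is a hypothetical stand-in for the trusted researcher's evaluation, and the sampling probability is arbitrary.)

```python
import math
import random

def run_amplification_round(predictions, evaluate_claim, sample_prob=0.2):
    """predictions: {claim_id: {forecaster: prob_claim_is_true}} (binary simplification).
    evaluate_claim: the trusted researcher's (costly) evaluation, returning True/False.
    Because forecasters cannot know in advance which claims will be sampled, their best
    strategy is to report accurate probabilities on every claim, not just the evaluated ones."""
    sampled = [claim for claim in predictions if random.random() < sample_prob]
    scores = {}
    for claim in sampled:
        outcome = evaluate_claim(claim)  # research happens only for the sampled subset
        for forecaster, prob in predictions[claim].items():
            score = math.log(prob if outcome else 1.0 - prob)  # log score: higher is better
            scores[forecaster] = scores.get(forecaster, 0.0) + score
    return scores
```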
The below graph shows the evolution of the accuracy of the crowd prediction over time, starting from Elizabeth Van Nostrand’s prior. Predictions were submitted separately by two groups of forecasters: one based on a mailing list with participants interested in participating in forecasting experiments (recruited from effective altruism-adjacent events and other forecasting platforms), and one recruited from Positly, an online platform for crowdworkers. The y-axis shows the accuracy score on a logarithmic scale, and the x-axis shows how far along the experiment is. For example, 14 out of 28 days would correspond to 50%. The thick lines show the average score of the aggregate prediction, across all questions, at each time-point. The shaded areas show the standard error of the scores, so that the graph might be interpreted as a guess of how the two communities would predict a random new question. One of our key takeaways from the experiment is that our simple algorithm for aggregating predictions performed surprisingly well in predicting Elizabeth’s research output—but only for the network-adjacent forecasters. Another way to understand the performance of the aggregate is to note that the aggregate of network-adjacent forecasters had an average log score of −0.5. To get a rough sense of what that means, it’s the score you’d get by being 70% confident in a binary event, and being correct (though note that this binary comparison merely serves to provide intuition, there are technical details making the comparison to a distributional setting a bit tricky). By comparison, the crowdworkers and Elizabeth’s priors had a very poor log score of around −4. This is roughly similar to the score you’d get if you predict an event to be ~5% likely, and it still happens. We also calculated a benefit/​cost-ratio, as follows: Benefit/​cost ratio = % value provided by forecasters relative to the evaluator /​ % cost of forecasters relative to the evaluator We measured “value provided” as the reduction in uncertainty weighted by the importance of the questions on which uncertainty was reduced. Results were as follows. In other words, each unit of resource invested in the network-adjacent forecasters provided 72% as much returns as investing it in Elizabeth directly, and each unit invested in the crowdworkers provided negative returns, as they tended to be less accurate than Elizabeth’s prior. Overall, we tentatively view this as an existence proof of the possibility of amplifying generalist research, and in the future are interested in obtaining more rigorous results and optimising the benefit-cost ratio. Models of impact This section summarises some different perspectives on what the current experiment is trying to accomplish and why that might be exciting. There are several perspectives here given that the experiment was designed to explore multiple relevant ideas, rather than testing a particular, narrow hypothesis. As a result, the current design is not optimising very strongly for any of these possible uses, and it is also plausible that its impact and effectiveness will vary widely between uses. To summarise, the models are as follows. • Mitigating capacity bottlenecks. The effective altruism and rationality communities face rather large bottlenecks in many areas, such as allocating funding, delegating research, vetting talent and reviewing content. The current setup might provide a means of mitigating some of those—a scalable mechanism of outsourcing intellectual labor. 
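(As a rough illustration of the scoring intuition above, assuming scores are reported in log base 2, which matches the numbers quoted: an average log score can be translated into the confidence you would have needed in a binary event to earn it, and the benefit/cost ratio is a simple division. This is a simplification; the experiment scored full distributions rather than binary events.)

```python
def equivalent_confidence(avg_log2_score):
    # Probability you would need to assign to the true outcome of a binary event
    # to earn this average log (base 2) score.
    return 2 ** avg_log2_score

print(equivalent_confidence(-0.5))  # ~0.71, i.e. roughly "70% confident and correct"
print(equivalent_confidence(-4.0))  # ~0.06, i.e. roughly "~5% likely, and it still happens"

def benefit_cost_ratio(value_fraction, cost_fraction):
    # "% value provided by forecasters relative to the evaluator" divided by
    # "% cost of forecasters relative to the evaluator".
    return value_fraction / cost_fraction
```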
• A way for intellectual talent to build and demonstrate their skills. Even if this set-up can’t make new intellectual progress, it might be useful to have a venue where junior researchers can demonstrate their ability to predict the conclusions of senior researchers. This might provide an objective signal of epistemic abilities not dependent on detailed social knowledge. • Exploring new institutions for collaborative intellectual progress. Academia has a vast backlog of promising ideas for institutions to help us think better in groups. Currently we seem bottlenecked by practical implementation and product development. • Getting more data on empirical claims made by the Iterated Amplification AI alignment agenda. These ideas inspired the experiment. (However, our aim was more practical and short-term, rather than looking for theoretical insights useful in the long-term.) • Exploring forecasting with distributions. Little is known about humans doing forecasting with full distributions rather than point estimates (e.g. “79%”), partly because there hasn’t been easy tooling for such experiments. This experiment gave us some cheap data on this question. • Forecasting fuzzy things. A major challenge with forecasting tournaments is the need to concretely specify questions; in order to clearly determine who was right and allocate payouts. The current experiments tries to get the best of both worlds—the incentive properties of forecasting tournaments and the flexibility of generalist research in tackling more nebulous questions. • Shooting for unknown unknowns. In addition to being an “experiment”, this project is also an “exploration”. We have an intuition that there are interesting things to be discovered at the intersection of forecasting, mechanism design, and generalist research. But we don’t yet know what they are. Mitigating capacity bottlenecks The effective altruism and rationality communities face rather large bottlenecks in many areas, such as allocating funding, delegating research, vetting talent and reviewing content. Prediction platforms (for example as used with the current “amplification” set-up) might be a promising tool to tackle some of those problems, for several reasons. In brief, they might act as a scalable way to outsource intellectual labor. First, we’re using quantitative predictions and scoring rules. This allows several things. • We can directly measure how accurate each contribution was, and separately measure how useful they were in benefiting the aggregate. The actual calculations are quite simple and with some engineering effort can scale to allocating credit (in terms of money, points, reputation etc.) to hundreds of users in an incentive-compatible way. • We can aggregate different contributions in an automatic and rigorous way [3]. • We have a shared, precise language for interpreting contributions. Contrast receiving 100 predictions and receiving 20 Google docs. The latter would be prohibitively difficult to read through, does not have a straightforward means of aggregation, and might not even be analysable in an “apples to apples” comparison. However, the big cost we pay to enable these benefits is that we are adding formalism, and constraining people to express their beliefs within the particular formalism/​ontology of probabilities and distributions. We discuss this more in the section on challenges below. Second, we’re using an internet platform. 
This makes it easier for people from different places to collaborate, and to organise and analyse their contributions. Moreover, given the benefits of quantification noted above, we can freely open the tournament to people without substantial credentials, since we’re not constrained in our capacity to evaluate their work. Third, we’re using a mechanism specifically designed to overcome capacity bottlenecks. The key to scalability is that forecasters do not know which claims will be evaluated, and so are incentivised to make their honest, most accurate predictions on all of them. This remains true even as many more claims are added (as long as forecasters expect rewards for participating remain similar). In effect, we’re shifting the bottleneck from access to a few researchers to access to prize money and competent forecasters. It seems highly implausible that all kinds of intellectual work could be cost-effectively outsourced this way. However, if some work could be outsourced and performed at, say 10% of the quality, but at only 1% of the cost, that could still be very worthwhile. For example, in trying to review hundreds of factual claims, the initial forecasting could be used as an initial, wide-sweeping filter, grabbing the low-hanging fruit; but also identifying which questions are more difficult, and will need attention from more senior researchers. Overall, this is a model for how things might work, but it is as of yet highly uncertain whether this technique will actually be effective in tackling bottlenecks of various kinds. We provide some preliminary data from this experiment in the “Cost-effectiveness” section below. A way for intellectual talent to build and demonstrate their skills The following seems broadly true to some of us: • Someone who can predict my beliefs likely has a good model of how I think. (E.g. “I expect you to reject this paper’s validity based on the second experiment, but also think you’d change your mind if you thought they had pre-registered that methodology”.) • Someone who can both predict my beliefs and disagrees with me is someone I should listen to carefully. They seem to both understand my model and still reject it, and this suggests they know something I don’t. • It seems possible for person X to predict a fair number of a more epistemically competent person Y’s beliefs—even before person X is as epistemically competent as Y. And in that case, doing so is evidence that person X is moving in the right direction. If these claims are true, we might use some novel versions of forecasting tournaments as a scalable system to identify and develop epistemic talent. This potential benefit looks quite different from using forecasting tournaments to help us solve novel problems or gain better or cheaper information than we could otherwise. Currently there is no “driver’s license” for rationality or effective altruism. Demonstrating your abilities requires navigating a system of reading and writing certain blog posts, finding connections to more senior people, and going through work trials tailored to particular organisations. This system does not scale very well, and also often requires a social knowledge and ability to “be in the right place at the right time” which does not necessarily strongly correlate with pure epistemic ability. It seems very implausible that open forecasting tournaments could solve the entire problem here. 
But it seems quite plausible that it could offer improvements on the margin, and become a reliable credentialing mechanism for a limited class of non-trivial epistemic abilities. For example, EA student groups with members considering cause prioritisation career paths might organise tournaments where their members forecast the conclusions of OpenPhil write-ups, or maintain and update their own distributions over key variables in GiveWell’s cost-effectiveness models. By running this experiment, writing up the results, and improving the Foretold platform, we hope to provide infrastructure that will allow others interested in this benefit to run their own experiments. Exploring new institutions for collaborative intellectual progress Many of our current most important institutions, like governments and universities, run on mechanisms designed hundreds of years ago, before fields like microeconomics and statistics were developed. They suffer from many predictable and well-understood incentive problems, such as poor replication rates of scientific findings following from a need to optimise for publications; the election of dangerous leaders due to the use of provably suboptimal voting systems; or the failure to adequately fund public goods like high-quality explanations of difficult concepts due to free-rider problem, just to name a few. The academic literature in economics and mechanism design has a vast backlog of designs for new institutions that could solve these and other problems. One key bottleneck now seems to be implementation. For example, ethereum founder Vitalik Buterin has argued that the key skill required is product development: making novel mechanisms with better incentives work in practice (search for “product people” in linked interview). Similarly, Robin Hanson has argued that there is a large, promising literature on more effective institutions, but “what we need most [… is lots of concrete trials.] To get involved in the messy details of an organization, and just try out different variations until [we] see something that actually works” [4], [5]. Part of the spirit of the current experiment is an attempt to do just this, and, in particular, to do so in the domain of research intellectual progress. Getting more data on empirical claims made by the Iterated Amplification AI alignment agenda The key mechanism underlying this experiment, and its use of prediction and randomisation, is based on ideas from the Iterated Amplification approach to AI alignment. Currently groups at Ought, OpenAI and elsewhere are working on testing the empirical assumptions underlying that theory. Compared to these groups, the current experiment had a more practical, short-term aim—to find a “shovel-ready” method of amplifying generalist research, that could be applied to make the EA/​rationality communities more effective already over the coming years. Nonetheless, potential follow-ups from this experiment might provide useful theoretical insight in that direction. Exploring forecasting with distributions Little is known about doing forecasting with full distributions (e.g. “I think this is captured by two normals, with means 5 and 10 and variance 3”) rather than point estimates (e.g. “79%”). Before the launch of Foretold, there wasn’t any software available for easily running such experiments. This was a quick way of getting data on many questions in distributional forecasting: • How good are humans at it? • What are the main usability challenges? • In terms of intuitive scoring rules? 
• In terms of intuitive yet powerful input formats?
• What are best practices? (For example, using beta rather than lognormal distributions when forecasting someone else's prediction, or averaging distributions with a wide uniform to hedge against large losses.)

Forecasting fuzzy things

A major challenge with prediction markets and forecasting tournaments is the need to concretely specify questions, in order to clearly determine who was right and allocate payouts. Often, this means that these mechanisms are limited to answering questions like:

> "What will the highest performance of an algorithmic benchmark x be at time t?"

Even though what we often care about is something more nebulous, like:

> "How close will we be to AGI at time t?"

The upside of this precision is that it enables us to use quantitative methods to estimate performance, combine predictions, and allocate rewards, as described above. The current experiments try to get the best of both worlds: the incentive properties of forecasting tournaments and the flexibility of generalist research in tackling more nebulous questions.

The proposed solution to this problem is simply to have one or many trusted evaluators who decide on the truth of a question, and then predict their judgements as opposed to the underlying question [6]. (Previously some of the authors set up the AI Forecasting Resolution Council to enable such flexible resolution to also be used on AI questions.)

Shooting for unknown unknowns

This is related to the mindset of "prospecting for gold". To a certain extent, we think that we have a potentially reliable inside view, a certain research taste which is worth following and paying attention to, because we are curious what we might find out. A drawback with this is that it enables practices like p-hacking/publication bias if results are reported selectively. To mitigate this, all data from this experiment is publicly available here [7].

Challenges

This section discusses some challenges and limitations of the current exploration, as well as our ideas for solving some of them. In particular, we consider:

• Complexity and unfamiliarity of experiment. The current experiment had many technical moving parts. This makes it challenging to understand for both participants and potential clients who want to use it in their own organisations.
• Trust in evaluations. The extent to which these results are meaningful depends on your trust in Elizabeth Van Nostrand's ability to evaluate questions. We think this is partly an inescapable problem, but also expect clever mechanisms and more transparency to be able to make large improvements.
• Correlations between predictions and evaluations. Elizabeth had access to a filtered version of forecaster comments when she made her evaluations. This introduces a potential source of bias and a "self-fulfilling prophecy" dynamic in the experiments.
• Difficulty of converting mental models into quantitative distributions. It's hard to turn nuanced mental models into numbers. We think a solution is to have a "division of labor", where some people just build models/write comments and others focus on quantifying them. We're working on incentive schemes that work in this context.
• Anti-correlation between importance and "outsourceability". The intellectual questions which are most important to answer might be different from the ones that are easiest to outsource, in a way which leaves very little value to be captured by outsourcing.
• Overhead of question generation.
Creating good forecasting questions is hard and time-consuming, and better tooling is needed to support this.
• Overly competitive scoring rules. Prediction markets and tournaments tend to be zero-sum games, with negative incentives for helping other participants or sharing best practices. To solve this we're designing and testing improved scoring rules which directly incentivise collaboration.

Complexity and unfamiliarity of experiment

The current experiment has many moving parts and a large inferential distance. For example, in order to participate, one would need to understand the mathematical scoring rule, the question input format, the randomisation of resolved questions, and how questions would be resolved as distributions. This makes the set-up challenging to understand for both participants and potential clients who want to use similar amplification set-ups in their own organisations. We don't think these things are inherently complicated, but have much work to do on explaining the set-up and making the app generally accessible.

Trust in evaluations

The extent to which the results are meaningful depends on one's trust in Elizabeth Van Nostrand's ability to evaluate questions. We chose Elizabeth for the experiment as she has a reputation for reliable generalist research (through her blog series on "Epistemic Spot Checks"), and 10+ public blog posts with evaluations of the accuracy of books and papers. However, the challenge is that this trust often relies on a long history of interactions with her material, in a way which might be hard to communicate to third parties.

For future experiments, we are considering several improvements here. First, as hinted at above, we can ask forecasters both about their predictions of Elizabeth as well as their own personal beliefs. We might then expect that those who can both accurately predict Elizabeth and disagree with her know something she does not, and so will be weighted more highly in the evaluation of the true claim. Second, we might have set-ups with multiple evaluators, or more elaborate ways of scoring the evaluators themselves (for example based on their ability to predict what they themselves will say after more research). Third, we might work to have more transparent evaluation processes, for example including systematic rubrics or detailed write-ups of reasoning.

We must be careful here not to "throw out the baby with the bathwater". The purpose of using judges is after all to access subjective evaluations which can't be easily codified in concrete resolution conditions. However, there seems to be room for more transparency on the margin.

Correlation between predictions and evaluations

Elizabeth had access to a filtered version of forecaster comments when she made her evaluations. Hence the selection process on evidence affecting her judgements was not independent from the selection process on evidence affecting the aggregate. This introduces a potential source of bias and a "self-fulfilling prophecy" dynamic in the experiments. For future experiments, we're considering obtaining an objective data-set with clear ground truth, and testing the same set-up without revealing the comments to Elizabeth, to get data on how serious this problem is (or is not).

Difficulty of converting mental models into quantitative distributions
In order to participate in the experiment, a forecaster has to turn their mental models (represented in whichever way the human brain represents models) into quantitative distributions (which is a format quite unlike that native to our brains), as shown in the following diagram: [diagram not reproduced here]

Each step in this chain is quite challenging, requires much practice to master, and can result in a loss of information. Moreover, we are uncertain how the difficulty of this process differs across questions of varying importance. It might be that some of the most important considerations in a domain tend to be confusion-shaped (e.g. "What does it even mean to be aligned under self-improvement when you can't reliably reason about systems smarter than yourself?"), or very open-ended (e.g. "What new ideas could reliably improve the long-term future?" rather than "How much will saving in index funds benefit future philanthropists?"). Hence filtering for questions that are more easily quantified might select against questions that are more important.

Consider some possible solutions. For the domains where quantification seems more promising, it seems at least plausible that it should be possible to have some kind of "division of labor". For future experiments, we're looking to better separate "information contribution" and "numerical contribution", and find ways of rewarding both. Some participants might specialise in research or model-generation, and others in turning that research into distributions.

A challenge here is to appropriately reward users who only submit comments but do not submit predictions. Since one of the core advantages of forecasting tournaments is that they allow us to precisely and quantitatively measure performance, it seems plausible that any solution should try to make use of this fact. (As opposed to, say, using an independent up- and downvoting scheme.) As example mechanisms, one might randomly show a comment to half the users, and reward a comment based on the difference in performance of the aggregate between users who've seen it and users who haven't. Or one might release the comments to forecasters sequentially, and see how much each improves the aggregate. Or one might simply allow users to vote, but weigh the votes of users with a better track-record higher.

Moreover, in future experiments with Elizabeth we'll want to pair her up with a "distribution buddy", whose task is to interview her to figure out in detail what distribution best captures her beliefs, allowing Elizabeth to focus simply on building conceptual models.

Anti-correlation between importance and "outsourceability"

Above we mentioned that the questions easiest to quantify might be anti-correlated with the ones that are most important. It is also plausible that the questions which are easiest to outsource to forecasters are not the same as those which are most important to reduce uncertainty on. Depending on the shape of these distributions, the experiment might not capture a lot of value. (For illustration, consider an overly extreme example: suppose a venture capitalist tries to amplify their startup investments. The crowd always predicts "no investment", and turns out to be right in 99/100 cases: the VC doesn't invest. However, the returns from that one case where the crowd fails and the VC actually would have invested by far dominate the portfolio.)

The act of creating good, forecastable questions is an art in and of itself.
If done by the same person or small team which will eventually forecast the questions, one can rely on much shared context and intuition in interpreting the questions. However, scaling these systems to many participants requires additional work in specifying the questions sufficiently clearly. This overhead might be very costly, especially since we think one of the key factors determining the usefulness of a forecasting question is the question itself: how well does it capture something we care about? From experience, writing these questions is hard. In the future we have much work to do to make this process easier.

A scoring rule that discourages collaboration

Participants were scored based on how much they outperformed the aggregate prediction. This scoring approach is similar to the default in prediction markets and major forecasting tournaments. It has the problem that sharing any information via commenting will harm your score (since it will make the performance of other users, and hence the aggregate, better). What's more, all else remaining the same, doing anything that helps other users will be worse for your score (such as sharing tips and tricks for making better predictions, or pointing out easily fixable mistakes so they can learn from them).

There are several problems with this approach and how it disincentivises collaboration. First, it provides an awkward change in incentives for groups who otherwise have regular friendly interactions (such as a team at a company, a university faculty, or members of the effective altruism community). Second, it causes effort to be wasted as participants must derive the same key insights individually, utilising little division of labor (as any information sharing will just end up hurting their score on the margin). Having some amount of duplication of work and thinking can of course make the system robust against mistakes—but we think the optimal amount is far less than the equilibrium under the current scoring rule.

In spite of these theoretical incentives, it is interesting to note that several participants actually ended up writing detailed comments (though these were basically only aimed at explaining their own reasoning, with no collaboration or back-and-forth between participants observed). This might have been because they knew Elizabeth would see those comments, or for some other reason. Nonetheless, we are working on modifying our scoring rule in a way which directly incentivises participants to collaborate, and actively rewards helping other users improve their models. We hope to release details of formal models and practical experiments in the coming month.

Footnotes

[1] Examples include: AI alignment, global coordination, macrostrategy and cause prioritisation.

[2] We chose the industrial revolution as a theme since it seems like a historical period with many lessons for improving the world. It was a time of radical change in productivity along with many societal transformations, and might hold lessons for future transformations and our ability to influence those.

[3] For example by averaging predictions and then weighing by past track-record and time until resolution, as done in the Good Judgement Project (among other things).
[4] Some examples of nitty-gritty details we noticed while doing this are:

• Payoffs were too small / the scoring scheme was too harsh.
• Copying the aggregate to your distributions and then just editing a little was something natural, so we added support in the syntax for writing =multimodal(AG, your prediction).
• Averaging with a uniform would have improved predictions.
• The marginal value of each additional prediction was low after the beginning.
• Forecasters were mostly motivated by which questions were interesting, followed by what would give them a higher payout, and less by what would be most valuable to the experimenters.

[5] For a somewhat tangential, but potentially interesting, perspective, see Feynman on making experiments to figure out nitty-gritty details in order to enable other experiments to happen (search for "rats" in the link).

[6] A further direction we're considering is to allow forecasters to both predict the judgements of evaluators and the underlying truth. We might then expect that those predictors who both accurately forecast the judgement of the evaluator and disagree in their own judgements could provide valuable clues about the truth.

[7] For the record, before this experiment we ran two similar, smaller experiments (to catch easy mistakes and learn more about the set-up), with about an order of magnitude less total forecasting effort invested. The aggregate from these experiments was quite poor at predicting the evaluations. The data from those experiments can be found here, and more details in Elizabeth's write-ups here and here.

Participate in future experiments or run your own

Foretold.io was built as an open platform to enable more experimentation with prediction-related ideas. We have also made data and analysis calculations from this experiment publicly available. If you'd like to:

• Run your own experiments on other questions
• Do additional analysis on this experimental data
• Use an amplification set-up within your organisation

We'd be happy to consider providing advice, operational support, and funding for forecasters. Just comment here or reach out to this email. If you'd like to participate as a forecaster in future prediction experiments, you can sign up here.

Acknowledgements

Funding for this project was provided by the Berkeley Existential Risk Initiative and the EA Long-term Future Fund. We thank Beth Barnes and Owain Evans for helpful discussion. We are also very thankful to all the participants.

• So the thing I'm wondering here is what makes this "amplification" in more than a trivial sense. Let me think out loud for a bit. Warning: very rambly.

Let's say you're a competent researcher and you want to find out the answers to 100 questions, which you don't have time to investigate yourself. The obvious strategy here is to hire 10 people, get them to investigate 10 questions each, and then pay them based on how valuable you think their research was. Or, perhaps you don't even need to assign them questions—perhaps they can pick their own questions, and you can factor in how neglected each question was as part of the value-of-research calculation. This is the standard, "freeform" approach; it's "amplification" in the same sense that having employees is always amplification.

What does the forecasting approach change?

• It gives one specific mechanism for how you (the boss) evaluate the quality of research (by comparison with your own deep dive), and rules out all the others.
This has the advantage of simplicity and transparency, but has the disadvantage that you can’t directly give rewards for other criteria like “how well is this explained”. You also can’t reward research on topics that you don’t do deep dives on. • This mainly seems valuable if you don’t trust your own ability to evaluate research in an unbiased way. But evaluating research is usually much easier than doing research! In particular, doing research involves evaluating a whole bunch of previous literature. • Further, if one of your subordinates thinks you’re systematically biased, then the forecasting approach doesn’t give them a mechanism to get rewarded for telling you that. Whereas in the freeform approach to evaluating the quality of research, you can take that into account in your value calculation. • It gives one specific mechanism for how you aggregate all the research you receive. But that doesn’t matter very much, since you’re not bound to that—you can do whatever you like with the research after you’ve received it. And in the freeform approach, you’re also able to ask people to produce probability distributions if you think that’ll be useful for you to aggregate their research. • It might save you time? But I don’t think that’s true in general. Sure, if you use the strategy of reading everyone’s research then grading it, that might take a long time. But since the forecasting approach is highly stochastic (people only get rewards for questions you randomly choose to do a deep dive on) you can be a little bit stochastic in other ways to save time. And presumably there are lots of other grading strategies you could use if you wanted. Okay, let’s take another tack. What makes prediction markets work? 1. Anyone with relevant information can use that information to make money, if the market is wrong. 2. People can see the current market value. 3. They don’t have to reveal their information to make money. 4. They know that there’s no bias in the evaluation—if their information is good, it’s graded by reality, not by some gatekeeper. 5. They don’t actually have to get the whole question right—they can just predict a short-term market movement (“this stock is currently undervalued”) and then make money off that. This forecasting setup also features 1 and 2. Whether or not it features 3 depends on whether you (the boss) manage to find that information by yourself in the deep dive. And 4 also depends on that. I don’t know whether 5 holds, but I also don’t know whether it’s important. So, for the sort of questions we want to ask, is there significant private or hard-to-communicate information? • If yes, then people will worry that you won’t find it during your deep dive. • If no, then you likely don’t have any advantage over others who are betting. • If it’s in the sweet spot where it’s private but the investigator would find it during their deep dive, then people with that private information have the right incentives. If either of the first two options holds, then the forecasting approach might still have an advantage over a freeform approach, because people can see the current best guess when they make their own predictions. Is that visibility important, for the wisdom of crowds to work—or does it work even if everyone submits their probability distributions independently? I don’t know—that seems like a crucial question. 
Anyway, to summarise, I think it's worth comparing this more explicitly to the most straightforward alternative, which is "ask people to send you information and probability distributions, then use your intuition or expertise or whatever other criteria you like to calculate how valuable their submission is, then send them a proportional amount of money."

• IMO the term "amplification" fits if the scheme results in 1) a clear efficiency gain and 2) scalability. This looks like (delivering equivalent results but at a lower cost OR providing better results for an equivalent cost (cost == $ & time)), AND (~O(n) scaling costs). For example, if there was a group of people who could emulate [Researcher's] fact checking of 100 claims but do it at 10x speed, then that's an efficiency gain as we're doing the same work in less time. If we pump the number to 1000 claims and the fact checkers could still do it at 10x speed without additional overhead complexity, then it's also scalable. Contrast that with the standard method of hiring additional junior researchers to do the fact checking—I expect it to not be as scalable ("huh we've got all these employees now I guess we need an HR department and perf reviews and..." :)

It does seem like a fuzzy distinction to me, and I am mildly concerned about overloading a term that already has an association w/ IDA.

• Good points! This covers a lot of ground that we've been thinking about.

> So the thing I'm wondering here is what makes this "amplification" in more than a trivial sense.

To be honest, I'm really not sure what word is best here. "Amplification" is the word we used for this post. I've also thought about calling this sort of thing "Proliferation" after "Instillation" here, and have previously referred to this method as Prediction-Augmented Evaluation Systems. I agree that the employee case could also be considered a kind of amplification according to this terminology. If you have preferences or other ideas for names for this, I'd be eager to hear!

> but has the disadvantage that you can't directly give rewards for other criteria like "how well is this explained". You also can't reward research on topics that you don't do deep dives on.

Very true, at least at this stage of development of Foretold. I've written some more thinking on this here. Traditional prediction markets don't do a good job of incentivizing participants to share descriptions and research, but ideally future systems would. There are ways we are working on to improve this with Foretold. A very simple setup would be one that gives people points/money for writing comments that are upvoted by important predictors.

> I think it's worth comparing this more explicitly to the most straightforward alternative, which is "ask people to send you information and probability distributions, then use your intuition or expertise or whatever other criteria you like to calculate how valuable their submission is, then send them a proportional amount of money."

This isn't incredibly far from what we're going for, but I think the additional presence of a visible aggregate and the ability for forecasters to learn / compete with each other are going to be useful in expectation. I also would want this to be a very systematized process, because then there is a lot of optimization that could arguably be done. The big downside of forecasting systems is that they are less flexible than free-form solutions, but one big upside is that it may be possible to optimize them in different ways.
For instance, eventually there could be significant data science pipelines, and lots of statistics for accuracy and calibration, that would be difficult to attain in free-form setups. I think in the short term online forecasting setups will be relatively expensive, but it's possible that with some work they could become significantly more effective for certain types of problems.

I'd definitely agree that good crowdsourced forecasting questions need to be in some sort of sweet spot of "difficult enough to make external-forecasting useful, but open/transparent enough to make external-forecasting possible."

• Actually, the key difference between this and prediction markets is that this has no downside risk, it seems, if you can't lose money for bad predictions. So you could exploit it by only making extreme predictions, which would make a lot of money sometimes, without losing money in the other cases. Or by making fake accounts to drag the average down.

• It might interest you that there's quite a nice isomorphism between prediction markets and ordinary forecasting tournaments. Suppose you have some proper scoring rule $S(p, \omega)$ for predictions $p$ on outcome $\omega$. For example, in our experiment we used a logarithmic rule, scored relative to the aggregate. Now suppose the $i$:th prediction is paid the difference between their score and the score of the previous participant: $S(p_i, \omega) - S(p_{i-1}, \omega)$. Then you basically have a prediction market!

To make this isomorphism work, the prediction market must be run by an automated market maker which buys and sells at certain prices which are predetermined by a particular formula. To see that, let $C(x)$ be the total cost of buying $x$ shares in some possibility (e.g. Yes or No). If the event happens, your payoff will be $x - C(x)$ (we're assuming that the shares just pay $1 if the event happens and $0 otherwise). It follows that the cost of buying further shares—the market price—is $C'(x)$. We require that the market prices can be interpreted as probabilities. This means that the prices for all MECE outcomes must sum to 1, i.e. $\sum_{\omega} C'_{\omega}(x) = 1$. Now we set your profit from buying $x$ shares in the prediction market to be equal to your payout in the forecasting tournament, $x - C(x) = S(p_i, \omega) - S(p_{i-1}, \omega)$. Finally, we solve for $C$, which specifies how the automated market maker must make its trades. Different scoring rules will give you different cost functions $C$. For example, a logarithmic scoring rule will give $C(x) = b \log\left(\sum_{\omega} e^{x_{\omega}/b}\right)$. For more details, see page 54 in this paper, Section 5.3, "Cost functions and Market Scoring Rules".

• This is why proper scoring rules are important. As long as you are adequately using proper scoring rules, and proper combinations of those scoring rules, then people will be incentivized to predict according to their own beliefs. If we assume that users can't make fake accounts, and are paid in proportion to their performance according to proper scoring rules, then they shouldn't be able to gain expected earnings by providing overconfident answers. The log-scoring function we use is a proper scoring rule. The potential winnings if you do a great job are very capped due to this scoring rule.

In this specific experiment we had some trust in the participants and no obviously fake accounts. If we scaled this, fake accounts would be an issue, but there are ways around it. I also would imagine that a more robust system would look something like having users begin with little "trust", which they would then build up by providing good forecasts. They would only begin being paid once they had reached some threshold of trust; but within that level the proper scoring rules should generally create reasonable incentives.
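To make the propriety point above concrete, here is a minimal sketch. It is not Foretold's actual scoring code, and it simplifies to a binary question rather than the full distributions used in the experiment; the printed numbers come from evaluating the functions below.

```python
import numpy as np

def log_score(reported_p, outcome):
    """Log score for a binary question: log-probability assigned to what happened."""
    return np.log(reported_p if outcome else 1.0 - reported_p)

def expected_score(reported_p, honest_p):
    """Expected log score if the event truly occurs with probability honest_p."""
    return honest_p * log_score(reported_p, True) + (1 - honest_p) * log_score(reported_p, False)

honest_p = 0.7  # the forecaster's actual belief
for reported_p in [0.5, 0.7, 0.9, 0.99]:
    print(f"report {reported_p:.2f}: expected score {expected_score(reported_p, honest_p):.3f}")

# Approximate output:
#   report 0.50: expected score -0.693
#   report 0.70: expected score -0.611
#   report 0.90: expected score -0.765
#   report 0.99: expected score -1.389
# The expected score is maximised at the honest belief (0.70); the overconfident
# 0.99 report is strictly worse in expectation, which is what "proper" means here.
# Scoring relative to the aggregate subtracts log(aggregate(outcome)), a term that
# does not depend on your own report, so it shifts every row by the same constant
# and leaves the honest report as the optimum.
```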
• I have four concerns even given that you're using a proper scoring rule, which relate to the link between that scoring rule and actually giving people money. I'm not particularly well-informed on this though, so could be totally wrong.

1. To implement some proper scoring rules, you need the ability to confiscate money from people who predict badly. Even when the score always has the same sign, like you have with log-scoring (or when you add a constant to a quadratic scoring system), if you don't confiscate money for bad predictions, then you're basically just giving money to people for signing up, which makes having an open platform tricky.
2. Even if you restrict signups, you get an analogous problem within a fixed population who's already signed up: the incentives will be skewed when it comes to choosing which questions to answer. In particular, if people expect to get positive amounts of money for answering randomly, they'll do so even when they have no relevant information, adding a lot of noise.
3. If a scoring rule is "very capped", as the log-scoring function is, then the expected reward from answering randomly may be very close to the expected reward from putting in a lot of effort, and so people would be incentivised to answer randomly and spend their time on other things.
4. Relatedly, people's utilities aren't linear in money, so the score function might not remain a proper one taking that into account. But I don't think this would be a big effect on the scales this is likely to operate on.

• The fact that we use a "proper scoring rule" definitely doesn't mean that the entire system, including the participants' true utility functions, is really "proper". There is really a fair bit of impropriety. For instance, people also may care about their online reputation, and that won't be captured in the proper scoring rule. The proper scoring rule really helps make sure that one specific aspect of the system is "proper" according to a simplified model. This is definitely subideal, but I think it's still good enough for a lot of things. I'm not sure what type of system would be "perfectly proper". Prediction markets have their own disadvantages, as participants don't behave as perfectly rational agents there either. So I won't claim that the system is "perfectly aligned", but I will suggest that it seems "decently aligned" compared to other alternatives, with the ability to improve as we (or others with other systems) add further complexity.

> If you don't confiscate money for bad predictions, then you're basically just giving money to people for signing up, which makes having an open platform tricky.

What was done in this case was that participants were basically paid a fixed fee for participating, with a second, larger "bonus" that was paid in proportion to how they did on said rule. This works in experimental settings where we can filter the participants. It would definitely be more work to make the system totally openly available, especially as the prizes increase in value, much for the reason you describe. We're working to try to figure out solutions that could hold up (somewhat) in these circumstances, but it is tricky, for reasons you suggest and for others.

I'd also point out that having a nice scoring system is one challenge out of many challenges. Having nice probability distribution viewers and editors is difficult. Writing good questions and organizing them, and having software that does this well, is also difficult.
This is something that @jacobjacob has been spending a decent amount of time thinking about after this experiment, but I've personally been focusing on other aspects. At least in this experiment, the scoring system didn't seem like a big bottleneck. The participants who won the most money were generally those who seemed to have given thoughtful and useful probability distributions. Things are much easier when you have an audience who is generally taking things in good faith and who can be excluded from future rounds if it seems appropriate.

• Cool, thanks for those clarifications :) In case it didn't come through from the previous comments, I wanted to make clear that this seems like exciting work and I'm looking forward to hearing how follow-ups go.

• Thanks! I really do appreciate the thoughts & feedback in general, and am quite happy to answer questions. There's a whole lot we haven't written up yet, and it's much easier for me to reply to things than lay everything out.

• Another point: prediction markets allow you to bet more if you're more confident the market is off. This doesn't, except by betting that the market is further off. Which is different. But idk if that matters very much; you could probably recreate that dynamic by letting people weight their own predictions.

• This is definitely a feature we're considering adding in some form (likely, something like weight/leverage). The current scoring system we are using is quite simple; I expect it to get more sophisticated. However, one big downside is that sophistication would come with complexity, which could be a lot for some users.

• I'll try to paraphrase you (as well as extrapolate a bit) to see if I get what you're saying:

Say you want some research done. The most straightforward way to do so is to just hire a researcher. This "freeform" approach affords a lot of flexibility in how you delegate, evaluate, communicate, reward and aggregate the research. You can build up subtle, shared intuitions with your researchers, and invest a lot of effort in your ability to communicate nuanced and difficult instructions. You can also pick highly independent researchers who are able to make many decisions for themselves in terms of what to research, and how to research it.

By using "amplification" schemes and other mechanisms, you're placing significant restrictions on your ability to do all of those things. Hence you'd better get some great returns to compensate. But looking through various ways you might get these benefits, they all seem at best… fine. Hence the worry is that despite all the bells-and-whistles, there's actually no magic happening. This is just like hiring a researcher, but a bit worse. This is only "amplification" in a trivial sense. As a corollary, if your research needs seem to be met by a handful of in-house researchers, this method wouldn't be very helpful to you.

1) Does this capture your views?

2) I'm curious what you think of the sections "Mitigating capacity bottlenecks" and "A way for intellectual talent to build and demonstrate their skills"? In particular, I didn't feel like your comment engaged with A) the scalability of the approach, compared to the freeform approach, and B) that it might be used as a "game" for young researchers to build skills and reputation, which seems way harder to do with the freeform approach.

• Nice work. A few comments/questions:

• I think you're being harsh on yourselves by emphasising the cost/benefit ratio.
For one, the forecasters were asked to predict Elizabeth Van Nostrand's distributions rather than their mean, right? So this method of scoring would actually reward them for being worse at their jobs, if they happened to put all their mass near the resolution's mean as opposed to predicting the correct distribution. IMO a more interesting measure is the degree of agreement between the forecasters' predictions and Elizabeth's distributions, although I appreciate that that's hard to condense into an intuitive statistic.

• An interesting question this touches on is "Can research be parallelised?". It would be nice to investigate this more closely. It feels as though different types of research questions might be amenable to different forms of parallelisation, involving more or less communication between individual researchers and more or less sophisticated aggregation functions. For example, a strategy where each researcher is explicitly assigned a separate portion of the problem to work on, and at the end the conclusions are synthesised in a discussion among the researchers, might be appropriate for some questions. Do you have any plans to explore directions like these, or do you think that what you did in this experiment (as I understand it, ad-hoc cooperation among the forecasters with each submitting a distribution, these then being averaged) is appropriate for most questions? If so, why?

• About the anticorrelation between importance and "outsourceability": investigating which types of questions are outsourceable would be super interesting. You'd think there'd be some connection between outsourceable questions and parallelisable problems in computer science. Again, different aggregation functions/incentive structures will lead to different questions being outsourceable.

• One potential use case for this kind of thing could be as a way of finding reasonable distributions over answers to questions that require so much information that a single person or small group couldn't do the research in an acceptable amount of time or correctly synthesise their conclusions by themselves. One could test how plausible this is by looking at how aggregate performance tracks complexity on problems where one person can do the research alone. So, an experiment like the one you've done, but on questions of varying complexity, starting from trivial up to the limit of what's feasible.

• Great questions! I'll try to respond to the points in order.

Question 1

The distinction between forecasters/Elizabeth making predictions of her initial distributions or the final mean was one that was rather confusing. I later wrote some internal notes to think through some implications in more detail. You can see them here. I have a lot of uncertainty in how to best structure these setups.

I think though that for cost effectiveness, Elizabeth's initial distributions should be seen as estimates of the correct value, which is what she occasionally later gave. As such, for cost effectiveness we are interested in how well the forecasters did at estimating this correct value, vs. how well she did at estimating this correct value. Separately, it's of course apparent that that correct value itself is an estimate, and there's further theoretical work to be done to best say what it should have been estimating, and empirical work to be done to get a sense of how well it holds up against even more trustworthy estimates.
I personally don’t regard the cost effectiveness here as that crucial, I’d instead treat much of this experiment as a set of structures that could apply to more important things in other cases. Elizabeth’s time was rather inexpensive compared to other people/​procedures we may want to use in the future, and we could also spend fixed costs improving the marginal costs of such a setup. Question 2 We haven’t talked about this specific thing, but I could definitely imagine it. The general hope is that even without such a split, many splits would happen automatically. One big challenge is to get the splits right. One may initially think that forecaster work should be split by partitions of questions, but this may be pretty suboptimal. It may be that some forecasters have significant comparative advantages to techniques that span across questions; for instance, some people are great at making mathematical models, and others are great at adjusting the tails of distributions to account for common biases. I think of this more as dividing cognitive work based on trading strategies than questions. There are a whole ton of possible experiments to be done here, because there are many degrees of freedom. Pursuing these in an effective way is one of our main questions. Of course, if we could have forecasters help forecast which experiments would be effective, then that could help bootstrap a process. Question 3 We’ve come up with a few “rubrics” to evaluate how effective a given question or question set will be. The main factors are things like: 1. Tractability (How much progress for how many resources can be made? What if all the participants are outside the relevant organizations/​work?) 2. Importance (How likely is this information to be valuable for changing important decisions?) 3. Risk (How likely is it that this work will really anger someone or lead to significant downsides?) I think it’s really easy to spend a lot of money predicting ineffective things if you are not careful. Finding opportunities that are EV-positive is a pretty significant challenge here. I think my general intended strategy is a mix of “try a bunch of things” and “try to set up a system so the predictors themselves could predict the rubric elements or similar for a bunch of things they could predict.” Question 4 Agreed! That said, there are many possible dimensions for “complexity”, so there’s a lot of theoretical and practical work to be done here. • Question 3 It seems like Ozzie is answering on a more abstract level than the question was asked. There’s a difference between “How valuable will it be to answer question X?” (what Ozzie said) and “How outsourceable is question X?” (what Lawrence’s question was related to). I think that outsourceability would be a sub-property of Tractability. 
In more detail, some properties I imagine to affect outsourceability are whether the question:

1) Requires in-depth domain knowledge/experience
2) Requires substantial back-and-forth between question asker and question answerer to get the intention right
3) Relies on hard-to-communicate intuitions
4) Cannot easily be converted into a quantitative distribution
5) Has independent subcomponents which can be answered separately and don't rely on each other to be answered (related to Lawrence's point about tractability)

• A final thought that came to mind, regarding the following passage:

> It seems possible for person X to predict a fair number of a more epistemically competent person Y's beliefs—even before person X is as epistemically competent as Y. And in that case, doing so is evidence that person X is moving in the right direction.

I think that that's a good and interesting point. But I imagine there would also be many cases in which X develops an intuitive ability to predict Y's beliefs quite well in a given set of domains, but in which that ability doesn't transfer to new domains. It's possible that this would be because X's "black box" simulation of Y's beliefs is more epistemically competent than Y in this new domain. But it seems more likely that Y is somewhat similarly epistemically competent in this new domain as in the old domain, but has to draw on different reasoning processes, knowledge, theories, intuitions, etc., and X's intuitions aren't calibrated for how Y is now thinking. I think we could usefully think of this issue as a question of robustness to distributional shift.

I think the same issue could probably also occur even if X has a more explicit process for predicting Y's beliefs. E.g., even if X believes they understand what sort of sources of information Y considers and how Y evaluates it, and X tries to replicate that (rather than just trying to more intuitively guess what Y will say), the process X uses may not be robust to distributional shift. But I'd guess that more explicit, less "black box" approaches for predicting what Y will say will tend to either be more robust to distributional shift or more able to fail gracefully, such as recognising that uncertainty is now much higher and there's a need to think more carefully.

(None of this means I disagree with the quoted passage; I'm just sharing some additional thoughts that came to mind, which seem relevant and maybe useful.)

• This sounds roughly right to me. I think concretely this wouldn't catch people off guard very often. We have a lot of experience trying to model the thoughts of other people, in large part because we need to do this to communicate with them. I'd feel pretty comfortable basically saying, "I bet I could predict what Stuart will think in areas of Anthropology, but I really don't know his opinions on British politics". If forecasters are calibrated, then on average they shouldn't be overconfident. It's expected there will be pockets where they are, but I think the damage caused here isn't particularly high.

• That makes sense to me. But it seems like you're just saying the issue I'm gesturing at shouldn't cause mis-calibration or overconfidence, rather than that it won't reduce the resolution/accuracy or the practical usefulness of a system based on X predicting what Y will think?

• That sounds right. However, I think that being properly calibrated is a really big deal, and a major benefit compared to other approaches.
On the part:

> But I'd guess that more explicit, less "black box" approaches for predicting what Y will say will tend to either be more robust to distributional shift or more able to fail gracefully, such as recognising that uncertainty is now much higher and there's a need to think more carefully.

If there are good additional approaches that are less black-box, I see them ideally being additions to this rough framework. There are methods to encourage discussion and information sharing, including with the Judge / the person whose beliefs are being predicted.

• Here's a second thought that came to mind, which again doesn't seem especially critical to this post's aims... You write:

> Someone who can both predict my beliefs and disagrees with me is someone I should listen to carefully. They seem to both understand my model and still reject it, and this suggests they know something I don't.

I think I understand the rationale for this statement (though I didn't read the linked Science article), and I think it will sometimes be true and important. But I think that those sentences might overstate the point. In particular, I think that those sentences implicitly presume that this other person is genuinely primarily trying to form accurate beliefs, and perhaps also that they're doing so in a way that's relatively free from bias. But (almost?) everyone is at least sometimes primarily aiming (perhaps unconsciously) at something other than forming accurate beliefs, even when it superficially looks like they're aiming at forming accurate beliefs. For example, they may be engaging in "ideologically motivated cognition[, i.e.] a form of information processing that promotes individuals' interests in forming and maintaining beliefs that signify their loyalty to important affinity groups". The linked study also notes that "subjects who scored highest in cognitive reflection were the most likely to display ideologically motivated cognition".

So I think it might be common for people to be able to predict my beliefs and disagree with me, but with their disagreement not being based on knowing more or having a better reasoning process, but rather on finding ways to continue to hold beliefs that they're (in some sense) "motivated" to hold for some other reason.

Additionally, some people may genuinely be trying to form accurate beliefs, but with unusually bad epistemics / unusually major bias. If so, they may be able to predict my beliefs and disagree with me, but with their disagreement not being based on knowing more or having a better reasoning process, but rather being a result of their bad epistemics / biases.

Of course, we should be very careful with assuming that any of the above is why a person disagrees with us! See also this and this.

The claims I'd more confidently agree with are:

> Someone who can both predict my beliefs and disagrees with me might be someone I should listen to carefully. They seem to both understand my model and still reject it, and this suggests they might know something I don't (especially if they seem to be genuinely trying to form accurate beliefs and to do so via a reasonable process).

(Or maybe having that parenthetical at the end would be bad via making people feel licensed to dismiss people who disagree with them as just biased.)

• Fair points. I think that the fact that they can predict one's beliefs is minor evidence they will be EV-positive to listen to. You also have to take into account the challenge of learning from them. All that said, this sort of technique is fairly prosaic.
I’m aiming for a future much better; where key understandings are all in optimized prediction applications and people generally pay attention to those. • and each unit invested in the crowdworkers provided negative returns, as they tended to be less accurate than Elizabeth’s prior. Would they be useful for finding the wrong answer? • If you are asking if we could effectively use some transformation on their results to get a useful signal, my strong net is “maybe, but barely so.” I know there are cases in finance where poor predictors are actually systematically wrong, in ways that a good predictor could use for updating; but expect that’s for specific reasons we don’t have. • I’m afraid I don’t understand your question, could you clarify? • My interpretation: there’s no such thing as negative value of information. If the mean of the crowdworkers’ estimates were reliably in the wrong direction (compared with Elizabeth’s prior) then that would allow you to update Elizabeth’s prior to make it more accurate. • An oracle that is always wrong can still be useful. • Thanks for this and its companion post; I found the two posts very interesting, and I think they’ll usefully inform some future work for me. A few thoughts came to mind as I read, some of which can sort-of be seen as pushing back against some claims, but in ways that I think aren’t very important and that I expect you’ve already thought about. I’ll split these into separate comments. Firstly, as you note, what you’re measuring is how well predictions match a proxy for the truth (the proxy being Elizabeth’s judgement), rather than the truth itself. Something I think you don’t explicitly mention is that: 1. Elizabeth’s judgement may be biased in some way (rather than just randomly erring), and 2. The network-based forecasters’ judgements may be biased in a similar way, and therefore 3. This may “explain away” part of the apparent value of the network-based forecasters’ predictions, along with part of their apparent superiority over the online crowdworkers’ predictions. E.g., perhaps EA-/​rationality-adjacent people are biased towards disagreeing with “conventional wisdom” on certain topics, and this bias is somewhat shared between Elizabeth and the network-based forecasters. (I’m not saying this is actually the case; it’s just an example.) You make a somewhat similar point in the Part 2 post, when you say that the online crowdworkers: were operating under a number of disadvantages relative to other participants, which means we should be careful when interpreting their performance. [For example, the online crowdworkers] did not know that Elizabeth was the researcher who created the claims and would resolve them, and so they had less information to model the person whose judgments would ultimately decide the questions. But that is about participants’ ability to successfully focus on predicting what Elizabeth will say, rather than their ability to accidentally be biased in the same way as Elizabeth when both are trying to make judgements about the ground truth. In any case, I don’t think this matters much. One reason is that this “shared bias” issue probably at most “explains away” a relatively small fraction of the apparent value of the network-adjacent forecasters’ predictions, probably without tipping the balance of whether this sort of set-up is worthwhile. Another reason is that there may be ways to mitigate this “shared bias” issue. • Thanks for the attention on this point. I think I’m very nervous about trying to get at “Truth”. 
I definitely don’t mean to claim that we were confident that this work gets us much closer to truth; more that it can help progress a path of deliberation. The expectation is that it can get us closer to the truth than most other methods, but we’ll still be several steps away. I imagine that there are many correlated mistakes society is making. It’s really difficult to escape that. I’d love for future research to make attempts here, but I suspect it’s a gigantic challenge, both for research and social reasons. For example, in ancient Egypt, I believe it would have taken some intense deliberation to both realize that the popular religion was false, and also to be allowed to say such.
https://www.edaboard.com/threads/hardcore-floquet-bloch-scattering-problem-help.402518/#post-1734093
# hardcore floquet bloch scattering problem help

#### yefj

Hello, I have the following Floquet-Bloch hardcore problem I want to solve, as shown below. Is there some manual or a book or an article I could use to help me with the math of this hardcore scattering problem? Thanks.

#### yefj

Hello PlanarMetamaterials, thank you very much for the manual. I am trying to tackle the problem as follows:

1. I have a boundary condition as shown (image not reproduced here).
2. My incident wave is as shown (image not reproduced here).
3. The tangential incident component is E_inc*sin(theta) and the normal component is E_inc*cos(theta).
4. We can use the boundary condition to get the tangential magnetic field H.
5. Finding the Fourier series of the periodic impedance Z_s(y).

I understand that the Floquet-Bloch expansion is also periodic, so I think I should link the two periodic expressions? What is the next step? In the lecture you posted there is no link between Floquet and impedance, if I am correct regarding my assumption. Thanks.

#### yefj

Hello, is there some manual about scattering from a surface with surface impedance Z? Thanks

#### PlanarMetamaterials

> In the lecture you posted there is no link between floquet and impedance.

It's not just the fields that are periodic; the impedance is periodic too. This means you can also express the impedance as a Fourier series, as suggested in part (ii). From slide 7 of the linked lecture, you could then express Z_s(y) = sum_m( Z(m)*exp(-j*2*pi*m*y/A) ) (where A is the period). Each of the impedances Z(m) should then correspond to the surface impedance seen by a particular diffraction order m.

> what is the next step?

I think just answering the questions in the order provided would be best.

> Hello, is there some manual about scattering from a surface with surface impedance Z?

A diffraction grating is an example of such a periodic surface; you might be able to find some useful texts by searching these. Good luck!

#### yefj

Hello, I have defined the Maxwell equations shown below. Then we have K_inc = k*(sin(theta)*y^ + cos(theta)*z^) and K_ret = k*(sin(theta)*y^ - cos(theta)*z^). From the Maxwell equations I got Hy, Hz, Ex; then I got Js, and we want to develop the currents from the discontinuity in H. Then I need to go back to the Maxwell equations to find the scattered fields. Given J, how do I find E and H from Maxwell? Thanks.
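To illustrate the step the thread is asking about (how the Floquet expansion of the fields links to the Fourier series of the impedance), here is a hedged sketch. It assumes a scalar impedance boundary condition E_t(y) = Z_s(y) H_t(y) at the surface; the exact vector form and sign conventions depend on the specific problem setup.

```latex
% Sketch: a periodic impedance couples the Floquet harmonics of the fields.
% Assumption: scalar boundary condition E_t(y) = Z_s(y) H_t(y) at z = 0.
\begin{align}
  E_t(y) &= \sum_n E_n \, e^{-j k_{y,n} y}, \qquad
  H_t(y) = \sum_n H_n \, e^{-j k_{y,n} y}, \qquad
  k_{y,n} = k\sin\theta + \frac{2\pi n}{A}, \\
  Z_s(y) &= \sum_m Z_m \, e^{-j 2\pi m y / A}, \\
  E_t(y) &= Z_s(y)\, H_t(y) \;\Rightarrow\;
  E_n = \sum_m Z_{n-m}\, H_m \quad \text{(a convolution over Floquet orders)}.
\end{align}
```

Matching the coefficient of each Floquet harmonic in this way gives one linear equation per order n; combined with the plane-wave expansions of the incident and scattered fields, the system can be truncated and solved for the unknown harmonic amplitudes.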
https://answers.opencv.org/questions/3177/revisions/
# Revision history [back]

### Saving images

Hello, I managed to load some IR image captured by an IR camere with a legacy format. I manage to see the image which are with a very poor contrast (almost grey, but something can be seen). I tryed to save the image in a bmp format (imwrite("filename.bmp",image_IR)) but the resulting image is completely white. Does anybody have any suggestion? thanks a lot Andrea

No.2 Revision (Kirill Kornyakov)

### Saving IR images

Hello, I managed to load an IR image captured by an IR camera with a legacy format. I manage to see the image which has very poor contrast (almost grey, but something can be seen). I tried to save the image in a bmp format (imwrite("filename.bmp", image_IR)) but the resulting image is completely white. Does anybody have any suggestion?
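One common cause of this symptom, not stated in the thread and offered here only as an assumption, is that the IR frame is 16-bit (or floating point), so writing it straight to an 8-bit BMP clips almost every pixel to white. A minimal Python/OpenCV sketch of rescaling before saving follows; the input filename "frame.tif" is hypothetical.

```python
import cv2
import numpy as np

# Assumption: the IR frame is a single-channel 16-bit (or float) array.
# cv2.IMREAD_UNCHANGED keeps the original bit depth instead of converting to 8-bit.
image_IR = cv2.imread("frame.tif", cv2.IMREAD_UNCHANGED)

# Stretch the frame's actual dynamic range to 0-255, then convert to 8-bit,
# so the saved BMP shows the contrast instead of appearing all white.
image_8u = cv2.normalize(image_IR, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

cv2.imwrite("filename.bmp", image_8u)
```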
2021-08-02 22:33:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24260838329792023, "perplexity": 3269.4958390169613}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154385.24/warc/CC-MAIN-20210802203434-20210802233434-00259.warc.gz"}
https://www.lmfdb.org/Character/Dirichlet/?is_primitive=yes&modulus=11-100
## Results (1-50 of 349 matches) Next Orbit label Conrey labels Modulus Conductor Order Parity Primitive 11.b $11$ $11$ $2$ odd 11.c $11$ $11$ $5$ even 11.d $11$ $11$ $10$ odd 12.b $12$ $12$ $2$ even 13.b $13$ $13$ $2$ even 13.c $13$ $13$ $3$ even 13.d $13$ $13$ $4$ odd 13.e $13$ $13$ $6$ even 13.f $13$ $13$ $12$ odd 15.d $15$ $15$ $2$ odd 15.e $15$ $15$ $4$ even 16.e $16$ $16$ $4$ even 16.f $16$ $16$ $4$ odd 17.b $17$ $17$ $2$ even 17.c $17$ $17$ $4$ even 17.d $17$ $17$ $8$ even 17.e $17$ $17$ $16$ odd 19.b $19$ $19$ $2$ odd 19.c $19$ $19$ $3$ even 19.d $19$ $19$ $6$ odd 19.e $19$ $19$ $9$ even 19.f $19$ $19$ $18$ odd 20.d $20$ $20$ $2$ odd 20.e $20$ $20$ $4$ even 21.c $21$ $21$ $2$ even 21.g $21$ $21$ $6$ even 21.h $21$ $21$ $6$ odd 23.b $23$ $23$ $2$ odd 23.c $23$ $23$ $11$ even 23.d $23$ $23$ $22$ odd 24.f $24$ $24$ $2$ even 24.h $24$ $24$ $2$ odd 25.d $25$ $25$ $5$ even 25.e $25$ $25$ $10$ even 25.f $25$ $25$ $20$ odd 27.e $27$ $27$ $9$ even 27.f $27$ $27$ $18$ odd 28.d $28$ $28$ $2$ even 28.f $28$ $28$ $6$ even 28.g $28$ $28$ $6$ odd 29.b $29$ $29$ $2$ even 29.c $29$ $29$ $4$ odd 29.d $29$ $29$ $7$ even 29.e $29$ $29$ $14$ even 29.f $29$ $29$ $28$ odd 31.b $31$ $31$ $2$ odd 31.c $31$ $31$ $3$ even 31.d $31$ $31$ $5$ even 31.e $31$ $31$ $6$ odd 31.f $31$ $31$ $10$ odd Next
2021-06-17 17:49:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24395427107810974, "perplexity": 5851.6810950638}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487630518.38/warc/CC-MAIN-20210617162149-20210617192149-00173.warc.gz"}
http://www.reference.com/browse/bag%20of%20trick
# Bag of words model in computer vision This is an article introducing the "Bag of words model" (BoW) in computer vision, especially for object categorization. From now on, the "BoW" model refers to the BoW model in computer vision unless explicitly declared otherwise. Before introducing the BoW model, the BoW in natural language processing (NLP) is briefly reviewed. The BoW in NLP is a popular method for representing documents which ignores the word order. For example, "a good book" and "book good a" are the same under this model. The BoW model allows dictionary-based modeling, and each document looks like a "bag" (thus the order is not considered) which contains some words from the dictionary. Computer vision researchers use a similar idea for image representation (here an image may refer to a particular object, such as an image of a car). For example, an image can be treated as a document, and features extracted from the image are considered as the "words" (usually some manipulations are needed, which are described below). The BoW representation serves as the basic element for further processing, such as object categorization. ## Representation based on the BoW model ### Text Document Representation based on the BoW model The text document representation based on the BoW model in NLP is reviewed first. Here are two simple text documents: • John likes to watch movies. Mary likes too. • John also likes to watch football games. Based on these two text documents, a dictionary is constructed as: • dictionary={1:"John", 2:"likes", 3:"to", 4:"watch", 5:"movies", 6:"also", 7:"football", 8:"games", 9:"Mary", 10:"too"}, which has 10 distinct words. Using the indexes of the dictionary, each document is represented by a 10-entry vector: • [1, 2, 1, 1, 1, 0, 0, 0, 1, 1] • [1, 1, 1, 1, 0, 1, 1, 1, 0, 0], where each entry of the vector refers to the count of the corresponding entry in the dictionary (this is also the histogram representation). As we can see, this vector representation does not preserve the order of the words in the original sentences. This kind of representation has several successful applications, for example latent Dirichlet allocation. ### Image Representation based on the BoW model Figure 1 shows the basic idea of the BoW model. To represent an image using the BoW model, the image can be treated as a document. Similarly, "words" in images need to be defined too. However, a "word" in an image is not an off-the-shelf thing like a word in a text document. Achieving this usually involves three steps: feature detection, feature description and codebook generation. A definition of the BoW model can be the "histogram representation based on independent features". #### Feature Detection Given an image, feature detection is used to extract several local patches (or regions), which are considered as candidates for the basic elements, the "words". ##### Regular Grid Figure 2 is an example of the regular grid method for feature detection. The regular grid is probably the simplest yet effective method for feature detection. In this method, the image is evenly segmented by some horizontal and vertical lines and some local patches are obtained. This method shows very promising results for natural scene categorization. The limitation of this method is that it uses little information from the image itself. ##### Interest Point Detector Interest point detectors try to detect salient patches, such as edges, corners and blobs in an image.
These salient patches are considered more important than other patches, such as regions attracting human attention, which might be more useful for object categorization. Some famous detectors are the Harris corner detector, Lowe's DoG (Difference of Gaussians) detector and the Kadir-Brady saliency detector. Figure 3 shows a result of the Harris corner detector. ##### Other Methods In addition, researchers also use random sampling and segmentation methods (such as Normalized Cut) for feature detection. #### Feature Representation After feature detection, each image is abstracted by several local patches. Feature representation methods deal with how to represent the patches as numerical vectors. These methods are called feature descriptors. A good descriptor should have the ability to handle intensity, rotation, scale and affine variations to some extent. One of the most famous descriptors is the Scale-invariant feature transform (SIFT). SIFT converts each patch to a 128-dimensional vector. After this step, each image is a collection of vectors of the same dimension (128 for SIFT), where the order of the different vectors is of no importance. #### Codebook Generation The final step for the BoW model is to convert vector-represented patches to "codewords" (analogy to words in text documents), which also produces a "codebook" (analogy to a word dictionary). A codeword can be considered as a representative of several similar patches. One simple method is performing K-means clustering over all the vectors. Codewords are then defined as the centers of the learned clusters. The number of clusters is the codebook size (analogy to the size of the word dictionary). Thus, each patch in an image is mapped to a certain codeword through the clustering process and the image can be represented by the histogram (see Figure 4) of the codewords. Figure 5 shows several examples of codewords mapped back to image patches. (A short code sketch of the codebook and histogram steps is given at the end of this article.) ## Learning and Recognition based on the BoW model So far, an image has been represented based on the BoW model. Computer vision researchers have developed several learning methods to leverage the BoW model for image-related tasks, such as object categorization. These methods can roughly be divided into two categories, generative and discriminative models. For the multiple-label categorization problem, the confusion matrix can be used as an evaluation metric. ### Generative Models Here are some notations for this section. Suppose the size of the codebook is $V$. • $w$: each patch $w$ is a $V$-dimensional vector that has a single component equal to one and all other components equal to zero (in the K-means clustering setting, the component equal to one indicates the cluster that $w$ belongs to). The $v$th codeword in the codebook can be represented as $w^v = 1$ and $w^u = 0$ for $u \neq v$. • $\mathbf{w}$: each image is represented by $\mathbf{w} = [w_1, w_2, \cdots, w_N]$, all the patches in an image. • $d_j$: the $j$th image in an image collection. • $c$: category of the image. • $z$: theme or topic of the patch. • $\pi$: mixture proportion. Since the BoW model is an analogy to the BoW model in NLP, generative models developed in text domains can also be adapted in computer vision. The simple Naïve Bayes model and hierarchical Bayesian models are discussed. #### Naïve Bayes The simplest one is the Naïve Bayes classifier. Using the language of graphical models, the Naïve Bayes classifier is shown in Figure 6.
The basic idea (or assumption) of this model is that each category has its own distribution over the codebook, and that these distributions are assumed to be quite different from each other. Take a face category and a car category as an example. The face category may emphasize the codewords which represent "nose", "eye" and "mouth", while the car category may emphasize the codewords which represent "wheel" and "corner". Given a collection of training examples, the classifier learns different distributions for different categories. The categorization decision is made by • $c^* = \arg\max_c p(c|\mathbf{w}) = \arg\max_c p(c)p(\mathbf{w}|c) = \arg\max_c p(c)\prod_{n=1}^N p(w_n|c)$ Since the Naïve Bayes classifier is simple yet effective, it is usually used as a baseline method for comparison. #### Hierarchical Bayesian Models The basic assumption of the Naïve Bayes model does not always hold. For example, a natural scene image (Figure 7) may contain several different themes. Probabilistic latent semantic analysis (pLSA) and latent Dirichlet allocation (LDA) are two popular topic models from text domains that tackle this kind of multiple-"theme" problem. Take LDA as an example. To model natural scene images using LDA, an analogy is made like this (Figure 9): • the image category is mapped to the document category; • the mixture proportion of themes maps to the mixture proportion of topics; • the theme index is mapped to the topic index; • the codeword is mapped to the word. This method shows very promising results in natural scene categorization on the 13 Natural Scene Categories dataset. ### Discriminative Models Since images are represented based on the BoW model, any discriminative model suitable for text document categorization can be tried, such as support vector machines (SVM) and AdaBoost. The kernel trick is also applicable when a kernel-based classifier is used, such as SVM. The pyramid match kernel is a newly developed kernel based on the BoW model. The local feature approach of using the BoW representation learnt by machine learning classifiers with different kernels (e.g., the EMD kernel and the $\chi^2$ kernel) has been vastly tested in the area of texture and object recognition. Very promising results on a number of datasets have been reported. This approach has achieved very impressive results in the PASCAL Visual Object Classes Challenge. #### Pyramid Match Kernel The pyramid match kernel is a fast kernel function (satisfying Mercer's condition) which maps the BoW features to multi-resolution histograms. One of the advantages of the multi-resolution histograms is the ability to capture co-occurring features. The pyramid match kernel builds the multi-resolution histogram by binning data points into discrete regions of increasingly larger size. Thus, points that do not match at high resolutions have the chance to match at low resolutions (Figure 9). The pyramid match kernel performs approximate similarity matching, without explicit search, and the computation time is only linear in the number of features. Compared with other kernel approaches, the pyramid match kernel is much faster, yet provides competitively accurate results. The pyramid match kernel was applied to the ETH-80 database and the Caltech 101 database and showed promising results. ## Limitations and recent Developments One notorious disadvantage of BoW is that it ignores the spatial relationships among the patches, which are very important in image representation.
Researchers have proposed several methods to incorporate the spatial information. For feature-level improvements, correlogram features can capture spatial co-occurrences of features. For generative models, relative positions of codewords are also taken into account. The hierarchical shape and appearance model for human action introduces a new part layer (Constellation model) between the mixture proportion and the BoW features, which captures the spatial relationships among parts in the layer. For discriminative models, spatial pyramid match performs pyramid matching by partitioning the image into increasingly fine sub-regions and computing histograms of local features inside each sub-region. Furthermore, the BoW model has not been extensively tested yet for viewpoint invariance and scale invariance, and its performance there is unclear. The use of the BoW model for object segmentation and localization also remains largely unstudied.
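Not part of the original article: a minimal sketch of the codebook-generation and histogram steps described above, assuming local descriptors have already been extracted (random vectors stand in for real 128-dimensional SIFT descriptors so the example is self-contained).

```python
# Minimal BoW pipeline sketch: learn a codebook with K-means, then map each
# image's descriptors to codewords and count them into a normalized histogram.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# placeholder for per-image SIFT-like descriptors: 10 "images", each with a
# variable number of 128-dimensional patch descriptors
descriptors_per_image = [rng.normal(size=(rng.integers(50, 200), 128))
                         for _ in range(10)]

codebook_size = 32                                        # K in K-means
kmeans = KMeans(n_clusters=codebook_size, n_init=10, random_state=0)
kmeans.fit(np.vstack(descriptors_per_image))              # learn the codewords

def bow_histogram(descriptors):
    """Map each patch descriptor to its nearest codeword and count occurrences."""
    words = kmeans.predict(descriptors)
    hist = np.bincount(words, minlength=codebook_size).astype(float)
    return hist / hist.sum()                              # normalized histogram

X = np.array([bow_histogram(d) for d in descriptors_per_image])
print(X.shape)   # (10, 32): one fixed-length BoW vector per image
```

Each image thus becomes a fixed-length histogram over the codewords, which is the representation the generative and discriminative models above operate on.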
2014-08-20 08:58:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 17, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5541782379150391, "perplexity": 1476.5846791807394}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500801235.4/warc/CC-MAIN-20140820021321-00056-ip-10-180-136-8.ec2.internal.warc.gz"}
http://www.songsterr.com/a/wsa/kiss-radar-for-love-tab-g-s312649
# KISS - Radar For Love Tab Standard guitar tuning: EADGBENo capo Key: / = slide up \ = slide down b = bend (whole step) b^ = bend (1/2 step) b^^ = bend (1 1/2 steps) pb = pre-bend r = release-bend t = tap with righthand finger h = hammer-on p = pull-off ~ = Vibrato * = Natural Harmonic #(#) = Trill ** = Artificial Harmonic x = Dead notes (no pitch) P.M. = Palm mute (- -> underneath indicates which notes) (\) = Dive w\bar (/) = Release w\bar Tp = Tap w\plectrum Rhythm Fig. 1 E B 3 G 4 4 2 2 D 2 2 0 2 A 0 E 3p0 3p0 3p0 3p0 0 0 3p0 3p0 5p0 6p0 7p0 0 3p0 E B 3 G 4 4 2 2 2 D 2 2 2 0 2 A 0 0 E 3p0 3p0 3p0 3p0 0 0 3p0 3p0 5p0 6p0 7p0 0 Rhythm Fig. 2 E B G D 2 A 2 E 0 0 3p0 E B 3 3 G 4 4 2 2 2 D 2 2 0 2 0 A 0 E 3p0 3p0 3p0 3p0 0 0 3p0 3p0 5p0 6p0 7p0 0 E B G D 2 A 2 E 0 0 3p0 E B 3 G 4 4 2 2 D 2 2 0 2 A 0 E 3p0 3p0 3p0 3p0 0 0 3p0 3p0 5p0 6p0 7p0 0 3p0 E B 3 3 G 4 4 2 2 2 D 2 2 0 2 0 A 0 E 3p0 3p0 3p0 3p0 0 0 3p0 3p0 5p0 6p0 7p0 0 Rhythm Fig. 3 E B 2 2 G 2 2 2 2 D 2 2 2 2 A 0 0 E 0 3~ E B 3 3 3 3 3/5 G 0 2 0 2 2/4 7 7 7 7/9\ 7 7 7 7/9\ D 0 0 2 0 0 2 7 7 7 7/9\ 7 7 7 7/9\ A x x 2 x x 2 5 5 5 5/7\ 5 5 5 5/7\ E 3 2 0 3 2 0 E B 2 2 G 2 2 2 2 D 2 2 2 2 A 0 0 E 0 3~ End 1 (let ring) E B 5 5 3 3 G 2 6 4 D 2 7 5 A 0 E End 2 (let ring) E B 2 2 3 3 5 5 3 3 G 2 0 2 0 D 2 0 2 0 A E Rhythm Fig. 4a E B G 7 9 7 9 0 D 7 9 2p0 7 9 2p0 A 5 7 2 0 5 7 2 0 E 3 0 3 E 5 5 5 5 B 5 5 5 5 G 7 9 7 9 2 2 2 2 D 7 9 2p0 7 9 A 5 7 2 0 5 7 0 E 3 0 Rhythm Fig. 4b E B G 7 9 7 9 0 D 7 9 2p0 7 9 2p0 A 5 7 2 0 5 7 2 0 E 3 0 3 E B G 7 9 7 9 D 7 9 2p0 7 9 A 5 7 2 0 5 7 E 3 0 0 4p0 5p0 6/7~ Rhythm Fig. 4c E B G 7 9 7 9 0 D 7 9 2p0 7 9 2p0 A 5 7 2 0 5 7 2 0 E 3 0 3 E 5 5 5 5 B 5 5 5 5 G 7 9 7 9 2 2 2 2 D 7 9 2p0 7 9 A 5 7 2 0 5 7 E 3 0 2~ Rhythm Fig. 4d E B G 7 9 7 9 D 7 9 x x x x x x x 7 9 x x x x x x x A 5 7 x x x x x x x 5 7 x x x x x x x E x x x x x x x x x x x x x x E B G 7 9 7 9 D 7 9 x x x x x x x 7 9 A 5 7 x x x x x x x 5 7 E x x x x x x x 0 4p0 5p0 6/7~ Rhythm Fig. 5 E B 3 3 G 2 2 2 x 0 0 D 2 2 2 x 0 0 A 0 0 0 x 0 0 E < - - - - - - - - 3x - - - - - - - - > E B G 7 7 7 7 7/ 9 D 2p 0 7 7 7 7 7/ 9 A 2 0 5 5 5 5 5/ 7 E 3 0 E B 3 3 G 2 2 2 x 0 0 7 7 7 7 7/ 9 D 2 2 2 x 0 0 7 7 7 7 7/ 9 A 0 0 0 x 0 0 5 5 5 5 5/ 7 E < - - - - - - - 3x - - - - - - - > < - - - 3x - - - > Rhythm Fig. 6 (let ring) E 10h12 10h12 10h12 10h12 10h12 B 10 10 10 10 10 G D A E < - - - - - - - 2x - - - - - - - > E 10p9 10p9 10p9 10p9 10p9 10p9 10p9 10p9 B 10 10 10 10 10 10 10 10 G 2 2 D 2 2 2 A 0 2 0 E 0 E B G 2 D 2 2 2 A 2 0 2 E 0 0 3~ Rhythm Fig. 
7 E B 5~ G 2p0 0h2 5~ 2p0 0h2 D 2p0 2 0h2 5~ 2p0 2 0h2 A 2 12\ 2 0 4p0 5p0 6/7~ E E B 5~ G 2p0 0h2 5~ 2p0 0h2 D 2p0 2 0h2 5~ 2p0 2 0h2 A 2 12\ 2 E 0 4p0 5p0 6/7~ Solo E 5 5 5 B 5 5 5 8p5 5 G 7b 7b 7b 7 7 6~ D A x(/) E x(/) bottom and bring it up E t12 5h8p5 B 10 13p10h13p10h13p10 15 15b (/)14h15p14~ G D A E Gradually lower bar < - - - - E t12 5h8p5 12(\) B t12 5h8p5 t12 5h8p5 G t12 6h7p6 t9 6h7p6~ D A E - - - 2x - - - - - - - - - - > < - 2x - > E 15 17p15 15/19\15~ 20p17 17 B 20b 20b~ 17 20 20 20 17h20p17 G D A E E B 20p17 17 10h13p10h13p10h13p10h13p10 G 19 19 19 17h19p17 D 19~ A E Gradually lower bar - - - > E B (/)10 (/)13b r13~ G D A 10 10 12p10 12/14 E 10h12p10 12 12 E B G 12 12 12 14p 12 14 D 12 12 14p12 14 12h14p12 14 14 A 12h14p12 14 14 E E 12 B 12 G D A E Outro solo E 17b r17p15 15b B 17~ G 14b 14 12 D 14 12 14(\) A E E 12 12 12 15p12 12 B 15p12 15p12 15 15p12 15p12 12 G 14b 14b 14b 14 D A E E B 14 G 14 12 14 12 12(/) D 14 12 14(\) 12h14p12(\/)(\/) A 14 12~ E E B 15 15 G 14b 14b r14~ 9 9 D 12h14p12(\/)(\/) 12p9 12 12 A E < - - - 2x - - - - > E 15p12 12 B 15 15 12 15p12 12 G 12b 12 12 16b^^ 14 D 12 14p12 14 14 A E E B G 14 12 15 14 12 D 14 13 12 A 14 13 12 10 12 10 E 12(\) Paul Stanley, Desmond Child [Rhy. Fig 1] [Rhy. Fig 2] Passion fire runnin' through my veins Get a little bit of love and I go insane I may talk big, baby, I don't lie Ooh the guys don't know but the girls know why Ooh girls know why, whoo oh [Rhy. Fig 3] Get down and get to it I know you can do it Get down and get to it I know you can do it, oh, oh, oh, oh [Rhy. Fig 4a] [Rhy. Fig 2] When you feel so hot that you can't hold still And you don't know how you're gonna get your fill Send an S.O.S. baby, I'm your man If I don't make good baby no one can Oh no one can, whoo oh [Rhy. Fig 3] Get down and get to it I know you can do it, come on Get down and get to it I know you can do it, oh, oh, oh, oh [Rhy. Fig 4b] [Rhy. Fig 4c] Yeah, ooh yeah, one more time Get down, ooh get down, get down, oh, oh, oh [Solo over Rhy. Fig 5][Rhy. Fig 6] [Rhy. Fig 7] Get down, get down, oooh [Rhy. Fig 4d] (repeat and fade)
2013-12-06 12:06:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21194817125797272, "perplexity": 4586.395672496731}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163051516/warc/CC-MAIN-20131204131731-00094-ip-10-33-133-15.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/2704636/what-is-the-difference-between-impredicativity-and-recursion/
What is the difference between impredicativity and recursion? I'm looking to know the difference between the concepts of impredicativity and recursion. For impredicativity, I've got this definition from wikipedia: Something that is impredicative, in mathematics and logic, is a self-referencing definition. And this one for recursion: Recursion occurs when a thing is defined in terms of itself or of its type. Even after reading these articles I don't get the subtlety here. In what way are they different? Thank you • Recursion has a precise math definition; see e.g. Recursive definition. The concept of Predicative (and Impredicative) definition is more vague. Mar 23 '18 at 10:38 • You can see the post: understanding-predicativity. Mar 23 '18 at 10:38 • Recursion is the absence of the restriction from defining one part of a thing in terms of another part of that thing. Mar 23 '18 at 13:18 Maybe the moral is: don't always believe the first thing you read on Wikipedia??? [Actually, a lot of the more detailed explanations of mathematical topics on Wikipedia can be pretty good: but sometimes, as here, the thumbnail summary sketches can be misleading or worse.] So: a better thumbnail definition of impredicativity would be along the following lines A definition of an object $X$ is impredicative if it quantifies over a collection $Y$ to which $X$ itself belongs. Some impredicative definitions seem fine -- e.g. "the tallest man in the room" picks out an object by a quantification over men in the room, including the tallest one. Other impredicative definitions look more problematic. A famous troublesome case is the Russell set -- which we try to define by quantifying over all sets including, supposedly, the Russell set itself. And a better thumbnail definition of recursion might say something like this A recursive procedure or function is one that can call the results of its own previous application. So for example, we define the exponentiation function over the naturals recursively in terms of a prior application of the same function, as when we write $x^{y + 1} = x^y * x$. We define the well-formed formulae of a formal language by clauses like 'if $\alpha$ is a wff, so is $\neg\alpha$', so the wff-forming procedure can call the outputs of previous rounds of wff-formation. We can now see these are very different ideas. Defining a class of widgets by recursion involves thinking of it as built up, stage by stage, by repeated application of some procedure (where we can feed back the results of earlier applications into another application of the procedure). It is the very paradigm of a constructive idea: we go, by recursive steps, to the constructed class. On the other hand, picking out $X$ from a totality of widgets by an impredicative definition goes in the opposite direction: we go from the totality to one of its members -- we have to think of the widgets as already somehow given to us, and then we pick out $X$ by reference to that totality, and that might be very non-constructive.
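As a small illustration (not part of the original answer), the recursive definition of exponentiation quoted above can be written directly as a function that calls the result of its own previous application:

```python
# x^(y+1) = x^y * x, with the base case x^0 = 1
def power(x, y):
    if y == 0:
        return 1
    return power(x, y - 1) * x

print(power(2, 10))  # 1024
```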
2021-09-20 15:08:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7695205211639404, "perplexity": 708.2974300208044}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057039.7/warc/CC-MAIN-20210920131052-20210920161052-00392.warc.gz"}
https://stacks.math.columbia.edu/tag/0A5Y
Lemma 15.67.1. Let $R$ be a ring. Given complexes $K^\bullet , L^\bullet , M^\bullet$ of $R$-modules there is a canonical isomorphism $\mathop{\mathrm{Hom}}\nolimits ^\bullet (K^\bullet , \mathop{\mathrm{Hom}}\nolimits ^\bullet (L^\bullet , M^\bullet )) = \mathop{\mathrm{Hom}}\nolimits ^\bullet (\text{Tot}(K^\bullet \otimes _ R L^\bullet ), M^\bullet )$ of complexes of $R$-modules. Proof. Let $\alpha$ be an element of degree $n$ on the left hand side. Thus $\alpha = (\alpha ^{p, q}) \in \prod \nolimits _{p + q = n} \mathop{\mathrm{Hom}}\nolimits _ R(K^{-q}, \mathop{\mathrm{Hom}}\nolimits ^ p(L^\bullet , M^\bullet ))$ Each $\alpha ^{p, q}$ is an element $\alpha ^{p, q} = (\alpha ^{r, s, q}) \in \prod \nolimits _{r + s + q = n} \mathop{\mathrm{Hom}}\nolimits _ R(K^{-q}, \mathop{\mathrm{Hom}}\nolimits _ R(L^{-s}, M^ r))$ If we make the identifications 15.67.1.1 $$\label{more-algebra-equation-identification} \mathop{\mathrm{Hom}}\nolimits _ R(K^{-q}, \mathop{\mathrm{Hom}}\nolimits _ R(L^{-s}, M^ r)) = \mathop{\mathrm{Hom}}\nolimits _ R(K^{-q} \otimes _ R L^{-s}, M^ r)$$ then by our sign rules we get \begin{align*} \text{d}(\alpha ^{r, s, q}) & = \text{d}_{\mathop{\mathrm{Hom}}\nolimits ^\bullet (L^\bullet , M^\bullet )} \circ \alpha ^{r, s, q} - (-1)^ n \alpha ^{r, s, q} \circ \text{d}_ K \\ & = \text{d}_ M \circ \alpha ^{r, s, q} - (-1)^{r + s} \alpha ^{r, s, q} \circ \text{d}_ L - (-1)^{r + s + q} \alpha ^{r, s, q} \circ \text{d}_ K \end{align*} On the other hand, if $\beta$ is an element of degree $n$ of the right hand side, then $\beta = (\beta ^{r, s, q}) \in \prod \nolimits _{r + s + q = n} \mathop{\mathrm{Hom}}\nolimits _ R(K^{-q} \otimes _ R L^{-s}, M^ r)$ and by our sign rule (Homology, Definition 12.22.3) we get \begin{align*} \text{d}(\beta ^{r, s, q}) & = \text{d}_ M \circ \beta ^{r, s, q} - (-1)^ n \beta ^{r, s, q} \circ \text{d}_{\text{Tot}(K^\bullet \otimes L^\bullet )} \\ & = \text{d}_ M \circ \beta ^{r, s, q} - (-1)^{r + s + q} \left( \beta ^{r, s, q} \circ \text{d}_ K + (-1)^{-q} \beta ^{r, s, q} \circ \text{d}_ L \right) \end{align*} Thus we see that the map induced by the identifications (15.67.1.1) indeed is a morphism of complexes. $\square$
2019-04-20 12:33:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 3, "x-ck12": 0, "texerror": 0, "math_score": 0.9999902248382568, "perplexity": 3517.3245040415622}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578529813.23/warc/CC-MAIN-20190420120902-20190420142902-00220.warc.gz"}
https://www.greenemath.com/College_Algebra/71/Absolute-Value-Equations-3PracticeTest.html
### About Absolute Value Equations Part 3: In some cases, we will need to solve a nested absolute value equation. This happens when one absolute value operation is nested inside of another. Additionally, we will see absolute value equations with double absolute value operations and a loose number. This loose number stops us from setting the two absolute value expressions directly equal to each other. For this, we will find out where each absolute value expression is equal to zero and then set up intervals on the number line in order to consider our solution. Lastly, we will look at a quadratic equation where the first-degree variable is wrapped inside of absolute value bars. Test Objectives • Demonstrate the ability to solve an equation with a nested absolute value operation • Demonstrate the ability to solve an equation with double absolute value operations • Demonstrate the ability to solve a quadratic equation where the first-degree variable term is wrapped inside of absolute value bars Absolute Value Equations Part 3 Practice Test: #1: Instructions: solve each equation. $$a)\hspace{.2em}||2x - 1| - 3|=12$$ $$b)\hspace{.2em}||5x - 7| + 4|=10$$ #2: Instructions: solve each equation. $$a)\hspace{.2em}|x + 1| + |3x - 7|=12$$ #3: Instructions: solve each equation. $$a)\hspace{.2em}|4x - 7| - |5x + 2|=6$$ #4: Instructions: solve each equation. $$a)\hspace{.2em}-5|x^2| - 8|x| + 21=8$$ #5: Instructions: solve each equation. $$a)\hspace{.2em}-3x^2 + 18|x| + 15=8|x| + 2$$ Written Solutions: #1: Solutions: $$a)\hspace{.2em}x=-7, 8$$ $$b)\hspace{.2em}x=\frac{1}{5}, \frac{13}{5}$$ #2: Solutions: $$a)\hspace{.2em}x=-\frac{3}{2}, \frac{9}{2}$$ #3: Solutions: $$a)\hspace{.2em}x=-\frac{1}{9}, -3$$ #4: Solutions: $$a)\hspace{.2em}x=-1, 1$$ #5: Solutions: $$a)\hspace{.2em}x=\pm \frac{13}{3}$$
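As a worked illustration of the nested case described above (not part of the original practice test), problem #1a can be solved step by step: $$||2x - 1| - 3|=12$$ $$|2x - 1| - 3=12 \hspace{.5em}\text{or}\hspace{.5em}|2x - 1| - 3=-12$$ $$|2x - 1|=15 \hspace{.5em}\text{or}\hspace{.5em}|2x - 1|=-9 \hspace{.2em}(\text{rejected})$$ $$2x - 1=\pm 15$$ $$x=8 \hspace{.5em}\text{or}\hspace{.5em}x=-7$$ which agrees with the listed solution $x=-7, 8$.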
2020-07-12 00:10:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.567741870880127, "perplexity": 659.9339885307821}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657129257.81/warc/CC-MAIN-20200711224142-20200712014142-00270.warc.gz"}
https://huggingface.co/docs/transformers/v4.26.0/en/model_doc/pegasus_x
Transformers documentation PEGASUS-X You are viewing v4.26.0 version. A newer version v4.27.2 is available. Join the Hugging Face community to get started # PEGASUS-X ## Overview The PEGASUS-X model was proposed in Investigating Efficiently Extending Transformers for Long Input Summarization by Jason Phang, Yao Zhao and Peter J. Liu. PEGASUS-X (PEGASUS eXtended) extends the PEGASUS models for long input summarization through additional long input pretraining and using staggered block-local attention with global tokens in the encoder. The abstract from the paper is the following: While large pretrained Transformer models have proven highly capable at tackling natural language tasks, handling long sequence inputs continues to be a significant challenge. One such task is long input summarization, where inputs are longer than the maximum input context of most pretrained models. Through an extensive set of experiments, we investigate what model architectural changes and pretraining paradigms can most efficiently adapt a pretrained Transformer for long input summarization. We find that a staggered, block-local Transformer with global encoder tokens strikes a good balance of performance and efficiency, and that an additional pretraining phase on long sequences meaningfully improves downstream summarization performance. Based on our findings, we introduce PEGASUS-X, an extension of the PEGASUS model with additional long input pretraining to handle inputs of up to 16K tokens. PEGASUS-X achieves strong performance on long input summarization tasks comparable with much larger models while adding few additional parameters and not requiring model parallelism to train. Tips: • PEGASUS-X uses the same tokenizer as PEGASUS. This model was contributed by zphang. The original code can be found here. ## PegasusXConfig ### class transformers.PegasusXConfig < > ( vocab_size = 96103 max_position_embeddings = 16384 encoder_layers = 16 encoder_ffn_dim = 4096 encoder_attention_heads = 16 decoder_layers = 16 decoder_ffn_dim = 4096 decoder_attention_heads = 16 encoder_layerdrop = 0.0 decoder_layerdrop = 0.0 use_cache = True is_encoder_decoder = True activation_function = 'gelu' d_model = 1024 dropout = 0.1 attention_dropout = 0.0 activation_dropout = 0.0 init_std = 0.02 decoder_start_token_id = 0 classifier_dropout = 0.0 scale_embedding = True pad_token_id = 0 eos_token_id = 1 forced_eos_token_id = 1 num_global_tokens = 32 block_size = 512 stagger_local_blocks = True **kwargs ) Parameters • vocab_size (int, optional, defaults to 96103) — Vocabulary size of the PEGASUS-X model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling PegasusXModel. • d_model (int, optional, defaults to 1024) — Dimension of the layers and the pooler layer. • encoder_layers (int, optional, defaults to 16) — Number of encoder layers. • decoder_layers (int, optional, defaults to 16) — Number of decoder layers. • encoder_attention_heads (int, optional, defaults to 16) — Number of attention heads for each attention layer in the Transformer encoder. • decoder_attention_heads (int, optional, defaults to 16) — Number of attention heads for each attention layer in the Transformer decoder. • decoder_ffn_dim (int, optional, defaults to 4096) — Dimension of the “intermediate” (often named feed-forward) layer in decoder. • encoder_ffn_dim (int, optional, defaults to 4096) — Dimension of the “intermediate” (often named feed-forward) layer in decoder. 
• activation_function (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported. • dropout (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. • attention_dropout (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities. • activation_dropout (float, optional, defaults to 0.0) — The dropout ratio for activations inside the fully connected layer. • classifier_dropout (float, optional, defaults to 0.0) — The dropout ratio for classifier. • max_position_embeddings (int, optional, defaults to 16384) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). • init_std (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. encoder_layerdrop — (float, optional, defaults to 0.0): The LayerDrop probability for the encoder. See the [LayerDrop paper](see https://arxiv.org/abs/1909.11556) for more details. decoder_layerdrop — (float, optional, defaults to 0.0): The LayerDrop probability for the decoder. See the [LayerDrop paper](see https://arxiv.org/abs/1909.11556) for more details. • use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models) • forced_eos_token_id (int, optional, defaults to 1) — The id of the token to force as the last generated token when max_length is reached. Usually set to eos_token_id. • num_global_tokens (int, optional, defaults to 128) — Number of global tokens to use for the encoder • block_size (int, optional, defaults to 512) — Block size for encoder local attention. Sequence length should be an exact multiple of block size. block_size must be a multiple of 2 if stagger_local_block is True • stagger_local_block (bool, optional, defaults to True) — Whether to stagger every other local attention by half a block This is the configuration class to store the configuration of a PegasusXModel. It is used to instantiate a PEGASUS-X model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the PEGASUS-X google/pegasus-x-large architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import PegasusXConfig, PegasusXModel >>> # Initializing a PEGASUS google/pegasus-x-large style configuration >>> configuration = PegasusXConfig() >>> # Initializing a model (with random weights) from the google/pegasus-x-large style configuration >>> model = PegasusXModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ## PegasusXModel ### class transformers.PegasusXModel < > ( config: PegasusXConfig ) Parameters • config (PegasusXConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare PEGASUS-X Model outputting raw hidden-states without any specific head on top. This model inherits from PreTrainedModel. 
Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. #### forward < > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None decoder_input_ids: typing.Optional[torch.Tensor] = None decoder_attention_mask: typing.Optional[torch.Tensor] = None encoder_outputs: typing.Optional[typing.Tuple[torch.FloatTensor]] = None past_key_values: typing.Optional[typing.Tuple[torch.FloatTensor]] = None inputs_embeds: typing.Optional[torch.Tensor] = None decoder_inputs_embeds: typing.Optional[torch.Tensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor) Parameters • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: • 1 for tokens that are not masked, • 0 for tokens that are masked. • decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are decoder input IDs? PEGASUS-X uses the pad_token_id as the starting token for decoder_input_ids generation. If past_key_values is used, optionally only the last decoder_input_ids have to be input (see past_key_values). • decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default. • encoder_outputs (tuple(tuple(torch.FloatTensor), optional) — Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions) last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. • past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). 
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional): Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. • decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) — Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be input (see past_key_values). This is useful if you want more control over how to convert decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix. If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value of inputs_embeds. • use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). • output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor) A transformers.modeling_outputs.Seq2SeqModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (PegasusXConfig) and inputs. • last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model. If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output. • past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. 
• decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs. • decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. • cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. • encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. • encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs. • encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. The PegasusXModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, PegasusModel >>> inputs = tokenizer("Studies have been shown that owning a dog is good for you", return_tensors="pt") >>> decoder_inputs = tokenizer("Studies show that", return_tensors="pt") >>> outputs = model(input_ids=inputs.input_ids, decoder_input_ids=decoder_inputs.input_ids) >>> last_hidden_states = outputs.last_hidden_state >>> list(last_hidden_states.shape) [1, 4, 1024] ## PegasusXForConditionalGeneration ### class transformers.PegasusXForConditionalGeneration < > ( config: PegasusXConfig ) Parameters • config (PegasusXConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The PEGASUS-X for conditional generation (e.g. summarization). 
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. #### forward < > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None decoder_input_ids: typing.Optional[torch.Tensor] = None decoder_attention_mask: typing.Optional[torch.Tensor] = None encoder_outputs: typing.Optional[typing.Tuple[torch.FloatTensor]] = None past_key_values: typing.Optional[typing.Tuple[torch.FloatTensor]] = None inputs_embeds: typing.Optional[torch.Tensor] = None decoder_inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor) Parameters • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: • 1 for tokens that are not masked, • 0 for tokens that are masked. • decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are decoder input IDs? PEGASUS-X uses the pad_token_id as the starting token for decoder_input_ids generation. If past_key_values is used, optionally only the last decoder_input_ids have to be input (see past_key_values). • decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default. • encoder_outputs (tuple(tuple(torch.FloatTensor), optional) — Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions) last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. • past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). 
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional): Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. • decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) — Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be input (see past_key_values). This is useful if you want more control over how to convert decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix. If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value of inputs_embeds. • use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). • output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. • labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size]. Returns transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor) A transformers.modeling_outputs.Seq2SeqLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (PegasusXConfig) and inputs. • loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss. • logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). • past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). 
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. • decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. • decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. • cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. • encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. • encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. • encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. The PegasusXForConditionalGeneration forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Summarization example: >>> from transformers import AutoTokenizer, PegasusXForConditionalGeneration "California's largest electricity provider has turned off power to hundreds of thousands of customers."
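The summarization example above is cut off in this extraction; only the import line and what appears to be the example's expected output string survive. The following is a hedged reconstruction using the standard generate/decode pattern of the transformers library; the checkpoint name and the input article are illustrative assumptions, not text recovered from the original page.

```python
# Hedged reconstruction of the truncated summarization example (assumptions:
# checkpoint name and input article; the generate/decode calls are the
# standard transformers API).
from transformers import AutoTokenizer, PegasusXForConditionalGeneration

model = PegasusXForConditionalGeneration.from_pretrained("google/pegasus-x-base")
tokenizer = AutoTokenizer.from_pretrained("google/pegasus-x-base")

ARTICLE_TO_SUMMARIZE = (
    "PG&E stated it scheduled the blackouts in response to forecasts for high "
    "winds amid dry conditions. The aim is to reduce the risk of wildfires."
)
inputs = tokenizer(ARTICLE_TO_SUMMARIZE, max_length=1024, return_tensors="pt")

summary_ids = model.generate(inputs["input_ids"])
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))
```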
2023-03-25 23:23:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26912543177604675, "perplexity": 11676.73852916659}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945376.29/warc/CC-MAIN-20230325222822-20230326012822-00185.warc.gz"}
http://tex.stackexchange.com/questions/69032/single-double-logarithmic-axis
# Single double-logarithmic axis

I'm trying to make a plot which has a double-logarithmic y-axis. Is this possible? It should change the distance between the logarithmic increments, because I want to plot so-called bit-error rates (BER). To get a straight line from the measured data, this type of scaling is needed (y = log(log(x))). It is known that in a normal log plot the distance between each increment is the same. Unfortunately I was not able to find a solution in the pgfplots manual; the only option given there is a single log scale for one or both axes. An example picture is given at the link below; this is the way it should look.

log(log(x)) is valid only for x > 1. – kiss my armpit Aug 27 '12 at 17:11

What you see plotted is log(x), but you need log(log(x)). I don't want to recalculate the data, I want to rescale the y-axis. See the plot at the given link... – Christian Aug 27 '12 at 17:20

Actually I didn't want to go into the details, but given your comment I think I have to. What is actually linear with respect to the x-axis is the complementary error function erfc. To be precise: 10*log(Q^{-1}(BER)) with Q(BER) = 0.5*erfc(BER/sqrt(2)). – Christian Aug 27 '12 at 18:00

pgfplots has no builtin solution for log(log(x)). However, it accepts x coord trafo/.code={<some custom trafo which depends on #1>}, and some inverse transformation using x coord inv trafo. You may need to customize tick positions explicitly, though. Would that help you? – Christian Feuersänger Aug 27 '12 at 21:07

I worked around it by transforming y manually and using ytick={3.09023,3.71902,4.26489,4.75342,5.19934,5.612,5.99781,6.36134}, yticklabels={$10^{-3}$,$10^{-4}$,$10^{-5}$,$10^{-6}$,$10^{-7}$,$10^{-8}$,$10^{-9}$,$10^{-10}$}, y dir=reverse. The recalculation is done with the inverse Q-function: Q^{-1}(y) = sqrt(2)*erfinv(1-2y). This is the exact solution for y; log is just a good approximation. That's why people don't use this transformation and use a log-log y-axis instead. I calculated the yticks with the same formula using y = {10^-3, 10^-4, 10^-5, etc. ...} – Christian Aug 28 '12 at 8:53

As Christian Feuersänger said, you can use a y coord trafo to transform the coordinates on the fly. The tick labels would usually be re-transformed using y coord inv trafo, but the precision of the math engine isn't high enough for this (1000 becomes 997.8), so you'll have to provide the labels explicitly:

```latex
\documentclass{article}
\usepackage{pgfplots}
\begin{document}
\begin{tikzpicture}
\begin{axis}[
    y coord trafo/.code=\pgfmathparse{log10(log10(#1))},
    domain=0:2,
    ymax=10000,
    ytick={10,100,1000,10000},
    yticklabels={10,100,1000,10000},
    extra y ticks={2,...,9,20,30,...,90,200,300,...,900,2000,3000,...,9000},
    extra y tick labels={},
    every extra y tick/.style={major tick length=3pt}
]
\end{axis}
\end{tikzpicture}
\end{document}
```
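The tick positions quoted in the last comment follow directly from the inverse Q-function the commenter gives. A small sketch (not part of the original thread; it assumes NumPy and SciPy are available) that reproduces those values:

```python
# Sketch (not from the original thread): reproduce the ytick positions quoted above
# by evaluating Q^{-1}(BER) = sqrt(2) * erfinv(1 - 2*BER) at BER = 1e-3 ... 1e-10.
import numpy as np
from scipy.special import erfinv

bers = 10.0 ** -np.arange(3, 11)                 # 1e-3, 1e-4, ..., 1e-10
qinv = np.sqrt(2.0) * erfinv(1.0 - 2.0 * bers)   # inverse Q-function

for ber, y in zip(bers, qinv):
    print(f"BER = {ber:.0e}  ->  Q^-1(BER) = {y:.5f}")
# Matches the values in the comment: 3.09023, 3.71902, 4.26489, 4.75342,
# 5.19934, 5.612, 5.99781, 6.36134
```

The same expression can be used to transform the measured BER data before plotting it on a linear axis, with the precomputed values pasted into the ytick list as described in the workaround.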
2015-11-25 08:27:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7792518138885498, "perplexity": 1599.145054651734}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398445033.85/warc/CC-MAIN-20151124205405-00043-ip-10-71-132-137.ec2.internal.warc.gz"}